Children Exposed to Violence: Child Custody and its Effects on Children in Intimate Partner Violence Related Cases in Hungary Violence may increase post-separation, and visitation can offer the perpetrator an opportunity to maintain power and control over the mother and child. In relationships where intimate partner violence (IPV) exists, it is hypothesized that fathers may continue their violent behaviors throughout visitation with children. The study uses mixed methods: after completion of a screening questionnaire (n = 593), we recruited 168 individuals with problematic child custody cases from our sample, who completed an online survey. Semi-structured interviews were conducted with 30 mothers with experience of problematic child custody cases. This paper reports only the qualitative results of the research. The findings highlight how custody and visitation rights may be used as a form of custodial violence and a continuation of IPV. Problematic child custody and visitation cases were reported following separation from an abusive partner, with the abuser using legal proceedings as a weapon to maintain power and control over the former partner and child. Institutions involved in custody and contact-related legal procedures do not take into consideration the violence of the abusive ex-partner as a factor when determining custody and contact arrangements, even though it may work in opposition to the child's wellbeing. The analysis of the data shows that child custody and visitation arrangements did not reflect a clear understanding of domestic violence, coercive control and the effects of these on children's wellbeing. Fathers were reported to be able to control the everyday lives of their ex-partners and their children through the lack of institutional recognition of domestic violence. Introduction This study investigates child custody in Hungary, particularly in cases where visitation by the male parent is considered contrary to the child's physical or mental well-being and safety. In some cases, violence increases post-separation, so visitation can be concerning in that it offers the perpetrator an opportunity to maintain power and control over the female adult victim and the child. In recent decades in Hungary, the number of divorces has increased, with the result that, either following court-approved agreements by parents or through court decisions following litigation, children are placed in one parent's home, with the other parent having varying visitation rights. Census data in Hungary show that in 2016, 18% of all families were single-parent families (503 thousand families). In 2016, in 87% of single-parent families (431 thousand families in total) the mother was raising children alone, while in 13% of cases (72 thousand families) it was the father (Hungarian Central Statistical Office, 2016 Microcensus data). In a Hungarian representative survey, the only survey to date gathering this data in Hungary, 36% of the respondents answered that they grew up in a family where violence was present, where they feared as a child that their parents would fight, quarrel loudly, threaten each other with physical violence, or where their father had beaten their mother (Tóth 1999a; Tóth 1999b). In 40% of adversarial divorces in Hungary, children are placed with the father. In contrast, in cases when parents can agree on their own about major issues including placement, custody and/or contact arrangements, fathers become custodial parents in only 7.7% of the cases (Grád et al. 2008). 
Child custody is a complex phenomenon which is influenced by the parents of the child, family members, and also the institutions that regulate the process of child visitation and custodial rights. This process is even more complex if we analyze the effects of abusive relationships on child custody and visitation. Violence does not always end with the separation of a couple. In fact, the separation period can be the most dangerous part of an abusive relationship, as the abusive person may start a battle for child custody to maintain power and control over the child and, by that process, over the mother as well (Hester 2000; Callaghan 2015). With no existing research on this topic in Hungary, this study represents a very first step to investigate the issue of abuse of power and control in child custody and visitation cases with a history of intimate partner violence (IPV), and its effects on its victims in the Hungarian context. This paper draws on the findings of qualitative research conducted with 30 mothers who self-identified as experiencing intimate partner custodial abuse. Custodial or 'paper' abuse (Miller and Smolter 2011), that is, the instigation of frivolous lawsuits, false reports of child abuse, and other system-related manipulations, has been recognized for some time now by practitioners and researchers as among the methods perpetrators employ to continue to exert power, force contact, and financially burden their ex-partner. However, Elizabeth (2017) introduces the notion of 'custody stalking', with an equal focus on children, as a mechanism abusers use to control mothers following their separation. This is defined as a malevolent process involving fathers who use the custodial or legal process to overturn the historic patterns of sharing responsibilities and care of children in order to extend their control over the children and thus the mothers (Elizabeth 2017). Custody stalking is still not well recognized and is often invisible to professionals as well (Elizabeth 2017; Holt 2018; Hunter et al. 2018). Research Questions International literature suggests a direct correlation between adversarial divorces and IPV, as well as pointing to the detrimental effects of IPV on post-separation child custody and visitation outcomes (Bancroft and Silverman 2002; Elizabeth 2017; Holt 2011; Holt 2018). However, no research has been carried out in Hungary to explain or contextualize these experiences. Thus, in combining the Hungarian experiences of women leaving abusive relationships and the findings of research elsewhere, our main research questions are as follows: 1. Are custody and visitation rights used as a form of custodial violence and thus a continuation of IPV in Hungary? 2. How do institutions involved in custody and contact-related legal procedures in Hungary take into consideration the violence of the abusive ex-partner as a major factor when determining custody and contact rules? Literature Review Custodial violence often occurs through the exercise of irregular visitation appointments, or by means of financial exploitation through hiding joint finances or reducing support payments (Bancroft and Silverman 2002). Due to social inequalities and domestic violence, men are typically more financially secure than women, and abusers often try to buy the goodwill of the child (Emery et al. 2005). Children exposed to violence may suffer from various behavioral and emotional problems, are more frequently referred to speech therapy (Kernic et al. 2002), are more likely to be absent from school and more frequently show problematic behavior in schools, which can lead to school dropout (Byrne and Taylor 2007; Callaghan 2015). Children experiencing problematic child custody may also have impaired verbal abilities (Graham-Bermann et al. 2010) and reading skills compared with their peers (Blackburn 2008). They are also more likely to repeat classes in school (Sullivan et al. 2008). Research shows that it is not only being a direct victim of violence but also witnessing their mother's abuse that can lead children to develop serious mental and behavioral disorders such as anxiety, difficulties establishing relationships with peers, and depression (McLaughlin et al. 2012). In some cases, the child, as a self-defense mechanism, takes up the role of the father and is later aggressive to or contemptuous of the mother, and this attitude may reoccur in the child's future partnerships as well (Kernic et al. 2003). Violent partners may also continue to be manipulative and maintain control over their ex-partners, thereby continuing to traumatize their children as well (Thiara and Humphreys 2017). The abusive partner also often accuses the mother of ill-treatment, alcoholism or drug use at the child guardianship office (Mullender et al. 2002), accusations which, when believed, can result in the child being placed in the care of the abusive parent. In this way, professional bodies such as the guardianship office, police or child welfare agencies can be employed as weapons of institutional violence against the mother (Hester 2011; Holt 2017). In some cases, the threats or accusations that the abuser engages in to obtain custody and visitation rights become the primary instrument for maintaining the abuse of the mother (Holt 2018; Hunter et al. 2018). Not all such parental behaviors fall into the category of criminal acts, but they do act to undermine the mother's authority and parenting skills. These behaviors may also have the effect of hindering various relationships that the victims have (for example, with other family members, friends of children, co-workers, peer-parents, and teachers), detrimentally affecting the emotional well-being of the family and the development of the child (Kitzmann et al. 2003; Holt et al. 2008). Other adverse effects on the well-being of children, arising from both direct and indirect abuse, include the ongoing fear that the abuser will be violent again, leading to an increase in anxiety levels (Coulton et al. 2009), a higher rate of depression among children involved in problematic child custody cases, and their finding it difficult to maintain relationships as adults (Kernic et al. 2003). Using a divorce procedure to undermine the mother's otherwise unproblematic parental behavior may itself indicate the presence of abusive behaviour on the father's part and, as such, put his own parental skills into doubt. However, such behaviour is rarely considered to be grounds for refusing requests for custody (Humphreys et al. 2011). Abusive fathers often blame mothers for ending the relationship, and involve children in arguments regarding the divorce, which may cause additional damage to the mother-child relationship in the longer term (Radford and Hester 2006). Children that are alienated from their mothers may start communicating with their mothers with anger, distrust, or a sense of shame. They may also take on the abuser's role by acting in a superior way, or may be ashamed to be in touch with their mother under any circumstances. 
This can have significant effects on the mother-child relationship (Lapierre et al. 2017) and may lead to serious personality disorders for the child (Bancroft and Silverman 2002). The abusive father might also try to manipulate the child during visitation periods to maintain control over the child and the mother. These mechanisms can be even more pronounced if the father has obtained legal custody over the child (Beeble et al. 2007). In the last few decades, much research has been conducted about children's exposure to domestic violence and, more recently, coercive control, which identifies potential related symptoms of post-traumatic stress disorder, a lack of social skills, and emotional and behavioral problems (Kernic et al. 2003; Överlien 2010; Holt et al. 2008; Katz 2016; Callaghan 2015). According to Bandura, who developed Social Learning Theory (Bandura 1963), children who have been exposed to domestic violence are more likely to be abusive than those who were not exposed to violence as children, a phenomenon theorized as the Intergenerational Transmission of Violence (Wallace 2005). Katz (2016) cautions, however, that mothers and children often provide each other with emotional support, reducing isolation and nurturing the mother-child relationship. Children can also experience coercive control, a pattern of controlling behaviors and coercive strategies in which the abuse targets the victim's human rights (liberty, personhood, freedom and safety) and is not necessarily dependent on whether or to what extent physical violence is present, through ongoing financial abuse, monitoring and isolation, resulting in limits on their familial, social and extracurricular activities (Stark 2007; Callaghan 2015; Katz 2016). If children are enrolled in coercive behaviors, they are used as tools to exert control as direct victims of controlling and coercive acts (Hardesty et al. 2015; Callaghan 2015). It is also common that children are involved in coercive control activities by the perpetrator, including isolation, blackmailing, monitoring activities, stalking, and being used to legitimize violent behavior (Callaghan 2015; Stark 2007). Research has also shown that children may not always witness acts of violence but are still aware of abusive behaviour (Devaney 2010; Överlien and Hydén 2009; Mullender et al. 2002; Överlien 2013), and should nonetheless be recognized by professionals as survivors of violence, not as passive witnesses of domestic abuse (Överlien and Hydén 2009). Kernic et al. (2003) argued that maternal distress can result in behavioral problems in children, while children who grow up in families affected by domestic violence have been shown to have a higher risk of mental health problems (Bogat et al. 2006; Meltzer et al. 2009; Callaghan et al. 2018), a higher risk of physical health difficulties (Bair-Merritt et al. 2006), and a greater risk of encountering educational difficulties such as early dropout or learning difficulties (Byrne and Taylor 2007; Callaghan et al. 2018). When parents separate after a prior history of domestic violence, the risks to children of periods of violence and exposure to violence increase (Campbell and Thompson 2015; Lessard and Alvarez-Lizotte 2015; Broady and Gray 2018). As system theorists argue, if we include a third person in an intimate dyad, then such a relationship can be understood as a triangulation. 
The ordinary way of interacting in cases involving violent relationships may provoke the child to take sides or build alliances against another sibling (Callaghan 2015; Dallos and Vetere 2012). The triangulation of children during domestic violence can result in split loyalties, scapegoating, or long-term psychological distress (Callaghan 2015; Amato and Afifi 2006). To conclude, we consider that the abuse of children often occurs during IPV-related cases as a strategy for intimidating and controlling the former partner. Failing to consider this fact during the post-separation process can risk placing the child in unsafe situations (Hester 2000; Callaghan 2015). In many cases the underlying assumption (Holt 2018; Hunter et al. 2018) of authorities seems to be that a child's best interest is served when both parents are involved in child-rearing activities. Decisions based on this assumption, granting custody or visitation to the abusive parent even where he was not involved in child care before the separation, provide perpetrators with a channel to maintain coercive control over the mother (Elizabeth 2017). In this regard, custody stalking can be a form of coercive control that humiliates and punishes women after separation, and may represent a weapon with which the mother-child relationship is weakened and attacked (Katz 2016). Despite this, it is not widely recognized by child-support authorities or family lawyers and may lead to post-separation arrangements that work against mother-child care time and the mother-child bond (Elizabeth 2017). Custody stalking may result in the perpetrator obtaining generous visitation rights or even custody, with a corresponding involuntary loss of maternal care time following separation that may damage the psychological wellbeing of both mothers and children and have a detrimental effect on women's mothering relationships (Elizabeth 2017). In addition, when a mother opposes the father's award of care time in court or through legal proceedings, this can be, and often is, interpreted as alienation or hostility towards the father, and may also result in the amount of caring time for the mother being decreased (Elizabeth et al. 2010). Yet neither of these types of malevolent attacks is widely recognized, and thus they continue to hurt children and mothers. Legal procedures and bureaucratic mechanisms of the state can be identified as a form of "secondary victimization" or "secondary abuse", in which blame is directed at the mother's mothering style in line with hegemonic masculinity (Roberts et al. 2015; Heward-Belle 2017). Gender theorists claim that institutions, like families, are gendered and that formal institutions reproduce what may be called the "gender regime" (Chung and Zannettino 2005; Heward-Belle 2017). Accordingly, in an invisible manner these institutions, which are meant to protect children and promote their well-being, intervene in the lives of the latter on behalf of fathers who use violence to control their partners. It is not rare that institutions minimize any violence, blame mothers for the violence (Heward-Belle 2017), and in some cases threaten to grant the abuser sole custody (Saunders 2017). We can conclude that post-separation contact involves a potentially abusive experience for children who are exposed to domestic violence (Holt et al. 2008). 
As research shows, one- to two-thirds of all abused women experience post-traumatic stress disorder, low self-esteem, depression and anxiety, while during legal procedures many abused mothers develop negative attitudes toward family courts and judicial systems, and feel depressed or anxious after encountering them (Elizabeth 2017). In the following section we outline the methodology employed for the purpose of this study, before moving on to selectively present the findings, analyzing these in the context of the literature we have just reviewed in this section. Methods To address our research questions, a mixed methods research design was employed, involving data collection over three distinct yet interrelated phases. Phase one involved the administration of a survey designed to engage both women and men who had gone through a child custody case; this involved a 10-minute-long online questionnaire that was disseminated through online social media and several online magazines with a reach of hundreds of potential respondents country-wide. This survey was completed by 593 participants, who were, as part of the survey completion, invited to express interest in volunteering for phase two, which involved initially filling out a 40-50-minute-long second survey that focused in detail on their child custody process and experiences of IPV, and also possibly participating in a semi-structured interview. The second survey, which focused specifically on those who considered their cases to have been problematic, was completed by 168 persons, 130 of whom were considered to have experienced a problematic visitation/custody case according to the criteria of our research (i.e., the partners could not agree on the custody of their child). Among these survey respondents, 30 agreed to participate in phase three, which involved their participation in a semi-structured interview. This paper reports only on the qualitative semi-structured interviews. Respondents in phase three of the research were female: mothers who had experienced violence during their relationships and experienced problems with their child's custody or contact arrangements. The interviews aimed to generate insight into IPV-related custody procedures as a whole in Hungary, but also to capture in-depth and precise information from mothers about their feelings and the effects of child custody on their children. Interviewees for phase three were purposefully selected from the list of 130 consenting participants emerging from phase two, with a view to maintaining sample variability regarding interviewees' place of residence, age, employment status, and education level. The selected interviewees were mothers who had to have had at least one child with their abusive ex-partner, and they had to have been separated for at least two months prior to the interview. Their relationships could have been of any type (marriage, cohabitation, non-cohabitation). All participants, including the survey respondents and the interviewees, were informed that their responses would be kept strictly confidential (no sensitive data that could be connected to the respondent or their child, such as address, age, school name, or employer name, would be released). All participants signed a consent form before the interview, indicating that they were voluntarily taking part in the research. They were also provided with the telephone numbers of civic organizations in case they wanted to seek help in the future. 
As the focus of the research was post-separation child custody and contact problems, participants had to report such problems in order to qualify for participation. Before the interview started, interviewers informed all participants about the aims, and they were told that they could stop the interview at any time if they wished (Överlien and Hydén 2009). The interviews lasted about 60-90 minutes on average and were always conducted in person in a safe place with no other companion present. The interviews were semi-structured, recorded and then transcribed verbatim. No incentives were used to recruit participants. Although all the interviews were very emotional, interviewees reported that it was good to talk about their trauma and experience and felt relieved after the conversation (Vajda 2006). The interviews were paused for a break if and when needed (Överlien and Hydén 2009). An interview guide was used during the interviews and transcripts were analyzed using NVivo 10 software. The study followed the ethical principles recommended by the Hungarian Medical Research Council's Ethics Committee, and the proposed research process and data protection plan were officially approved by the Committee. Respondents remained anonymous. Findings In the next sections we respond to our research questions by drawing on the qualitative findings, exploring if and in what way custody and visitation rights may be used as a form of custodial violence and a continuation of IPV. We also explore engagement with the institutions involved in child custody and contact-related legal procedures in Hungary and their responses to families involved in child custody and post-separation custodial abuse. Maintaining Intimate Partner Violence after Separation During the interviews, some of our interviewees formulated clear explanations about legally enforced child custody for abused partners and children and how they perceived this mechanism of abuse by the abuser. Participant mothers suggested that the acts the father commits against them are obviously not signs of care or love, but only reflect a desire to maintain power over them and their child, as these participant quotes explain: I think he should have calmed down now. I don't say he is holding on to me, it's more that he had a property and he lost it. When such a man loses his property, his soul cannot find peace. (P.K.) He was always quarreling; he has been on the phone too: 'How are you talking to me!? Don't dare to hang up on me!' So, while the physical abuse stopped, the psychological abuse of the child increased. And it took me two years, and a lot of hard times, to understand that it's not that he wants to see the child, but rather that he wants to keep us in a state of fear. So, it was very hard work, and I realized that this is not love, it is absolutely abuse. (J.K.) As the latter quote illustrates, abuse often persists after separation when there are shared children from an abusive relationship. It is also evident that this fact is not easily identified by the authorities: the abusive ex-partner's control-maintaining mechanisms usually take covert or invisible forms such as financial deprivation, isolation, verbal humiliation, the undermining of parental authority, repeated allegations of non-cooperation, or unpredictability regarding timekeeping. It is also a common characteristic of abusers at this point to disguise such behavior as part of the ongoing coordination of placement or visitation arrangements. 
Besides more covert forms of abuse, access to children presents a good opportunity for the abusive parent to continue to use, or commence using, physical violence, which may also involve stalking and harassment. Each of these abusive methods alone, and especially their cumulative effect, has the potential to significantly influence the life of the abused ex-partner. The overwhelming majority of the interviewees reported that children were present when the various abusive acts occurred. In some cases, the child was described as being involved as a 'passive' participant, witnessing their mother being abused by the father, while in many cases the child was alleged to be a direct victim of the father's abusive behavior. Unpredictability was also a recurrent theme, mentioned as being difficult for the children to bear. Both witnessing a father's violence and being a direct victim of physical violence are identified in this mother's recounting of events: When he picked a fight, I just wanted to run away. And the children were there too and saw this. He got so enraged that he grabbed the child and threw him on the bed and injured his spine. The children are afraid. (A.K.) Fathers were also reported to behave so that their children were in a state of complete uncertainty, undermining the basic trust the latter have in their parents. In this interview, a mother reflected on the pain the father's unpredictability had caused her children: When the children ask him, he never tells them where they are going, or what they are going to do... When they try to ask what will happen in two weeks' time... and he just doesn't answer either me or the children... so they don't like to go with him, because they are kept in uncertainty… (P.A.) Concurring with the literature, the most frequently reported behavioral problems of children exposed to violence include anxiety, depression, learning difficulties, attention difficulties and aggression (McLaughlin et al. 2012). Anxiety and fear of the father because of past memories of violence were reported in this present study to emerge as physical symptoms or mental disorders, as the following quote suggests: The children can't even begin psychotherapy because their father won't give his consent. And the Institute of Education would only admit the child after a first-instance court decision was issued. However, the middle child has already been diagnosed with depression, while the oldest boy needs full psychological treatment: He has severe self-esteem issues. Because of this he is unable to fit in, he has no friends. (H.B.) In this study, participant mothers reported that children did not want to attend the visitation periods that they were obliged to by the authorities. Suggested reasons for this included bad memories that made them anxious about spending time with their father without the presence of the mother, but often these experiences were reported to occur during the visitation period as well. In some cases, the father was reported to not only isolate children from family members and friends, but also neglect their basic needs (like eating, learning, or keeping contact with the outside world). In such cases, as shown in the quotes below, these behaviors did not change the custody or the visitation rights of the father, even though the mother notified the authorities about what had happened to her children: And, in the beginning, it was 'kids, go, you must go'. 
Then he was on his way, and the girls said they didn't want to go because he didn't give them food and that it was cold. He left them, locked them in the house alone. He requested that the children be placed with him but he never applied for an extension of his visitation rights. (G.R.) He took away their textbooks, didn't let them study, took away their phones, and didn't let them talk, not just with me, but with anyone. He never let them visit their friends and their friends couldn't visit them either. He kept them in seclusion and didn't even let them talk to the neighbors. (L.M.) Although malicious prosecution or abusive litigation has not been extensively researched yet, it has been identified by women and by practitioners in the field for a long time (Douglas 2017). Related to both the first and the second research question, our study shows how this abusive tactic may include filing civil suits in family or civil court, starting various (and numerous) procedures with a guardianship authority against the custodial parent, or pressing criminal charges based on unfounded allegations against an ex-partner (usually the custodial parent). This form of abuse was reported in most of the interviews where the mother was the custodial parent. To further abuse the mother, abusive partners were reported to make allegations of various types (involving crimes or misdemeanors) to authorities, which then kept calling the mothers in for hearings and required them to constantly write appeals, thereby helping the father control them and manipulate the authorities. One of the cases is mentioned below: We have been in litigation for 10 years. Or rather, he is suing me, as I was always the defendant and he the applicant. He has been suing me for 10 years, sometimes at the guardianship office, sometimes in court. It is terrifying that he spends his days on this. (P.L.) It was apparent from the interviews that unreliability in complying with the visitation times and dates that had been arranged was one of the most obvious methods abusive ex-partners employed in order to re-establish control, and this was felt by respondents as manipulation of their and their children's everyday lives and well-being. This form of abuse (re)creates a sense and a reality of loss of control, as one of the respondents explains: ...he doesn't want to upset the old order... So the point is, he keeps us in a dependent position just like he did throughout our entire lives. He wants me to positively not know what will happen or when - he wants to decide about everything. (M.I.) Another type of control and power wielded over the ex-partner through child custody and the authorities is that the abusive father can hand in a formal accusation claiming that the mother has restricted his right to see their child and that he could not enjoy his visitation rights (Khaw et al. 2018). If a child does not want to go to a visitation because of fear or anxiety about the father, but the father has a written, legally valid visitation right, he can hand in an accusation that will result in the mother being fined for restricting his visitation rights. 
In this case, the legal action can undermine the victim and involve her in long court proceedings, as the following quote illuminates: ...I received a letter saying that, because I had endangered the child [by not ensuring the exercise of contact rights], not only would the child welfare center place her in foster care, but they would also press criminal charges against me because I obstructed the visitation process. And I was completely broken. After all, I've been everywhere, I asked for help from the child welfare center, but nobody helped me, and now I will go ... [to jail] or I am deemed to have committed a crime? While the father is shouting at my daughter 'I will put your mother in jail?!'" (P.K.) As cited in the literature, another method abusers regularly use to discredit mothers in child custody and placement cases is to allege that the woman is an unfit and irresponsible parent because of alcohol or drug use or insanity (Bancroft and Silverman 2002). Our findings also suggest that the use of this tactic tends to start before separation as a form of IPV, but it also paves the way for later litigation by perpetrators aimed at sole custody or placement. Even during the relationship, abusers manipulate their children with lies, hints, allusions and insinuations in line with this tactic (Bancroft and Silverman 2002). These accusations may alienate children from their mother (often the only protective relationship the child has in the family) and cause them to deprecate or despise her (Lapierre et al. 2017). The findings from this study suggest that if these defamations are taken as valid by the authorities or court, the child can be placed with the father, totally obstructing the mother-child relationship and often leading to the total alienation of the child, as we can see from the testimony of one of our interviewees: He started a large-scale campaign against me through the children; he practically persuaded the children to mock me, to spit at me, telling them that I had ruined and devastated the entire family. He made up stories about me sleeping with strangers, forced them into fantasizing about sexual things, even going into details about what I did with men, which were not true, of course... and that I was an alcoholic and mentally ill. (F.A.) The findings of this present study highlighted how the abuser often used various types of procedures against the custodial parent, or pressed criminal charges based on unfounded allegations against mothers, to discredit them in child custody and placement cases. Institutions' and Organizations' Roles and Contributions to Custodial Abuse The existing literature already suggests that public agencies and institutions, such as the child guardianship authority, the police, and the courts, rarely consider that custody or visitation rights may work in opposition to the child's welfare or safety, even if the child has directly suffered abuse, witnessed violence, or shows clear symptoms of post-traumatic stress disorder (Bancroft and Silverman 2002; Holt 2018; Khaw et al. 2018). Our qualitative data demonstrate a pervasive lack of attention to violence on the part of the authorities which handle child custody cases in any capacity. In almost all participant cases in this present study, they are reported to have failed to show any sign of recognizing the unbalanced dynamics of the relationships and the actions of the perpetrator as abusive. 
Instead, they were reported by participants to handle such situations as if they involved mutual disagreements between partners with equal power (denying any of the effects violence can have on a victim) and equal responsibilities (implicitly or explicitly refusing to hold abusers accountable for their violent or abusive actions). A number of interviewees asserted that public agencies read the violence in the relationship as a communication problem between the couple which they should manage together. In some cases, the guardianship authority was reported to go even further, suggesting placing the child in care, claiming that both parents were too busy fighting each other and were thus unable to care for the child. This approach was experienced by participants as denying the impact of the perpetrator's violence and holding the victim at least equally responsible for the situation. This approach was also apparent in the handling of visitation cases reported in this study. Visitation, and especially sleep-over contact with an abusive parent, was reported as being frequently disliked by children, who asked not to go, and in some cases mothers reported that they tried to comply with this request by not forcing their children to participate in such contact. It was not uncommon for our respondents to report an absolute lack of recognition of the significant impact of the abuse on their children on the part of the public agencies entrusted with child protection tasks. As one mother describes her experience: They don't care why the children were not with him, their only point is that they weren't with him when the father was entitled to it. And after all this they summoned me to the child welfare office and told me to sit down with the girls and tell them they had to go. They said I had to tell them this one-and-a-half days must be endured so that we could have 10 or 11 days of peace. And then the children went, but they were crying. (F.A.) If women did not comply with such requests, they stated that they would be threatened with the removal of the children to foster care. Some participating women concluded that these threats were a way for the authorities to intimidate the mothers into giving up their attempts to protect the children from traumatic visitations, because the authorities felt overloaded by parents' accusations and reports 'against each other'. No visible sign that the actual merit of these accusations was being investigated was apparent in most cases; thus it appeared to participants that the authorities assigned equal responsibility to the abusive parent who had initiated reports and the victim who had initiated reports about abusive behavior. Another recurrent theme in the interviews, as the mothers reported it, was how the tactics of the abuser also worked on the employees of various agencies (the guardianship office, the child welfare center) who, after experiencing accusations, harassment and threats of lawsuits, felt victimized by the father. In these situations, as mothers explained, the targeted employees or the authority itself wanted to get rid of the case by passing it over to another case-handler or another authority altogether (sometimes in another city), deliberately reporting a conflict of interest, as shown below in quotes from interviewees: He made a complaint against the social worker, so yesterday it was the fourth social worker who gave our case back. They were attacked in such a manner that they couldn't mentally endure it. 
My ex-partner also said that the judge was biased, cynical, and petty. (P.K.) I think there is an abusive person, the father; and there is an abusive office, which is the child welfare center. If I had to describe what this is like, it is the same as an abusive person. It threatens, does not pay attention to the things you do, does not understand what you say. (L.P.) As the interviews show, wearing agency employees out with (threats of) frivolous lawsuits, complaints and retaliatory procedures against them, and the harassment of employees may lead to constant changes of caseworkers and even changes in the agencies that handle the cases. At a minimum, this was reported to enable the perpetrator to maintain considerable power and control over the mother's time and money by forcing her to take extra time off and causing her expenses in travel and legal fees. Children exposed to violence were also reported by their mothers to be violent with their peers. These behaviors and attitudes from the testimonies also show that IPV affects the school career, social abilities and relationships of the child: This went so far that by third grade, my son, who had been a very good student before, became a tense child who had started to display the symptoms of a learning disorder. This is when the first report was made: a teacher notified the social worker that the child was very tense and nervous. (O.D.) From our qualitative data we could gain some insight into how authorities were reported to handle child custody cases in Hungary. Institutions were experienced as not taking into consideration the presence of IPV in the relationships. The institutions were perceived by participants to be overloaded by parents' accusations and reports 'against each other' and often handled the cases as 'communication problems.' Participants reported that the perpetrator often used abusive techniques against the authorities as well, handing in allegations against the institutions. Authorities were also reported to use threatening techniques against the mother, frightening her that her child could be taken into foster care if she did not agree with the father on custodial matters. Discussion and Conclusion The research described in this paper concerned child custody in Hungary, particularly cases where visitation by the male parent is contrary to the child's physical or mental well-being and safety. We can conclude from the findings presented above that custody and visitation rights may be used as a form of custodial violence and thus a continuation of IPV. Our qualitative data showed that in IPV-related cases abusive fathers use child custody as a form of custodial violence and thus a continuation of IPV. Our findings also confirm that the legal institutions were not experienced as recognizing the significance of IPV in child custody cases but rather as promoting visitation rights for the father, resulting in the violence remaining 'invisible' in many cases. Our interviewees mentioned that the institutions in most cases do not realize these aims of abusers and thus provide them with the opportunity to continue behaving abusively through exercising their visitation rights, or even through granting them custody, irrespective of the harm it causes. In the absence of any previous research in Hungary about IPV-related child custody and visitation experiences, a mixed-methods approach was employed to obtain a fuller picture of the mechanisms, the process, and the participants' roles in this phenomenon. 
This paper reports only on the qualitative interviews. Acknowledging that the sample is not intended to be representative of Hungary, the findings are nonetheless important and provide a window of understanding into this issue in this jurisdiction. Tools which can effectively assess the level of risk and harm potentially caused by an abusive parent should be developed in Hungary as well. The interviews we conducted suggest a lack of recognition of this need by the public agencies in Hungary. However, in Hungary data on domestic violence is more or less only accessible from police and prosecution materials, and unfortunately no large-scale or representative research has been conducted on intimate partner violence in the last 20 years (Tóth 2018). With this paper we would like to draw the attention of practitioners to the phenomenon of post-separation contact-related intimate partner violence cases, with an emphasis on the situation of children. We would also like to call the attention of practitioners to coercive control against children in relation to post-separation contact with a prior history of domestic violence. The need for further studies and research on coercive control against children in Hungary is also apparent. In line with previous literature, this study indicated a correlation between pre-separation IPV and post-separation abusive practices affecting children, such as custody stalking (Elizabeth 2017), paper abuse (Miller and Smolter 2011), and undermining maternal authority and the mother-child relationship (Bancroft and Silverman 2002). It also supported previous findings that the children themselves can become targets of coercive control, an expression that does not even exist in Hungarian as yet, limiting their autonomy as well as their social, housing and emotional well-being and development (Stark 2007; Callaghan 2015; Katz 2016). The study's findings reinforce that institutions may pay less attention to these abusive behaviours than would be necessary for mothers and children to be safe (Elizabeth et al. 2010; Saunders 2017; Heward-Belle 2017). Considering that this is a completely new research area in Hungary with no previous study with this focus, further research is needed to fully verify that in Hungary, as in other countries (Bancroft and Silverman 2002; Elizabeth 2017; Holt 2018; Hunter et al. 2018), a history of IPV is a most likely predictor of malevolent custody and contact litigation. The study also indicates that in order to provide better support to children and mothers harmed by continued post-separation abuse, it may be imperative to introduce the very concepts of coercive control and custody stalking within the Hungarian professional and research community. This study also points to the need for practitioners in Hungary to include the investigation of pre-separation IPV, coercive control and custody stalking in professional language and guidelines. As none of these currently form part of the regular training of practitioners coming into contact with victims, potential practice initiatives may include, first and foremost, the creation of the necessary Hungarian expressions that are currently missing from the language, to enable both victims and practitioners to describe the harmful behaviors and their effects. Developing specific training courses as well as protocols and guidelines based on the findings of this and, hopefully, future Hungarian research are potential further directions. 
While the link between IPV and the abusive use of custody and visitation rights by violent ex-partners appears to exist, research into the gendered context of this abuse is extremely rare in Hungary. Thus, for example, research into whether and how gendered expectations by authorities towards mothers and fathers affect authorities' decisions on what constitutes abuse, control, rights and obligations could provide a basis for further policy and practice recommendations, since according to our results the visitation rights of the father were treated as paramount even when they were clearly contrary to the child's safety and emotional and psychological well-being.
Evaluation of lighting design on Working Space (Case Study: Indo Global Mandiri Faculty office) These days, a well-lit working space in a sustainable building is a significant issue in building performance. A building designed to be an office should have sufficient illumination and low energy consumption. The newest constructed and finished building designated for Faculty members of Indo Global Mandiri University (UIGM) is expected to provide an optimally well-lit working space that gives good support for accomplishing the work and tasks given. Therefore, an evaluation of the illumination level obtained from the lighting design of the working spaces in this building should be carried out. The evaluation is carried out with simulation tools to find out the illumination level contour and the daylight factor. Several factors are accounted for in the evaluation, such as openings, the arrangement and type of luminaires, and materials. The method used in this evaluation is simulation software that shows the value of illumination on the working plane, validated with field measurement. The result of this evaluation will show whether the lighting design of the working spaces in the case study meets the standard illumination for working spaces. Optimized daylight for minimum energy consumption in artificial lighting could be the proposed recommendation. Introduction A well-lit working space in a sustainable building is a significant issue in building performance. In a working environment, productive and efficient energy use is prioritized, and problem-solving lighting design is expected to support it [1]. Illumination that gives comfort and convenience in a working space must be considered in lighting design. Good lighting should provide the needed level of visual performance, but it also determines spatial appearance, provides for safety, and contributes to wellbeing [2]-[4]. One of the factors that contribute to good lighting is a lighting design that meets the standard of illumination required for completing tasks of certain levels of difficulty. Evaluation is needed to ensure that the lighting design is well suited to the building's purposes. Good visual comfort is determined by sufficiently high horizontal illuminances, properly distributed light on the workplane (appropriate illuminance uniformities) and avoidance of discomfort glare (from luminaires or from windows) [5]. But good visual comfort is not the only aspect to be considered, since 30% of the energy consumption in an office building comes from electricity used for artificial lighting [6]. Artificial lighting is responsible for a large part of an office building's electricity needs. In order to reduce this electricity consumption, important efforts have been made during the last decades and will still have to be made in the future [7]. The result of this evaluation will determine whether the lighting design for the working space needs to be revised or not. When a revision is needed, an energy-efficient plan should be considered while revising the lighting design. Thus the revised lighting design should provide maximum visual comfort and efficient energy consumption. Literature review Daylight and artificial light are the factors that contribute to the illumination level. Optimum daylight is gained from passive design through window openings and sky illumination. Artificial light is designed to be flexible, so energy consumption should be considered. Daylight also plays a key role in occupants' visual comfort as well as in developing a sustainable environment [8]. 
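The daylight factor evaluated in this study is conventionally computed as the ratio of indoor horizontal illuminance to simultaneous outdoor unobstructed illuminance under an overcast sky, expressed as a percentage. The short Python sketch below only restates that textbook formula; the function name and example values are illustrative and are not taken from the case-study simulations or the authors' software.

```python
def daylight_factor(indoor_lux: float, outdoor_lux: float) -> float:
    """Daylight factor (%) = indoor horizontal illuminance / simultaneous
    outdoor unobstructed horizontal illuminance * 100 (overcast sky)."""
    if outdoor_lux <= 0:
        raise ValueError("outdoor illuminance must be positive")
    return indoor_lux / outdoor_lux * 100.0

# Illustrative values only (not measurements from the UIGM building):
# 352 lux indoors against 10,000 lux outdoors is a daylight factor of about 3.5%.
print(f"DF = {daylight_factor(352.0, 10_000.0):.1f}%")
```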
Artificial lighting design is defined by the illumination method, type, pattern, and planning calculation [9]. There are several standards for the illumination level of working spaces, depending on the kind of working space designated. The illumination level for an office working space ranges between 300 and 350 lux according to the Indonesian National Standard (SNI) [9]. To obtain visual comfort, the minimum illumination level should be achieved. The illumination level depends on the flux from daylight and artificial light. Daylight flux is affected by several factors, namely orientation, openings, and sky condition, while the artificial light is obtained from the electrical lighting luminaires and their arrangement. A working space in an office building should be well lit as one of the requirements of visual comfort that supports work performance. Working in a well-lit environment will increase the endurance of workers, especially those working with computers and on diligent tasks. Models The latest asset addition to the Indo Global Mandiri University campus in Palembang is a twelve-storey building called Gedung C. The building was finished at the end of 2019 and started being used in early 2020. Facing northeast, this 12-storey building is dedicated to students, Faculty members, the management of the university, and the foundation management. The first until fifth floors consist of classrooms, computer laboratories, workshop rooms, and consultancy rooms. Faculty working spaces are on the seventh to tenth floors, while the eleventh is dedicated to university and foundation management. Placed on the tenth floor, the lecturers and staff members of the engineering faculty are clustered based on departments. Varying from five to ten members, the working spaces of faculty members are divided into rooms. From figure 1, the floor plan of the tenth floor is divided into 13 rooms. Faculty members are in rooms 2-3, 5, 6, 8, and 9, which are used as working space, while rooms 1, 4, 7, 10, and 11-13 are circulation and services areas. The rooms that will be evaluated are the rooms allocated for working space. The measurement of illumination level that is analyzed is the average illumination level for each room, and the artificial lighting arrangement is placed as it is in the real room. The object of measurement is the horizontal plane. The working plane is the standard 80 cm above the floor. Finding and discussion The evaluation of the illumination level of daylight and artificial light utilizes lighting software. For the purpose of this research, the conditions evaluated are 8 a.m., 12 p.m., and 4 p.m.; the date of the evaluation is the equinox, when the sun is over the imaginary line of the equator. Figure 2 presents the results of the simulation with daylight only, while figure 3 presents the results of the simulation of daylight combined with artificial light on. Figure 2 and table 1 show that with only daylight, the illumination of the working spaces does not meet the standard requirement. The average illumination levels at the three different times range between 36.4 and 352 lux. From the figure shown, at 8 a.m. all rooms are under-lit; at 12 p.m. one of the rooms (room 8) is well lit (352 lux); and in the third simulation (4 p.m.) all the rooms are under-lit. This condition occurs because the illumination level depends on the sky condition. The brightest condition is at 12 p.m. The room that is illuminated well at 12 p.m. has more openings than the other rooms. The second simulation adds the artificial lighting arrangements as designed. The luminaires are T8 16 W with cool daylight color (6500 K). 
The same times are simulated: 8 a.m., 12 p.m., and 4 p.m. The results are shown in figure 3 and table 2. The average illumination level within the rooms varies between 154 and 575 lux. It has increased significantly, but not all rooms reach the required standard illumination level: four of the six rooms have an under-lit illumination level. The simulation shows that the illumination generated in the working area is lower than the standard value designated by SNI, which is 350 lux, as seen in figure 3 and table 1, within the designated hours. The artificial light designed for the area increases the illumination, but not enough to reach the standard illumination level; the increase is significant but not all rooms are well lit. Conclusion Although the illumination obtained in the working space was lower than the standard value, the visual comfort for the users was adequate. But for further use and health, the minimum standard should be provided. The effort to meet the minimum requirement can be made by changing the luminaires, the arrangement of the luminaires, or the arrangement of the working space. Additionally, this research only focuses on the measurement of illumination level from software simulation. Other aspects, such as user satisfaction, should be involved in future studies. Optimized daylight for minimum energy consumption in artificial lighting could be the proposed recommendation.
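To make the pass/fail comparison against the SNI office threshold discussed in the findings concrete, the sketch below checks average working-plane illuminance per room against a 350 lux target. The room values are hypothetical placeholders chosen only to echo the reported range (154-575 lux); they are not the simulated data, and the 350 lux constant is the SNI figure cited in the literature review.

```python
SNI_OFFICE_LUX = 350.0  # assumed minimum average illuminance for office work (SNI)

def check_rooms(avg_lux_by_room: dict[str, float], target: float = SNI_OFFICE_LUX) -> None:
    """Report which rooms meet the target average illuminance on the working plane."""
    for room, lux in sorted(avg_lux_by_room.items()):
        status = "well-lit" if lux >= target else "under-lit"
        print(f"{room}: {lux:6.1f} lux -> {status}")

# Hypothetical combined daylight + artificial averages, for illustration only.
check_rooms({"room 2": 154.0, "room 3": 210.0, "room 5": 305.0,
             "room 6": 352.0, "room 8": 575.0, "room 9": 330.0})
```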
Exercise test in patients with asymptomatic aortic stenosis – clinically useful or not? Introduction: Aortic valve replacement (AVR) is recommended for symptomatic patients (pts) with severe aortic stenosis (AS). In asymptomatic AS (AAS), exercise testing (ET) is recommended, but its use is controversial and varies among practicing clinicians. Objectives: The aim of our study was to assess the importance of ET in AAS pts. Patients and methods: 89 pts with AAS (53 men; age 59.5 yrs) underwent 244 symptom-limited ETs. Results: All ETs were clinically negative. During a median follow-up of 22(12) months, 39 pts (22 men) became symptomatic (AVR group). This group was compared with the 50 asymptomatic non-AVR pts. In multivariable Cox analysis, the presence of a maximal HR below 85% of target (THR<85%) during ET was significantly related to AVR (p=0.01). After adjustment for the use of beta-blockers (BB) this was not statistically significant (p=0.08). In the BB subgroup, THR<85% was significantly related to AVR in univariable Cox analysis, hazard ratio 2.2 (1.07-4.9), p=0.03, and after adjusting for age (p=0.047). This relationship was not observed in the group not treated with BB. Conclusion: In patients with asymptomatic AS, the exercise test is safe, but in our group of patients the results were not crucial in the decision of AVR. Patients treated with beta-blockers who did not achieve 85% of the predicted maximal heart rate had a higher probability of AVR. The influence of beta-blocker treatment on the decision of AVR in this small group of patients needs further investigation. Introduction Aortic stenosis (AS) is one of the most common heart valve diseases in developed countries. Studies investigating the natural history of AS in adults show that as the stenosis increases, compensatory mechanisms fail and the symptoms (dyspnea, angina, syncope, and arrhythmias) start to occur [1,2,3,4]. Once symptoms develop, the prognosis worsens [5,6]. Aortic valve replacement (AVR), either surgical or transcatheter, is recommended by current guidelines for symptomatic patients with severe AS [3,4]. In asymptomatic AS (AAS) patients with preserved left ventricular (LV) function, defined as an ejection fraction (EF) above 50%, the benefit of prophylactic AVR is still unproven and the optimal timing of intervention remains controversial [3][4][5][6][7][8][9][10][11][12]. International guidelines recommend exercise testing (ET) to unmask pseudo-asymptomatic patients and those not self-reporting symptoms. In the past, ET was contraindicated in pts with severe AS because of concerns about life-threatening complications [3,4]. Nowadays, ET is still absolutely contraindicated in patients with symptomatic severe AS. As studies over the past 15 years have shown, in patients with AAS, ET supervised by an experienced cardiologist is safe and, based on the ESC and ACC/AHA guidelines, should be prognostically useful [13][14][15][16][17]. In practice, the use of ET in asymptomatic AS is controversial and varies among practicing clinicians [15,16,18]. The aim of our study was to assess the safety and tolerability of ET in asymptomatic severe AS patients and to attempt to answer the question of whether standard ET is still of important clinical value in this group of patients. Patients and Methods We prospectively included 120 consecutive patients from the Outpatient Valve Disease Department with a diagnosis of asymptomatic, significant AS. Severe AS was defined by an aortic valve area (AVA) <=1.0 cm2, a mean transvalvular pressure gradient of at least 40 mm Hg, and EF>50%. 
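As a concrete restatement of the echocardiographic definition above, the minimal sketch below simply encodes the stated thresholds. The helper function is hypothetical and not part of the study protocol, and it assumes the 40 mm Hg mean gradient acts as a lower bound, in line with standard severity grading.

```python
def is_severe_as_with_preserved_ef(ava_cm2: float, mean_gradient_mmhg: float,
                                   ef_percent: float) -> bool:
    """Severe AS with preserved LV function as defined in the study:
    AVA <= 1.0 cm^2, mean transvalvular gradient >= 40 mm Hg, EF > 50%."""
    return ava_cm2 <= 1.0 and mean_gradient_mmhg >= 40.0 and ef_percent > 50.0

# Example: AVA 0.9 cm^2, mean gradient 45 mm Hg, EF 60% meets the definition.
print(is_severe_as_with_preserved_ef(0.9, 45.0, 60.0))  # True
```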
The inclusion criterion was the absence of symptoms, both major (dyspnea, angina pectoris, syncope) and minor (dizziness, weakness, fatigue, exercise intolerance). Exclusion criteria were: predominant aortic regurgitation or more than mild mitral/tricuspid regurgitation/stenosis, a history of coronary artery disease (myocardial infarction, CABG, PCI), and comorbid disease associated with symptoms that could interfere with clinical evaluation and preclude the performance of an exercise test (e.g. uncontrolled hypertension, disabilities, etc.). Hypertension was diagnosed on the basis of a previous physician diagnosis (with medication) or blood pressure values ≥140 mm Hg systolic or ≥90 mm Hg diastolic on 2 visits [18]. Diabetes was diagnosed on the basis of a previous physician diagnosis (with medication) or a fasting blood glucose level ≥7 mmol/l in ≥2 blood samples [18]. Finally, the lack of symptoms was confirmed in 96 patients: after detailed examination and interview, twenty-four patients were classified as symptomatic. One patient was excluded from the study after the first test. Six patients declined to participate in the study and to undergo AVR if necessary. Eighty-nine patients diagnosed as truly asymptomatic AS agreed to participate in the study. The group consisted of 36 women and 53 men. The mean age was 59.5(11.7) yrs (range 25-77 yrs). All patients were informed about the procedures, benefits, and risks involved in participating in the study. Informed consent was obtained from each patient and the study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki as reflected in a priori approval by the institution's human research committee. Patients were examined every 6 months (symptoms; echo and exercise test in asymptomatic patients). The follow-up was stopped at a predefined time (31.12.2017) and the maximal follow-up was defined as 36 months.

Transthoracic echocardiography

The standardized examination comprised transthoracic echocardiograms. The severity of AS, LV wall thicknesses, chamber dimensions, and EF were measured according to the prevalent European and United States guidelines.

Exercise testing

A symptom-limited ET was performed on a bicycle ergometer, monitored by a cardiologist, according to the recommendations [19]. Every minute, patients were asked about their exhaustion (using the modified 0-10 Borg scale). The initial workload was 50 watts (W), with a gradual increase of 50 W every 3 minutes. The ET was conducted until patient exhaustion (i.e. level 7 on the modified Borg scale). Target heart rate (THR) was calculated according to the formula 220 minus age; the submaximal frequency corresponded to 85% of this value. The test was to be stopped if predefined safety criteria were met (e.g. limiting symptoms, a fall in blood pressure, or complex arrhythmias).

End points

The end point was defined as the decision to perform AVR.

Statistical analysis

Statistical analysis was performed using SPSS version 11.0. Values are given as mean and standard deviation (SD) for continuous variables and as percentages for categorical variables. The data were tested for normality with the Kolmogorov-Smirnov test. Comparisons between groups were performed with the unpaired t test, the χ² test, and the Mann-Whitney test. Correlates of the end point were identified by multivariable Cox regression models and presented as hazard ratios with 95% confidence intervals. In the multivariable analysis we included the parameters that achieved p<0.1 in univariable analysis, but no more than four. In univariable analysis, more than 4 variables had p<0.1.
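As an illustration of how the THR<85% variable used in these analyses is derived (the numbers here are illustrative, not study data): for a 60-year-old patient, THR = 220 - 60 = 160 bpm and 85% of THR = 0.85 × 160 = 136 bpm; a patient who stops because of fatigue at a maximal heart rate of 130 bpm would therefore be classified as THR<85%, whereas one reaching 140 bpm would not.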
It is well known (and clearly seen in our results) that age was the strongest parameter related to aortic valve replacement, so it was always included as the "basic" variable. Variables such as hypertension, hyperlipidemia and statin use were strongly age related, which we verified first. These variables were no longer significant in multivariable analysis when age was one of the variables. The variables Vmax, PAG and xAG are mathematically related to each other, so we took only one of them, PAG, the one most frequently used in publications. The results of the exercise test (METs, heart rate) are strongly age dependent. The percentage of the target heart rate and the presence of THR<85% are not, so we took one of them.

Results

A history of hypertension was noted in 56 patients, 26 patients had dyslipidemia and 13 were diabetic. During follow-up, 39 patients, 22 of them men, became symptomatic (this was recorded during periodic visits, before the next exercise test) and the decision of AVR (AVR group) was made by the heart team. Figure 1 presents the details of follow-up. We performed 244 tests (Table 1). No significant differences were observed between consecutive tests performed every 6 months. All tests were terminated because of patient fatigue (7/10 on the Borg scale) or achievement of 100% of THR, and all were clinically negative, without a fall in blood pressure or complex arrhythmias. The AVR group was compared with the 50 remaining patients, the non-AVR group (Table 2). Patients who remained asymptomatic had a longer mean follow-up time, were younger, had less severe AS, less often had hypertension and were more often without medication. When we compared the first tests performed in AVR patients with the non-AVR group, we found that the AVR group was characterized by a lower exercise capacity in METs and a lower heart rate during maximal effort. This was mostly caused by the age differences. Patients from the AVR group more frequently became fatigued before they reached the age-adjusted 85% of THR (Table 3). This parameter was significant in univariable Cox analysis (Table 4). This might suggest a new clinical parameter, an equivalent of AS symptoms: the inability to achieve 85% of the age-adjusted THR during exercise. However, after adjustment for the use of beta-blockers this was not statistically significant (p=0.08). We observed very interesting results when we analyzed patients treated and not treated with beta-blockers separately. Among the 45 patients treated with beta-blockers, 28 became symptomatic, and 16 (57%) of them did not reach 85% of THR. Among the 17 patients on beta-blockers who remained asymptomatic, only 5 (29%) did not achieve it. In this beta-blocker subgroup, THR<85% was significantly related to AVR in univariable Cox analysis (hazard ratio 2.2, 95% CI 1.07-4.9, p=0.03). This parameter was still significant after adjusting for age (p=0.047). This relationship was not observed in the group of 44 patients not treated with beta-blockers. Eleven of the 44 were finally qualified for AVR; only one of them had THR<85%. Among the remaining 33 asymptomatic patients, 7 did not reach 85% of THR (Figure 2).

Discussion

Despite advances in the diagnosis of valvular heart disease, the indication for valve replacement in patients with significant asymptomatic AS is one of the most difficult clinical problems. Supporters of surgical treatment point out that, even while asymptomatic, patients with severe AS have a poor prognosis with a high event rate, and that early elective surgery should be recommended based on observational studies [8,9,10,11,12,20,21].
It is reported that approximately half of the patients diagnosed with severe AS do not report symptoms at the initial diagnosis [3,4,22]. On the other hand, it is known that 3-11% of patients die soon after the onset of symptoms, before AVR can be performed [23]. Traditional symptom-limited exercise testing should be helpful in determining whether patients who do not report symptoms are truly asymptomatic. In a retrospective analysis of prospectively collected data, Saeed et al. in 2018 found a 1-year event-free survival of 87%±3% in patients who were asymptomatic on ET, compared with 66%±4% in those with symptoms revealed during the test [24]. Similar data were presented in the meta-analysis by Rafique et al.; they found that asymptomatic patients with abnormal results on ET had an eight times higher risk of cardiac events during follow-up and a 5.5 times higher risk of sudden death [25]. In 2017, Redfors et al. summarized 20 publications on stress testing in AAS [15]. They presented a report on the available data on stress testing in AS and its potential role in decision making about the optimal timing of AVR. Only 7 of the 20 publications concerned the treadmill stress test, and there were no cycloergometric tests; the rest were stress echocardiography and cardiopulmonary testing. Most of the assessed groups consisted of severe but also moderate AS. The proportion of patients with an abnormal stress test ranged from 15% up to as much as 67%. The authors concluded that a positive ET was the strongest predictor of developing symptoms at follow-up. There was no explanation of why the patients with positive stress tests were not referred for surgery as recommended by the guidelines. The results of our study are in complete contrast to those presented above. All of our tests were clinically negative, and no symptoms were reported in any of the 244 tests. The lack of a clinically positive ET in our group may be associated with a very careful selection of patients. The study group of 89 patients was selected from 120 patients referred as AAS. At the beginning of the interview, all patients denied symptoms. After a very careful examination, 25 of them confirmed symptoms. Perhaps those patients would have been symptomatic during ET had it been performed. The lack of reported symptoms may have various reasons. One of them is the self-limitation of physical activity. Some patients believed that reduced exercise tolerance, shortness of breath, chest pain or dizziness were age-related ailments, or related to smoking, changes in the spine, etc. They had adapted by decreasing their level of activity to avoid symptoms. Patients may also not recognize significant symptoms, often underestimate their severity, and only report them when they become extremely limiting [15]. Assessment of the absence of symptoms is based on information obtained from the patient, but should also be confirmed by the family. The exercise test is contraindicated in symptomatic patients. It may happen, however, that the patient is not telling the truth, as in the case of one of our patients, who denied symptoms because he wanted to remain under the care of his physician. Despite symptomatic (stenocardia, dizziness) significant AS, the stress test was performed without any symptoms reported by the patient and without complications. At the next visit, the patient refused the test and admitted to the symptoms. Despite clinical and echocardiographic progression, the patient refused surgery and died at home of heart failure. He was excluded from the study; he did not, in fact, fulfill the inclusion criteria.
Because of doubts about symptom assessment in the classic exercise test, new risk markers are being sought. Chambers et al. [13] described a new exercise measurement with an additional important prognostic implication: an early rapid rise in heart rate (RR-HR), defined as achieving at least 85% of THR or a >50% increase from baseline within the first 6 minutes. They concluded that RR-HR is a compensatory mechanism to maintain cardiac output. It was associated with symptoms revealed later in the same test and predicted AVR. In previous studies the authors also showed that stroke volume failed at the start of exercise, before symptoms developed, in patients with severe AS [24]. Despite the lack of symptoms during the tests, we also looked for differences in the ET results between the AVR and non-AVR groups. The AVR group was characterized by a lower exercise capacity in METs and a lower heart rate during maximal effort compared with the non-AVR group. This difference can easily be explained by age, because the AVR patients were older. On the other hand, the percentage of maximal heart rate is age-adjusted, and we found that patients from the AVR group more frequently became fatigued before they reached 85% of the predicted THR. A maximal heart rate below 85% of the age-predicted value has also been described as chronotropic incompetence and reported as an important prognostic factor [26]. In our patients it may perhaps represent a kind of equivalent of AS symptoms. In univariable Cox regression analysis, THR<85% was related to a higher probability of AVR, and the significance persisted after adjustment for age. However, after adjustment for the use of beta-blockers this parameter was no longer statistically significant (p=0.08). Patients with asymptomatic AS do not need pharmacological treatment [3,4]; when they become symptomatic, they need surgery. The prevalence of hypertension in patients with AS is up to 50% in some studies; in our younger group it was about 30% [3,4,27]. Hypertension has been shown to accelerate the progression of aortic stenosis and may increase the risk of the disease. Hypertension, by increasing the systemic vascular load, has a negative effect on hypertrophic remodeling in aortic stenosis [27,28]. Antihypertensive treatment in severe AS has been considered a relative contraindication due to the risk of hypotension and hemodynamic collapse; nowadays there is no doubt that hypertension should be treated, although with caution. Antihypertensive treatment with beta-blockers is frequently avoided because of fear of depression of left ventricular function. This is in line with recent clinical practice guideline recommendations, which do not mention beta-blockers [3,4]. On the other hand, recent studies have shown that the use of beta-blockers is safe and may even be beneficial [27,28]. Pharmacological treatment in our group was used to treat hypertension or supraventricular arrhythmias. We found that the presence of hypertension had no influence on survival/AVR, but beta-blocker treatment did. Patients treated with beta-blockers who did not achieve 85% of the predicted maximal heart rate had a higher probability of AVR. This influence of beta-blocker treatment on AVR in this small group of patients needs further investigation. Although ET has been performed in AAS for over 15 years, there are still many open questions that make the use of exercise testing controversial. In the Euro Heart Survey on Valvular Heart Disease, an exercise test was performed in only 5.7% of asymptomatic patients with aortic stenosis [18].
This observation, based on real clinical practice, reflects the current role of the exercise test in this group of patients. Even in the multicenter randomized controlled trials EVOLVED and AVATAR, which compared early AVR to routine care in AAS, a stress test was not used as part of the study protocol to eliminate "pseudo-asymptomatic" patients [29,30]. The definitions of a clinically abnormal stress test in AAS also differ among the reported studies [2,15,16,256]. Up to 20% of patients with AS are unable to perform a stress test due to poor mobility or impaired exercise capacity [21]. Nowadays, AAS may be a different problem from what was observed 10-20 years ago: today, AS patients are elderly, often with multiple comorbidities, and potentially more vulnerable to the hemodynamic derangements associated with severe AS [18,23,25]. Another issue to consider is the very varied physical performance in the analyzed group. Some patients practiced amateur sports (tennis, cycling, climbing), and a large percentage reported that they systematically attended the gym, but there were also patients with a very sedentary "couch" lifestyle. Compared with the general population, patients with AS were a rather low-fitness group. This is a further indication to perform ET in AAS, also to establish the safety of daily physical activities or occupational work. Current guidelines recommend repeat clinical assessment and echocardiography every 6 to 12 months for severe AS, but give no information on whether an ET should be repeated at each follow-up [3,4]. Like us, some authors have stressed the limited usefulness of repeated ET (apart from verifying the absence of symptoms) [16]. In our institution, it is our practice to perform exercise stress testing in asymptomatic patients with aortic stenosis to verify the lack of symptoms.

Study limitations

The cohort was highly selected: the National Institute of Cardiology is dedicated to more difficult cases, so the analyzed population may not be typical of general asymptomatic AS patients. The limited number of patients may have influenced the statistical results. We performed cycloergometric ET, which has not often been reported in such studies, so our results may not be comparable with those of studies based on treadmill stress testing. We also did not perform an ergospirometric test with RER measurement to assess the real level of exercise. It was impossible to assess how long after the previous ET, or how long before the next ET, patients developed symptoms; despite being told to report symptoms immediately, most of them waited for the scheduled visit. It was also difficult to indicate the most frequent symptom: sometimes it was dizziness, sometimes shortness of breath; most patients complained of angina and of more than one symptom. One patient had a cardiac arrest and was successfully resuscitated by the family.

Conclusion

In patients with asymptomatic AS the exercise test is safe, but in our group of patients the results were not crucial in the decision about AVR. Patients treated with beta-blockers who did not achieve 85% of the predicted maximal heart rate had a higher probability of AVR. The influence of beta-blocker treatment on the decision about AVR in this small group of patients needs further investigation.
Contribution statement

Ewa Orłowska-Baranowska: concept of the study, data collection, manuscript preparation. Rafał Baranowski: data collection, data analysis, revision of the manuscript. Tomasz Hryniewiecki: concept revision, revision of the manuscript. All authors accepted the final version of the manuscript.
MDM2 amplification and fusion gene SS18-SSX in a poorly differentiated synovial sarcoma: A rare but puzzling conjunction

The detection of specific alterations by genetic analyses has been included in the diagnostic criteria of the World Health Organization's classification of soft tissue tumors since 2013. The presence of a SS18 rearrangement is pathognomonic of synovial sarcoma (SS). In the context of sarcoma, MDM2 amplification is strongly correlated with well-differentiated or dedifferentiated liposarcoma (DDLPS). We identified one case of poorly differentiated sarcoma harboring both the SS18-SSX2 fusion and MDM2 amplification. The review of the literature showed high discrepancies concerning the incidence of MDM2 amplification in SS: from 1.4% up to 40%. Our goal was to precisely determine the specific clinico-pathological features of this case and to estimate the frequency and characteristics of the association of SS18-SSX fusion/MDM2 amplification in sarcomas. We performed a retrospective and prospective study in 96 sarcomas (56 SS and 40 DDLPS), using FISH and/or array-CGH to detect MDM2 amplification and SS18 rearrangement. None of the 96 cases presented both genetic alterations. Among the SS, only the index case (1/57: 1.7%) presented the double anomaly. We concluded that MDM2 amplification in SS is a very rare event. The final diagnosis of the index case was a SS with SS18-SSX2 fusion and MDM2 amplification as a secondary alteration. If the detection of MDM2 amplification is performed first in a poorly differentiated sarcoma, it may lead to not searching for other anomalies, such as a SS18 rearrangement, and therefore to an erroneous diagnosis. This observation emphasizes the strong complementarity between histomorphology, immunohistochemistry and molecular studies in sarcoma diagnosis.

Introduction

Genetics is of major importance in the recognition and clinical management of soft tissue sarcomas (STS). Over the last decades, it has allowed the creation of a modern classification of STS. 1 Roughly, four groups of STS can be distinguished according to their genetic alterations: STS with translocations leading to the formation of fusion genes, STS with specific amplifications, STS with specific mutations and STS with complex genomes. However, while the discovery of pathognomonic anomalies related to morphological and clinical tumor types has served as a novel basis of diagnosis, prognosis and targeted therapy of STS, their increasing number has also raised novel issues. Notably, it has appeared more and more frequently that a given fusion gene may be present in several, apparently very distinct, entities. [2][3][4][5][6][7][8] Conversely, Next Generation Sequencing (NGS) studies have increased the number of fusion genes related to a same tumor entity. 9,10 This has brought up several points about the role of the fusion genes in the initiation and progression of tumor cells. It also raises the issue of the role and potential prominence of the genomic background of a fusion gene. So far, the SS18-SSX (SYT-SSX) chimeric gene remains robustly associated with synovial sarcoma (SS). [11][12][13] The SS18-SSX fusion has never been described in any STS other than SS. 14 It results from the t(X;18)(p11;q11) translocation that fuses SS18 either with SSX1 or with SSX2. Molecular variants involving SSX4, SS18L1 and NEDD4 are very rare. [11][12][13]15,16 The detection of SS18 and SSX rearrangements can be done routinely by fluorescence in situ hybridization (FISH) using break-apart probes.
Reverse transcription polymerase chain reaction (RT-PCR) and RNA sequencing are also practical methods for the detection of the SS18-SSX fusion gene. [17][18][19] These molecular analyses are useful for confirmation of the histological diagnosis. They are mandatory in challenging cases that can be mistaken for other mesenchymal tumors, such as cellular superficial fibromatosis, solitary fibrous tumor, spindle cell carcinoma, malignant peripheral nerve sheath tumor and Ewing's sarcoma/primitive neuroectodermal tumors. The ubiquitous localization and variable morphologic presentation of SS contribute to these difficulties. 1,17 In a series of 47 SS cases, Oda et al. 20 observed an amplification of the MDM2 gene at a frequency as high as 40%. The amplification of MDM2 was described by other authors in several series of SS, but such an elevated frequency was not confirmed. [21][22][23] Though it can be observed occasionally in other STS, such as intimal sarcoma or parosteal osteosarcoma, MDM2 amplification is strongly related to atypical lipomatous tumors (ALT), well-differentiated liposarcoma (WDLPS) and dedifferentiated liposarcoma (DDLPS). [24][25][26] Whether the amplification of MDM2 is a recurrent or an exceptional feature of SS has to be clearly and definitively established because of its potential impact on molecular diagnosis. MDM2 amplification can be detected routinely either by FISH or by comparative genomic hybridization on arrays (array-CGH). It is mainly used for distinguishing ALT/WDLPS from lipomas, or DDLPS from other poorly differentiated sarcomas. Among the 384 cases of STS or suspected STS included in the GENSARC trial (NCT 00847691), 27 one case harbored both the SS18-SSX2 fusion and MDM2 amplification. We present here the detailed clinical, histological and genetic description of this novel SS18+/MDM2+ case. In addition, in order to specify the frequency and the impact on diagnosis of this double alteration, we have investigated the presence of MDM2 amplification in 56 molecularly confirmed SS (SS18+), as well as the presence of SS18 rearrangement in a series of 40 MDM2-amplified DDLPS (MDM2+).

Index case SS18+/MDM2+

The patient was a 70-year-old man who presented in December 2009 with neuropathy symptoms, pain and alteration of his general condition, notably asthenia and weight loss. Medical examination showed an intramuscular tumor mass of the left thigh that had been noticed one year earlier by the patient. The pathological examination of a biopsy sample led to the suspicion of a poorly differentiated sarcoma. A large surgical excision of a tumor measuring 4 × 2.5 × 2.5 cm was performed. Resection margins were in sano (R0). The results of the microscopic histopathological analysis indicated a poorly differentiated sarcoma. The patient was informed of the possibility of inclusion in the GENSARC study, which focused on the molecular diagnosis of the main sarcoma types. 27 He was included in the study in agreement with the current French law regarding non-interventional studies. Molecular analyses showed the presence of both the SS18-SSX2 fusion and MDM2 amplification. The patient was treated by radiotherapy. No recurrence was detected at his last clinical examination in 2013.
Cohort 1 (SS18+ cases) and cohort 2 (MDM2+ cases)

Fifty-six samples of SS showing SS18 rearrangement from 53 patients (cohort 1; Table 1) and 40 samples of DDLPS showing MDM2 amplification (cohort 2; Table 2), collected between May 1992 and September 2019, for which a sufficient amount of tumor material was available for additional molecular analyses (MDM2 amplification status for cohort 1 and SS18 rearrangement status for cohort 2), were retrieved from the files of the Laboratory of Solid Tumor Genetics of Nice University Hospital and of the Pathology Department of Timone Hospital in Marseilles. The design of the study and the protection of patients' data were in accordance with the local institutional rules, the current French legislation, and the European Union 2016/679 General Data Protection Regulation. In cohort 1, there were 30 male and 23 female patients whose ages ranged from 13 to 89 years. Tumor locations were: limbs (29/56 cases), retroperitoneum (3/56 cases), head and neck (5/56 cases) and trunk wall (14/56 cases). For five cases, data on tumor localisation were unavailable. Forty-five tumors were primary, three were lung metastases and three were local recurrences. For five cases this information was not recorded in the patients' files. The SS18 rearrangement involved SSX1 in 26 cases, SSX2 in 15 cases and SSX4 in one case. In 14 cases the partner gene of SS18 could not be determined. In cohort 2, there were 24 male and 16 female patients whose ages ranged from 38 to 93 years. Tumors were located in the limbs (7/40 cases), the retroperitoneum (26/40 cases) and other locations (7/40 cases). For one case, data on tumor localisation were unavailable. Thirty-eight tumors were primary, one tumor was a metastasis and for one case this information was not available.

Array-CGH

Index case: DNA extraction from a frozen sample of the surgical excision was done using a standard phenol-chloroform procedure (Phase Lock Gel Light, Eppendorf, Hamburg, Germany). DNA purity and concentration were evaluated using a NanoDrop spectrophotometer (Thermo-Fisher, Waltham, MA) (absorbance for an optimal labeling yield: A260/A280 ≥ 1.8 and A260/A230 ≥ 1.9) and the Qubit dsDNA BR Assay Kit (Invitrogen, Waltham, MA), respectively. Human reference DNA was extracted from the blood of a healthy control. Labeling of tumor DNA (1000 ng) with Cyanine 5 (Cy5) and of reference DNA with Cyanine 3 (Cy3) was followed by purification, co-hybridization in equal quantity (1 µg) to NimbleGen arrays (Roche NimbleGen, Madison, WI) and washing according to the manufacturer's recommendations. Arrays were then scanned and analyzed. For the other samples, labeling of tumor DNA with Cy5 and of reference DNA with Cy3 was followed by purification and co-hybridization in equal quantity on a genome-wide oligonucleotide-based microarray, SurePrint G3 Human CGH 180k (average resolution 13 kb) (Agilent, Santa Clara, CA). Hybridization and washing were performed as specified by the manufacturer (Agilent). Hybridized slides were scanned using a SureScan scanner (Agilent) and image analysis was performed using CytoGenomics software (v2.9.2.4, Agilent). Results were provided according to hg19 (GRCh37, Genome Reference Consortium Human Reference 37; www.genome.ucsc.edu/). Gene amplification was defined by a log2 ratio Cy5/Cy3 > 1.1 and gain was defined by a log2 ratio Cy5/Cy3 between 0.2 and 1.1.
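For illustration, these log2 thresholds can be translated into copy-number ratios: a log2(Cy5/Cy3) of 1.1 corresponds to a tumor/reference signal ratio of about 2^1.1 ≈ 2.1, and a value above 1.3, as reported below for the 12q13-q21 amplicon of the index case, corresponds to more than 2^1.3 ≈ 2.5 times the reference signal.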
RNA sequencing (RNA-seq)

Total RNA was extracted from a snap-frozen sample of the index case using the Trizol/chloroform method (Thermo Fisher Scientific, Carlsbad, CA) and its quality was assessed by fragment analysis on a TapeStation 4200 (Agilent Technologies, Carlsbad, CA). Libraries were prepared with the TruSeq Stranded mRNA library kit (Illumina, San Diego, CA): one microgram of total RNA was purified, reverse transcribed, fragmented, indexed and amplified for the preparation of the RNA libraries. Libraries were sequenced with 2 × 150 bp reads using a NextSeq 500 High Output v2 kit on a NextSeq 500 (Illumina). Expression data were generated using the STAR aligner (v2.5.3a) and count matrices using FeatureCounts (v1.6.0). The count matrices were normalized in Fragments Per Kilobase per Million mapped reads (FPKM). The data were used for clustering analysis (Ward method, with Spearman or Pearson correlation, with or without interquartile-range filtering) and for boxplot generation. A fusion analysis was performed from the FastQ files with two approaches: 1) a targeted analysis using dedicated reference fusion sequences including the known fusions of each tumor type; 2) an exploratory analysis using five fusion-finder tools (TopHat-Fusion v2.0.6, deFuse v0.6.0, STAR-Fusion v2.5.3, FusionCatcher v1.00 and FusionMap). The fusion interpretation combined the results of the targeted analysis and those of the exploratory analysis.

Histological and molecular features of the index case SS18+/MDM2+

Histologically, the tumor was composed of a monotonous proliferation of monomorphic spindle cells arranged in long, highly cellular fascicles. The cells had scarce cytoplasm and ovoid hyperchromatic nuclei. Stromal changes were focally abundant, with collagen bundles or perivascular myxoid nests containing histiocytes. These stromal changes were devoid of tumor cells, as confirmed by the negativity of HMGA2. There was no epithelioid component. Peripheral striated muscle fibres were infiltrated by tumor cells. A component of scattered mature adipocytes was also present, mostly at the periphery but also in the centre of the lesion (Fig. 1A-D). Immunostaining of the spindle tumor cells showed classical features of synovial sarcoma: a weak positivity for EMA and AE1-AE3 (Fig. 1E and F). A heterogeneous nuclear positivity for MDM2 was detected in 5% of the spindle cells (Fig. 1H and I), while a diffuse positivity for CDK4 and HMGA2 was observed in approximately 100% and 70% of cells, respectively (Fig. 1J and K). On the contrary, the adipocytes did not express MDM2, CDK4 or HMGA2. Moreover, expression of SMARCB1 and H3K27me3 was conserved. Altogether, the complex morphology of the tumor, the presence of a mature adipocytic component and the immunohistochemical features led to a diagnosis of poorly differentiated sarcoma, possibly a SS or a DDLPS. MDM2 amplification and SS18 and SSX2 rearrangements were detected by FISH analysis (Fig. 2A and B). MDM2 amplification was observed in 78% of tumor cells. Array-CGH showed a high-level amplification (average log2 ratio Cy5/Cy3 > 1.3) of a large chromosomal segment, from nucleotide position 50,927,411 up to 74,029,436, at 12q13.13-12q21.1. This amplified region notably included MDM2, CDK4, HMGA2, DYRK2, FRS2 and CPM. A gain of chromosome 21, a loss of 12q21.3-qter and a loss of chromosome 13 were also detected (Fig. 2C). RNA-seq analysis confirmed both the presence of the SS18-SSX2 fusion (Fig. 3A) and the MDM2 amplification (Fig. 3B). Firstly, five different algorithms were able to identify a fusion of exon 10 of SS18 (5′) with exon 6 of SSX2 (3′).
A targeted analysis using dedicated reference fusion sequences including the known fusions of many sarcoma subtypes showed 103 split reads encompassing the fusion point. The exploratory analysis detected the same overexpressed fusion with four of the five fusion-finder tools (tool: split/spanning reads: TopHat-Fusion: 292/5; deFuse: 385/183; STAR-Fusion: 52/5; FusionCatcher: 27/434). Secondly, the FeatureCounts matrices normalized in FPKM allowed relative expression levels to be assessed. MDM2 showed a significant high-level overexpression in the index case (log2 FPKM = 6.38) in comparison with synovial sarcoma (log2 FPKM = 3.60; n = 7), a difference of about 2.8 log2 units, i.e. roughly a 7-fold higher expression. These data were also used for clustering analysis and showed that the transcriptomic profile of the index case clustered perfectly within the SS profiles (Fig. 4). In addition, TLE1 immunostaining was performed and showed a diffuse positivity (Fig. 1G), consistent with a SS.

Detection of SS18 rearrangement in a series of 40 MDM2+ DDLPS (cohort 2)

In 32 out of the 40 cases, no structural rearrangement of SS18 was detected by FISH (Table 2). A few extra copies of the non-rearranged gene were observed in cases 75, 78, 83, 88 and 91, while amplification of SS18 was detected in rare cells from case 62. In case 87, we detected an unbalanced rearrangement of SS18 (gain of the 5′ region), while no rearrangement of SSX1, SSX2 or SSX4 was observed. Further investigation of case 87 using array-CGH confirmed the MDM2 amplification previously detected by FISH and showed amplification of a large segment at 12q13-15, notably containing CDK4 and FRS2 in addition to MDM2. Amplification of 18q11, close to the SS18 gene, was also observed (Fig. 5A-C). We concluded that the alteration surrounding SS18 was secondary to breakages generated by the 18q amplification and was not related to an oncogenic SS18-SSX fusion gene.

Discussion

The main driver alteration of SS is the SS18-SSX fusion. Although SS presents a low rate of mutations and copy number variations, 28 so-called "secondary" genetic structural or quantitative anomalies have been reported in addition to the SS18-SSX fusion. A few non-recurrent mutations affecting oncogenes and tumor suppressor genes, such as TP53, HRAS and PTEN, have been described. 20,29,30 The clinical consequence of these mutations in the background of the SS18-SSX fusion is not clearly established yet. Losses and gains of chromosomal segments have been reported in conventional karyotypic and CGH studies of SS. 21,22,31,32 The most frequently described anomaly was a partial or complete gain of chromosome 8. Low-level gains of 12q were also reported. 21,33 It was noticed long ago that translocations in STS are frequently associated with secondary chromosomal alterations, ranging from a single extra chromosome up to multiple alterations. [34][35][36][37][38][39][40][41] The clinical significance, notably the prognostic value, of such additional chromosomal alterations has been under debate for years. 22,[34][35][36][37][38][39][40][41][42][43][44] Recent data showed that tumors harboring both fusion genes and genomic instability have an aggressive outcome. 45 However, a few secondary alterations do not always reflect genuine genomic instability. Genomic amplification is a remarkable genomic feature that can be a diagnostic marker when recurrent in a tumor type, or a marker of instability and aggressiveness. The presence of genomic amplification in addition to fusion genes has been reported in some STS, including SS.
In SS, amplification of MDM2 has so far been described in 23 patients. [20][21][22][23]33 The frequency of this association of two alterations (i.e. SS18 fusion and MDM2 amplification), each individually known to be representative of a specific entity, has to be determined. Indeed, it was as high as 40% in the series of 47 SS studied by Oda et al., 20 while much lower (1.4% up to 11%) in other series: one out of 9 cases (11%), 23 one out of 13 cases (7%), 22 one out of 67 cases (1.4%), 33 and one out of 69 cases (1.4%). 21 In a comprehensive and integrated genomic characterization of six types of sarcomas, only one case of the 10 SS studied showed a low-level gain of 12q13-15 containing the DDIT3, CDK4, HMGA2, FRS2 and MDM2 genes. 28 In these previous series, the diagnosis of SS was mostly made on morphological grounds. Only part of the cases benefited from molecular confirmation: 21 out of 47 cases in the series from Oda et al. 20 and all 13 cases in the series from Nakagawa et al. 22 On the basis of the observation of a novel SS18+/MDM2+ case, we subsequently aimed at exploring more deeply both the clinicopathological characteristics and the frequency of such SS18+/MDM2+ tumors. Indeed, the conjunction of SS18 rearrangement and MDM2 amplification raises diagnostic issues with consequences for decision-making and treatment strategies. Notably, the finding of MDM2 amplification in a soft tissue tumor is never meaningless and deserves particular attention. For instance, amplification and expression of MDM2 have also been detected in malignant peripheral nerve sheath tumor (MPNST), a tumor that can be hard to distinguish from both DDLPS and SS. 46 As in the case of SS presented here, Makise et al. 46 had to deal with MDM2 amplification/overexpression in a case of MPNST. Detection of H3K27me3 expression was a helpful marker in this context. Indeed, PRC2 alteration leading to H3K27me3 deficiency has been reported as the molecular hallmark of MPNST. In our index case, the conservation of H3K27me3 expression was consistent with a tumor other than MPNST. However, Makise et al. 46 showed that some DDLPS display a complete loss of expression of H3K27me3, suggesting that this marker should not be used alone for an accurate distinction of MPNST from liposarcoma. Altogether, the diagnosis has to be made according to the whole set of clinical, histological and molecular features. In addition, although a reduced expression of SMARCB1 is not specific to SS, it can also help to distinguish SS from its histological mimics. 47 Indeed, disruption of the mSWI/SNF (BAF) complex by the SS18-SSX fusion induces a loss of expression of SMARCB1. 48 Immunohistochemical detection of this reduced expression is therefore often observed in SS. However, in our index case, SMARCB1 expression was conserved. In the present case, the histological features were those of a poorly differentiated sarcoma exhibiting overlapping morphologic features between SS and DDLPS, with an unusual component of scattered intermingled mature adipocytes. In contrast to the SS18-SSX fusion, which has been described as fully specific to SS, MDM2 amplification has been reported in a variety of STS other than WDLPS/DDLPS and even in tumors other than STS. Therefore, the SS18-SSX fusion was considered as the relevant alteration of this case, MDM2 amplification being only a secondary alteration. Moreover, the diffuse immunohistochemical expression of TLE1 as well as the RNA clustering were in favor of a SS.
The treatments of primary SS and DDLPS do not differ substantially: both consist of surgery followed by radiotherapy. 49 In contrast, the recognition of SS is crucial for guiding treatment in metastatic patients. SS tends to have a better survival rate and a higher chemosensitivity than other STS. 50,51 Doxorubicin and ifosfamide are recommended in advanced or metastatic SS. [51][52][53] Recently, the multikinase inhibitor pazopanib became the first targeted agent to be approved for the treatment of advanced SS after failure of chemotherapy. 54 An accurate molecular diagnosis of SS is also important to allow access to clinical trials. Indeed, several molecules are currently under development for the treatment of SS, such as Wnt inhibitors, β-catenin inhibitors and immunotherapy. [55][56][57] In contrast, chemotherapy regimens are generally ineffective for metastatic DDLPS 58 and targeted treatments have been disappointing. For instance, potent and selective MDM2 inhibitors such as nutlins appeared ineffective because of their high toxicities. 59,60 Several clinical trials evaluating MDM2 inhibitors alone or in combination with anti-CDK4 molecules are ongoing. 61-63 The MDM2-p53 interaction may also constitute an interesting target. 64 Since multiple molecular analyses are not usually performed in routine practice, the detection of MDM2 amplification is often carried out first in the context of a poorly differentiated sarcoma. In such a situation, the observation of a high-level amplification of MDM2 might not be followed by further analyses, leading to an SS18 rearrangement being overlooked. This may have an unfavourable impact on metastatic patients. The frequency of the SS18+/MDM2+ association therefore has to be precisely determined in order to be aware of potential misdiagnoses of SS. In some cases, recognition of a SS might be difficult because the immunoprofile may show some variations. Moreover, the diagnostic value of some markers is limited by their lack of sensitivity and/or specificity; in particular, TLE1 shows a good sensitivity but a limited specificity in the diagnosis of SS. Recently, Baranov et al. 65 described a novel antibody showing 95% sensitivity and 100% specificity for the SS18-SSX fusion. This marker is likely to become a useful IHC tool, especially in centers with limited access to molecular biology, or as an additional element in challenging cases. A poorly differentiated histology is more frequent in SS of the elderly and can be a source of diagnostic difficulties. 66 The location of the tumor is an element that has to be taken into account: though SS may arise in any anatomical site and a significant proportion of DDLPS are located in the limbs, a retroperitoneal location is more consistent with a DDLPS than with a SS. Our study is, to the best of our knowledge, the only one to investigate specifically both SS18 rearrangements in DDLPS and MDM2 amplification in molecularly confirmed SS in a large series of patients. Only the index case presented both alterations. The SS18+/MDM2+ frequency in our series was 1%, closer to the 1.4% found by Szymanska et al. 33 than to the 40% found by Oda et al. 20 This discrepancy is probably due to the methods used for the detection of MDM2 amplification (differential PCR 20 versus CGH or FISH 35) as well as to the threshold chosen for the definition of MDM2 amplification.
Possibly, the term "amplification", when related to the differential PCR method, may be confusing when compared with its use in genomic studies, where it usually refers to more than 8 copies per cell of a given gene. The term "gain" might be more appropriate for an extra copy number below 8 per cell.

Conclusions

Our results and the review of the literature therefore indicate that MDM2 amplification is a very rare event in SS. Whether this rare event has a clinical impact on prognosis remains to be established. MDM2 amplification might act synergistically with the SS18-SSX fusion by promoting TP53 ubiquitination and degradation. 67 Moreover, this observation emphasizes the strong complementarity of clinical data (especially tumor location), histomorphology, immunohistochemistry and molecular studies to perform a more accurate subtyping of sarcomas and to increase our knowledge of these tumors.

Authors' contributions

IDM and FP designed the research study; LMM, BC, ML, GP, AB and CB performed the research; IDM, FP and BDM analysed the data; IDM and FP wrote the paper; BDM and JFM gave technical support.

Declarations of interest

None
Haar states and Lévy processes on the unitary dual group

We study states on the universal noncommutative $*$-algebra generated by the coefficients of a unitary matrix, or equivalently states on the unitary dual group. Its structure of dual group in the sense of Voiculescu allows one to define five natural convolutions. We prove that there exists no Haar state for those convolutions. However, we prove that there exists a weaker form of absorbing state, that we call a Haar trace, for the free and the tensor convolutions. We show that the free Haar trace is the limit in distribution of the blocks of a Haar unitary matrix when the dimension tends to infinity. Finally, we study a particular class of free Lévy processes on the unitary dual group which are also the limit of the blocks of random matrices on the classical unitary group when the dimension tends to infinity.

Introduction

Let $n \geq 1$ and let $U(n)$ be the group of unitary $n \times n$ matrices. As proved in [14], the coefficient $*$-algebra of $U(n)$ generated by the matrix coordinate functions $U \mapsto U_{ij}$ is isomorphic to the commutative $*$-algebra $\mathcal{U}_n$ generated by $n^2$ elements $\{u_{ij}\}_{1\leq i,j\leq n}$ fulfilling the relations which make the matrix $(u_{ij})_{i,j=1}^n$ unitary. The group law of $U(n)$ gives rise to a structure of Hopf algebra $(\mathcal{U}_n, \Delta, \delta, \Sigma)$. Brown introduced in [7] the (noncommutative) $*$-algebra $\mathcal{U}^{nc}_n$, sometimes called the Brown algebra, generated by $n^2$ elements $\{u_{ij}\}_{1\leq i,j\leq n}$ fulfilling the same relations making $(u_{ij})_{i,j=1}^n$ unitary. Even though the Brown algebra is not a Hopf algebra, which seems to limit its study, it is possible to define a structure of dual group $U_n = (\mathcal{U}^{nc}_n, \Delta, \delta, \Sigma)$ in the sense of Voiculescu [28] (see Definition 1.4). We refer to $U_n$ as the unitary dual group, and $\mathcal{U}^{nc}_n$ has to be considered as the algebra of "matrix coordinate functions" on $U_n$. This structure allows one to define naturally five notions of convolution of states on $\mathcal{U}^{nc}_n$ (of "measures" on $U_n$), with respect to the five natural notions of independence: the free convolution, the tensor convolution, and the boolean, monotone and anti-monotone ones. The original motivation for this paper was to understand the existence of Haar states, or absorbing states, for those different convolutions. In the present paper, we prove that, except in the case $n = 1$, there exist no Haar states for those five convolutions. There exist at least two different ways of representing $U_n$. On one hand, Glockner and von Waldenfels proved in [14] that $\mathcal{U}^{nc}_n$ is isomorphic to the complex algebra generated by some concrete operator-valued functions. On the other hand, in [17], McClanahan studies the C$^*$-algebra $C^*(\mathcal{U}^{nc}_n)$ generated by $\mathcal{U}^{nc}_n$ and proves that it is isomorphic to the relative commutant of the matrix algebra $M_n(\mathbb{C})$ in the free product of $M_n(\mathbb{C})$ with the algebra $C(\mathbb{U})$ of continuous functions on the unit circle. In fact, the work of McClanahan also covers the study of a particular state, namely the free product state of the normalized trace on $M_n(\mathbb{C})$ and the Haar measure on $C(\mathbb{U})$ in the previous construction. We give another construction of this particular state and prove that it is a Haar trace for the free convolution on $U_n$, in the sense that it is an absorbing element in the set of tracial states for the free convolution. The proof relies only on the combinatorial aspects of free cumulant theory [20]. We also construct a Haar trace for the tensor convolution on $U_n$, and prove that there exist no Haar traces for the three other convolutions.
One other direction in the understanding of the unitary dual group is the study of free Lévy processes on it, in the sense of [1]. Quantum Lévy processes on quantum groups, bialgebras or dual groups have been intensively studied by Ben Ghorbal, Franz, Schürmann and Voss (see [2,3,11,22,23,31,30]). Very recently, the second author of the present article highlighted a deep link between free Lévy processes on $U_n$ and random matrices in [26]. More precisely, he proved that the $n^2$ blocks of a Brownian motion on the classical unitary group $U(nN)$ of dimension $nN$ converge to the elements of a free Lévy process on $U_n$ when $N$ tends to infinity. A question arises naturally: which free Lévy processes can be obtained in the same fashion? A possible starting point is the general model of Lévy processes on the unitary group $U(nN)$ defined by the first author in [9]. In the present paper, we define a particular class of free Lévy processes on $U_n$ and prove that those processes are indeed the limit of the blocks of the model of [9] when $N$ tends to infinity. The proof of this phenomenon gives a new demonstration of the result in [26]. Moreover, the argument also allows us to construct a random matrix model for the free Haar trace on $U_n$. As a by-product, we prove an embedding theorem which is already well established for the other independences: the realization of every free Lévy process on $U_n$ as a stochastic process on some Fock space.

The paper is organised as follows. In Section 1, we introduce the unitary dual group $U_n$ as well as the different notions of convolution. We also describe a general construction of quantum random variables on $U_n$ which is useful in the other sections. In Section 2, we state and prove Theorem 2.4 about the existence of Haar traces for the free and the tensor convolutions, and their non-existence for the other convolutions. In Section 3, we show that the free Haar trace is the limit of a Haar unitary random matrix in the sense of Theorem 3.3. In Section 4, we introduce the free Lévy processes on $U_n$, consider a particular class of processes which are limits of Lévy processes on the unitary group in the sense of Theorem 4.4, and compute their generators.

Let us first recall the free product of unital $*$-algebras. Given two unital $*$-algebras $\mathcal{A}$ and $\mathcal{B}$, their free product $\mathcal{A} \sqcup \mathcal{B}$ (with identification of the units) comes with two canonical unital $*$-homomorphisms $i_1 : \mathcal{A} \to \mathcal{A} \sqcup \mathcal{B}$ and $i_2 : \mathcal{B} \to \mathcal{A} \sqcup \mathcal{B}$, and it is characterized by the following universal property: for all $*$-homomorphisms $f : \mathcal{A} \to \mathcal{C}$ and $g : \mathcal{B} \to \mathcal{C}$, there exists a unique $*$-homomorphism $f \sqcup g : \mathcal{A} \sqcup \mathcal{B} \to \mathcal{C}$ such that $f = (f \sqcup g) \circ i_1$ and $g = (f \sqcup g) \circ i_2$. Informally, $\mathcal{A} \sqcup \mathcal{B}$ corresponds to the "smallest" $*$-algebra containing $\mathcal{A}$ and $\mathcal{B}$ such that there is no relation between $\mathcal{A}$ and $\mathcal{B}$ except the fact that the unit elements are identified. We usually say that $\mathcal{A}$ is the left leg of $\mathcal{A} \sqcup \mathcal{B}$, whereas $\mathcal{B}$ is its right leg, and, for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$, we denote $i_1(A)$ by $A^{(1)}$ and $i_2(B)$ by $B^{(2)}$. This terminology is particularly useful when we consider the free product $\mathcal{A} \sqcup \mathcal{A}$ of $\mathcal{A}$ with itself, because in this case there exist two different ways of thinking about $\mathcal{A}$ as a subset of $\mathcal{A} \sqcup \mathcal{A}$. Of course, if $\mathcal{A}$ and $\mathcal{B}$ are disjoint, we can avoid this superscript and identify $\mathcal{A}$ with $i_1(\mathcal{A})$ and $\mathcal{B}$ with $i_2(\mathcal{B})$. For $*$-homomorphisms $f : \mathcal{A} \to \mathcal{C}$ and $g : \mathcal{B} \to \mathcal{D}$, we denote by $f \sqcup g$ the $*$-homomorphism from $\mathcal{A} \sqcup \mathcal{B}$ to $\mathcal{C} \sqcup \mathcal{D}$ obtained from the universal property applied to $i_1 \circ f$ and $i_2 \circ g$.

Let $(\mathcal{A}_1, \varphi_1)$ and $(\mathcal{A}_2, \varphi_2)$ be two noncommutative probability spaces. The free product $\mathcal{A}_1 \sqcup \mathcal{A}_2$ can be equipped with five different product states, called respectively the free, tensor independent (or just tensor), boolean, monotone and anti-monotone products of states. We define those five constructions (see [18] for a general study).
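As a simple illustration of the free product (a standard example, not needed in what follows): if $\mathbb{C}[x]$ and $\mathbb{C}[y]$ denote the polynomial $*$-algebras in one self-adjoint variable, then
$$\mathbb{C}[x] \sqcup \mathbb{C}[y] \;\cong\; \mathbb{C}\langle x, y \rangle,$$
the $*$-algebra of noncommutative polynomials in two self-adjoint variables: the units are identified, but no commutation relation between $x$ and $y$ is imposed.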
First of all, we will assume for our unital $*$-algebras a decomposition of vector spaces $\mathcal{A} = \mathbb{C}1_{\mathcal{A}} \oplus \mathcal{A}^0$, where $\mathcal{A}^0$ is a $*$-subalgebra of $\mathcal{A}$. Remark that this decomposition is not necessarily unique, and sometimes does not exist.

Definition 1.2. Let $(\mathcal{A}_1, \varphi_1)$ and $(\mathcal{A}_2, \varphi_2)$ be two noncommutative probability spaces with $\mathcal{A}_1 = \mathbb{C}1_{\mathcal{A}_1} \oplus \mathcal{A}_1^0$ and $\mathcal{A}_2 = \mathbb{C}1_{\mathcal{A}_2} \oplus \mathcal{A}_2^0$. There exist five different states $\varphi_1 * \varphi_2$, $\varphi_1 \otimes \varphi_2$, $\varphi_1 \diamond \varphi_2$, $\varphi_1 \rhd \varphi_2$ and $\varphi_1 \lhd \varphi_2$ on $\mathcal{A}_1 \sqcup \mathcal{A}_2$, called respectively the free, tensor independent (or just tensor), boolean, monotone and anti-monotone product, defined, for all $A_1, \ldots, A_n \in \mathcal{A}_1 \sqcup \mathcal{A}_2$ such that $A_i \in \mathcal{A}^0_{\epsilon_i}$ and $\epsilon_1 \neq \epsilon_2 \neq \cdots \neq \epsilon_n$, by the respective product formulas (see [18]).

The tensor product and the free product do not depend on the choice of the decompositions $\mathcal{A}_1 = \mathbb{C}1_{\mathcal{A}_1} \oplus \mathcal{A}_1^0$ and $\mathcal{A}_2 = \mathbb{C}1_{\mathcal{A}_2} \oplus \mathcal{A}_2^0$, but the other three products do. Positivity follows from a GNS representation. Freeness of random variables in the sense of Voiculescu [27] can be expressed thanks to this notion of free product of states.

Following the point of view of the theory of quantum groups, we consider the $*$-algebra $\mathcal{A}$ as a set of "functions on the dual group $G$", and not as the dual group itself. This terminology of dual group can be ambiguous, and one could prefer the term H-algebra used by Zhang in [33], or the term co-group used by Bergman and Hausknecht in [5]. However, in the following remark, whose understanding is not needed for the rest of the paper, we will see that the duality can be seen as the existence of some particular functor.

Remark 1.5. (1) Let Alg be the category of unital $*$-algebras. The dual category Alg$^{op}$ is the category Alg with all arrows reversed. The definition of a dual group has the immediate consequence that an element $\mathcal{A}$ of the category Alg which defines a dual group $G = (\mathcal{A}, \Delta, \delta, \Sigma)$ has a group structure in the dual category Alg$^{op}$, in the following sense of [8, Chapter 4]: we have the commutativity of all the diagrams obtained from the diagrams defining a classical group by replacing the product by $\Delta^{op}$, the unit map by $\delta^{op}$ and the inverse map by $\Sigma^{op}$. Remark that Alg$^{op}$ is not a concrete category: the morphism $\Delta^{op}$ in Alg$^{op}$ cannot be seen as an actual function from $\mathcal{A} \sqcup \mathcal{A}$ to $\mathcal{A}$. Nevertheless, as shown in [8, Chapter 4], this group structure is sufficient to endow naturally the set $\mathrm{Hom}_{\mathrm{Alg}^{op}}(\mathcal{B}, \mathcal{A})$ of morphisms of Alg$^{op}$ from any unital $*$-algebra $\mathcal{B}$ to $\mathcal{A}$ with a classical structure of group.

(2) As a consequence, for any unital $*$-algebra $\mathcal{B}$, the set $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \mathcal{B})$ of unital $*$-homomorphisms from $\mathcal{A}$ to $\mathcal{B}$ is a group. Moreover, one can verify that $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \cdot) : \mathcal{B} \mapsto \mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \mathcal{B})$ is a functor from Alg to the category of groups Gr. Conversely, if a unital $*$-algebra $\mathcal{A}$ is such that $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \cdot)$ is a functor from Alg to Gr, then $G = (\mathcal{A}, \Delta, \delta, \Sigma)$ is a dual group for some particular $\Delta$, $\delta$ and $\Sigma$ (see [33] for a direct proof, or [8, Chapter 4] for a proof of the dual statement about Alg$^{op}$). We can summarize those considerations by saying that dual groups are in one-to-one correspondence with the representing objects of the functors from Alg to Gr. As a comparison, commutative Hopf algebras are the representing objects of the functors from the category of unital commutative algebras to Gr.

(3) Now, starting from a group $G$, one can ask the following question: is there a unital $*$-algebra $\mathcal{A}$ such that $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \cdot)$ is a functor and $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{A}, \mathbb{C}) \simeq G$?
If yes, there exist $\Delta$, $\delta$ and $\Sigma$ such that $(\mathcal{A}, \Delta, \delta, \Sigma)$ is a dual group, which can be called a dual group of $G$ (it is not unique). One of Voiculescu's motivations in [28] was to show that a dual action of a dual group of $G$ on some operator algebra gives rise to an action of $G$ on that operator algebra. For example, the unitary dual group $U_n$, the principal object of our study defined subsequently, is a dual group of the classical unitary group $U(n) = \{M \in M_n(\mathbb{C}) : M^*M = I_n\}$ in the sense that $\mathrm{Hom}_{\mathrm{Alg}}(\mathcal{U}^{nc}_n, \mathbb{C}) \simeq U(n)$.

As explained in the introduction, the first motivation of this article is a better understanding of Haar states and Lévy processes on dual groups. We know that those objects play a crucial role in the theory of compact quantum groups, and ideas from this theory can be a guide in the study of dual groups. However, let us emphasize, in the following remark, the major differences between dual groups and compact quantum groups.

Remark 1.6. (1) Firstly, as for Hopf algebras, the definition is purely algebraic: we use only the notion of $*$-algebras and we do not need to consider any C$^*$-algebra. One possible direction of research is to consider a more analytic structure on dual groups, which could lead to more powerful results.

(2) The second difference is that the tensor product has here been replaced by the free product. The latter is in some sense "more noncommutative", because in the case of the tensor product the two legs of the product still commute. If we have gained in noncommutativity, we have lost in interpretation: while a classical (compact) group can always be seen as a (compact) quantum group via the isomorphism $C(G \times G) \simeq C(G) \otimes C(G)$, we do not have such an isomorphism any more, and hence classical groups cannot be seen as special cases of dual groups.

(3) Finally, let us also remark that we impose here the existence of $*$-homomorphisms corresponding to the idea of a neutral element and of inverses, whereas in the quantum case one only imposes the quantum cancellation property. We know that this cancellation property, which in the classical case automatically yields groups, is somewhat weaker in the quantum case. If we imposed in the quantum case to have a "neutral element" and "inverses", we would obtain only quantum groups of Kac type.

1.3. Unitary dual group $U_n$.

We now introduce the unitary dual group $U_n$, first considered by Brown in [7], which naturally possesses a structure of dual group. It has to be considered as the noncommutative analog of the classical unitary group.

Definition 1.7. Let $n \geq 1$. The unitary dual group is the dual group $U_n = (\mathcal{U}^{nc}_n, \Delta, \delta, \Sigma)$ where:
• The universal unital $*$-algebra $\mathcal{U}^{nc}_n$ is generated by $n^2$ elements $(u_{ij})_{1\leq i,j\leq n}$ with the relations
$$\sum_{k=1}^n u_{ki}^* u_{kj} = \delta_{ij} = \sum_{k=1}^n u_{ik} u_{jk}^*, \qquad 1 \leq i,j \leq n.$$
• The coproduct is given on the generators by
$$\Delta(u_{ij}) = \sum_{k=1}^n u_{ik}^{(1)} u_{kj}^{(2)},$$
the counit by $\delta(u_{ij}) = \delta_{ij}$ and the coinverse by $\Sigma(u_{ij}) = u_{ji}^*$.

Let us remark that the relations defining $\mathcal{U}^{nc}_n$ can be summed up by saying that $u = (u_{ij})_{1\leq i,j\leq n}$ is a unitary matrix in $M_n(\mathcal{U}^{nc}_n)$. We do not suppose that $\bar{u} = (u_{ij}^*)_{1\leq i,j\leq n}$ is unitary. Indeed, unlike the relations $\sum_{k=1}^n u_{ki}^* u_{kj} = \delta_{ij} = \sum_{k=1}^n u_{ik} u_{jk}^*$, the relations $\sum_{k=1}^n u_{ik}^* u_{jk} = \delta_{ij} = \sum_{k=1}^n u_{ki} u_{kj}^*$ are not preserved by the coproduct $\Delta$, since we cannot simplify expressions like $\sum_{k,p,q} (u_{pk}^{(2)})^* (u_{ip}^{(1)})^* u_{jq}^{(1)} u_{qk}^{(2)}$ to $\delta_{ij}$. A quantum random variable on $U_n$ over the probability space $\mathcal{A}$ is a $*$-homomorphism $j$ from $\mathcal{U}^{nc}_n$ to $\mathcal{A}$ (this reverse terminology is the usual one when dealing with dual objects).
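To give a simple illustration (an example going slightly beyond the text at this point, but consistent with Theorem 2.2 below): for $n = 1$, the algebra $\mathcal{U}^{nc}_1$ is the universal unital $*$-algebra generated by a single element $u$ with $u^*u = uu^* = 1$, so a state on $\mathcal{U}^{nc}_1$ is nothing but the $*$-distribution of a single unitary element. The Haar measure on the unit circle corresponds to the state $h$ determined by
$$h(u^k) = \delta_{k,0}, \qquad k \in \mathbb{Z},$$
where $u^{-k}$ stands for $(u^*)^k$; Theorem 2.2 below asserts that this state is the Haar state on $U_1$ for all five convolutions.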
Of course, a quantum random variable j : U nc n → A yields to a unitary matrix (j(u ij )) 1≤i,j≤n ∈ M n (A), and conversely, for all matrix (A ij ) 1≤i,j≤n ∈ M n (A) which is unitary, there exists a unique * -homomorphism j : U nc n → A such that j(u ij ) = A ij . In a certain sense, U n is one possible formalism to deal with unitary elements of M n (A). The coproduct leads to different notions of convolution, that we sum up below. Let us remark that we can define five different convolutions of states, instead of the unique convolution of states on quantum group, given by the tensor convolution. 1.4. How to build states on U n ? We expose now a general method for defining quantum random variables on U n . Consider the noncommutative probability space M n (C) composed of matrices of dimension n equipped with its normalized trace tr n := 1 n Tr. Let us denote by E ij the usual matricial units (ie, the matrix whose entries are zero, except for the (i, j)-th coefficient which is 1). Let A be a random variable in a noncommutative space (A, φ). One way to consider A as a matrix is to count A as an element of M n (A) ≃ A ⊗ M n (C). In this way, the (i, j)-th block of A is just δ ij A. The starting point of our reflexion is the following: there is another way to consider A as a matrix. Let us denote by E 11 (A ⊔ M n (C))E 11 the * -subalgebra {E 11 XE 11 : It tells us that the (i, j)-th blocks of A viewed as an element of A ⊔ M n (C) can be defined as E 1i AE j1 ∈ E 11 (A ⊔ M n (C))E 11 . We endow the * -algebra E 11 (A ⊔ M n (C))E 11 with the state n(φ * tr n ), where we recall that tr n is the normalized trace on M n (C). Proposition-Definition 1.9. For all unitary random variable U ∈ A, there exists a unique quantum random variable j U : U nc n → E 11 (A ⊔ M n (C))E 11 determined by j U (u ij ) = E 1i U E j1 , which induces a state (n φ * tr n ) • j U on U n . Proof. It follows from the unitarity of (E 1i U E j1 ) 1≤i,j≤n . Indeed, we have and the same for the other relation. The elements j U (u ij ) = E 1i U E j1 have to be considered as the (i, j)-th blocks of U , and, when there is no confusion, we will denote them by U ij . The matrix (U ij ) 1≤i,j≤n seen as an element of E 11 (A ⊔ M n (C))E 11 ⊗ M n (C) ≃ A ⊔ M n (C) is exactly U ∈ A seen as an element of A ⊔ M n (C), which justifies this notation. Remark that we have (U ij ) * = (U * ) ji , and that the notation U * ij is ambiguous. 1.5. Free cumulants. The compression of random variables by a family of matrix units has been considered in different situations, and it is possible to write explicitly the free cumulants of U ij in terms of those of U . Let us first introduce briefly this notion of cumulants (we refer the reader to the book [20]). Let S be a totally ordered set. A partition of the set S is said to have a crossing if there exist i, j, k, l ∈ S, with i < j < k < l, such that i and k belong to some block of the partition and j and l belong to another block. If a partition has no crossings, it is called non-crossing. The set of all non-crossing partitions of S is denoted by N C(S). When S = {1, . . . , n}, with its natural order, we will use the notation N C(n). It is a lattice with respect to the fineness relation defined as follows: for all π 1 and π 2 ∈ N C(S), π 1 π 2 if every block of π 1 is contained in a block of π 2 . Definition 1.10. The collection of free cumulants (κ q : A q → C) q≥1 on some probability space (A, φ) are defined via the following relations: for all A 1 , . . . 
, A n ∈ A, where N C(q) is the set of non-crossing partitions of {1, . . . , q}. The importance of the free cumulants is in large part due to the following characterization of freeness. Proposition 1.11. Let (A i ) i∈I be random variables of (A, φ). They are * -free if and only if their mixed * -cumulants vanish. That is to say: for all n ≥ 0, ǫ 1 , . . . , ǫ n be either ∅ or * , and all A i(1) , . . . , A i(n) ∈ A such that i(1), . . . i(n) ∈ I, whenever there exists some j and j ′ with We are now ready to express the free cumulants of U ij = j U (u ij ) = E 1i U E j1 as defined in Proposition-Definition 1.9 in terms of the free cumulants of U ∈ A. Proposition 1.12 (Theorem 14.18 of [20]). Let U (1) , . . . , U (m) be unitary random variables of (A, φ). The free cumulants of (U (k) ) ij = j U (k) (u ij ) in the noncommutative probability space If the indices are not cyclic, the left handside is equal to zero. Let us mention two basic properties about the quantum random variables j U : U n → E 11 (A ⊔ M n (C))E 11 defined in Definition 1.9. Proposition 1.13. Let U, V ∈ A be two unitary variables of (A, φ). (2) If U and V are * -free, then, the image * -algebras of j U and j V are * -free in the noncommutative space (E 11 (A ⊔ M n (C))E 11 , n(φ * tr n )). Proof. The first property follows from the relation j U V (u ij ) = k j U (u ik )j V (u kj ). The second one follows from Proposition 1.12 and the characterization of freeness of Proposition 1.11. Haar state on the unitary dual group In this section, we will investigate the existence of the Haar state on U n for the five different convolutions. Unfortunately, the definition of a Haar state on U n is too strong, and we need to define a weaker notion of Haar state, namely the notion of Haar trace, to have some existence results. Definition 2.1. The free (resp. tensor independent, boolean, monotone, anti-monotone) Haar state on U n , if it exists, is the unique state h on U nc n such that, for all other state φ on U nc n , Theorem 2.2. (1) The Haar measure on {z ∈ C : |z| = 1} is the Haar state for the free, tensor independent, boolean, monotone and anti-monotone convolution on U 1 . (2) For all n ≥ 2, there exists no Haar state on U n for the free, tensor independent, boolean, monotone or anti-monotone convolution. Proof. In Section 2.1, we prove the first item. In Section 2.2, we prove the second item for the free and the tensor convolution. In Section 2.3, we prove the second item for the boolean convolution, and finally, in Section 2.4, we prove the second item for the monotone and antimonotone convolution. Let us define a weaker notion of Haar state. A state φ on U nc n is called a tracial state, or a trace, if, for all a, b ∈ U nc n , we have φ(ab) = φ(ba). Definition 2.3. The free (resp. tensor independent, boolean, monotone, anti-monotone) Haar trace on U n , if it exists, is the unique tracial state h on U nc n such that, for all other tracial Remark that a Haar state which is tracial is automatically a Haar trace. Theorem 2.4. (1) For all n ≥ 2, there exist no Haar trace on U n for the boolean, monotone or anti-monotone convolution. (2) For all n ≥ 1, there exist a Haar trace on U n for the free convolution, and a Haar trace on U n for the tensor convolution. Remark 2.5. As nicely communicated by Moritz Weber, a careful examination of the proof of Theorem 2.4 allows us to conclude a more general result: the free Haar trace h on U n is such , a case which includes the tracial states but not only. 
For example, a state which factorizes on the unitary quantum group, where n k=1 u ki u * kj = n k=1 u * ik u jk = δ ij , fulfills this condition and so is absorbed by the free Haar trace. Proof. In Section 2.3, we prove the first item for the boolean convolution. In Section 2.4, we prove the first item for the monotone and anti monotone convolution. In Section 2.5, we prove the second item for the free convolution, and give a more explicit description of the free Haar trace. In Section 2.5, we prove the second item for the tensor convolution, and give a more explicit description of the tensor Haar trace. Let us remark that one could also choose a side and ask about a right (resp. left) Haar state for each of these independences. It would be a state h such that for each state φ, it holds that h ⋆ φ = h (resp. φ ⋆ h = h). We define similarly a right (resp. left) Haar trace. Nevertheless, the following result shows that this notion does not introduce any more generality. Proposition 2.6. Let us consider one of the five notions of independence. If h is a right (resp. left) Haar state on U n then it is also a left (resp. right) Haar state. As well, if h is a right (resp. left) Haar trace on U n then it is also a left (resp. right) Haar trace. Proof. Let h be a right Haar state. We define the flip τ on U nc n ⊔U nc n as the * -homomorphism such that τ (u (1) ij ) = u (2) ij and τ (u (1) and (2) indicate if the element is in the first leg of U nc n ⊔ U nc n or in the second leg. A simple computation on the generators u ij shows that τ • (Σ ⊔ Σ) • ∆ = ∆ • Σ. Therefore, by denoting the notion of independence at hand by ⊙, we have for all states φ: Because Σ is invertible, this says exactly that h • Σ is a left Haar state. But then we have: by using the right (resp. left) Haar state property of h (resp. h • Σ). Therefore, h = h • Σ is a right and left Haar state. The argument is valid when replacing h and φ by tracial states since it implies that h • Σ and φ • Σ are also tracial. 2.1. The Haar state in the one-dimensional case. Let us emphasize first that we identify the states on U 1 with the probability measure on {z ∈ C : The Haar measure is the uniform measure on the unit circle and is given by The free, tensor independent, boolean, monotone and anti-monotone convolutions on U 1 correspond to five different multiplicative convolutions on probability measures on U which have been already studied in the literature. In each of those cases, it is straightforward to prove that h is absorbing. For the free multiplicative convolution, we refer to [27], or to Section 2.5. For the tensor independent convolution, one has just to observe that For the Boolean, the monotone, and the anti-monotone convolutions, our references are [4,12,13]. Let µ be a probability measure on U. We define the K-transform of µ for |z| < 1 by Let us remark that K h (z) = 0. The K-transform of the multiplicative Boolean convolution of µ and ν is given by 1 z K µ (z) · K ν (z), and consequently, h is absorbing for the Boolean convolution. The K-transform of the multiplicative monotone (resp. anti-monotone) convolution of µ and ν is given by K µ • K ν (resp. K ν • K µ ), and consequently, h is absorbing for the monotone and anti-monotone convolutions. The non existence of Haar state in the free and tensor cases. In this section, we prove that there exists no free Haar state, nor tensor Haar state, for n ≥ 2. Let us take n ≥ 2 and assume that h is a free Haar state. 
We take 1 ≤ k ≤ n − 1 and we consider the unitary matrix of size 2n × 2n (which is a version of [32, Non-example 4.1], attributed to Woronowicz): For all 1 ≤ i, j ≤ n, we set j k (u ij ) the (i, j)-th block of M k of size 2 × 2. Because M k is unitary, j k extends to a quantum random variable j : U nc n → M 2 (C). We define the state φ k for all a ∈ U nc n as φ k (a) = e 2 , j k (a)e 2 , or equivalently, as the (2, 2)-th coefficient of j k (a). Then, for every 1 ≤ i, j ≤ n, we have φ k (u ik u * jk ) = 0. Let us remark that h being a free Haar state, we also have This reasoning can be done for any 1 ≤ k ≤ n − 1. For k = n we take the matrix M k in the which we have exchanged the last two columns of blocks. We therefore also have φ k (u in u * jn ) = 0 and thus h(u in u * in ) = 0. Therefore we should have: which contradicts the unitarity relation n k=1 u ik u * ik = 1. The same proof can be done for the tensor case as well. Indeed, the tensor independance also verifies that, for any 1 ≤ i, j, p, q, k ≤ n, 2.3. The boolean case. In this section, we prove that for n ≥ 2, there exist no boolean Haar state and no boolean Haar trace on U n . First of all, we remark the following general result: if φ and ψ are two states on U nc n and if a, c come from the left leg and b from the right leg of U nc n ⊔ U nc n , then we have For all state φ, let us introduce the following matrices: which can be written where ⊗ denotes here the tensor product (or Kronecker product) of matrices. A measure µ on the unitary group U (n) = {M ∈ M n (C) : U * U = I N } can be seen as a unique state on U nc n via the integration map Let us set Consider now another state φ 2 defined by (1/2)(δ A + δĀ) where A = Diag(i, 1, . . . , 1). We see that M φ 2 = I n 2 because φ 2 (u * 22 u 11 ) = 0. Replacing M h by I n 2 and M φ by M φ 2 = I n 2 in (1) yields to a contradiction. Now, let us remark that φ 1 and φ 2 are both tracial, and consequently the proof allows also to conclude that there exists no Haar trace for the boolean convolution. 2.4. The monotone and the antimonotone case. In the proof of the nonexistence of a boolean Haar state, the only property of the boolean independence that we needed was for a, c in the right leg and b in the left leg of U nc n ⊔ U nc n . The monotone independence verifies this same property and we can thus deduce that there exists no monotone Haar state. On the contrary, the antimonotone case verifies (h ⋆ AM φ)(abc) = h(b)φ(ac). Nevertheless, for x, z in the left leg and y in the right leg of U nc We can then do the computation of the relation h( in the exact same way as before and we find that We again find a contradiction by looking on the particular states φ 1 and φ 2 . To sum it up, for n ≥ 2, there exists no monotone (resp. antimonotone) Haar state on U n . The same remark, about the traciality of the states used, allows us to conclude about the non-existence of a Haar trace. 2.5. The free Haar trace. In this section, we define the free Haar trace and prove that is is indeed an absorbing state for the free convolution on U nc n with other tracial states. Let us first interpret the existence result of the free Haar trace on U n in a very concrete way as follows. Let us denote by h the Haar trace of U n for the free convolution, and by u = (u i,j ) 1≤i,j≤n the collection of generators of U nc n . Let A = (a ij ) 1≤i,j≤n ∈ M n (A) be a collection of random variables in (A, φ) (φ tracial) such that (a ij ) 1≤i,j≤n is unitary. 
In order to define the state which will play the role of the Haar trace, we have to define a Haar unitary variable. A noncommutative variable U of a noncommutative probability space (A, φ) is called Haar unitary if it is a unitary variable, and φ(U k ) = 0 for all k ≥ 0. Here is a description of its free cumulants. [25]). Let U be a Haar unitary element on some noncommutative probability space. Then, for all r ≥ 1 and ǫ 1 , . . . , ǫ r ∈ {1, * }, we have: Let us consider a Haar unitary random variable U in (A, φ) and construct from there a quantum variable j U : U nc n → E 11 (A ⊔ M n (C))E 11 determined by j U (u ij ) = E 1i U E j1 for all 1 ≤ i, j ≤ n as indicated in Proposition-Definition 1.9. We will study the state h = [n(φ * tr n )]•j U on U nc n . We compute first the free cumulants of our variables u ij and u * ij . In fact, for all 1 ≤ i, j ≤ n, we denote by (u * ) ij the generator u * ji . The free cumulants of u ij and (u * ) ij turn out to be more convenient than the free cumulants of u ij and u * ij . Corollary 2.8. The free cumulants of (u ij ) 1≤i,j≤n and ((u * ) ij ) 1≤i,j≤n = (u * ji ) 1≤i,j≤n in the noncommutative probability space (U nc n , h) are given as follows. Let 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n and ǫ 1 , . . . , ǫ r be either ∅ or * . If the indices are cyclic (i.e. if j l−1 = i l for 2 ≤ l ≤ q and i 1 = j r ), r is even and the ǫ i are alternating, we have If not, the left handside is equal to zero. Proof. It suffices to apply Proposition 1.12 to U (1) = U and U (2) = U * in order to get the free cumulants of j U (u ij ) = U ij and j U ((u * ) ij ) = (U * ) ij . We will need another property of free cumulants. Let us first introduce new notation. For all r ∈ N, S ⊂ {1, . . . , r}, σ ∈ N C(S), and A 1 , . . . , A r ∈ A, set where K(σ) is the biggest partition on F such that σ ∪ K(σ) is non-crossing. We are now ready to prove that h = [n(φ * tr n )] • j U is indeed a Haar trace for the free convolution. Proof of Theorem 2.4 in the free case. Let φ be a tracial state on U nc n . Let 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n, let ǫ 1 , . . . , ǫ r be either ∅ or * and set where we recall that ((u) ij ) 1≤i,j≤n = (u ij ) 1≤i,j≤n and ((u * ) ij ) 1≤i,j≤n = (u * ji ) 1≤i,j≤n by convention. Remark that we prefer to work with the word m instead of the word u ǫ 1 i 1 j 1 . . . u ǫr irjr , since the computations are easier. ik (u * ) where the exponent (1) and (2) indicate if the element is in the first leg of U nc n ⊔ U nc n or in the second leg. So, when computing ∆(m), we obtain something of the form k 1 ,...,kr m k 1 ,...,kr where m k 1 ,...,kr are words of length 2r of the form (u ǫ 1 ) i 1 k 1 (u ǫ 1 ) k 1 j 1 · · · (u ǫr ) irkr (u ǫr ) krjr with the generators coming from both legs of U nc n ⊔ U nc n . More precisely, let us decompose {1, . . . , 2r} = S ∪ T where S contains the positions of the generators which are in the first leg and T contains the positions of the generators which are in the second leg, according to We can develop the computation using the freeness of the legs: where we recall that, according to (2), the free cumulant κ h σ (· · · ) only involves the variables which correspond to indices in S and κ φ µ (· · · ) only involves the variables which correspond to indices in T . Using Corollary 2.8, we know that, whenever the ǫ i are alternating and the indices are cyclic within the blocks of σ ∈ N C(S), the quantity Thanks to Proposition 2.9, we can sum over µ and we obtain So let us now examine equation (3) in greater details. 
Because the blocks of σ alternate the ǫ i , the blocks of K(σ) must also alternate the ǫ i . One can convince himself on a few examples, but also find a full proof in Proposition 7.7. of [19]. Now one has to understand how the cyclicity of the indices i 1 k 1 , k 1 j 1 , . . . , i r k r , k r j r in the blocks of σ is translated in terms of the blocks of K(σ). For every block B = {r(1) ≤ . . . ≤ r(q)}, we say that r(1) and r(q) are opposites in B. A condition k q = k q ′ for some 1 ≤ q < q ′ ≤ r appears twice. Once in the case where 2q and 2q ′ − 1 are opposites in the same block of σ, which is equivalent to the fact that 2q − 1 and 2q ′ are consecutive in the same block of K(σ) (it corresponds to the case ǫ q = * and ǫ q ′ = ∅, see the Figure 1). The other case is when 2q − 1 and 2q ′ are consecutive in the same block of σ, which is equivalent to the fact that 2q and 2q ′ − 1 are opposites in the same block of K(σ) (it corresponds to the case ǫ q = ∅ and ǫ q ′ = * , see Figure 2). The free Haar state can be computed with the help of the following proposition, which is just a reformulation of Corollary 2.8. Proposition 2.10. When U nc n is endowed with its Haar trace for the free convolution, the free cumulants of {u ij } 1≤i,j≤n are given as follows. Let 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n. We have where C r = (2r)!/(r + 1)!r! designate the Catalan numbers. Moreover, the free cumulants which are not given in such a way are equal to 0. In [17], Mc Clanahan defines a state on U nc n which is in fact equal to our free Haar trace. More precisely, let us denote by C(U) the algebra of continuous functions on the unit complex circle U and by M n (C) ′ the relative commutant of M n (C) in C(U) ⊔ M n (C). It is straightforward to verify that there exists a unique * -homomorphism ϕ : Endowing C(U) with the uniform measure h on the unit circle gives us a state (tr n * h) |Mn(C) ′ • ϕ on U nc n . Proposition 2.11. The state (tr n * h) |Mn(C) ′ • ϕ of Mc Clanahan is the Haar trace for the free convolution on U nc n . Proof. Let us first observe the * -homomorphism of noncommutative probability spaces (where A = C(U) equipped with Haar measure): which follows from the equality φ * tr n ( k E k1 AE 1k ) = n φ * tr n (A) for all element A of E 11 (A ⊔ M n (C))E 11 . Observe also that Id U is a Haar unitary element U of (C(U), h). The result follows from the equality ϕ =φ • j U which shows that the state of Mc Clanahan (tr n * h) |Mn(C) ′ • ϕ is exactly the Haar trace [n(φ * tr n )] • j U = (tr n * h) |Mn(C) ′ •φ • j U . 2.6. The tensor Haar trace. In this section, we prove that there exists a tensor Haar trace. Let us define the state which will be the tensor Haar trace. It is constructed via a very different method than the free Haar trace. We consider the Hilbert space H = ℓ 2 (Z) ⊗ k∈Z M n (C), where ℓ 2 (Z) is Hilbert space of square-summable families of complex numbers indexed by Z and k∈Z M n (C) is the infinite tensor product of copies of the Hilbert space M n (C), where the number of matrices different from I n is finite and the scalar product on M n (C) is given by tr n (A * B) = Tr(A * B)/n. For all 1 ≤ i, j ≤ n, we define the following bounded operator on H by setting, for all and therefore its adjoint, given by . We introduce Ω = δ 0 ⊗ k∈Z I n and the state on the algebra B(H) of bounded operators on H given by A → Ω, AΩ . 
The operators U ij verify that n k=1 U * ki U kj = δ ij = n k=1 U ik U * jk and so the quantum random variable over j : It induces a state h on U n , given for all a ∈ U nc n by h(a) = Ω, j(a)Ω . Let us compute first the value of h, thanks to the following lemmas. Lemma 2.12. For all 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n, we have Proof. We have which yields the first and the second result. For more general words, it is possible to reduce them and fit into the previous case. Fix 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n, ǫ 1 , . . . , ǫ r ∈ {∅, * }, and consider the word u ǫ 1 i 1 j 1 . . . u ǫr irjr . We can decompose {1, . . . , r} into r k=−r S k , where If we assume that ∅ corresponds to a North step, * to a South step, and consider the path given by ǫ r , . . . , ǫ 1 , the set S k contains the positions where the path goes from the level k to the level k + 1, or from the level k + 1 to the level k. Consequently, the S k form a partition of {1, . . . , r}, and the ǫ m are alternating inside each S k . Lemma 2.13. Let 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n and ǫ 1 , . . . , ǫ r be either ∅ or * . Proof. Let us prove by decreasing induction on l that, for all 1 ≤ l ≤ r, and l ∈ S k , we have First of all, we have U irjr (Ω) = δ 1 ⊗ (. . . ⊗ E jrir ⊗ . . .) with the non-identity matrix at level 0 and U * irjr where the non-identity matrix is at level −1. Thus the property is true for l = r. We are now ready to prove that h is indeed the Haar trace for the tensor convolution. Thanks to Proposition 2.6, it is a consequence of the following proposition. Proposition 2.14. The state h is tracial, and for all other tracial state φ, we have h ⋆ Proof. Firstly, h is tracial. Indeed, let us fix 1 ≤ i 1 , j 1 , . . . , i q , j q ≤ n, ǫ 1 , . . . , ǫ q ∈ {∅, * } and compare h(u ǫ 1 . Thanks to Lemma 2.12, if the ǫ i are alternating, we are done. If not, remark that acting by a cyclic permutation just shifts the S k 's. Thus, up to a cyclic permutation, the decomposition in the S k 's is the same for u ǫ 1 i 1 j 1 . . . u ǫr irjr and u ǫr irjr u ǫ 1 Consequently, by Lemma 2.13, the full traciality is a consequence of the traciality for words alternating the ǫ i 's. Now, let us prove that h⋆ T φ = h. Equivalently, we will prove that, for all 1 ≤ i 1 , j 1 , . . . , i r , j r ≤ n and ǫ 1 , . . . , ǫ r ∈ {∅, * }, is equal to h(u ǫ 1 i 1 j 1 . . . u ǫr irjr ). If ♯{m : ǫ m = * } = ♯{m : ǫ m = 1}, this is a direct consequence of Lemma 2.13. If not, let us prove the result by induction on the even length r = 2q of the word. Random matrix models In this section, we define a model of random matrices which converges to the free Haar trace defined in Section 2. Let us fix an arbitrary set I of indices. Let (M i ) i∈I be a family of random variables in some non-commutative space (A, φ). For each N ∈ N, let (M (N ) i ) i∈I be a family of random N × N matrices. We will say that (M (N ) i ) i∈I converges almost surely in * -distribution to (M i ) i∈I as N tends to ∞ if for all noncommutative polynomial P ∈ C X i , X * i : i ∈ I we have almost surely the following convergence: where we recall that tr N is the normalized trace. The following theorem, whose first version is due to Voiculescu [29], is a well-known phenomenon which makes freeness appear from independence and invariance by unitary conjugation. (see also [10,16,20,27]). ] ij } k∈K,1≤i,j≤n converges almost surely in * -distribution to {E 1i A k E j1 } k∈K,1≤i,j≤n seen as an element of (E 11 (A ⊔ M n (C))E 11 , n φ * tr n ) when N tends to ∞. 
j1 } k∈K,1≤i,j≤n converges to {E 1i A k E j1 } k∈K,1≤i,j≤n seen as an element of (A ⊔ M n (C), φ * tr n ) when N tends to ∞. But let us remark that P ] ij } k∈K,1≤i,j≤n as N tends to ∞. However, one has to be careful that the trace tr N is transformed via this map into the linear functional n tr nN , and that consequently the family ,j≤n seen as elements of A ⊔ M n (C) endowed with the linear functional n(φ * tr n ), or equivalently, seen as elements of the noncommutative space (E 11 (A ⊔ M n (C))E 11 , n(φ * tr n )). Free Lévy processes on the unitary dual group In this section, we study free Lévy processes on the unitary dual group. We recall their definition and the correspondence between Lévy processes, generators, and Schürmann triples. We describe a class of free Lévy processes which appears as limit of Lévy processes on the classical unitary group, and compute their generators thanks to a representation theorem which was still missing in the free case. Free Lévy processes. Definition 4.1. A free unitary Lévy process is a family (U t ) t≥0 of unitary element of a noncommutative probability space (A, φ) such that: The distribution of U t converges weakly to δ 1 as t goes to 0. One can generalize this definition by considering a process (U t ) t≥0 of matrices of elements of A which are unitary, instead of considering only one element. In other words, we want to consider a process (j t ) t≥0 of quantum random variables on U n over (A, φ) (for all time t ≥ 0, j t : U nc n → A is a * -homomorphism, which is equivalent with requiring that the matrix (j t (u ij )) n i,j=1 is unitary). Definition 4.2. A free Lévy process on U n over (A, φ) is a family of quantum random variables (j t ) t≥0 on U n over A such that: freely independent in the sense that the image * -algebras of U nc n are freely independent in (A, φ). towards δ(b) when s tends to 0. Some authors find more convenient to make the following assumptions on the family of increments (j st ) 0≤s≤t linked with (j t ) t≥0 by the relation j st = (j s • Σ) ⋆ j t (for all 0 ≤ s ≤ t): • For all 0 ≤ t 1 ≤ . . . ≤ t k , the homomorphisms j 0t 1 , . . . , j t n−1 tn are freely independent in the sense that the image algebras are freely independent. • For all b ∈ U nc n , φ • j 0s (b) converges towards δ(b) when s tends to 0. Of course, the two points of view are equivalent. Let us observe that a free unitary Lévy process (U t ) t≥0 is a free Lévy process (u → U t ) t≥0 on U 1 . Free Lévy processes as limit of random matrices. Let us present here an example of of free Lévy process constructed thanks to the homomorphism j U described in Section 1.4, and which is the limit of random matrices in the sense of Theorem 4.4. Then, (j t ) t≥0 is a free Lévy process on U n over the non-commutative probability space Proof. The fact that (j t ) 0≤t is indeed a free Lévy process on U n follows from Proposition 1.13, and from the definition of a free unitary Lévy process (U t ) t≥0 . In the particular case where (U t ) t≥0 is a free unitary Brownian motion (see the last section of the paper), this theorem above is the result stated in [26], proved via stochastic calculus. , it is a direct consequence of Corollary 3.2. In [9], one of the authors defined a matrix model for every unitary free Lévy process (U t ) t≥0 . More precisely, for each N ∈ N, there exists a Lévy process (U (N ) t ) t≥0 on the classical unitary group U (N ) such that the family {U (N ) t } t≥0 converges almost surely in * -distribution to the family {U t } t≥0 . 
As a consequence, every free Lévy process defined according to Proposition 4.3 from a one-dimensional free Lévy process is indeed the limit of a family of random matrices when the dimension tends to ∞. The rest of the paper is devoted to compute the generator of such free Lévy processes, whose expression is given in Theorem 4.7. Generator and Schürmann triple. In this section, we define two different objects which characterize Lévy processes on U n . In [3], it is proved that L is well-defined and determines completely the family of law (φ•j t ) t≥0 . The generator satisfies L(1) = 0, is hermitian and is conditionally positive, in the sense that Conversely, the recent [23] proves that, for all hermitian and conditionally positive L : U nc n → C such that L(1) = 0, there exists a free Lévy process on U n whose generator is L. We will call such a linear functional a generator, without mentioning any Lévy process. The description of the generators is made easier by the following notion of Schürmann triple. • a unital * -representation ρ of U nc n on H such that, for all a, b ∈ U nc n , we have It simplifies the data of L because the three maps ρ, η and L of a Schürmann triple are uniquely determined by their values on the generators {u ij , u * ij } 1≤i,j≤n of U nc n . A sort of GNS-construction (see [21]) allows conversely to construct a Schürmann triple (ρ, η, L) for every generator L. In the next section, we will prove the following theorem, which computes the Schürmann triple of the Lévy process over U n defined by Proposition 4.3. The Schürmann triple (ρ n , η n , L n ) of (j t ) t≥0 on H ⊗ M n (C) is given, for all 1 ≤ i, j ≤ n, by As a corollary, we have a sufficient characterization for the existence of a random matrix model in terms of the generator (we believe that this condition is not necessary). Corollary 4.8. Let (j t ) t≥0 be free Lévy process on U n . Let H be a Hilbert space such that the Schürmann triple (ρ n , η n , L n ) of (j t ) t≥0 is given on H ⊗ M n (C) by Proof. Let us show that we are indeed in the situation of Theorem 4.7, and that W , h and R can be read as the Schürmann triple of some Lévy process over U 1 . This is a consequence of the following general description of the generators on U n . Conversely, each generator L appears in a Schürmann triple (ρ, η, L) on a Hilbert space H as (7) for some (h ij ) 1≤i,j≤n , (W ij ) 1≤i,j≤n unitary, and (R ij ) 1≤i,j≤n selfadjoint given by Using this proposition for W , h and R shows that the generator (ρ n , η n , L n ) can be written in the form (6) for some Schürmann triple (ρ, η, L) on H. But let us consider a free unitary Lévy process (U t ) t≥0 with Schürmann triple (ρ, η, L), and the Lévy process (j Ut ) t≥0 of Theorem 4.7 defined by setting, for all 1 ≤ i, j ≤ n, j Ut (u ij ) = E 1i U t E j1 . Using the result [ is also a random matrix model for (j t ) t≥0 . Proof of Theorem 4.7. In the three next steps, we will (1) establish a concrete realization of any free Lévy process (j t ) t≥0 on U n on a full Fock space, starting from any Schürmann triple; (2) show that, considering a one dimensional free Lévy process (U t ) t≥0 , this concrete realization behaves nicely when applying the boosting j Ut (u ij ) = E 1i U t E j1 to define a free Lévy process (j Ut ) t≥0 on U n ; (3) conclude the proof by reading the Schürmann triple directly from the stochastic equation of (j Ut ) t≥0 . Step 1. In this step, we give a direct construction of a free Lévy process starting from a Schürmann triple of U n . 
To achieve this purpose, we will use the free quantum stochastic calculus. We do not recall the definition of the free stochastic equations on the full Fock space, but we define now the objects involved, and we refer the reader to [15] and [24] for further details. Let us consider a Hilbert space H. We denote by K the Hilbert space L 2 (R, H) ≃ L 2 (R) ⊗ H, and consider the full Fock space We turn B(Γ(K)), the * -algebra of bounded operator on Γ(K), into a noncommutative probability space by endowing it with the state τ (·) = Ω, (·)Ω . Let h ∈ H and t ≥ 0. The creation operator c t (h) ∈ B(Γ(K)) is defined by setting, for all n ≥ 0, and the annihilation operator c * t (h) ∈ B(Γ(K)) is its adjoint operator. Let W a bounded operator on H and t ≥ 0. The conservation operator Λ t (W ) ∈ B(Γ(K)) is defined by setting, for all n ≥ 1, and Λ t (W )(Ω) = 0 otherwise. The following general result is the free counterpart of the general results of Schürmann (see Section 4.4. of [22] for the tensor case). The free case turns out to be the only case which has not yet been written down. which extends to a free Lévy process (j t ) t≥0 on U n with value in (B(Γ(L 2 (R, H))), τ ), and with generator L. Proof. The existence and uniqueness of the solution of (9) is a consequence of a very general theorem in [24], from which we can also deduce the extension of the solution to a free Lévy process. On the contrary, proving that L is indeed the generator of this solution is not a direct consequence of [24], and requires some computations very similar to those of [21]. The existence theorem which we will use is [24,Theorem 10.1]. In order to use Theorem 10.1. of [24], we must write the n 2 stochastic equations (9) as one stochastic equation involving only one variable. This is routine using the explanations of Chapter 13 of [24]. For the convenience of the reader, we sketch the ideas: we consider the full Fock M N (C)-module M N (B(Γ(K))) ≃ B(C n ⊗ Γ(K)). The stochastic equations (9) can be summed up into the following stochastic equation in M N (B(Γ(K))) (where c t , c * t and Λ t are defined accordingly) with initial condition (j t (u ij )) n i,j=1 = Id. Let us define h = (h ij ) 1≤i,j≤n , W = (W ij ) 1≤i,j≤n unitary, and R = (R ij ) 1≤i,j≤n selfadjoint by the relation (8). The stochastic equation (10) can be rewritten (11) According to Theorem 10.1. of [24] (see the end of [24,Chapter 10] to make the link with this particular case), there exists a unique solution to (11) whenever W = (W ij ) 1≤i,j≤n is unitary and R = (R ij ) 1≤i,j≤n is selfadjoint, which is indeed true thanks to Proposition 4.9. Finally, there exists a unique solution (j t (u ij )) n i,j=1 to the coupled stochastic equations (9), and another consequence of [24, Theorem 10.1] is that (j t (u ij )) n i,j=1 is unitary. This is sufficient to extend (j t (u ij )) n i,j=1 as a process (j t ) t≥0 of quantum random variables. The stationary of the distributions is a consequence of the stationary of the underlying driven process and the freeness of the increments is a consequence of the particular underlying filtration for which (j t (u ij )) n i,j=1 Table 1. Itô's table its value at t = 0 gives us L(b * c). Using the initial condition, one checks that the first two terms on the right hand side of (14) give rise, under the vacuum state, to the first two terms on the right hand side of (13). We are left with the computation of the coefficient of the dt-part of dj t (b * ) · dj t (c) at t = 0. 
Because of the Ito table, this dt-part is coming from the dc t -parts of dj t (b) and dj t (c) by the formula Thus we are left to compute the dc t -parts of dj t (b) and dj t (c). Of course, we can assume that both b and c are monomials in u ij and u * ij . Assuming b = u ǫ 1 i 1 ,j 1 · · · u ǫr ir,jr , we can compute from the differential equation of j t and the quantum Ito table the exact expression for the dc t -part of dj t (b). For simplicity, we give here the expression of the dc t -part of dj t (b) where we have already put the integrand at time t = 0, as this will not affect the final result (notice that it allows us to replace j 0 (u ij ) by δ(u ij ), and n k=1 j 0 (u ik )dΛ t ((ρ − δ)(u kj )) by dΛ t ((ρ − δ)(u ij ))): i m(l) j m(l) · · · u ǫr ir,jr ) i l+1 ,j l+1 · · · u ǫr ir,jr ) =dc t (η(u ǫ 1 i 1 ,j 1 · · · u ǫr ir,jr )) = dc t (η(b)), where the hats mean that we omit the terms in the product. Finally, using (15), the integrand of the dt-part of (dj t (b * )) · (dj t (c)) at time t = 0 is equal to η(b), η(c) , which completes the equality (13). Now, for 1 ≤ i, j ≤ n, L(u ij ) is given by the integrand of the dt-part of dj t (u ij ) at time t = 0. Indeed, the three others parts are martingales. This integrand is given by (9): and it concludes the proof. Using Proposition 4.9, it is possible to rewrite Theorem 4.10 without mentioning any Schürmann triple. which extends to a free Lévy process (j t ) t≥0 on U n over (B(Γ(L 2 (R, H))), τ ). Let us first remark that L 2 (R, H) ⊗ M n (C) ≃ L 2 (R, H ⊗ M n (C)). Thus, for all h ⊗ M ∈ H ⊗ M n (C), the process c * t (h ⊗ M ), c t (h ⊗ M ) ∈ B(Γ(K ⊗ M n (C))) are defined as previously. Furthermore, for all W ∈ B(H) and M ∈ M n (C), the conservation operator Λ t (W ⊗ M ) is defined as previously, with M acting on M n (C) by the left multiplication. (16) and (j t ) t≥0 defined by (17). There exists a homomorphism of probability spaces ρ : E 11 (B(Γ(K)) ⊔ M n (C))E 11 , n(φ * tr n ) → B(Γ(K ⊗ M n (C))), Ω, (·)Ω such that the free Lévy process (J t ) t≥0 = (ρ • j t ) t≥0 is solution of the following differential equation, starting at J 0 (u ij ) = δ ij Id: Proof. Let us first describe the free product representation of B(Γ(K)) ⊔ M n (C) given in [27]. We consider M n (C) acting on itself by the left multiplication. We denote by Γ(K) • the Hilbert space n≥1 K ⊗n and by M n (C) • the Hilbert space M n (C) ⊖ CI n , in such a way that . and to define the Hilbert space isomorphism Γ(K) * M n (C) → Γ(K ⊗ M n (C)) ⊗ M n (C) accordingly. Unfortunately, we do not see any way of writing f directly, and for computing it, we will always follow the different steps of the proof of Lemma 4.14. Step 3. We conclude the proof of Theorem 4.7. Recall that we start from a free unitary Lévy process (U t ) t≥0 with Schürmann triple (ρ, η, L). Because Theorem 4.7 uniquely depends on the distribution of our random variables, we can without loss of generality represent (U t ) t≥0 as the solution of the stochastic equation (16). Let j t : U n → E 11 (B(Γ(K)) ⊔ M n (C))E 11 be the Lévy process defined by setting, for all 1 ≤ i, j ≤ n, j t (u ij ) = E 1i U t E j1 as in Proposition 4.3. We want to prove that (ρ n , η n , L n ) defined by setting, for all 1 ≤ i, j ≤ n, is the Schürmann triple of (j t ) t≥0 . First of all, (ρ n , η n , L n ) given by (18) is a well-defined Schürmann triple. 
Indeed, defining (h ij ) 1≤i,j≤n , (W ij ) 1≤i,j≤n unitary, and (R ij ) 1≤i,j≤n selfadjoint by h ki , h kj H⊗Mn(C) , Theorem 4.7 shows that the representation ρ n in the Schürmann triple (ρ n , η n , L n ) of (J t ) 0≤s≤t on M n (C) is equal to δ · Id Mn(C) , which means that (J t ) t≥0 is a gaussian process on U n (in the sense of [11]). Moreover, this process is non-degenerate in the following sense: Proposition 4.15. Let (J t ) t≥0 be the Lévy process on U n defined by (20). Then, when t goes to infinity, the distribution of (J t ) t≥0 converges towards the free Haar trace. Proof. Let (U t ) t≥0 be a free multiplicative Brownian motion in a non-commutative probability space (A, Φ). Then, (J t ) t≥0 is equal in distribution to j t : U n → E 11 (A ⊔ M n (C))E 11 defined by setting, for all 1 ≤ i, j ≤ n, j t (u ij ) = E 1i U t E j1 . It is well-known that (U t ) t≥0 converge in * -distribution to a Haar unitary variable U as t tends to ∞. Indeed, there is an explicit description of the moments of U t in [6], namely and they converge to zero, which are the moments of a Haar unitary variable U . As a consequence, (j t (u ij )) 1≤i,j≤n converge in * -distribution to (E 1i U E j1 ) 1≤i,j≤n as t tends to ∞, where (E ij ) 1≤i,j≤n are free from U . But u ij → E 1i U E j1 is a quantum random variable whose distribution is the free Haar trace (see Section 2.5). Consequently, (j t ) t≥0 converge in distribution to the free Haar trace, and so do (J t ) t≥0 .
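The computation above, as well as Sections 2.5 and 3, rely on the fact that a Haar unitary has vanishing *-moments, φ(U^k) = 0 for k ≠ 0, and that Haar-distributed random matrices provide asymptotic models for it. The following minimal numerical sketch (not taken from the paper; it only assumes numpy) samples a Haar-distributed unitary matrix via the QR factorization of a complex Ginibre matrix and checks that its normalized traces are close to the moments of a Haar unitary variable:

import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix; rescaling the columns by the phases of
    # diag(R) makes the resulting factor exactly Haar-distributed on U(n).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
N = 1000
U = haar_unitary(N, rng)
for k in range(1, 5):
    moment = np.trace(np.linalg.matrix_power(U, k)) / N
    print(k, abs(moment))  # close to 0, matching phi(U^k) = 0 for a Haar unitary

The eigenvalues of such a matrix are approximately uniformly distributed on the unit circle, in line with the convergence of the free unitary Brownian motion to a Haar unitary recalled in the proof of Proposition 4.15.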
Baseline factors associated with death in a COVID-19 hospital cohort, São Paulo, 2020 ABSTRACT This study aimed to verify socio-demographic and baseline clinical factors associated with death in a hospital cohort of patients with COVID-19. A retrospective cohort study was conducted between February and December 2020 in a university hospital in the city of São Paulo, using Hospital Epidemiology Center data. RT-PCR-positive patients were selected to compose the sample (n = 1,034). At the end of the study, 362 (32%) patients died. In this cohort, age equal to or greater than sixty years (HR = 1.49) and liver disease (HR = 1.81) were independent risk factors for death from COVID-19 and were associated with higher in-hospital mortality. INTRODUCTION The coronavirus disease 2019 (COVID-19) was first identified in December 2019 in Wuhan, the capital of China's Hubei province, and has since spread globally, resulting in the ongoing pandemic. By December 31, 2020, approximately 83 million cases and 1.8 million deaths due to COVID-19 had been reported worldwide. In Brazil, 7,619,200 cases and 193,875 deaths were reported in the official systems of the Brazilian Ministry of Health in the same period. The State of São Paulo had the highest numbers: 1,462,297 cases and 46,717 deaths; approximately 33% of these occurred in the city of São Paulo. The spectrum of this disease ranges from mild to life-threatening. Some cases may progress rapidly to acute respiratory distress syndrome and/or multiple organ failure. Specific social conditions, individual characteristics, comorbidities, and clinical aspects of the disease have been identified as predictors of death, making research on these themes crucial for prevention. Thus, our study aimed to verify socio-demographic and baseline clinical factors associated with death in a hospital cohort of patients with COVID-19. METHODS A retrospective cohort study was conducted with hospitalized patients with COVID-19 between February and December 2020 in Hospital São Paulo, a university hospital in the city of São Paulo. We used data collected systematically by the Hospital Epidemiology Center, using the standardized form for data collection on severe acute respiratory syndrome, according to the clinical-epidemiological criteria established by the Brazilian Ministry of Health (available from: https://opendatasus.saude.gov.br/dataset/bd-srag-2020). The study population was composed of hospitalized patients with suspected COVID-19 (n = 2,540). Patients with COVID-19 infection confirmed by the polymerase chain reaction test (RT-PCR) were selected to compose the sample of the present analysis (n = 1,095). Patients whose clinical outcome (cure or death) was unknown were excluded from the sample (n = 65). Thus, the final sample was composed of 1,034 hospitalized patients with COVID-19. A descriptive analysis of the cohort was conducted. The Kaplan-Meier product-limit estimator was used to estimate cumulative probability curves from the date of first onset of symptoms to the date of clinical outcome, according to the independent variables. The log-rank test was used to compare the curves (data not shown). The Cox regression model for survival analysis was used to investigate factors associated with death. In the model, death from COVID-19 was considered the event and cure was treated as censoring. The hazard ratio (HR) was estimated for each independent variable. The Wald test was used to test the hypothesis HR = 1.
The null hypothesis was rejected when the p-value was ≤ 0.05. The variables that maintained statistical significance, as well as those that adjusted the estimates of the other variables, remained in the final Cox model. Proportionality of risk over time was assessed using Schoenfeld's residual analysis, which employs a chi-square statistic with one degree of freedom based on the proportion of observed and expected survival. All analyses were conducted in R. RESULTS Of the 1,034 patients analyzed, most were male (59%), 58% had nine or fewer years of education, and half of the cohort was 60 years of age or older. Concerning comorbidities, 90% had at least one, and cardiovascular disease was the most prevalent (45%). The most frequent symptoms observed were oxygen saturation < 95% (80%), dyspnea (77%), cough (73%), respiratory discomfort (70%), and fever (66%). The mean length of stay was 17 days (IQR: 12-26 days). At the end of the study, 362 (32%) patients died. In the adjusted model, age equal to or greater than sixty years and the presence of liver disease were independent risk factors for death from COVID-19, while the presence of fever at the time of admission was negatively associated with the event. The Table shows the complete distribution of patients according to the variables of interest and the univariate and multivariate analyses of factors associated with death. DISCUSSION Our study presents three main results. First, 90% of the patients had at least one comorbidity at the time of admission, and only liver disease was associated with death in this sample. Second, age greater than or equal to sixty years was independently associated with death. Finally, having a fever at the time of hospital admission was a protective factor against death from COVID-19. The characterization of our sample was remarkably similar to that found at the beginning of the epidemic and in the largest study published on hospital admissions for COVID-19 in Brazil 1 : most patients were men and older adults, with a high prevalence of comorbidity and frequent presence of fever, cough, dyspnea, and low oxygen saturation. Cardiovascular disease was the most prevalent comorbidity in our sample, following the pattern found elsewhere in the world 2 . Older adults and people of any age who have comorbidities are known to be more severely affected and to have a higher COVID-19 mortality rate. Our analysis corroborates other studies that showed a worse prognosis in the older adult population 3 . In our sample, liver disease was the only comorbidity independently associated with death. Studies have already shown that people with pre-existing liver disease who are diagnosed with COVID-19 are at higher risk than people without the disease 4 . Finally, in our analysis, the presence of fever was negatively associated with death from COVID-19. The presence of fever may be an indication of better immune competence to fight the virus 5 . Our study has limitations and strengths that should be considered. It is crucial to note that this was a hospital-based study, with data from a single university hospital with specific care characteristics. This fact could affect the representativeness of our findings, although the characterization of our sample was remarkably similar to that of other studies, as mentioned above.
Moreover, the Brazilian Ministry of Health's standardized form contained missing data for reported symptoms and comorbidities, which made it impossible to analyze characteristics previously associated with death, such as obesity. On the other hand, our study analyzed a substantial number of patients and its study period covered the entire year of 2020. This provides important information about the clinical characteristics and factors associated with death during the period of the pandemic when the new variants of the virus were not yet circulating widely in the city of São Paulo and vaccination had not yet been implemented. We concluded that age greater than or equal to sixty years and liver disease were associated with higher in-hospital mortality in a cohort of patients with COVID-19 admitted to a university hospital in the city of São Paulo. Knowing the populations with the highest risk of death from COVID-19 is crucial for the implementation of preventive and therapeutic strategies and for the organization of hospital services.
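For readers who wish to reproduce the type of analysis described in the Methods (Kaplan-Meier curves, a Cox proportional-hazards model, and a Schoenfeld-residual check of proportionality), the following is a minimal sketch in Python using the lifelines package. It is not the authors' code (the original analyses were run in R), and the column names are hypothetical placeholders for the cohort variables.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical data frame: one row per patient, follow-up time in days,
# an event indicator (1 = death, 0 = cure/censoring) and baseline covariates.
df = pd.read_csv("cohort.csv")  # columns: time, death, age_60plus, liver_disease, fever, ...
columns = ["time", "death", "age_60plus", "liver_disease", "fever"]

# Kaplan-Meier estimate of the cumulative survival curve
kmf = KaplanMeierFitter()
kmf.fit(durations=df["time"], event_observed=df["death"])

# Cox proportional-hazards model; hazard ratios are exp(coef)
cph = CoxPHFitter()
cph.fit(df[columns], duration_col="time", event_col="death")
cph.print_summary()

# Proportional-hazards assumption via scaled Schoenfeld residuals
cph.check_assumptions(df[columns])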
Symplectic duality for topological recursion We consider weighted double Hurwitz numbers, with the weight given by arbitrary rational function times an exponent of the completed cycles. Both special singularities are arbitrary, with the lengths of cycles controlled by formal parameters (up to some maximal length on both sides), and on one side there are also distinguished cycles controlled by degrees of formal variables. In these variables the weighted double Hurwitz numbers are presented as coefficients of expansions of some differentials that we prove to satisfy topological recursion. Our results partly resolve a conjecture that we made in [arXiv:2106.08368] and are based on a system of new explicit functional relations for the more general $(m,n)$-correlation functions, which correspond to the case when there are distinguished cycles controlled by formal variables in both special singular fibers. These $(m,n)$-correlation functions are the main theme of this paper and the latter explicit functional relations are of independent interest for combinatorics of weighted double Hurwitz numbers. We also put our results in the context of what we call the"symplectic duality", which is a generalization of the $x-y$ duality, a phenomenon known in the theory of topological recursion. The main results of this paper concern new cases of topological recursion [EO07] and blobbed topological recursion [BS17] that simultaneously generalize the results on topological recursion in the theory of two-matrix model [CEO06,EO08,Eyn16,DBOPS18] (bi-colored maps) and double weighted Hurwitz numbers [ACEH20,BDBKS20], as well as their fully simple analogs [BGF20,BCGF21a,BDBKS21]. Basically, we consider enumeration of double weighted Hurwitz numbers with a possibility that some cycles over 0 and ∞ are not marked and are controlled by extra formal variables. This problem emerged naturally in [BDBKS21], see Conjecture 4.4 there, as a framework for a natural generalization of the so-called x − y duality in the topological recursion [EO08, EO13, BGF20, BCGF + 21b, Hoc22]. In the present paper we generalize this x − y duality to something that we call "symplectic duality" which make sense for general double weighted Hurwitz problems We obtain these new cases of topological recursion and blobbed topological recursion and the respective symplectic duality as straightforward corollaries of our study of the so-called (m, n)-point correlators. The main body of our paper is devoted to proving a recursion formula for these (m, n)-point correlators and then proving loop equations and the projection property for them. In the rest of the introduction we recall the basic definitions related to topological recursion and the context for the x − y duality which emerges from the theory of twomatrix model and enumeration of maps. Then we introduce the general (m, n)-point functions and describe our results on them, namely the recursion formulas and the loop equations and the projection property for them. After that we describe the implications for the new cases of topological recursion, and then we introduce the notion of symplectic duality and discuss certain (partly conjectural) chains of symplectic dualities. 1.1. Topological recursion and the x − y duality. Let us first recall the formulation of topological recursion and some of its basic properties. 1.1.1. Topological recursion. Topological recursion of Chekhov, Eynard, and Orantin [CE06,EO07,Eyn16] is a recursive computational procedure. 
It associates to a small input data that consists of a Riemann surface Σ, a symmetric bi-differential B on Σ 2 (the Bergman kernel), and two functions x and y on Σ, subject to some conditions, a system of symmetric differentials ω (g) n on Σ n , g ≥ 0, n ≥ 1, as well as some constants ω (g) 0 , g ≥ 0. We assume that dx is meromorphic and has isolated simple zeros p 1 , . . . , p N ∈ Σ, y is holomorphic near p i and dy| p i = 0, i = 1, . . . , N . We also assume that B is meromorphic with the only pole being the order 2 pole on the diagonal of Σ 2 with the bi-residue equal to 1. The symmetric differentials ω (g) n , n ≥ 1, 2g − 2 + n, are produced from the initial unstable differentials ω (0) 1 and ω (0) 2 by an explicit recursive procedure: n+2 (w, σ i (w), z n )+ (1) g 1 +g 2 =g, I 1 I 2 = n (g 1 ,|I 1 |),(g 2 ,|I 2 |) =(0,0) ω (g 1 ) Here σ i is a deck transformation of x near the point p i , i = 1, . . . , N , by n we denote the set {1, . . . , n}, and z I denotes {z i } i∈I for any I ⊆ n . If g = 0, then we assume that ω (g−1) n+2 = 0. Here and everywhere below, if not specified otherwise, a sum of the form I 1 ··· I k =A is understood as a sum over ordered collections of sets which are allowed to be empty. The initial differentials are typically chosen as ω and ω (0) 2 which do not affect the recursion (1) that we discuss below. The constants ω (g) 0 are defined by the inversion of the boundary insertion operator, and in the simplest cases for g ≥ 2 the definition is but it is different in genera 0 and 1, see [EO07,Section 4.2] and needs a correction in some cases, see a discussion in [EO13] for the case of algebraic curves. 1.1.2. Possible choices for x, y, and unstable differentials. In general, the choice of ω (0) 1 and ω (0) 2 and even the functions x and y is just a matter of convention. Assume we want to let the given ω (g) n , 2g − 2 + n > 0, satisfy the recursion (1), and we allow any choice of x, y, ω The function x is used only to determine the points p 1 , . . . , p N , and the same points can be recovered quite often as the critical points of x −1 , log x, e x , or any other holomorphic function in x. In these cases, y can be changed accordingly such that ω (0) 1 is preserved. For instance, ydx = (−y)d(−x) = ye −x de x = yxd log x = −yx 2 dx −1 gives in some situations (always, if we stick to the chosen critical points p 1 , . . . , p N ) alternative choices for (x, y) that preserve ω (0) 1 = ydx, the involutions σ, and thus the recursion (1). But most important for this paper is that the convention ω (0) 1 = y(z)dx(z) is not necessary, and often inconvenient. We can change y to y + F (x) for any function F (x), and the recursion (1) is still satisfied for ω (0) 1 = (y(z) + F (x(z))dx(z). One can consider it either as a change of y with a subsequent change of ω (0) 1 such that the convention ω (0) 1 = ydx is preserved, or, alternatively, one can think that y is fixed but ω (0) 1 is chosen to be (y(z) + F (x(z))dx(z) instead of y(z)dx(z). A similar freedom exists in a choice of ω 2 . It can be deformed by any holomorphic bi-differential in the variables x(z 1 ) and x(z 2 ) without affecting the result of the recursion. It is convenient sometimes to chose it in the form ω (0) 2 = B(z 1 , z 2 ) − dx(z 1 )dx(z 2 )/(x(z 1 ) − x(z 2 )) 2 instead of just ω (0) 2 = B(z 1 , z 2 ). Final remark concerns the rescaling y → αy with a nonzero constant α. This implies the corresponding rescaling ω (g) n → α 2−2g−n ω (g) n for all g, n ≥ 0. 1.1.3. 
Loop equations and blobbed topological recursion. The differential obtained by (1) and considered as a differential in the first argument is a global meromorphic 1-form on Σ. Moreover, all its possible poles are at p 1 , . . . , p N . The principal parts of the poles are determined by the so called linear and quadratic loop equations which are essentially an equivalent form of (1). Assume that Σ = CP 1 is rational. In this case, the requirement that ω (g) n has no poles other than p 1 , . . . , p N is called projection property. If it is satisfied then ω (g) n being a rational 1-form is uniquely recovered from the principal parts of its poles, and the residual expression (1) is intended to realize the recovery procedure in a closed form, see [BS17,BEO15], and also a short exposition in [BDBKS20, Section 1.1.3]. One can consider a situation when for a given system of differentials ω (g) n the loop equations are satisfied but the projection property fails. These conditions taken together are called the blobbed topological recursion [BS17]. Remark that the blobbed topological recursion is not formally speaking a recursion: there is no control on the principal parts of the poles of these forms outside p 1 , . . . , p N , so that ω (g) n is not uniquely determined by the imposed restrictions. 1.1.4. Classical x − y duality. One of the recurring topics in the literature on topological recursion is the so-called x − y duality. Namely, one can replace functions x and y bỹ x = y andỹ = −x, and use the sameΣ = Σ andB = B to compute via the same procedure the differentialsω (g) n , g ≥ 0, n ≥ 1, as well as the constantsω Under some conditions, see [EO07,EO08,EO13], the statement on the x − y duality reads ω (g) 0 =ω (g) 0 . In this form, it is often presented as a statement on "symplectic invariance" of the constants ω (g) 0 . The latter interpretation refers to the "symplectic structure" Ω = dy ∧ dx = dỹ ∧ dx, for which two different ways to integrate it to a 1differential are chosen: ydx andỹdx. Note that these two choices for the 1-differentials integrating Ω determine two choices of the setup for topological recursion mentioned above. One more often occurring addendum to the theory of topological recursion in the context of the x−y duality is the so-called (m, n)-differentials, unifying the systems of differentials constructed above: ω (g) m,n are (m + n)-differentials on Σ m+n , g, m, n ≥ 0, with ω (g) n , n ≥ 1, and ω (g) 0 . These (m + n)-differentials are subject to a system of loop equations and play a crucial role in connecting the two sides of the x − y duality. See more details below. 1.1.5. Example: bi-colored maps and the two-matrix model. Let t = (t 1 , . . . , t d , 0, 0, . . . ) and s = (s 1 , . . . , s e , 0, 0, . . . ) be two sets of formal parameters. Consider the partition function of the formal Hermitian two-matrix model: Here H N is the product of two spaces of Hermitian N × N matrices, dM 1 and dM 2 are the properly normalized Haar measures, and V 1 ( are two polynomials. See e.g. [EO08] for more details. The logarithm of the partition function of this matrix model enumerates bi-colored maps on genus g surfaces (the genus is controlled by the parameter N 2−2g ), that is, the ways to glue a genus g surface out of black and white polygons along their sides, such that each edge of the resulting embedded graph is bounding a black and a white polygon. Each black (resp., white) p-gon carries the variable t p , p = 1, . . . , d (resp., s p , p = 1, . . . 
, s), and black (resp., white) p-gons with more than d (resp., e) sides are forbidden. Each of the resulting polygonal decompositions of a genus g surface is considered up to isomorphisms and is counted with the weight equal to the inverse order of its automorphism group. Define ω • m,n (ξ 1 , . . . , ξ m , ζ m+1 , . . . , ζ m+n ) as the cumulants of this matrix model by These ω (g),• m,n are the expansions of some (m, n)-differentials as above corresponding to one of the possible choices of the x − y duality in this case. They have a clear combinatorial interpretation: we consider bi-colored maps on possibly disconnected surfaces (the parameter N controls the Euler characteristic), with m (resp., n) distinguished white (resp., black) polygons. Distinguished polygons are ordered, one of their sides is marked (so these are the so-called rooted polygons), and the i-th distinguished p-gon is white and labeled by −dξ i /ξ p+1 i for i = 1, . . . , m (resp., is black and labeled by −dζ i /ζ p+1 i for i = m + 1, . . . , m + n) instead of t p (resp., s p ). The connected two-matrix model cumulants (with a sign twist), (−1) n ω (g) m,n , are exactly the system of (m, n)-differentials for a certain particular x − y duality [EO08,CEO06,Eyn16,DBOPS18]. In this case, we have where x and y are some global rational functions on CP 1 . 1.2. Results of the paper. To introduce the objects studied in this paper, we pass to the language of vacuum expectation values and KP integrability (see Sect. 2.4 below for more details; see also Section 2.1 for the definitions directly in terms of Schur polynomials, from which the relation to Hurwitz numbers is clear). 1.2.1. Vacuum expectation values and hypergeometric type correlator functions. Consider the charge 0 Fock space V 0 whose bosonic realization is presented as Define the operators J k , k ∈ Z, acting on V 0 as J k = k∂ q k , J −k = q k (the operator of multiplication by q k ) if k > 0, and J 0 = 0. Given a formal power seriesψ(θ, ) in θ and 2 such thatψ(0, 0) = 0, we introduce also the operator Dψ acting diagonally in the basis of Schur functions indexed by partitions λ, |λ| ≥ 0, Let |0 denote 1 ∈ V 0 and let 0| : V 0 → C be the extraction of the constant term. In these terms, the disconnected (m, n)-point functions H • m,n , and differentials ω • m,n of our interest are defined as the following vacuum expectation values (VEVs): Using inclusion-exclusion formulas one can pass to the so-called connected VEVs denoted by 0|· · ·|0 • , which are related to connected correlator functions H m,n (resp., forms ω m,n ) in exactly the same way. There are automatic restrictions on the exponent of entering the connected functions so that they admit the genus expansion, m,n and we have g≥0 The functions H (g) m,n and the forms ω (g) m,n are the main objects of our research. These are formal power series in the variables X 1 , . . . , X m , Y m+1 , . . . , Y m+n , while the quantities t 1 , t 2 , . . . , s 1 , s 2 , . . . , as well as the coefficients of the seriesψ are regarded as parameters of the problem. The cumulants of the matrix model (4) correspond to the special caseψ(θ) = log(1 + θ) and t i = 0, i > d, s j = 0, j > e (with the substitution ξ = X −1 , ζ = Y −1 , and N = −1 ), see [GJ08,GPH15,HO15,KL15,ALS16]. In general, the coefficients of H (g) m,n have the combinatorial meaning, dual to that of mentioned in Section 1.1.5, of generalized weighted double Hurwitz numbers enumerating ramified coverings of the sphere. 
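The bosonic operators J_k just introduced can be modelled directly on polynomials in the variables q_1, q_2, ...; the following small sympy sketch (illustrative only, all names are ad hoc) checks the Heisenberg commutation relation [J_k, J_{−l}] = k δ_{k,l} on a test vector of V_0.

```python
import sympy as sp

# Bosonic realization of the charge-0 Fock space: polynomials in q_1, q_2, q_3, ...
q = sp.symbols('q1:6')            # q_1, ..., q_5

def J(k, f):
    """J_k = k * d/dq_k for k > 0, J_{-k} = multiplication by q_k, J_0 = 0."""
    if k > 0:
        return k * sp.diff(f, q[k - 1])
    if k < 0:
        return q[-k - 1] * f
    return sp.Integer(0)

# Heisenberg relation [J_k, J_{-l}] = k * delta_{k,l}, checked on a test vector.
f = q[0]**2 * q[1] + 3*q[2]
for k in (1, 2, 3):
    for l in (1, 2, 3):
        comm = J(k, J(-l, f)) - J(-l, J(k, f))
        assert sp.simplify(comm - (k if k == l else 0) * f) == 0
print("Heisenberg relations verified on the test vector.")
```

With these operators at our disposal, we return to the enumerative interpretation of the coefficients of H^{(g)}_{m,n} as counts of ramified coverings.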
The coverings in question have two distinguished ramification points, zero and infinity. There are m marked preimages of 0 and n marked preimages of ∞ whose ramification orders correspond to the exponents of X and Y variables, respectively. Besides, there are unmarked preimages of 0 and ∞ whose ramification orders correspond to the indices of t and s variables, respectively. The ramification type over the points different from 0 and ∞ is encoded in the coefficients of the seriesψ. This relation to counting ramified coverings becomes clear when the definitions are rephrased directly in terms of Schur polynomials, see Section 2.1. 1.2.2. The results on (m, n)-functions and topological recursion. Let us impose certain restrictions on the parameters of the problem. Namely, we assume that the t and s parameters are specialized such that they have only finitely many nonzero entries, t = (t 1 , . . . , t d , 0, 0, . . . ), s = (s 1 , . . . , s e , 0, 0, . . . ). Besides, we assume that ∂ θψ (θ, 0) and the coefficient of every positive power of inψ are rational. With these assumptions we introduce the notion of spectral curve of the problem (see Sect. 4.1) which is Σ = CP 1 with an affine coordinate z and holomorphic functions X, Y on it possessing the following properties: X is a local coordinate at z = 0, Y is a local coordinate at z = ∞, and the forms dX/X and dY /Y extend as global meromorphic (i.e. rational) 1-forms on Σ. We assume that the parameters are chosen in a generic way so that the forms dX/X and dY /Y have only simple zeroes. In fact, the spectral curve is determined by the restriction ψ(θ) =ψ(θ, 0) only. We may think, therefore, ofψ as an 2 -deformation of the function ψ that does not affect the equation of the spectral curve. We prove that for any triple (g, m, n) with 2g − 2 + m + n > 0 the form ω (g) m,n written at the corresponding local coordinates and regarded as an (m + n)-differential on Σ m+n extends as a global meromorphic (m + n)-differential (Theorem 4.5). Moreover, we prove certain equalities relating these forms for different triples of (g, m, n) and show that these equalities allow one to compute ω (g) m,n inductively in a closed form (Theorems 3.7 and 3.9). As an example, let us demonstrate here a simplified version of this relation for the case g = 0. It relates the functions H m,n+1 , m + n > 1, and in the exceptional cases m + n = 1 we set explicitly . . , X j k ), and we treat Y J in a similar way. Then, for m + n ≥ 2 we have Here Θ = Θ(z) is a certain Laurent polynomials entering (along with X = X(z) and Y = Y (z)) the equation of spectral curve and const is a certain function in X M , Y N but independent of z. In order to apply this formula we observe that e −vψ(θ) ∂ r−1 θ e vψ(θ) is a polynomial of degree r − 1 in v without free term. We will show that this relation actually determines both H We prove also that under certain explicitly formulated additional assumptions onψ (which are satisfied in the special case (14)-(15) discussed below) the forms ω (g) m,n can be integrated so that H (g) m,n 's are also rational for all g ≥ 0. The form ω (g) m,n has poles at zeroes of dX/X with respect to each of the first m arguments and it has poles at zeroes of dY /Y with respect to each of the last n arguments. We prove that the principal parts of these poles satisfy a series of loop equations, including the linear and quadratic ones (Theorem 3.8 and Corollary 4.8). 
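The observation used above, namely that e^{−vψ(θ)} ∂_θ^{r−1} e^{vψ(θ)} is a polynomial in v of degree r − 1 without free term (for r ≥ 2), is easy to verify symbolically. The following quick sympy check, with a concrete test choice of ψ that is ours and not taken from the paper, illustrates it.

```python
import sympy as sp

theta, v = sp.symbols('theta v')
psi = theta**3 + 2*theta                       # any concrete test function psi(theta) will do

for r in range(2, 6):
    expr = sp.expand(sp.simplify(sp.exp(-v*psi) * sp.diff(sp.exp(v*psi), theta, r - 1)))
    assert sp.degree(expr, v) == r - 1         # a polynomial of degree r - 1 in v ...
    assert expr.subs(v, 0) == 0                # ... with no free term
print("checked for r = 2, ..., 5")
```

We now return to the loop equations just formulated and their consequences.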
The last assertion directly implies that the forms ω (g) m,0 satisfy the blobbed topological recursion, and also the forms ω (g) 0,n satisfy (another) blobbed topological recursion. The last step in establishing the true topological recursion is the projection property. We can reformulate the problem in the following way. Assume we are given the function ψ(θ) such that ψ (θ) is rational. It defines the spectral curve, and thus, two topological recursions on it, one with the poles at the critical points of X, and another one with the poles at the critical points of Y . The question is whether there exists an 2 -deformation ψ of ψ such that the forms obtained by the topological recursions coincide with the forms ω (g) m,0 and ω (g) 0,n respectively, defined as the corresponding VEV's? We answer this question positively in [BDBKS20] for a large variety of cases in the situation when all t-parameters are equal to zero. In this paper, we show that the answer obtained in [BDBKS20] can be extended to the setting of the present paper. Namely, we show that the projection property is satisfied (and thus, both topological recursions holds true) ifψ is chosen in the following form where R is a rational function and P is a polynomial (Theorem 5.1). This function is an 2 -deformation of ψ(θ) defined by The requirement that ψ is rational is satisfied since ψ = R /R + P . A choice of ψ in this form covers most of the known cases of enumerative problems for different kinds of special Hurwitz numbers (ordinary, monotone, strictly monotone, r-spin Hurwitz numbers, Bousquet-Mélou-Schaeffer numbers, enumeration of maps, hypermaps, etc., see [BDBKS20]). It also demonstrates the importance of 2 -deformations (it is crucial, for example, for the study of r-spin Hurwitz numbers, see [DBKPS19,BDBKS20]). The only reasonable case which is not covered by our considerations is the one when the polynomial P is replaced by a rational function. Remark that the concept of topological recursion was initially introduced in order to simplify explicit computation of correlator functions. However, even if it works its practical use in the cases considered in the present paper is quite restrictive, because in order to apply it one needs the knowledge of explicit positions of all critical points of the x function, and they are given usually by complicated algebraic equations. In opposite, recursions of Theorems 3.7 and 3.9 provide an efficient way to compute H (g) m,n explicitly in a closed form for small particular values of (g, m, n) and without any restrictive assumption on the initial data of the problem. Remark also that even for those cases when the projection property is satisfied this property along with the loop equations is not sufficient to determine uniquely ω (g) m,n in the case when both m > 0 and n > 0. This is due to the fact that this form still has additional poles on the diagonals z i = z j where i ∈ {1, . . . , m} and j ∈ {m + 1, . . . , m + n}, and we have no control on the principal parts of these poles at the moment. That is one of the reasons why the general theory of topological recursion at its present stage does not cover the case of (g, m, n) differentials with arbitrary m and n. m,n regarded as formal power series are defined by (7)-(10) with no restriction on the sets of (t, s)-parameters and the seriesψ. The restrictions of the previous section are needed to guarantee the analytic properties of the extension of these functions to the spectral curve. 
Without these analytic properties the very discussion of topological recursion and loop equations has no meaning. However, the recurrence relations of Theorem 3.9 and even the equation of the spectral curve make sense in the formal case. In fact, we first prove these relations in the setting of formal series in Sect. 2-3 with no restrictions imposed on the set of parameters. Then, we analyze in Sect. 4-5 the analytic properties of these function, and this requires to impose certain restrictions on the set of parameters formulated at the beginning of the previous section. 1.2.4. Specializations. In Section 6 we discuss various specializations and examples of the results of the present paper. In particular, Section 6.1 is devoted to the t = 0 specialization. In this case we do not get any new topological recursion results, as all the respective results for the respective H (g) m,0 functions are already proved in [BDBKS22,BDBKS20] (while H (g) 0,n functions are identically zero). However, this t = 0 case is still interesting, as in this case we can write explicit closed formulas for the H (g) m,n functions (not just a recurrence relation). 1.3. A variety of x − y dualities and the symplectic duality. It would be useful to put the results formulated in the previous section to the general context of the x − y duality for topological recursion and its extension that we call the symplectic duality. 1.3.1. A variety of x − y dualities and setups for topological recursion. Let the forms ω (g) n be defined by topological recursion with some initial data. Recall that adding to y an arbitrary function of x does not change any ω (g) n for 2g − 2 + n ≥ 0 (and ω (0) 1 is not produced by the recursion, it is merely a convenient convention that it should be equal to ydx). Thus, y → y + F (x) does not change the topological recursion, but it substantially changes the other side of the x − y duality, since nowx = y + F (x) andỹ = −x. Note that despite the shift y → y + F (x) the invariant "symplectic form" Ω is still the same, dx ∧ dy. This leads to a natural question whether there are reasonable, i. e. non-artificial examples of chains of several setups for topological recursion related by x − y dualities with suitable shifts of y's as above. Non-artificial here means that we want the resulting ω (g) n 's to have an enumerative meaning in some expansions. The only known example of this type in the literature with two different possible choices of the dual problem comes from the theory of maps / fully simple maps / the Hermitian two-matrix model, which we briefly recall below. Remark that the authors of many papers on topological recursion (including those of the present paper) usually introduce some artificial corrections to the unstable functions (and to speculate on the philosophic meaning of these corrections), in order to represent studied relations in a shorter form. However, for the rest of this section, in order to avoid confusion with the variables to be exchanged by duality, we shall always list explicitly x, y, and ω (0) 1 as well asx,ỹ, andω (0) 1 , and we never assume the convention ω (0) 1 = ydx by default; instead, we will always represent the forms ω 1.3.2. Fully simple maps. One of the instances of x − y dualities for maps is given by (5) as discussed above. Another example of an x − y duality for maps, for a different choice of y, is given by the enumeration of the so-called fully simple maps. 
A fully simple map is a bi-colored map with all distinguished polygons of just one fixed color and with all polygons of the other color being 2-gons, with an extra condition that the distinguished polygons do not have common vertices (which implies that at least some t i = 0 for i ≥ 3). See e.g. [BGF20] for more details. The corresponding n-point generating function is given by With this new definition, we obtain a new x − y duality statement for enumeration of maps. The corresponding (disconnected) (m, n)-differentials are given by In this case, the data for the x − y duality is as follows: where x and y are some global rational functions on CP 1 , and the function x and the differential ω 0 1 are exactly the same as in Equation (5) n 's given by They satisfy the topological recursion for the input data given by the curve CP 1 , the Bergman kernel B = dz 1 dz 2 /(z 1 − z 2 ) 2 in some global coordinate z, some global rational function x = x(z) such that x(z) −1 can serve as a local coordinate at z = ∞ and such that Equation (19) gives the expansion at z = ∞ in this local coordinate X = x(z) −1 , and some choice of function y. The enumerative meaning of the coefficients of the expansions of ω (g) n in X at z 1 = · · · = z n = ∞ in the local coordinates, X i = x(z i ) −1 is given by the count of the ordinary maps, that is, the bi-colored maps with n distinguished white polygons such that all black polygons are two-gons. The choice of y allows ambiguity, y = ω , and for at least two different choices of F (x) the x − y dual topological recursion produces the differentials whose expansions have meaningful enumerative interpretation. These two choices are n whose expansions in some local coordinate Y at z = 0 are given bỹ and the enumerative meaning of these expansions is the count of bi-colored maps such that the white polygons are controlled by the variables t = (t 1 , . . . , t d , 0, 0, . . . ), the distinguished black polygons by the variables Y 1 , . . . , Y n , and all non-distinguished black polygons are two-gons. The x−y dual topological recursion obtained by the second choice produces n-differentials ω (g) n whose expansions in some local coordinate w at z = ∞ are given bỹ and the enumerative meaning of these expansions is the count of fully simple maps. 1.3.4. 2d Toda tau functions and weighted double Hurwitz numbers. It is conjectured (see [BCGF + 21b, Conjecture 3.13] and also [Hoc22] for a genus 0 result under some extra conditions) that the n-point functions or differentials for the two topological recursions related by the x − y symmetry are subject to certain universal functional relations, obtained from the operator D ± log(1+θ) in the VEV formalism derived in [BDBKS21]. However, from the point of view of weighted Hurwitz enumerative problems and hypergeometric KP or 2d Toda tau functions it is more natural to consider D ψ for any reasonable functionψ =ψ(θ, 2 ) (for the sake of introduction we can assume thatφ := exp(ψ) is a polynomial with the constant term 1 that does not depend on 2 , but the actual assumptions onψ are much weaker: see Definition 4.1 below). To this end, the authors conjectured in [BDBKS21, Conjecture 4.4] the topological recursion of the following two systems of disconnected n-differentials: withψ 1 = −ψ, which have enumerative meaning of enumeration of particular double weighted Hurwitz numbers and/or constellations on genus g surfaces. 
Of course, this conjecture extends to the adjoint cases withψ 2 = −ψ and the same enumerative meaning as above (up to sign and interchange s ↔ t). In the present paper we prove topological recursion for the cases (22) and (24), and in [ABDB + 23] (using the results of the present paper as a foundation) the topological recursion is proved for the case (25) (and, by extension, (23) as well). And it is not even necessary to assume thatψ 1 = −ψ andψ 2 = −ψ, these results regarding the topological recursion hold when all of these three functions can be mutually different. 1.3.5. Symplectic duality. It appears that these four instances of topological recursion (two proved in the present paper, two in [ABDB + 23]) mentioned above (22) -(25) are related by a chain of the so-called symplectic dualities generalizing the usual x − y duality discussed above. Let us define it. Assume we have two topological recursions, with some given x, y, and ω (0) 1 = ydX/X, where X = exp(x) for the first one, andx,ỹ, and ω (0) 1 =ỹdX/X, wherẽ X = exp(x). We say that these two topological recursions are symplectic dual to each other, if The term "symplectic duality" refers here to the fact that on a surface S given in C 3 with the coordinates X,X, Λ by equation XXφ(Λ) = 1, the restrictions of the differentials (−Λ + F (X))dX/X and (Λ +F (X))dX/X satisfy d(−Λ + F (X))dX/X = d(Λ +F (X))dX/X. Of course, this definition is ad hoc, as it is dictated by an attempt to summarize the relations between the topological recursions (22)-(25). Let us show, however, that it indeed reduces to the x − y duality in the simplest case. Let φ(Λ) = Λ. Then ω , we obtain the classical x − y duality between these two instances of topological recursion. 1.3.6. Examples of symplectic dualities. We define a global rational function Θ(z) on CP 1 such that ω (0) 1 is given in all these four cases as The relations between these functions are as follows: where ρ 1 = exp(ψ 1 ) and ρ 2 = exp(ψ 2 ). This gives us a chain of symplectic dualities, though, to have the right sign change in the second symplectic duality, we might redefine (24) and (25) multiplying the corresponding ω (g) n 's by (−1) n , as we did in the example (20) above. As the authors remarked in [BDBKS21, Remark 4.5], even a proof of the topological recursion for the cases (22) and (24) is not available in the literature. Only the case (22) for t = (0, 0, 0, . . . ) is proved in [ACEH20] for polynomial φ and in [BDBKS20] for much more general families of choices of φ and s (and, therefore, the case 24 for s = (0, 0, 0, . . . ) for some general families of choices of φ and t). But the only known case for both t and s being nontrivial is the case of φ = 1 + θ covered by the two-matrix model, see e.g. [Eyn16,DBOPS18]. In this paper we prove the topological recursion for (22) and (24) for any choice of parameters t = (t 1 , . . . , t d , 0, 0, . . . ) and s = (s 1 , . . . , s e , 0, 0, . . . ) and for a large family of possible choices for φ. 
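Before turning to the precise definitions below, the following toy computation may help fix ideas: it builds a low-degree truncation of a hypergeometric-type tau function as a sum of products of Schur polynomials weighted by a content product. The conventions used here (the normalization of the variables as power sums, and the precise form of the content product Π_{box} φ(ħ(j−i))) are chosen for illustration only and need not coincide with the conventions fixed in the next section.

```python
import sympy as sp

hbar, theta = sp.symbols('hbar theta')
p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2')        # power sums in the two sets of variables
phi = 1 + theta                                    # a toy weight function with phi(0) = 1

# Schur polynomials in power sums, up to degree 2 (standard formulas),
# and the multiset of contents j - i of the boxes of each partition lambda:
schur_p  = {(): 1, (1,): p1, (2,): (p1**2 + p2)/2, (1, 1): (p1**2 - p2)/2}
schur_q  = {(): 1, (1,): q1, (2,): (q1**2 + q2)/2, (1, 1): (q1**2 - q2)/2}
contents = {(): [],  (1,): [0], (2,): [0, 1],      (1, 1): [0, -1]}

def weight(lam):                                   # content product prod_{box} phi(hbar*(j - i))
    return sp.Mul(*[phi.subs(theta, hbar*c) for c in contents[lam]])

# Truncation of a hypergeometric-type tau function at |lambda| <= 2:
Z = sp.expand(sum(weight(lam) * schur_p[lam] * schur_q[lam] for lam in schur_p))
print(Z)

# Sanity check of the Schur data (phi = 1): the degree <= 2 part of
# sum_lambda s_lambda(p) s_lambda(q) matches that of exp(p1*q1 + p2*q2/2)  (Cauchy identity).
lhs = sp.expand(sum(schur_p[lam] * schur_q[lam] for lam in schur_p))
rhs = sp.expand(1 + p1*q1 + p2*q2/2 + (p1*q1)**2/2)
assert lhs == rhs
```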
The hypergeometric-type KP tau function Z and the corresponding potential F = log Z are defined [KMMM95,OS01] by the following explicit expansion in the Schur functions The potential admits automatically the genus expansion F = ∞ g=0 2g−2 F g (that is, enters F with even exponents greater or equal to −2) and the correlator functions H (g) m,n are defined by More explicitly, if we treat F g as the generating function for its Taylor coefficients, the 'correlators', m,n pack the same correlators in a different way, In a more conceptual way, the tau function can be represented as the vacuum expectation value where J k and Dψ are certain operators acting on the Fock space, see above Section 1.2.1; for the complete introduction to the topic see [BDBKS22]. Then H (g) m,n could be represented equivalently as the corresponding connected vacuum expectation value: See also (7). The last treatment can be especially useful if we consider certain specializations of t and s variables, since for the specializations the partial derivatives are not available. Some relations on H (g) m,n look nicer if they are reformulated in terms of (m, n)point differentials defined by These forms can be regarded as yet another convenient way to pack correlators into generating series. The functions H (g) m,n and the forms ω (g) m,n are the main objects of our research. These are formal power series in the variables X 1 , . . . , X m , Y m+1 , . . . , Y m+n , while the quantities t 1 , t 2 , . . . , s 1 , s 2 , . . . , as well as the coefficients of the seriesψ are regarded as parameters of the problem. 2.2. Formulation of the recursion (preliminary version). Theorem below provides a formula relating the correlator functions H m,n+1 , respectively. In order to simplify manipulations with these distinguished variables we set To state the formula we introduce some new functions. Set The functions T Y m,n+1 , W Y m,n+1 depend on an additional variable u as a formal power series. Remark that W Y m,n+1 contains nonnegative powers of u only with an exception for m = n = 0: the series W Y 0,1 starts with 1 u . Next, for p ∈ Z we set Theorem 2.1. The following relation holds for all (g, m, n) with 2g − 2 + m + n > 0 (40) By the symmetry between X and Y variables, we also have (41) where W X , U Y , and also T X are defined in a similar way, with the exchange of the role of X and Y variables and also t and s parameters. We will need also the following extension of the last theorem. Set Theorem 2.2. The following relation holds for all (m, n) with no exceptions (43) By the symmetry between X and Y variables, we also have which involves negative powers of Y1. The second correction term means that the sum- which also involves negative powers of Y1. Remark, however, that a similar term 0,2 (Y1, Y2) appearing for (m, n, k) = (0, 0, 2) and intended to be restricted to the diagonal Y1 = Y2 is left in its original regular form. As a result, the series W Y m,n+1 contains arbitrary large positive or negative exponents of the variable Y . This series is well defined since the coefficient of any particular monomial in the remaining variables ( It follows that the most essential terms of W Y m,n+1 that contribute to the left hand side of (40) are exactly those with negative exponents of Y while the contribution of terms with nonnegative exponents of Y cancel out. Next, let us make a comment on the definition of W Y m,n+1 . The subsets I i , J i participating in (38) are allowed both being empty. By that reason the sum in (38) is infinite. 
It is useful to treat the factors with I i = J i = ∅ separately and to represent the expression for W Y m,n+1 in the following form where the summation carries over the set of all partitions of the set M ∪ N into unordered collection of disjoint nonempty subsets K α , and where we denote I α = K α ∩ M , J α = K α ∩ N . This sum has finitely many terms. In order to understand better the structure of the function W Y m,n+1 , it is convenient to represent all the terms entering to its definition by graphs of special kind. These are bipartite graphs with m + n + 1 white vertices labeled by indices 1, 2, . . . , m + n + 1, and some number of black unmarked vertices. The white vertex labeled by m + 1 is distinguished. It is connected by edges with all black vertices, moreover, multiple edges are allowed. Besides, every remaining white vertex is connected with exactly one black vertex by a single dashed edge as in the picture below. M N The contribution of the graph is computed as the product of the weights of its black vertices defined in the following way. with an appropriate singular correction for k = 1, I = ∅, and |J| = 0 or 1. Let γ be a graph. Its genus g(γ) is defined as the number of its edges (both filled and dashed) minus the number of vertices (both black and white) plus one. In other words, it is the minimal number of edges that can be removed such that the graph remains connected. Then W Y m,n+1 is computed as the sum over all isomorphism classes of graphs of the described type; the summand corresponding to a given graph γ is equal to the product of weights of all its black vertices multiplied by 2g(γ) and divided by the order of automorphism group of the graph. Finally, the obtained sum is multiplied by an overall factor 1 u S(u ) . One of the sources of the automorphisms of the graph are permutations of its multiple edges. These automorphisms are accounted in the factor 1 k! in (37). Another source of automorphisms are possible black leaves that is black vertices connected with the distinguished white vertex m+1 only. These automorphisms are accounted in the factorial factors appearing in the expansion of the exponent in (49). Thus, the total genus (the exponent of 2 ) corresponding to the contribution of a particular graph to the right hand side of (40) or (43) is computed as the sum of the following nonnegative integers: • the indices g corresponding to the functions H (g ) m ,k+n assigned to the black vertices; • the exponent of 2 in the differential operators S(u Yī∂ Yī ) applied to these functions; • the genus of the graph itself; • the exponent of 2 in the overall factor 1 uS(u ) ; • the exponent of 2 in the expansion of the functionsφ p andφ p entering the right hand side of the formulae. For example, in the case g = 0 all the graphs that contribute to the computation of H There is a remarkable action on this space of the Lie algebra gl(∞), a one-dimensional central extension of the Lie algebra of infinite matrices whose rows and columns are labeled by half-integers. Let E i,j , i, j ∈ Z + 1 2 be the matrix unit. Then, for example, the shift operator J k = i∈Z+ 1 2 E i−k,i corresponding to a matrix with one nonzero diagonal filled by units and situated on the distance k from the principal one acts as J k = k ∂ q k , J −k = q k (operator of multiplication by q k ) if k > 0, and J 0 = 0. Another example is the operator Dψ entering the definition (7) of correlator functions. It corresponds to the diagonal matrices of the form where the entries d k are defined by see [BDBKS22]. 
These equalities determine the components d k up to a common additive constant which is not important since the constant matrices from gl(∞) act trivially. In general, the elements of gl(∞) act on V 0 as differential operators in q-variables. This action can be described as follows. Consider the operators p∈Z+ 1 2 (p − k 2 ) r E p−k,p and collect them to the following generating series Then we have, explicitly (see for example [BDBKS22, Section 2] for the details): . We define now the functions W X m+1,n (w; X M ; X; Y N ) and W Y m,n+1 (u; X M ; Y ; Y N ) as the following connected vacuum expectation values Then we observe that W Y m,n+1 is given explicitly by (37)-(38) and W X m+1,n is given by similar expression with the exchange of the role of X and Y variables. Namely, the positive J-operators entering E( u, Y −1 ) while commuting with e ∞ k=1 s k J −k k and Y j produce the singular terms in (37), and the negative J-operators entering E( u, Y −1 ) produce the regular summands in (37). In fact, exactly this computation serves as the motivation for introducing singular terms in (37). The combinatorics behind this computation and also behind the inclusion/exclusion principle relating connected and disconnected correlators is the same as in [BDBKS22,BDBKS21] and we do not reproduce the details here. Next, we conjugate E( w, X) in (56) by the operator Dψ and obtain Comparing with (57) we obtain the desired relation of Theorem 2.2. Picking the coefficient of w 0 on both sides of the obtained relation we get that of Theorem 2.1. The formal spectral curve and Lagrange Inversion 3.1. Formal spectral curve. The variables X and Y entering relations of Theorems 2.1 and 2.2 are considered in these theorems as two independent variables having no relationship to one another. Equations (40) and (43) express the coefficients of the Laurent expansion of the left hand sides in X in terms of the coefficients of the Laurent expansion of W Y m,n+1 in Y . Our next step is to relate X and Y by a change of variables and to interpret (40) and (43) in terms of this change. The change will involve both positive and negative powers of the variables, and in order to assign a meaning to such a change we introduce the following definition. Definition 3.1. We denote by R the ring of regular power series in the 'basic' variables t k , s k , whose coefficients are Laurent polynomials in one additional variable denoted by z (or X or Y ). The whole series lying in R may contain arbitrary large positive or negative powers of z but the exponents of z entering a particular monomial in (t, s)-variables are bounded. Note that any series in R of the form where the summand o(1) belongs to the ideal generated by (t, s)-variables provides an invertible change in the ring R so that any series in R rewritten in terms of the new variable X also belongs to R. Let t = (t 1 , t 2 , . . . ), s = (s 1 , s 2 , . . . ) be as before, and φ(θ) = e ψ(θ) be an arbitrary power series with the constant term 1. Proposition 3.2. There exist series X(z), Y (z), Θ(z) in R possessing the following properties. • The three series satisfy • The series X(z) contains positive powers of z only, its coefficient of z is invertible as a series in (t, s), and we have where the term O(z) contains positive powers of z only. • The series Y (z) contains negative powers of z only, its coefficient of z −1 is invertible as a series in (t, s), and we have where the term O(z −1 ) contains negative powers of z only. 
The series X, Y, Θ are determined by these requirements uniquely up to a multiplication of z by a constant (an invertible series in (t, s) variables). Remark that the functions X(z) and Y (z) provide invertible changes in R, and the dependence between X and Y implied by these changes is independent of the freedom in a choice of the coordinate z. The actual rescaling of z is not so important, but it can be fixed, if needed, for example, by an additional requirement We will show that the coefficients of Θ(z) obey certain equation allowing one to express α 0 as a function in the remaining α-parameters. So we can take the coefficients α k , k = 0, as a new independent set of parameters of the problem (instead of t and s). We express t i , s j as functions in these parameters. Then, we apply the inverse function theorem to express α-coordinates as functions in t and s parameters. Taking the formal logarithm of φ we write where A and B involve nonnegative and nonpositive exponents of z, respectively. There is an ambiguity in a choice of the constant terms in A and B. This ambiguity exactly corresponds to an ambiguity in a possible rescaling of z coordinate in the proposition. We just require that the constant terms in A and B are certain series in α-parameters such that A| α=0 = B| α=0 = 0. Set Then the equation X Y φ(Θ) = 1 and the requirement that X and Y are regular changes at z = 0 and z = ∞, respectively, are satisfied. Applying these changes in R we can represent Θ as an infinite Laurent series in the corresponding coordinate: The coefficients t k , s k of these expansions are expressed as functions (formal power series) in α-parameters. Expansions (68) differ from (62), (63) by the presence of the constant terms t 0 , s 0 . Vanishing of t 0 and s 0 provides functional relations between parameters α k . In fact, these two equations are equivalent to one another due to the following identity: This identity is proved below. The equation t 0 = 0 (or an equivalent one s 0 = 0) allows one to express α 0 as a function in the remaining α-parameters. Inverting the obtained dependence of (t , s) variables in α variables we resolve the equations of spectral curve. In order to justify application of implicit function theorem one should add the computation of these equations in the liner approximation. Up to the terms of order greater than 1 in α-coordinates these equations read and so they are obviously solved in (t, s)-variables. Let us prove finally the identity (69). For that we involve into consideration formal meromorphic differentials of the form f (z) dz, f ∈ R. Definition 3.4. The residue of a formal meromorphic differential f (z) dz, f ∈ R, is defined as its coefficient of dz z and denoted by Res f (z) dz. The invariance of residues implies, in particular, where we denote dX(z) = X (z) dz and dY (z) = Y (z) dz. In these terms, the equations relating α and (t, s) parameters can be written as Remark that the identity X Y φ(θ) = 1 implies dX The series θ φ (θ) φ(θ) can be integrated as a formal power series in the variable θ. Therefore, the form Θ φ (Θ) φ(Θ) dΘ is a differential of an element of R, and hence, its residue is equal to zero. This completes the proof of (69), and hence that of Proposition 3.2. 3.2. Unstable functions. Consider the spectral curve defined above. 
Since X(z) is a regular formal change of variables at the point z = 0 and Y (z) is a regular formal change of variables at the point z = ∞, we can substitute X i = X(z i ), Y j = Y (z j ) to H (g) m,n and to treat this function as a series in z 1 , . . . , z m , z i−1 m+1 , . . . , z −1 m+n , that is, as a function on Σ m+n expanded as a formal power series at the point z 1 = · · · = z m = 0, z m+1 = · · · = z m+n = ∞. Our goal is to rewrite recursion of Theorem 2.1 it terms of these changes. The following result serves as a motivation for considering these changes. Proposition 3.5. The unstable correlator functions written in terms of the coordinates z i of the (formal) spectral curve are determined explicitly by the following relations For the proof see Sect. 3.4 below. Remark that the expressions in (75) can be integrated and we obtain, more explicitly, where the constants γ 1 and γ 2 are determined by the requirement that H Formulation of the recursion (final version). In order to rewrite relation of Theorem 2.1 in terms of the change implied by the spectral curve equation we first make the following observation. Define L r (p, θ) through the equality (80) ∂ r θφ p (θ) = L r (p, θ)φ(θ) p . Then we have, explicitly (see [BDBKS22, Section 4]), where S is defined in (36). This shows that the coefficient of any power of 2 in L r (v, θ) is polynomial in v. Let f (u, Y ) be a function which is polynomial in u and a Laurent series in Y . Denote by U X the transformation of Theorem 2.1 sending f to the series in X defined by (the formulation of Theorem 2.1 requires also to extend the transformation U X to the case when f = 1 u ). Proposition 3.6. The transformation U X acts on monomials in u as the following differential operator where f is any Laurent series in Y and where we assume that X, Y , and Θ = Θ(z) are related by the equation of spectral curve. As an immediate corollary of this Proposition combined with the statement of Theorem 2.1 we obtain the following principal form of our recursion. Theorem 3.7. With notations of Theorem 2.1, the following relation holds for all (g, m, n) with 2g − 2 + m + n > 0 (85) where we assume that X = X m+1 , Y = Y m+1 , and Θ = Θ(z m+1 ) are related by the equation of spectral curve. By the symmetry between X and Y variables, we also have where W X and U Y are defined in a similar way, with the exchange of the role of X and Y variables and also t and s parameters. The main advantage of this form of recursion comparing with that of Theorem 2.1 is that the right hand sides of (85) and (86) contain only finitely many nonzero summands for any particular triple (g, m, n). Indeed, it follows from (74) ∂ r θφ p (θ, w) =L r (p, θ, w)φ(θ) p , or, equivalently, Similarly to Proposition 3.6, we consider the transformationŨ X defined by Then we have, through the change implied by the spectral curve equation, where f is any Laurent series in Y , and we arrive at the following form of relation of Theorem 2.2. Theorem 3.8. The following relation holds for all (m, n) where we assume that X = X m+1 , Y = Y m+1 , and Θ = Θ(z m+1 ) are related by the equation of spectral curve. Lagrange inversion formula and principal identity. This section is devoted to the proofs of relations of the previous section. Our computations are based on Lagrange Inversion Theorem and Principal Identity used in [BDBKS22]. The invariance of residues implies that these tools can be applied to the situation when all considered functions belong to the ring R. 
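For the reader's convenience, here is a quick symbolic illustration of the classical Lagrange inversion formula, in the form [w^n] H(z(w)) = (1/n) [z^{n−1}] H′(z) φ(z)^n for the change w = z/φ(z) with φ(0) ≠ 0. This is only the textbook statement, stated as background; the specific choices of φ and H in the code are test data and not the data of the present paper.

```python
import sympy as sp

z, w = sp.symbols('z w')
phi = 1 + z + 2*z**2                     # any series with phi(0) != 0 (test data)
H = z + z**3                             # any test function
N = 6                                    # order up to which we check

# Solve w = z / phi(z) for z = z(w) by fixed-point iteration z <- w * phi(z), truncated at order N:
zw = sp.Integer(0)
for _ in range(N + 1):
    zw = sp.expand(w * phi.subs(z, zw))
    zw = sum(zw.coeff(w, k) * w**k for k in range(N + 1))   # truncate

Hzw = sp.expand(H.subs(z, zw))

# Lagrange inversion: [w^n] H(z(w)) = (1/n) [z^{n-1}] H'(z) phi(z)^n,  n >= 1
for n in range(1, N + 1):
    lhs = Hzw.coeff(w, n)
    rhs = sp.Rational(1, n) * sp.expand(sp.diff(H, z) * phi**n).coeff(z, n - 1)
    assert sp.simplify(lhs - rhs) == 0
print("Lagrange inversion checked up to order", N)
```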
Thus, for the proof of (90) we follow step by step the computations made in [BDBKS22,Corollary 4.4]: This proves (90). In the case (91), denoting Integrating both sides we obtain (91) up to an additive constant. The integration constant can be obtained by taking the residues of both sides of the obtained equality multiplied by dX X . Theorem 3.8 follows from Theorem 2.2 by the proven Eqs. (90)-(91). Taking the coefficient of w 0 in (90), (91), and (85) we obtain equalities of Proposition 3.6 and Theorem 3.7, respectively. Let us prove finally Proposition 3.5. Remark that the above computations do not use its statement, but it is needed to be assured that the right hand side of (85) has finitely many nonzero summands. Let us denote Taking the coefficient of w 0 0 on both sides of (43) for m = n = 0 we get Repeating the computations similar to those above we obtain that the right hand side is equal to Θ Y (Y ) with the substitution inverse to the change X(Y ) = 1 Y φ(Θ Y (Y )) . In other words, the following relation holds true involves the terms with both positive and negative arbitrary large exponents of Y , and in order to attain a meaning to this change we consider it as a change in the ring R. We conclude that the functions Θ X and Θ Y are identified by this change and the inverse change is given by Y = 1 X φ(Θ X (X)) . Indeed, We observe now that the function Θ X possesses the following properties. • It has an expansion of the form • The same function rewritten in the variable Y related to X by the change inverse to Y = 1 X Θ X (X) has an expansion of the form It is easy to see that these two conditions determine the series Θ X uniquely: this is yet another application of implicit function theorem similar to that one used in the proof of Proposition 3.2. The function Θ(z) entering the definition of the spectral curve and rewritten in the coordinate X does satisfy these conditions. Therefore, the functions Θ X (X) and Θ Y (Y ) coincide with the function Θ(z) rewritten in the coordinates X and Y of the spectral curve, respectively. This proves Eq. (74) of Proposition 3.5. Eqs. (75) of Proposition 3.5 corresponds to the coefficient of w 0 0 on both sides of (43) or (85) for the case m + n = 1. The only graph that contributes to the sum on the right hand side in these cases is a single tree with one black vertex connected with two white vertices. Taking into account the singular corrections entering (37) we obtain (for the cases (m, n) = (1, 0) and (0, 1), respectively) Where we assume that X i is related to Y i by the equation of spectral curve. Differentiating the first line in X 1 and the second one in X 2 we obtain Let us rewrite the obtained 2-differential in z coordinates of the spectral curve. Using the fact that X(z) is a regular change of coordinates at z = 0, we get by local computations from the first equality that it has the form On the other hand, from the last equality we get by similar local computations using the fact that Y (z) is a regular change at z = ∞ that this 2-differential has the form m,n+1 also contributes to the right hand side of (85), namely, to the summand with j = 0: we have from (85) and (81) It follows that the summand of (85) with j = 0, disregarding the δ m+n,0 -part, is equal to All the summands with j > 0 as we as the δ m+n,0 -contribution involve only those functions H (g ) m n satisfying 2g − 2 + m + n < 2g − 2 + m + n + 1, so that we may assume that they are already computed in the previous steps of computations. 
It follows that the formula can be rewritten in the form Let us study the last term in the above expression: Integrating (108) with (109) substituted, we obtain our following main relation, which considerably simplifies the inductive computation of the (m, n)-point functions: Theorem 3.9. For any triple (g, m, n) with 2g − 2 + m + n ≥ 0 we have where X = X m+1 , Y = Y m+1 , and const is certain function in X M , Y N and independent of X. Remark that there is a formally different expression for the right hand side due to the symmetry between X-Y variables, Let us look more closely on the right hand side of (110) at the coordinate z = z m+1 on the spectral curve. We observe that H m,n+1 but also to compute both of them! Namely, we compute H (g) m+1,n and H (g) m,n+1 as the regular and the polar parts, respectively, of the right hand side in (110) with respect to z coordinate. Remark 3.10. Relation of Theorem 3.9 treats both its sides as formal power series in (t, s) variables and also X 1 . . . , X m , Y m+2 , . . . , Y m+n+1 whose coefficients are Laurent polynomials in z. In fact, one can see by induction that there is a stronger rationality assertion: H (g) m,n can be represented in z-coordinates as a power series in (t, s) variables whose coefficients are rational functions in z 1 , . . . , z m+n with only possible poles at z i = ∞, z j = 0, and on the diagonals z i = z j for i ∈ {1, . . . , m}, j ∈ {m+1, . . . , m+n}. In other words, it can be represented as a ratio of a polynomial in z 1 , . . . , z m , z −1 m+1 , . . . , z −1 m+n and a product of factors of the from 1 − z i z −1 j . The degree of the denominator is uniformly bounded for each particular (g, m, n) and the degree of the numerator grows with the growth of the degree of (t, s)-monomial. We use exactly this form of the functions H Rationality of (m, n)-point functions and loop equations Up to this moment, in Sect. 2 and 3, we regarded all n-point functions as formal power series and no restrictions on the initial data (t, s, ψ) of the problem have been assumed. We now impose certain natural analytic assumptions on the initial data ensuring rationality of all n-point functions on the spectral curve. The rationality is crucial for the study of topological recursion and loop equations. Without this property the very discussion of topological recursion is senseless: it analyses the behavior of the analytic extension of the functions to the points different from the point of the expansion of these functions regarded as generating series. 4.1. Rational spectral curve. We say that the formal spectral curve introduced in Sect. 3.1 is algebraic if the forms dX X and dY Y extend as global meromorphic forms on Σ and Θ(z) is a Laurent polynomial. Moreover, we wish that its dependence in (t, s) parameters is also algebraic. It means that the coefficients of the rational forms dX X and dY Y and the Laurent polynomial Θ are defined not only as formal power series but also as true algebraic functions and their specializations at arbitrary complex numbers with sufficiently small absolute values are well defined. For the convenience of the reader we provide the definition of the spectral curve in the algebraic case which is independent of the formal case of Sect. 3.1. Definition 4.1. The spectral curve associated with the data (t, s, φ) is Σ = CP 1 with global affine coordinate z and three functions X, Y, Θ on (some open domains of) Σ satisfying the following relations. 
• We have • X is defined and holomorphic in a neighborhood of the disk |z| ≤ 1, has a simple zero at z = 0 and no other zeroes in that disk. In other words, X forms a global holomorphic coordinate on the disk |z| ≤ 1. Similarly, Y is defined and holomorphic in a neighborhood of the disk |z| ≥ 1, has a simple zero at z = ∞ and no other zeroes in that disk. In other words, Y forms a global holomorphic coordinate on the disk |z| ≥ 1. • The 1-forms dX X and dY Y extend as global rational 1-forms on the whole spectral curve. • Θ(z) is a Laurent polynomial. Moreover, its Laurent expansions at z = 0 and z = ∞ in the corresponding local coordinates are given by Proposition 4.2. For a given φ, if the absolute values of (t, s)-parameters are small enough then the requirements on the spectral curve define it uniquely up to a multiplication of the coordinate z by a nonzero constant. A choice for a rescaling of z is not important, since it does not change the analytic dependence between X, Y , and Θ functions. However, it can be fixed, if needed, by an additional relation Proof. We see from (113) and (114) that Θ has a pole of order d at z = 0 and a pole of order e at z = ∞, i.e. it has the form We will show that the coefficients of Θ(z) obey certain polynomial equation allowing one to express α 0 as a function in the remaining α-parameters. So we can take the coefficients α −d , . . . , α −1 , α 1 , . . . , α e as an independent set of parameters of the problem. We express t i , s j as (algebraic) functions in these parameters. Then, we apply the inverse function theorem to express α-coordinates as functions in t and s parameters. Having in mind the necessity to apply the inverse function theorem, we assume that the coefficients α k are small enough. An explicit estimate on the absolute values of these coefficients will be clear from the arguments below. Taking the logarithmic derivatives of the two sides of (112) we obtain the following equality of meromorphic 1-forms on the spectral curve, The two summands on the left hand side can be recovered as the contributions of the poles of the right hand side outside the unit circle and inside it, respectively. So, we define Equivalently, these relations can be written as follows where the integration contour |z| = 1 is oriented counterclockwise. The forms dX X and dY Y determine the functions X and Y themselves uniquely up to multiplicative constants, The integration constants can be fixed by (112) and, for example, (115). Next, we observe that X is a local coordinate at z = 0 and Y is a local at z = ∞. Expanding Θ in these coordinates, we get The coefficients t k , s k of these expansions are expressed as functions in α-parameters. More explicitly, we have Expansions (122), (123) differ from (113), (114) by the presence of the constant terms t 0 , s 0 . Vanishing of t 0 and s 0 provides algebraic relations between parameters α k . In fact, these two equations are equivalent to one another due to the following identity: where the integration contour Γ is the image of the unit circle |z| = 1 under the map Θ. Since the coefficients of the Laurent polynomial Θ(z) are small, we may assume that the contour Γ belongs a small disk centered at the origin in the θ-plane. Moreover, since φ(0) = 1, we may assume that this disk is small enough such that θ dφ(θ) φ(θ) is holomorphic inside the disk and hence its integral along any closed contour in the disk vanishes. 
Thus, regarding (124) as implicit algebraic equations on the α-parameters along with the equation t 0 = 0 (or an equivalent one s 0 = 0) we express by implicit function theorem α-parameters as holomorphic functions in (t, s)-parameters. This resolves the equation of spectral curve. Remark 4.4. If φ is rational then X and Y are also rational functions. Namely, if we represent φ(Θ(z)) as the product of linear factors of the form (z − a i ) ±1 multiplied by a monomial in z, then X and Y in the product X Y = 1/φ(Θ) absorb those factors with |a i | > 1 and |a i | < 1, respectively. In the general case, however, dX X and dY Y might have nonzero residues and the holomorphic extension of X and Y functions may meet logarithmic singularities. 4.2. Rationality of (m, n)-point functions. Let the data (t, s, ψ) of the spectral curve satisfy the analytic properties of the previous section, namely, t = (t 1 , . . . , t d , 0, 0, . . . ), s = (s 1 , . . . , s e , 0, 0, . . . ), and ψ (θ) is rational. Assume that an 2 -deformationψ of ψ is chosen such that the coefficient of any positive power of inψ is a derivative of a rational function. This implies, in particular, that the last summands of (110) and (111) are rational. We refer everywhere below to the assumptions made as the natural analytic assumptions on the data (t, s,ψ) of the problem. Let us treat X i = X(z i ) and Y i = Y (z i ) as the local coordinates at the corresponding points z i = 0 or z i = ∞, respectively, on the ith copy of the spectral curve. In that way we regard H (g) m,n as a function on Σ m+n expanded as a formal power series at the point z 1 = · · · = z m = 0, z m+1 = · · · = z m+n = ∞. The following theorem describes the properties of analytic extension of H (g) m,n to the spectral curve. Theorem 4.5. Assume that the parameters t i , s j are chosen small enough. Then for any triple (g, m, n) satisfying 2g − 2 + m + n > 0 the function H m,n as a rational function in z i , i = 1, . . . , m, then it might have poles on the diagonals z i = z j , j = m+1, . . . , m+n (this does not contradict to the above assertion). All the other poles converge to ∞ as the parameters t k , s k tend to zero. Similarly, if we regard H (g) m,n as a rational function in z j , j = m + 1, . . . , m + n, then it might have poles on the diagonals z j = z i , i = 1, . . . , m, and all the other poles converge to zero as the parameters t k , s k tend to zero. Proof. We argue by induction in g and m + n. Consider Eq. (110) as an equality in the ring R. By induction hypothesis, every term on the right hand side is rational in z coordinates. Since the right hand side contains finitely many terms, we conclude that the whole right hand side is rational as a function in z 1 , . . . , z m+n+1 . Let us look more closely at the dependence of all the terms on the right hand side of (110) in each particular variable z i . Every term is holomorphic in z i in the domain |z i | ≤ 1 for i ∈ M , and holomorphic in the domain |z i | ≥ 1 for i ∈ N . Hence, the same holds true for the whole right hand side. The dependence in z m+1 is more complicated. The terms might have poles both for |z m+1 | < 1 and for |z m+1 | > 1. Let us denote by F (z) the right hand side of (110) regarded as a rational function in z = z m+1 and represent it as F (z) = F + (z) + F − (z) + c where F + is holomorphic in |z| ≤ 1 and F − is holomorphic in |z| ≥ 1, with the normalization F + (0) = F − (∞) = 0, and c is a constant. 
More explicitly, we have This integral representation shows that both F + and F − depend regularly in t and s as these parameters tend to zero. This implies that the Laurent expansion of F + (F − ) in the ring R contain only positive (respectively, negative) powers of z. This implies the equalities F + = H m,n+1 as it is explained at the discussion after Theorem 3.9. This proves Theorem 4.5. Remark that the arguments above provide not only the proof of rationality of the functions H (g) m+1,n and H (g) m,n+1 but also an explicit inductive procedure for their computations: H (g) m+1,n regarded as a rational function in z = z m+1 absorbs the principal parts of the poles of the right hand side of (110) situated in the domain |z| > 1 (including those at z i for i ∈ N ), while H (g) m,n+1 absorbs the principal parts of the poles situated in the domain |z| < 1 (including those at z i for i ∈ M ). It is also useful to note that the contour integral used in the proof above is the analytic analogue of the operator Res applied in the formal case of the ring R in Sect 3. 4.3. Possible poles and linear loop equations. It is sometimes convenient to represent the equalities of Theorem 3.7 as equalities between meromorphic differential forms rather than functions. Consider the following operators acting in the space of meromorphic 1-forms on Σ: They are the counterparts of the corresponding operators X∂ X and Y ∂ Y acting in the space of functions. Then (85) can be rewritten in yet another equivalent form where the differential on the left hand side is taken with respect to the variable X = X m+1 . The operator D X acting in the space of meromorphic differentials on Σ has poles at zeroes of the form dX/X, that is, at the critical points of X. Definition 4.6. We denote by Ξ X the space of meromorphic differentials defined in a neighborhood of the zero locus of dX/X on Σ and spanned by the differentials of the form D k X ω where k = 0, 1, 2, . . . and ω is holomorphic. We denote by Ξ Y the space of meromorphic differentials defined in a neighborhood of the zero locus of dY /Y and spanned by the differentials of the form D k Y ω where k = 0, 1, 2, . . . and ω is holomorphic. The condition α ∈ Ξ X implies restriction on the principal part of the poles of α. For example, assume that the dX/X has a simple zero at the given point. Then we may choose a local holomorphic coordinate ζ at this point such that dX/X = ζ dζ. Then Ξ X is spanned by the forms holomorphic at ζ = 0 and the forms dζ ζ 2k , k > 0. In other words, for any form from the space Ξ X , the principal part of its pole at a simple zero of dX/X should be odd with respect to the deck transformation for the function X regarded locally as ramified covering with the ramification of order two at the considered point. The very form of (128) along with the symmetry with respect to the X-Y -variables implies Corollary 4.7. The differential of H (g) m,n with respect to any X-variable belongs to Ξ X and its differential with respect to any Y -variable belongs to Ξ Y : M,3,N (X, X, X) + 3 g 1 +g 2 =g−1 I 1 I 2 =M, J 1 J 2 =N DH X,(g 1 ) I 1 ,1,J 1 (X) DH X,(g 2 ) I 2 ,2,J 2 (X, X) + g 1 +g 2 +g 3 =g I 1 I 2 I 3 =M, J 1 J 2 J 3 =N DH X,(g 1 ) The last summand of the last expression itself belongs to Ξ X . However, we include it to the cubic loop equation just in the way it appears in [ 2g w 2 ]W X m+1,n (w) dX X . 
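To illustrate the parity constraint on the space Ξ_X described above, here is a small symbolic check in a local coordinate ζ with d log X = ζ dζ, so that the local deck transformation is ζ ↦ −ζ. The operator D_X is modelled here by the natural guess D_X ω = d( ω / d log X ); this explicit formula is an assumption of the sketch, not a quotation of the definition omitted above. The check confirms the statement of the text: the principal part of D_X^k ω at ζ = 0, for ω holomorphic, involves only the differentials dζ/ζ^{2j}, which are odd under the deck transformation.

```python
import sympy as sp

zeta = sp.symbols('zeta')
a = sp.symbols('a0:6')                       # coefficients of a generic holomorphic test form

# Local model near a simple zero of dX/X: coordinate zeta with d log X = zeta d zeta and
# deck transformation zeta -> -zeta.  A 1-form omega = f(zeta) d zeta is represented by f, and
# D_X is modelled (an assumption for this sketch) as f -> d/dzeta ( f / zeta ),
# i.e. D_X omega = d( omega / d log X ).
def DX(f):
    return sp.expand(sp.diff(sp.expand(f / zeta), zeta))

f = sum(a[k] * zeta**k for k in range(6))    # generic holomorphic f(zeta)

g = f
for k in range(1, 4):
    g = DX(g)
    # The pole of D_X^k(f dzeta) at zeta = 0 has order at most 2k, and only even orders occur,
    # i.e. the principal part is a combination of dzeta/zeta^{2j}: forms that are odd under
    # zeta -> -zeta, in agreement with the description of Xi_X above.
    odd_part = [g.coeff(zeta, -j) for j in range(1, 2*k + 1, 2)]
    assert all(c == 0 for c in odd_part)
print("only even-order poles occur in D_X^k(f dzeta), k = 1, 2, 3")
```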
Note that the linear and quadratic loop equations imply the blobbed topological recursion for, separately, the H g m,0 functions, and the H g 0,n functions (under the assumptions of meromorphy and generality). Projection property and topological recursion 5.1. Projection property. In this section we assume that where R is a rational function and P is a polynomial. We also further assume the generality condition, i.e. that all zeroes and poles of R(θ) and all zeroes of P (θ) are simple. We prove here the following theorem has no poles in z i for 1 ≤ i ≤ m apart from the poles at the zeroes of dX(z i ) and apart from the diagonal poles at z i = z j where m + 1 ≤ j ≤ m + n; and it has no poles in z j for m + 1 ≤ j ≤ m + n apart from the poles at the zeroes of dY (z j ) and apart from the diagonal poles at z j = z i where 1 ≤ i ≤ m. The proof will go by induction in g and m + n. Let us assume the induction hypothesis (we refer to it as IH below), i.e. that this statement holds for all H g m ,n such that either g < g or simultaneously g = g and m + n < m + n. In order to proceed with the proof of Proposition 5.2, let us prove several technical statements first. Note that in this case we can write σ (g) m,n as where reg r is some expression regular in z at z = B. From the conditions of generality there exists exactly one root A of the numerator of R(θ) such that B is a root of equation Θ(z) = A. Then note that where reg is regular in θ at A, and the pole in z at B in the whole expression for σ S( ∂ θ ) −1 log(θ−A) part (after the substitution θ = Θ(z)). According to [BDBKS20,Lemma 4.1], we can then rewrite where p k (v) are some polynomials in v and reg is regular in θ at A. Note that where reg ψ A is regular in θ at A. Thus This means that we can rewrite (149) as wherep k,l (v) are some polynomials in v and reg is regular in θ at A. Now let us plug θ = Θ(z) into this expression and substitute it into (147). Note that for z → B for some constant C. Since |B| > 1, d log X(z) has a simple pole at B, while d log Y (z) is regular at B, according to (118) and (119) respectively. Thus, d log Y /d log X has a simple zero at z 1 = B. This means that Equation (147) can be rewritten as where q r,k,l are some expressions polynomial in v and reg r is regular in z at B. Taking the sum over j we can rewrite this expression as (155) Now note that, since X∂ X = (d log X/dz) −1 ∂ z and |B| > 1, then, again according to (118), at z → B, and, therefore, by an easy inductive argument, is regular at z → B for any l ≥ 1. This implies that (155) is regular at z → B. 00 is regular at the zeroes and poles of R(Θ(z)) which are not equal to ∞ and lie outside the unit circle on the z-plane. Proof. As in the proof of Lemma 5.3, let B be a zero of R(Θ(z)), B / ∈ {0, ∞}, and furthermore let |B| > 1. The proof for the case of a pole (not coinciding with 0 and ∞) of R(Θ(z)) is analogous. Note that the expressionσ which directly implies that the operator X∂ X preserves the degree of the pole at B for any function. We have: From that point the proof becomes analogous to the proof of Lemma 5.3. Just note that while the term d log Y /d log X (which had a simple zero at z → B) is absent, we have an extra X∂ X which accounts for the same effect. We recall from (143) thatσ and count the order of pole of this expression at z → ∞. We begin with a few observations: S( ∂ θ ) −1 log R(θ) | θ=Θ(z) has no pole at z → ∞. • Note that d log φ/dz = R (Θ(z))Θ (z)/R(Θ(z)) + P (Θ(z))Θ (z) has a pole of order (e deg P − 1) at z → ∞. 
Thus, taking into account (117) and the fact that Y (z) has a simple zero at z → ∞, we can conclude that each application of X∂ X = (d log X/dz) −1 ∂ z decreases the degree of the pole at z → ∞ by e deg P . The total effect of the operator (X∂ X ) j−1 and of the separate occurrence of X∂ X is the decrease of the order of pole by je deg P . Hence, the order of pole of (160) at z → ∞ is equal to the order of pole at z → ∞ of the following expression: where by | we mean that we only select the terms with deg v ≥ 2 before the substitution v = z −e deg P . Note also that • Since Θ(z) has a pole of order e at z → ∞, each application of ∂ θ decreases the order of pole in the resulting expression by e (until the total number of ∂ θ exceeds deg P ). Note that S(z) is an even series starting with 1, and the first terms of the expansion of e v(S(v ∂ θ )−1)P (θ) in v have the following form: (162) 1 + 1 24 v 3 2 ∂ 2 θ P (θ), with all the other terms having only lower degree of the pole at z → ∞. Since Θ(z) has a pole of order e at z → ∞, after the substitution v = z −e deg P the term (163) 1 24 v 2 2 P (Θ) Θ becomes regular at z → ∞, and all the other terms as well (we disregard the term 1 from (162) due to the deg v ≥ 2 requirement). So, (161) is regular at z → ∞, and, therefore,σ 0 is regular at z → ∞ as well. Lemma 5.6. σ (g) m,n is regular at z → ∞. Proof. The proof is similar to the proof of Lemma 5.5. We recall from (143) that σ (g) m,n is equal to the coefficient of 2g in the expansion of j≥1 r≥0 and count the order of pole of this expression at z → ∞. We begin with a few observations: • Similar to what happened in the previous Lemma, the factor e −u Θ W Y m,n+1 (u) has no pole at z → ∞. • The factor e v S(v ∂ θ ) S( ∂ θ ) −1 log R(θ) | θ=Θ(z) has no pole at z → ∞ and cannot acquire one when acted upon by further operators ∂ θ . This follows from the generality condition. • Note that d log φ/dz = R (Θ(z))Θ (z)/R(Θ(z))+P (Θ(z))Θ (z) has a pole of order (e deg P − 1) at z → ∞. Thus, taking into account (117) and the fact that Y (z) has a simple zero at z → ∞, we can conclude that the factor d log Y /d log X has zero of order e deg P at z → ∞ and each application of X∂ X = (d log X/dz) −1 ∂ z decreases the degree of the pole at z → ∞ by e deg P . The total effect of the factor d log Y /d log X and of (X∂ X ) j−1 is the decrease of the order of pole by je deg P . Hence, the order of pole of (164) at z → ∞ is equal to the order of pole at z → ∞ of the following expression: where by | we mean that we only select the terms with deg v ≥ 1 before the substitution v = z −e deg P . Note also that • Since Θ(z) has a pole of order e at z → ∞, each ∂ θ decreases the order of pole in the resulting expression by e. • Multiplication by ψ (θ) increases the order of pole by e(deg P − 1). Taking into account these two observations and that each v factor decreases the order of pole by e deg P , we see that each application of the operator ∂ θ + vψ (θ) decreases the order of pole in the resulting expression by e. Since P (θ) comes together with at least one power of v, and any further application of ∂ θ or ∂ θ + vψ (θ) only decreases the order of the pole, we see that this expression is regular at z → ∞. Let us proceed to the proof of Proposition 5.2. In a completely analogous way, using Equation (111) instead of Equation (110), one can prove that there are no "unwanted" poles inside the unit disk. This completes the proof of the proposition. Finally we are ready to prove Theorem 5.1. 
This completes the proof of the theorem. i.e. with ω (0) 1 = y dx = Θ dY Y . Remark 5.8. The generality condition can actually be lifted, but one has to replace the topological recursion with the Bouchard-Eynard recursion of [BE13]. This can be done in a completely analogous way to how it is discussed in [BDBKS20, Section 5.1]. Specializations and examples 6.1. Specialization t = 0. The above computations are applied for arbitrary choices of (t, s,ψ) (both in the formal and analytic settings). We wish now to compare our general results with some special cases known in the literature. In the case t = 0 the equation of the spectral curve reduces to the following: s i z i , X(z)φ(Θ(z)) = z. This is exactly the spectral curve that was considered in [ACEH20] in the case when φ is a polynomial and in [BDBKS22,BDBKS20] for more general φ. Remarkably, the correlator functions H (g) 0,n in this case are known explicitly: they are all identically equal to zero! The only nonzero terms that contribute to the inductive formula (110) for m = 0 are the singular corrections in (37). Applying the induction starting from the known H n,0 with all steps collected into a single expression coincides with that of [BDBKS22] where it is formulated as a summation over the set of all connected graphs with n labeled vertices. We just haven't mentioned in [BDBKS22] Remark that even for this case t = 0 the relation of Theorem 3.9 provides an alternative (and even probably more efficient) way of computing these correlator functions. Namely, let us apply just one step of induction in the inverse direction, in order to express H s i z i , X(z)φ(Θ(z)) = z. Let (172) Then for 2g − 2 + m + n > 0, m + n > 1 we have W (g) m,n (X(z 1 ), . . . , X(z m ), Y (z m+1 ), . . . , Y (z m+n )) (173) where the sum is over all connected simple graphs γ on m labeled vertices v 1 , . . . , v m with n additional leaves (i.e. 1-valent vertices) v 1 , . . . , v n ; E γ is the set of normal edges (i.e. connecting the vertices v 1 , . . . , v m ), E γ is the set of edges for which one of the endpoints is an additional leaf, w k, := e u k u S(u k z k ∂z k )S(u z ∂z ) and U i is the operator acting on a function f in u i and z i by For m = 1, n = 0 there is an additional term in the formula, but we do not write this case here for brevity, as it is completely covered by [BDBKS22,BDBKS20]. It is also straightforward to write an explicit formula for the H (g) m,n function itself (rather than W (g) m,n ), but it is bulkier and we do not list it here for brevity. It is also useful to consider an even deeper specialization t = 0, s = 0. In this case Θ = 0 and the above formulas get simplified. The spectral curve becomes Y = z −1 , X = z. All H m,n where m and n are both nonzero are interesting, since they correspond to the weighted double Hurwitz numbers with all preimages of 0 and ∞ being marked (which is equivalent to absence of any markings at all), and this way we get a rather simple formula for their generating functions. 6.2. Specialization φ(θ) = 1 + θ. Specialization φ(θ) = 1 + θ or ψ(θ) = log(φ(θ)) = log(1 + θ) corresponds to enumeration of maps/bicolored maps, see discussion in Sect. 4.2. In the case of just maps we have further specialization s = (0, 1, 0, . . . ) i.e. s k = δ k,2 . 
One of the ways to write the equation of spectral curve for this case that can be found in the literature (see, for example, [CEO06,Eyn16]) is The generating series enumerating maps is the power expansion of the corresponding correlator functions in the local coordinate X = x −1 at the point z = 0 on the spectral curve. In this case, that is for φ(θ) = 1 + θ, s k = δ 2,k the above equations produce the same spectral curve as the one introduced in Sect. 4.1 with the identifications It is an exercise to check that all requirements of Definition 4.1 are satisfied. Remark that with this identification we have (182) Θ dX X = y dx + 1 x − x dx = dH We conclude that Θ dX X , y dx, and dH In the case of bicolored maps the parameters s k are chosen arbitrarily. The equation of the spectral curve for this case can be found in [Eyn16]. With a small change of notation it can be formulated by saying that x = 1 X and y = 1 Y are Laurent polynomials of the form We see that these conditions become equivalent to those of Definition 4.1 if we set x y = Then, in the notations of Theorem 3.9, we have (189) e −u Θ W X 1,0 (u, X) = 1 u + u 2 W X,(0) 0,2 (X, X) − u 24 + u 2 24 (X∂ X ) 2 Θ 2 + O( 4 ). The first operator has poles at z = ±s −1 . These poles are zeroes of the form dX X and they converge to infinity as the (s, t)-parameters tend to zero. The second operator has poles at z = ±t. These poles are zeroes of the form dY Y and they converge to zero as the (s, t)parameters tend to zero. This observation allows one to single out the summands on the
2022-06-30T01:15:58.473Z
2022-06-29T00:00:00.000
{ "year": 2022, "sha1": "76b87dbec34b9517615fbc1ee46838e8e5fb5fda", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "76b87dbec34b9517615fbc1ee46838e8e5fb5fda", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
51911737
pes2o/s2orc
v3-fos-license
Demographic characteristics of blood and blood components transfusion recipients and pattern of blood utilization in a tertiary health institution in southern Nigeria Background An insight into the utilization pattern helps in future planning of blood drive. This study was conducted to describe the demographic characteristics of the transfusion recipients and pattern of blood and blood product utilization in Nigeria. Methods Blood bank registers of University of Calabar Teaching Hospital (UCTH) Calabar were analysed for a 12 month period. Number of blood units requested, number of units issued, Cross-match to transfusion ratio (C/T), age, gender, blood group, blood components received, patients ward and clinical diagnosis were computed. Diagnoses were grouped into broad categories according to the disease headings of International Classification of Diseases (ICD-10). Results Majority of the 2336 transfusion recipients studied were females (69.09%) and are in the reproductive age group; 15–49 years (75.23%). The median age of the recipients was 35 years (range, 0–89). Most of the recipients (n = 1636; 70.04%) received whole blood transfusion. Majority (94.46%) of the cross–matched units were issued giving C/T ratio of 1.06. The common blood group type was O Rhesus positive (62.63%). Obstetrics and Gynecology had the highest blood requisition (41.40%). The majority of the patients were diagnosed with conditions related to pregnancy and childbirth (38.70%), conditions originating in prenatal period (14.38%). The age range of 25–54 years had the highest blood transfusion requests (n = 501; 51.07%), of these, females were majority (n = 390;77.84%). Conclusions Our study recorded mostly young patients who received mostly whole blood. Most of the patients in the reproductive age group received transfusion for pregnancy and child-birth related cases. Background Blood transfusion plays important role in medical and surgical practice [1]. In order to achieve these, critical review and continuous evaluation of the use of blood and its components becomes essential [2]. These entails studying the pattern of blood components use, the clinical conditions and wards requiring blood transfusion, the risks associated with blood transfusion and the demographic characteristics of the blood transfusion recipients in a population. Evaluation of blood requisition and utilization is essential in assessing the present and future demands for blood and avoiding unnecessary requests and transfusions [3]. Despite the development of national blood service policy [4], most medical facilities in Nigeria still find it difficult in establishing viable and efficient blood banking system [5]. On this premise, it becomes necessary to ensure judicious utilization of this scarce commodity. Data on blood utilization is helpful in resource limited settings in which there is always competing needs for scarce resources [6]. Information on blood utilization will assist in establishing clinical practice guidelines, strategizing on new donor recruitment, streamlining resources for the therapeutic benefit of the patient [3,7] and conducting cost effective analysis [8]. Various studies have shown variable distribution in demand for blood and its components [9,10], but none has looked into the variability in the blood use based on diseased classification (1CD-10) as well as blood group type of the recipients. 
Hence, this study sets off to establish local use patterns of blood and blood products to aid in effective management of patient needs. Study setting This study was conducted in the blood bank unit of the Hematology Department of University of Calabar Teaching Hospital, Calabar, Cross River State, Nigeria, the only tertiary health institution in the state. Study design This study employed retrospective analysis of blood transfusion recipients' data covering all blood and blood components transfused within the period from March 2016 to February 2017. Data collection Data were collected retrospectively from the register of the blood bank for the 12-month period from March 2016 to February 2017 and covered all blood and blood components recorded in the blood bank during this period. Cross-match and issue registers were accessed to retrieve the required information such as gender, age, blood group, product requested, ward and clinical diagnosis. Statistical analysis The data collected were analyzed using SPSS version 20 software (IBM Corp., Armonk, NY). Frequencies and percentages were used to summarize categorical demographic and clinical variables. Results A total of 2473 units were requested within the study period, consisting of 1770 whole blood, 468 packed cells and 235 plasma. About 94.46% (2336) of the cross-matched requests were issued, consisting of 1636 (70.04%) whole blood, 467 (19.99%) packed cells and 233 (9.97%) plasma, resulting in a cross-match to transfusion (C/T) ratio of 1.06 (Table 1). Most transfusion recipients were female (1614; 69.09%), of whom 475 (29.43%) were in the reproductive age group (15-49 years). Approximately 20% of the transfusion recipients were under the age of 15 while 7% were at least 65 years (Table 2). The most common blood group type observed in the blood/blood component recipients was O Rhesus Positive (1463; 62.63%) while the least common was AB Rhesus Negative (0; 0%) (Table 3). Grouping the transfusion recipients into four broad clinical categories showed the highest blood requisition in Obstetrics and Gynecology (n = 967; 41.40%) and the lowest in Pediatrics (n = 178; 7.62%) (Table 6). Further stratification of blood components showed that whole blood was utilized more (n = 947; 57.89%) in obstetrics and gynecology, while packed cells and plasma were utilized more (n = 374; 80.09% and n = 188; 80.69%, respectively) in medicine (Table 7). Discussion This study provided information on the pattern of blood and blood components utilization and the demographic characteristics of blood transfusion recipients in University of Calabar Teaching Hospital, Nigeria. The study comprised a largely younger cohort of transfusion recipients. This observation is similar to the report of a study in Zimbabwe [6] but in contrast with studies reported from developed countries, in which the majority of transfusion recipients were above the age of 60 years [8,10,11]. This low median age reflects the age structure of the Nigerian population, which is made up mainly of young people, with only 3.12% being above the age of 65 years [12]. The life expectancy at birth for Nigeria is currently estimated at 54 years whereas the global average is 70 years [13]. Developed countries are mainly characterized by ageing populations owing to higher mean life expectancies. This study recorded a greater number of female transfusion recipients.
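As a quick cross-check, the headline utilization figures above can be recomputed directly from the reported unit counts. The short script below is purely illustrative (the original analysis was carried out in SPSS) and reproduces the reported percentages and C/T ratio up to rounding:

```python
# Recompute the headline blood utilization figures from the reported unit counts.
# Illustrative only; the study's analysis was performed in SPSS version 20.

requested = {"whole blood": 1770, "packed cells": 468, "plasma": 235}
issued = {"whole blood": 1636, "packed cells": 467, "plasma": 233}

total_requested = sum(requested.values())  # 2473 units cross-matched
total_issued = sum(issued.values())        # 2336 units issued

issue_rate = 100 * total_issued / total_requested  # ~94.46% of cross-matched units issued
ct_ratio = total_requested / total_issued          # cross-match to transfusion (C/T) ratio ~1.06

print(f"Issued {total_issued}/{total_requested} units ({issue_rate:.2f}%), C/T ratio {ct_ratio:.2f}")
for component, n in issued.items():
    print(f"  {component}: {n} units ({100 * n / total_issued:.2f}% of issued units)")
```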
Blood transfusion recipients in sub-Saharan Africa are mostly children in malaria endemic areas, and women of childbearing age due to complications of labour [14,15]. Women, especially the childbearing age group (15-49 years) received the majority of the blood and blood components transfused. This observation is in consonance with findings in other countries in sub-Saharan Africa where women receive more blood for pregnancy-related complications consequent to intra-partum and post-partum hemorrhage [15,16]. In contrast, studies from developed countries reported that more men than women receive blood transfusion [8,17]. This may be attributed to advanced health care services which reduced the associated complications of child bearing requiring transfusion [6]. Majority of the transfusion recipients in this study (71.57%) received whole blood transfusion. This is consistent with earlier study in Jigawa Nigeria [18] which reported 87.3%. This is a reflection of common practice of requesting for whole blood in resource limited settings owing to non-availability of facilities to practice component separation. In standard practice, whole blood is only issued for transfusion following cases of massive hemorrhages and exchange transfusion. This study recorded an average C/T ration of 1.06 which is an indication of effective and efficient utilization of blood and blood products. This finding is similar to 0.9 reported in Ibadan Nigeria [19] but lower than 2.2 reported in Ibadan, Nigeria [20]. The observed differences may be due to the varying levels of availability of blood and indications for blood transfusion as judged by the requesting physician [19,20]. The top six diagnoses for which patients received blood transfusion in this study were pregnancy and childbirth, conditions originating in perinatal period, diseases of the genitourinary system, diseases of blood and blood forming organs, neoplasm and injury and poison. This finding is similar to earlier report in Zimbabwe [6] with conditions originating in perinatal period being replaced by infections and parasitic diseases. However, other studies in Nigeria did not classify diagnosis according to ICD-10 making direct comparisons with the findings of this study impossible. Studies from non-African decent reported neoplasms, injury and poison, digestive system diseases and circulatory system diseases as the main diagnosis associated with transfusion [21,22]. This strongly portrays that blood utilization pattern vary significantly within regions and according to practice as well as patients clinical findings. More so, diseases burden, level of organization and advancement of healthcare in the different settings also contribute to the significant differences in blood utilization [6]. The obstetrics and Gynecology ward had the highest blood requisition (41.40%). This finding is similar to previous reports [18,23]. This observation may be related to the fact that most obstetric and gynecological events may be bleeding related. Peri-partum hemorrhage has been reported as common indication for blood transfusion in obstetric events [24]. More so, the fact that majority of the subjects were females of child bearing age contributed to this outlying peak. Further classification of the blood transfusion recipients based on the four broad classification showed obstetrics still had the highest requisition (41.40%) while pediatrics had the least (7.62%). This finding is similar to previous report by Musa et al. 
[9] who reported 42.79 and 11.67%, respectively, in a study in Zaria, Nigeria. However, their study used six broad classifications, splitting some areas of internal medicine into trauma and emergency. Whole blood was mostly used by obstetrics and gynecology while packed cells and plasma were mostly used in medicine. Ideally, blood is used most effectively by processing it into components such as red cell concentrates, platelet concentrates, plasma (fresh frozen plasma) and cryoprecipitate [25], but the lack of facilities for component separation, as in our institution, makes this difficult. Published guidelines based on "expert opinion" recommend transfusion of plasma for the following clinical indications: active bleeding in the setting of multiple coagulation factor deficiencies (massive transfusions, disseminated intravascular coagulation); emergency reversal of warfarin in patients with active bleeding in settings where prothrombin complex concentrate with an adequate level of factor VII is not available; and for use as replacement when performing plasma exchange [26][27][28][29][30]. Specifically, these are seen in burns [31,32], oncology [33], obstetric events [34][35][36], and more. Packed red cells are indicated in conditions requiring prevention of anemia-related tissue hypoxia [37]. Indications for packed cell transfusion include acute sickle cell crisis (for prevention of stroke) and acute blood loss greater than 1500 mL or 30% of blood volume [38]. The distribution of ABO blood groups among blood recipients in this study is consistent with that reported in the donor population in Nigeria [39]. Acute shortage of blood of a specific group is a common event in Nigerian hospitals, making an understanding of the distribution of blood groups among transfusion recipients important. This information is essential in planning blood drives as well as the distribution of blood and blood components, subsequently ensuring that patients receive blood matching their ABO blood group and Rhesus type [6]. Of the 33 blood group systems representing over 300 antigens listed by the International Society of Blood Transfusion, ABO and Rhesus are the two most clinically important blood groups [40,41]. This study has a number of potential limitations. It was carried out in a single major tertiary health institution in Calabar, Nigeria, so extrapolation of its findings to other settings should be made with caution. Conclusion This study recorded a greater number of women receiving blood and blood product transfusion for conditions associated with pregnancy and childbirth. The blood recipients were mostly young patients of the reproductive age group. We found that the most common indications for blood transfusion based on ICD-10 were pregnancy and childbirth, conditions originating in the perinatal period, diseases of the genitourinary system and diseases of the blood and blood-forming organs. Whole blood was the major blood component recorded in this study. This shows a lag in healthcare improvement and unnecessary waste of blood. Although this study was based on a single blood bank, the findings provide an insight into the characteristics of blood transfusion recipients and will aid in future planning of better blood and blood product utilization. Availability of data and material The datasets supporting the findings of this study are available from the corresponding author on reasonable request. Funding No external funding was received for this study.
2018-08-01T14:00:56.661Z
2018-07-31T00:00:00.000
{ "year": 2018, "sha1": "b45fcc5719aa6dfd50f98b689c984b95f0db64f4", "oa_license": "CCBY", "oa_url": "https://bmchematol.biomedcentral.com/track/pdf/10.1186/s12878-018-0112-5", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b45fcc5719aa6dfd50f98b689c984b95f0db64f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
151764798
pes2o/s2orc
v3-fos-license
An ethnography of energy demand and working from home: Exploring the a ff ective dimensions of social practice in the United Kingdom The practice of working from home has become widespread in developed countries, and the numbers of regular home workers are steadily increasing. There are potentially positive implications for energy consumption associated with home working, but these depend on myriad variables. This qualitative study, based on interviews with regular home workers, provides a more in-depth perspective on how and why energy is used compared with quantitative models of household consumption. Ethnographic research data is analysed using insights from practice theory. Placing the practice at the heart of analysis, it explores meanings, materials and competences involved in home working, and attends to the a ff ective experiences of practitioners. Considering working from home as an integrative practice, it explores how dispersed practices are incorporated into individual performances, bringing about a ff ective satisfaction. Findings show that the practice of working from home is characterised by themes of comfort, control and fl exibility, with implications for energy demand. It is argued that the synthesis of practice theory and a ff ect can provide valuable insights for energy research. The paper discusses the implications for demand reduction, demand shifting and ‘ smart ’ controls, with reference to the role of employers, researchers, policy makers and home workers themselves. Introduction The practice of working from home has become widespread throughout the 'knowledge economy' of Europe and North America and is steadily increasing in both developed and developing countries, led by the expansion of internet access [1][2][3]. In the UK, more than 25% (7.7 million) of those in employment reported in 2014 that they sometimes work from home as part of their main job, while 4.2 million (13.9%) reported their home as their main place of work, an increase of 2.8 percentage points since 1998 [4,5]. Motivations for individuals working from home include eradicating the commute, facilitating flexible child-care arrangements, aiding concentration and mental health benefits [6,7], while opportunities for employers include increased productivity and energy savings [8]. Achieving reductions in energy consumption in domestic and commercial buildings is a universal priority for policy makers. Focusing on the practice of home working represents an opportunity in this regard, with potential savings of up to 1.4 tCO 2 e per year per home-based employee, according to UK estimates [8]. Further, reducing energy demand from small and medium sized enterprises (SMEs) is a significant policy challenge: with nearly 60% of all SMEs based in domestic premises, achieving energy reductions through supporting efficient home working practice offers a way of addressing this sector of the economy while potentially avoiding politically unpalatable 'redtape' for small enterprises. However, data on energy consumption associated with the practice is subject to uncertainty due to the difficulty of separating consumption data from other domestic practices, being outside the scope of corporate energy portfolios, and with a significant proportion of home-based businesses undeclared due to legal and tax concerns and therefore 'invisible' to public authorities [9]. Efforts to quantify the environmental impacts of home working have been made in technical and transport focused literature [10][11][12][13][14]. 
On the one hand, when substituting for a commute, working from home can represent significant energy and emissions savings [14]. On the other, the heating and lighting of a domestic space in addition to an unused desk at work, or the use of technologies such as cloud computing services can lead to increased energy consumption and environmental impact [8,10]. In attempting to calculate a net balance of energy demand, key factors include the mode, length and energy intensity of the commute; the ability of employers to manage desk-space flexibly; and the technologies and practices involved in heating and lighting the home space [13]. Whilst many empirical studies find net energy and emissions reductions associated with home working [10,[12][13][14], considerable methodological difficul-ties and the highly contingent nature of the practice prevent generalisations about its benefits [15]. Several calls have been made in this journal for research which tries to understand the messy dynamics of energy demand within the home, seeking greater analytical depth than quantitative models permit [16,17]. Stern [18] points out that as actors in the energy system, householders have multiple roles, requiring a range of research approaches. Home workers span several of Stern's categories, as domestic consumers of energy, participants of institutions and as practitioners whose activities have implications for the wider energy system. Seeking to capture their hybrid practice, this paper develops an account based on ethnographic data, placing home working at the 'heart of enquiry' [19]. The three-element model of practice theory provides a theoretical framework for investigating elements of working from home and the role of energy [20,21]. The analysis also draws on discourses of affect to provide insights into the experiential nature of the practice [22][23][24]. This approach offers a number of valuable insights. First, it encourages an in-depth look at how constituent elements of practice circulate through repetitive performances to form and continually configure practice entities. Second, it expands the field of enquiry to account for the interaction between bundles of practices, highlighting the 'affectively satisfying' effects of incorporating 'dispersed practices' into home working [25]. This allows characteristic themes associated with working from home to be identified, providing insights into the energy consuming behaviours of home workers. These themes emerge through findings which describe achieving comfort when working from home, the exertion of control over material assemblages, and the performance of flexibility through the spacetimes of practice. Finally, these insights have ramifications for efforts to reduce demand and encourage demand-side flexibility: challenges for energy systems in transition throughout the world. In accordance with this expansive approach, this paper is guided by two broad research questions: 1) what are the characteristics of working from home as a practice? 2) what are the implications for energy demand? The rest of the paper is structured in five parts. The next section describes the insights provided by practice theory as it has been applied in studies of energy consumption in the domestic setting. The concept of affect is introduced and links are drawn with a practice perspective. Section 3 outlines the methods used for this study and its analytical approach alongside a discussion of the challenges of researching practices. 
In Section 4, findings are structured according to characteristic themes which emerge from empirical data. The first part of the discussion in Section 5 develops insights from the theoretical approach employed, before outlining implications for energy systems and identifying possible areas for further research. The conclusion argues that practice theory and affect together offer a fruitful theoretical framework, with implications for researchers, employers and policy makers. A practice theory perspective on working from home Practice theory has its roots in philosophy [26], but has been widely adopted by sociologists [21] in seeking to understand how widespread, everyday practices are established and maintained. The practice perspective decentres the individual as the unit for social analysis [27], instead developing a model of 'distributed agency' [28] which highlights the ways in which 'elements' such as meanings, materials, skills, technologies, rules and embodied knowledge configure everyday social practices [29,30]. With a shift in emphasis away from the individual as the principal agent of consumption [31], this theoretical framework has been widely applied within social scientific studies of energy demand [19,21,29]. Practice theory expands the field of enquiry beyond a narrow emphasis on individual choice, identifying the role of physical elements such as technology, materials and building design in mediating everyday energy-consuming practices [19,21]. It also encourages research into how social and cultural meanings and forms of embodied and intellectual knowledge are reproduced through everyday activity. Practice theory has been widely employed in studies of energy consumption in the household, for example in efforts to understand quantitative observations regarding the diversity of energy demand patterns within even identical houses [29]. Empirical studies since the 'practice turn' have analysed elements of practice within the household including lighting [32]; the use of appliances, technologies and interfaces [33,34]; and thermal comfort [29,35,36]. A small number of studies in energy research have examined workplace practices using practice theory, identifying insights into energy consumption by looking beyond the user as the unit of analysis. Garabuau-Moussaoui [37] argues that building 'occupants' are constructed as actors within a 'technology script' by a combination of corporate, architectural and social logics, finding that an attention to practices helps to uncover the material, social and ideological elements of comfort in the workplace. Similarly, whereas studies of organisational energy demand conventionally focus on the corporate entity and individual users, Janda [38] argues for a 'building communities' approach, identifying the potential for change in energy management practices through a focus on social, rather than technical, potential. Also focusing on practices rather than organisational units, Powells et al. [39] find potential for active network management amongst SMEs. Despite its growing incidence and significance for energy demand, however, no studies since the 'practice turn' have addressed the practice of working from home. This paper builds on three theoretical constructs developed in the literature to guide analysis of empirical data. Firstly, it follows a number of recent publications adopting the 'three element model' as a means of clustering elements of practice [20,40].
Led by the work of Elizabeth Shove, this approach groups elements of practice into meanings, materials and competences. These categories assist with analysis of qualitative research findings, helping to uncover the complex characteristics of practice [20,21]. Secondly, it draws on Schatzki's account which describes the field of practice in two dimensions [41]. In the 'organisational dimension', the constellation of elements constitute the 'practice-as-entity': a relational network existing in the realm of potential. This constellation is 'integrated' through performance, which takes place in the 'activity dimension'. Practice entities become recursively reconfigured through repetitive performance, as new elements are recruited and others discarded. These two dimensions help to highlight how practices are influenced by spatial and temporal settings, as performances are conducted in different material environments and interwoven with other practices. This paper explores how the characteristics of working practices change as work is brought into the domestic setting. It follows a number of studies which have sought to identify the characteristics and changing dynamics of practice, for example in the development of digital photography [42]; the spread of Nordic walking [20], or everyday mobility [43]. A third theoretical construct used in this paper is Schatzki's distinction between integrative and dispersed practices [26]. Dispersed practices are small scale activities such as following rules [31], tinkering [36] or consuming energy through appliances' standby mode [33]. They can be conducted without context and incorporated into more complex social practices, taking on different meanings. Integrative practices are broader activities including business practices, shopping or cooking. In Schatzki's account [26], these practices have their own 'teleo-affective structure', which is to say they hold meaning and significance both for the performers of practice and in the wider social world. This distinction has been used effectively by Cass and Faulconbridge [43] to delve into the integrative practice of mobility, into which a variety of dispersed practices such as listening to music and navigating are incorporated. The authors argue that these purposive incorporations produce 'affective satisfaction' in otherwise mundane patterns of mobility, such as commuting. This paper applies this construct to the practice of working from home: an integrative practice increasingly performed, reproduced and reconfigured in millions of homes throughout the world. Drawing on a sample of UK home workers, it focuses on the incorporation of the dispersed practices of achieving comfort, controlling material elements, and expressing flexibility. This bundling is shown to animate the practice of working from home, and produce affective satisfaction. Cass and Faulconbridge's paper is one of relatively few in the energy research literature that directly seek to address the affective elements of practice. In studies of household consumption this is somewhat surprising, given the undeniable impact of building design and architecture on inhabitants' affective sensibilities. Affect is a concept with origins in psychology [44], but has been adopted and developed in the social sciences [23]. 
Geography has led the way in a discourse which also draws on non-representational theory in efforts to highlight 'the ways in which the world is emergent from a range of spatial processes whose power is not dependent upon their crossing a threshold of contemplative cognition' [45]. For studies of comfort in the home, Andersons' [24] account of 'affective atmospheres' provides rich theoretical insights, describing how affect emanates from the circulation of bodies and elements of the material environment. Vannini and Taggart's account of domestic warmth [46] draws on these principles to explore the precognitive, embodied characteristics of thermal comfort. The authors demonstrate how, in off-grid homes in Canada, active involvement with the material technologies of heating intensifies inhabitants' affective sensitivity towards thermal conditions in the home environment. There are clear parallels between geographers' use of affect and the concept of distributed agency developed in ontological accounts of practice theory [28]. Just as practice theory decentres the individual from the focus of analysis, affective literature attends to the precognitive, embodied and transpersonal dimensions of experience [23]. This paper explores the synergies between both theoretical perspectives by attending to the affective dimensions of the practice of working from home, finding implications for energy demand. For example, it responds to Ellsworth-Krebs and colleagues' call in this journal for further empirical explorations of the links between home and comfort [17]. The next section sets out the methods used for investigating the practice of working from home, and discusses the challenges of conducting research on affect. Methods The empirical study was designed according to Bryman's 'steps in qualitative research' [47]. A mix of qualitative methods were used, including semi-structured interviews with 20 UK based home workers, conducted in January-June 2016. Participants were recruited from personal and professional networks and all worked in professional services in Oxfordshire, UK: 10 each in the public and private sectors. Seven were self-employed and another seven were responsible for managing at least one employee. Three participants identified home as their main place of work, with the remainder working from home for at least one day per week. All participants were usually alone when working from home, and where possible interviews were conducted in their homes. In some cases, meetings took place in cafés where participants would bring their work on home working days for a change of scene. Additional data was collected in the form of photographs and personal reflection on several years of regular home working. In placing the practice at the heart of enquiry, interviews focussed on the meanings, materials and competences involved in working from home, with questions about energy consumption woven into conversations about home, work, boundaries and work-life balance. Energy related questions focused on how home workers achieved comfort and used electrical appliances during their working day. Interviews were recorded and notes taken to draw out the main points from the discussion. Following the interviews, notes were added to by revisiting recordings and transcribing key passages. Data were compiled and analysed in a spreadsheet, where themes relating to the two research questions began to emerge. 
Participants were recruited to the study until themes and even specific phrases began to recur, reaching what Bryman [47] refers to as 'theoretical saturation'. Respondent validation was then sought through additional interviews with a subsample of five participants, in which emergent themes associated with home working were discussed. The aim of this methodology was not to establish a representative sample of home workers, nor to draw universal conclusions about energy consumption patterns in the home: the sample is both too small and non-random. Nonetheless, the sample focuses on geographies and sectors where home working is prevalent. The highest proportion of home workers in the UK work in professional services, and in 'affluent towns and cities and their rural hinterlands in Southern England' [9]. What follows is abductive analysis [48], informed by home workers' reflections on their practice, subsequent interpretation of their narratives and participation as a regular home worker. It follows Flyvberg's call for social scientific research to 'drop the fruitless efforts to emulate natural science's success in producing cumulative and predictive theory', and instead to 'contribute to society's practical rationality in elucidating where we are, where we want to go, and what is desirable' [49]. Such guidance is pertinent for the practice of working from home, given the optimism which it inspires in those aiming to reduce the environmental impact of working practices [8,50]. This paper explores the 'teleo-affective' structuring of working from home by analysing how the integrative practice incorporates other dispersed practices in multiple performances. However, seeking to account for affective dimensions of practice presents a methodological challenge, as the realm of affect is said to exist both prior to and beneath the 'sociolinguistic fixing' of conscious reflection [51]. The reliability of asking interviewees to linguistically reflect on aspects of their practice is a subject of debate, particularly in geography, where non-representational theory and affective approaches remain controversial in this strong empirical tradition [49,50,see also 40,and 51]. Adopters of non-representational theory have used a variety of alternative methods to access affective registers, including dance [45], images [55] and 'sensuous ethnography' [46]. Conversely, both Bonnington [56] and Hitchings [52] make compelling arguments for the role of reflexivity and the interview as a source of empirical data for research on practices. In one of his more recent works, Schatzki has sought to clarify the 'teleo-affective' dimension of practice, arguing that affective structures can be 'to varying degrees allied with normativized emotions and even moods' [57]. Although language and reflexivity cannot completely account for the affective dimensions of practice, these contributions from Hitchings, Bonnington and Schatzki indicate the value of personal narrative and observation as sources of research data. Seeking to capture the affective sensibilities of home workers, this paper analyses interview data not as a source of objective insight, but as an artefact of narrative sensemaking carried out in a staged setting [48]. Sitting down for an hour and attempting to explain to a researcher the variety of meanings, materials and skills required to conduct one's work from home is surely an unusual experience. 
In seeking language, insights and narratives to reflect on their practice and the role of energy, interviewees are required to step outside the normal doings and sayings of the practice: a process which can both be illuminative and transformative for the 'carrier' of practice [27]. Interviews are therefore considered performative occurrences in which the researcher is crucial: in framing the discussion, interpreting meaning and representing results. Follow-up interviews also offered a sub-sample of participants a chance to reflect again on their practice and how it might have changed since the first interview, as well as on the nature of the interview process itself. How did they feel about describing the nature of this solitary, private practice, perhaps for the first time? Had their accounts surprised them at all? The findings that follow should be considered as products of participatory research, in which reflections on home working emerged through a process of collaborative discov-ery. The analysis draws on Cass and Faulconbridge's [43] notion of 'affective satisfaction' as a means of capturing teleo-affective fulfilment expressed by home workers. In the data cited below, the identities of research participants are anonymised. Characteristics of an integrative practice Three themes emerge from interviews with home workers, based on incorporating dispersed practices into working from home. These are comfort, control and flexibility; themes which may be seen as characteristics of the teleo-affective structure of working from home. Each is presented in turn, including a summary of related literature and insights from empirical data. The meanings, materials and competences associated with these themes are discussed and summarised in Table 1. Comfort Thermal comfort is a subject of interest across disciplines, with this journal demonstrating the breadth of approaches even within social sciences, through publications applying psychological approaches [58], practice theory [36,59] actor-network theory [60], building models [61] and behaviour change theory [62] to studies of household heating and cooling. Beyond the social sciences, in literatures concerned with building design and energy engineering, comfort has become technically specified, as 'optimal' conditions are defined in relation to human physiology and embedded in building energy management [e.g. American Society of Heating,Refrigerating and Air Conditioning Engineers (ASHRAE);,63]. In contrast to relatively fixed ideas of comfort, the notion of 'adaptive comfort' has been developed to highlight individuals' ability to achieve comfort in flexible ways by making psychological, physiological and behavioural adjustments [64,65]. Social scientists employing practice theory have expanded on technical and behavioural discourses on comfort, illustrating the variety of materials and technologies [30,36]; cultural norms [35]; forms of knowledge [29]; and power relations [66] involved in the everyday practices that formulate comfort. How comfort relates to practices is an area for debate [67]. Can it be considered a practice itself, an element, or an outcome of other practices? The findings below and subsequent discussion consider it as an attribute of practice, emerging from the incorporation of dispersed practices in multiple performances. For working from home, these include tinkering with heating controls and materials in the home, as well as bodily movement and clothing. 
All participants reported having high levels of control over the temperature when working from home, citing the unconstrained ability to adjust thermostats, radiator controls and timers in their own homes. Despite this, 14 of 17 occasional home workers reported maintaining lower temperatures when working from home when compared with their normal place of work, while 17 of 20 also tolerated cooler temperatures when working at home compared with other times spent in their homes. When reflecting on this, some participants were surprised to realise this was something they did. Between the first and second interview, for example, Peter had become increasingly aware of his tolerance of cold when working from home, having put this into words for the first time previously. He had subsequently adjusted his practice to ensure he had all the clothing and materials he needed to prevent him becoming gradually, and imperceptibly, more uncomfortable. When attempting to explain their motivation for tolerating colder temperatures, the most common explanations cited a desire to conserve resources; not having to respond to the needs of others; staying alert for work and reducing environmental impact. Eighteen respondents reported using clothing and blankets to offset the need for heating when working from home, including Mick who wore 'big fluffy socks and a hoody' and Emma who found satisfaction wearing her 'leopard print onesie'. Materials such as hot-water bottles, hot drinks and microwaveable wheat sacks were utilized for comfort by several participants, whilst Anne was one of several interviewees who reported making use of bodily movement: 'My main cure is to move… I have a small little trampoline in the garden so if the going gets really rough I'll go out and bounce on that! Then it feels warmer when you come in. You know, I've never told anyone all these secrets!' Home workers seemed to relish the opportunity to make use of materials, technologies and bodily movement in ways that would be inappropriate in workplace environments. Practising adaptive comfort presented an apparently satisfying challenge, requiring forms of embodied competence and a variety of materials in order to avoid the use of central heating. Anne's revealing of 'secrets' is an example of how interviews provide a unique platform for reflections on practice, as well as highlighting the intimate nature of comfort practices in the home. While all interviewees conducted the same broad set of work tasks in both home and office environments, it was the incorporation of different dispersed practices into working practices which starkly delineated their practice in the two settings. In their workplace environments, Jade and Dorothy, two senior managers in a large organisation, spoke of the importance of projecting authority and how draining it could be to 'push certain agendas'. Clothing, the use of language and the embodiment of authority through physical competences such as posture are all important in these environments. For Dorothy, home working offered the opportunity to 'reconcile'. These illustrations of dispersed practices show that affective sensibilities are central to the experience of home working, and that the notion of comfort was constituted by more than a physiological response to thermal conditions. Isabelle, for example, described how 'light… in some way it sort of compensates [for heat]. Looking out over beautiful views… part of being warm is about a feeling of well-being'.
This quotation may be considered a 'non-rational' account of affective experience. One might typically think of warmth or comfort as a prerequisite for well-being, but put the other way around, Isabelle reveals that the word warmth holds significance for her beyond physiological sensation. Her account was mirrored by other interviewees who had trouble finding words to describe how the concepts of home and comfort were linked. When asked about the meanings of home, more than half of the participants used words like 'comfy', 'cosy' and 'comfort', however, long pauses, expressions of contemplation and sometimes even discomfort were evident when attempting to elaborate on this relationship. If we understand working from home as a particular iteration of wider working practice, then it becomes clear that its distinctive characteristics emerge from the incorporation of dispersed practices in the activity dimension. Affective satisfaction is generated through the creative use of materials such as clothing, mugs of tea and hot water bottles, movement and avoiding the use of central heating, incorporated into the performance of home working. The next section describes how a sense of control is a central characteristic of working from home, and how the process of incorporating dispersed practices can itself bring about affective satisfaction. Control In research on adaptive comfort, personal control has been shown to increase tolerance of a wider range of thermal conditions [68]. The corollary of this is that energy savings could be made through the provision of greater personal environmental controls in workplaces [69]. All but one participant reported having greater control over thermal conditions when working from home as opposed to workplace environments. This was both due to having better access to technologies such as thermostats and radiators in the home, as well as being unconstrained by the perception of others' needs for comfort when cohabiting space. In some cases respondents reported colleagues actively expressing discomfort, while others cited co-workers' needs based on gender, body-mass-index or ethnicity: 'Sharing with three women, they like it on full tilt.' (James) 'Some of those guys, they're really big… they're actively really hot… you can tell they are.' (Dorothy) 'We have an Italian contingent that have been known to wear their coats full time.' (Michael) Temperature was reported as a common source of tension in shared environments. Josh indicated that this had been a topic of much debate at his workplace, where 'no one seems to agree on anything about heat', leading to senior management intervening on the use of thermostats: 'the policy is we don't touch them'. In Michael's ethnically diverse workplace, culturally mediated notions of comfort had led to 'thermostat wars'. In all but one case, days spent working from home offered participants a rare opportunity for controlling the temperature of their working environment. Whereas most interviewees usually worked from home alone and tolerated lower temperatures when doing so, several cited the occasions when family members or housemates were home as times when they might put the central heating on. Rita's account corroborated the notion of 'social loading' [70]: 'If I'm alone in the house, I try to avoid turning on the main heat, just because… I'm alone. But if my housemate is going to stay at home then I'll probably turn on the heat.' 
The question 'to what extent do you have control over the temperature when working from home', prompted some surprising responses. Describing the means of control, technologies such as thermostats, programmable timers and radiator valves were cited as key elements of this dispersed practice. However, more surprisingly around half of the sample responded to this question by choosing to talk about waste and inefficiency: 'It's a very old Victorian terrace, so it's probably leakier than it should be.' (Anne) 'Bizarrely, even though it's quite a new flat, it's not particularly good at saving heat… the windows… even though its double glazing, it's not great double glazing, and you get a bit of a draft under the door.' (James) 'There must have been about 15 metres of copper piping… it was immense, just piping everywhere.' (Dorothy) It is clear from these responses that the inefficiency of building materials and meanings of waste were intertwined with home workers' perceptions of temperature control. Different forms of competence were reported as elements of the dispersed practices of comfort, including forms of intellectual knowledge and embodied know-how [33]. Dorothy for example, took a real interest in the technologies of her domestic heating system and had recently upgraded her boiler: 'Now I feel like I have a lot [of control] because I spent a lot of time training up on heating systems.' (Dorothy) It is clear that Dorothy's sense of control is made up of more than an ability to set the temperature in her home, but encapsulates a sense of intellectual and affective satisfaction gained from the learning process, getting to know the materials and technologies in her home. As well as the intellectual knowledge described by Dorothy, interviewees recruited material objects using forms of know-how that Royston terms 'tinkering' and 'bricolage' [36]. 'You can tweak a Labrador [to warm your feet]' (Isabelle) 'I notice that I get cold fingers, so I find myself going downstairs for a hot drink literally to warm up my cold fingers again… its more targeted'. (Peter) Isabelle and Peter's descriptions of targeted comfort were accompanied by a visible sense of glee relating to their inventive means. As was Liza's description of working on the floor in front of the fire, using its periodic demand for new logs as moments for reflecting on her work and refocusing. During a tour of their homes, Emma and Martha showed pride in having created homely, brightly decorated homeoffices, choosing the location of their desk to maximise passive solar gain. These examples illustrate the distribution of agency in practice, shared between bodies, materials and influenced by the logics of fire or the weather. As a form of dispersed practice, tinkering with the material assemblages of the home space to establish comfort brought about affective satisfaction. The perception of ownership was important for some participants in producing affective satisfaction, giving homeowners like Nathan the power and freedom to curate their environment: 'I've upgraded it slowly over the years. So I've got central heating, I've got double glazing, (I could do with more insulation), but all of that allows me to control it, and makes working from home a more realistic option. I would feel a lot less comfortable if I was sharing. It would be like perching. Whereas I feel like this is a workspace where I'm in control of it. The energy side is part of that… part of my broader personal situation.' 
It is clear from the examples above that control is a key characteristic of working from home, associated with positive affective sensibilities, and as Nathan says, bound up with the consumption of energy. Focusing on the incorporation of dispersed practices relating to heating, these findings show that thermal conditions are not solely responsible for producing affective satisfaction. Instead, recruiting materials, reducing waste, performing know-how, expanding one's knowledge and upgrading one's own home are expressions of control which can bring satisfaction and even glee. It is the processes of incorporating dispersed practices into home working such as interacting with pets, making hot drinks or even getting back into bed, which give character and affective satisfaction to the practice. Flexibility Flexibility is a concept gaining increasing attention in studies of energy demand and social practices, in response to challenges raised by the need to decarbonise the electricity grid. In future scenarios where thermal power plants are replaced by large amounts of intermittent renewables, peak demand is predicted to become a significant challenge, requiring flexibility in various forms on both the supply and demand sides of the system [71]. It is unclear to what extent domestic users of electricity will offer flexibility as a service to the wider system, or what the mechanisms for achieving this might be [72]. However, a prerequisite to developing policies to shift domestic demand is an understanding of flexibility in everyday life [73][74][75]. Every home worker interviewed expressed satisfaction with the flexibility afforded to them in their practice. As we have seen so far for comfort and control, one form of flexibility relates to the ability to selectively incorporate dispersed practices and their constituent elements in the performance of integrative practices. However, there are many forms of flexibility [76]. A practice can take on the attribute of flexibility for example, when it affords the opportunity to be performed in different spatial settings, or interwoven with the sequencing and scheduling of other practices [73]. Enabled by computer technology and the internet, company policy and forms of self-discipline, this section explores the spatial and temporal dimensions of flexibility reported by home workers. Many respondents described their motivations for working from home in the context of having a break from routine. For example, several managers cited the demands of always being available and interruptible, or tied up in meetings with little time for themselves: 'In the office, everybody wants a piece of me… Working from home is my sanity time.' (Jade) Home working provides the opportunity to be flexible with participants' time and focus, for instance by changing normal working hours. My sample of home workers would often start later, take a longer lunch break and work into the evening: 'I might have a break and assemble the dinner at half past 4 or 5, then put it in the oven and go back and do some work.' (Isabelle) These forms of flexibility were described alongside reports of blurred boundaries between domestic and working practices with both positive and negative implications. On the one hand, conducting household chores during the working day afforded home workers more time in evenings and on weekends for other activities, while on the other, many struggled to manage the transition from doing work to being at home. 
Managing this divide was reported by many interviewees as a challenge requiring self-discipline. This often took the form of symbolic acts like Dorothy's ritual of shutting the laptop and putting it away in her work bag as a way to demarcate work practices from home. Mick felt it was 'unhealthy' to work at the kitchen table, instead using a spare bedroom whose door he could shut at the end of the working day. Simon even had a sign on his spare room saying 'work', and resisted entering outside of his self-allocated working hours. For many, the challenge was in managing the relationship with computing devices which encroached on 'home time', particularly if they were used for both work and social activities. Jade, for example, felt it was important to remind herself that 'we operate technology, it doesn't operate us'. While flexibility was overwhelmingly described in positive terms, these examples indicate some of the tensions associated with bringing work into the home environment, highlighting the importance of self-discipline as a form of competence. All 20 respondents described interweaving household chores with desk-based work, citing laundry in particular. Loading the washing machine or dryer, or hanging out washing, were tasks well-suited to break-times, sometimes acting as prompts for taking a pause. 'There's usually a day [in the 3 days per week working from home] … where the weather is going to be pretty reasonable… so I would actually go with the weather… it's a bit of a break from sitting in front of the computer.' (Rupert) Living alone in a small flat, Josh uses his home-working day to do the laundry, but finds the washing machine to be distractingly noisy, so typically takes his work to a café for a couple of hours before coming back and hanging up his wet clothes. The examples from Rupert and Josh highlight how flexibility is not simply the ability of the individual to dictate the circumstances of practice, but that the sequencing of bundles of practice can be influenced by appliances, the 'needs' of the house and the weather. They demonstrate the agentive role of material assemblages in giving shape to practices, issuing their own demands for a range of competences, including coordinating schedules and interweaving practices. Three kinds of flexibility emerge from these findings. The first involves the ability to incorporate different dispersed practices into home working, for instance those practices involved in adaptive comfort: targeting warmth, tinkering with materials and conserving energy. In this form of flexibility, different materials, meanings and competences are mobilised in each incorporative practice-as-performance. Second, flexibility is manifest for home workers in having the freedom to choose when and where to work. Home workers are able to transcend the logic of institutional rhythms [74], for instance getting back into bed with a laptop, going to a café, or working into the evening. Finally, flexibility can be seen in the ability to coordinate daily schedules involving multiple integrative practices, for example interweaving work with household chores. Each of these forms of flexibility brings affective satisfaction for home workers. One-sided flexibility? For interviewees who had another regular place of work, over 75% reported that their desk would be unoccupied (but heated and lit) when they were working from home. In these circumstances, the energy used for heating and lighting when home working constitutes additional consumption.
Only one employer was identified as mitigating this effect by providing a flexible 'hot-desking' arrangement, allowing them to downsize and cut leasing costs. The solution in this case was reported with dissatisfaction, as Jade said that the result was a cramped working environment. She also cited the requirement to carefully coordinate the schedules of flexible workers, which was not always done successfully, often requiring her to give up her home working day to attend meetings. While hot desking has clear energy reduction potential, its affective implications are under-researched. This study shows that for the employer, the benefits of flexibility often go unrealised, or present management challenges when energy and cost savings are made. For most interviewees, flexibility was seen as a benefit provided by employers, enabling workers to avoid a stressful commute, or even to compensate for uncompetitive pay. Although flexibility was largely described positively, one participant described the problem of feeling isolated and disconnected from colleagues due to home working. These affective impacts are explored extensively in organisational and psychological literature [77], and have implications for the uptake and sustainability of the practice. Table 1 collates the various elements of practice identified from this qualitative analysis, grouping them into the three characteristic themes. Energy is implicated in each performance of practice as elements are integrated. Discussion Using theoretical tools offered by practice theory, key characteristic themes associated with the teleo-affective structure of working from home have so far been identified, each of which provide affective satisfaction for practitioners. The findings presented above demonstrate how these characteristics are generated in practice, through mechanisms such as incorporating dispersed practices, selecting elements and spatial settings, and interweaving performances into temporal sequences. But what is the value of identifying affective satisfaction in practices, and what does all this mean for energy demand? This discussion tackles these questions in turn, first making four assertions about the value of attending to affect in practices, before exploring implications for contemporary challenges faced by energy systems. Insights from an affective perspective Cultural conventions of comfort have been widely discussed in energy literatures, expanding discourse beyond technical and physiological specifications. However, in discussions of comfort and energy consumption in buildings, the affective dimensions of practice have rarely been analysed in depth [46]. The first assertion builds on affective literature developed in geography, framing comfort as an affective capacity [23]. Understood this way, comfort becomes more than a physiological or even psychological state; an 'atmospheric attunement' [78], produced by the coming together of human and non-human elements in the always-emerging, enveloping 'affective atmosphere' of the home [24]. This theoretical framing holds that comfort is a relational capacity, flowing and radiating from the interactions of materials, memories, smells, doings and sayings associated with the home [23,28]. Data in this study has shown how, by (1) curating elements, (2) interweaving practices and (3) incorporating dispersed practices into working from home, these relations can be orchestrated by practitioners to produce affective satisfaction. 
A second assertion is that an attention to affective sensibilities provides insight into the apparently 'non-rational' behaviours of home workers in relation to energy consumption. Avoiding the use of central heating and achieving comfort by creative means appear to be valuable activities for home workers, carried out to produce affective satisfaction and to define the teleo-affective characteristics of their practice. As such, affect offers a theoretical complement to efforts made by energy researchers using practice theory to move beyond the notion of demand being a result of individual attitudes, behaviour and choice [79]. The third assertion relates to the conduct of research on affective dimensions of practice. If affective satisfaction emanates from the integration of elements in particular spatiotemporal settings [24], then care must be taken in seeking to capture affective sensibilities in the course of interviews. This study has taken heed of discussions on the methodological challenges of researching affect [52,53] by considering the staging of interviews as potentially transformative events in the lives of practice. Evidence for this was found in examples of home workers being surprised by aspects of their own accounts, and making adjustments following our first conversations. Emma, for example, is a regular home worker and part time singer. Her band often experiments with the process of song-writing, with Emma improvising lyrics over backing music. During one such exercise carried out between our first and second interviews, Emma found herself singing about meanings of home and comfort. Having completed the song, she was certain that our discussion two months previous about work, home, comfort and selfmanagement had infiltrated her unconscious, playing out in this affective experiment. Fourthly, putting affect in the analytical scope, researchers are encouraged to identify the mechanisms by which affective sensations are generated. This in turn can help to uncover broader characteristics of practices. Widespread, integrative practices such as work take on different attributes as they are performed in distinct spatiotemporal settings, enrol unique bundles of elements and incorporate dispersed practices. In the case of working from home, comfort, control and flexibility are attributes which help home workers to distinguish the practice from other performances of work, such as those contained in structured, 'thermally monotonous' office environments [66]. Understanding how these attributes become established and the role of energy in the process may even help to identify possibilities for steering future configurations of practice towards lower energy consumption [50]. The next section discusses the implications of this range of insights for energy systems challenges. Implications for energy systems challenges Five implications for the challenges faced by the energy system arise from this paper. These relate to the policy objectives of reducing energy consumption in domestic and commercial buildings and encouraging temporal demand shifting. Firstly, the sample of home workers overwhelmingly reported a tendency to tolerate lower temperatures when working from home compared with office environments or at other times in the home. Data highlighted that home workers were actively willing to achieve comfort in creative ways, experiencing affective satisfaction in differentiating their practice from office based work. 
While adaptive means of achieving comfort represent a potential source of energy reduction, these creative practices are often invisible to academics and policy makers; not being captured in energy models for example. Not only does this lack of information make policy design challenging, there are undoubtedly further difficulties to be encountered when thinking through the kinds of measures that might be involved to stimulate adaptive comfort practices. Despite the prevalent use of materials such as hot water bottles and thermal clothing in achieving comfort when home working, policy makers are likely to be wary of commissioning campaigns to encourage their use [see [80] for a recent example]. Nonetheless, further research verifying these adaptive comfort findings would help to build the case for a more expansive policy paradigm, 'more in proportion with the challenges faced' [50]. A further challenge is that despite evidence of adaptive comfort practices being incorporated into home working, these only bring about net demand reductions if paralleled with flexible energy management practices undertaken by employers. This leads to a second practical implication: the need for employers to develop the capacity to respond to variable occupancy in buildings with flexible heating, lighting and ventilation controls, in order to prevent unnecessary energy services being provided to unoccupied workspaces. Organisational policies are also required to support coordinated schedules and the deployment of technologies such as motion sensors and high quality videoconferencing services. In this sample, only one large public sector employer had implemented these practices. Smaller businesses and those leasing space are likely to face greater difficulty in developing this capacity for flexibility [81]. Thirdly, evidence from interviews suggested that some home workers valued their days spent at home partly due to the avoidance of a lengthy commute. Quantitative studies have shown that this can lead to significant energy savings. However, where this benefit is incorporated into long term decisions such as where to live, the 'rebound' effect can further serve to offset any positive energy reductions associated with home working [82]. Targeting employee commuting could be an effective way to reduce environmental impacts associated with working practices, but can be problematic. Carbon footprinting protocols for corporations and public bodies typically exclude commuting from core calculations [83], while many employers would consider this outside the scope of their responsibility. Given that the net energy consumption of working from home pivots on the commute [8], there is a strong case for accounting for its length and carbon intensity when negotiating telecommuting arrangements. Exploring the possibilities and pitfalls of tackling employees' commutes is an area that warrants further empirical study [see for example 84]. A fourth implication arising from this paper relates to the design of 'smart' household energy systems. This sample of UK home workers all had central heating systems with a condensing gas boiler and individually controllable radiators in each room. However only two respondents would bother to turn down radiators in other rooms when home working, with the remainder preferring to control the heating for the house as a whole, and a majority actively conserving energy by keeping it turned off. 
The additional effort required to isolate radiators may have encouraged adaptive comfort practices and contributed to the conservation of energy. In coming years, as 'smart' heating systems deployed in domestic buildings allow occupants to control individual radiators with internet-connected devices, there is potential for increased energy consumption [85]. It is the responsibility of social scientists to highlight these contingencies of practice and, as seen in the case of the UK's smart electricity meter rollout, to draw the 'nonrational' tendencies of householders to the attention of policy makers [86]. Finally, the ability and willingness of householders to adapt to signals to shift their consumption over time has economic and environmental implications for the electricity system [71]. Working from home represents an opportunity to use energy more flexibly, as the ability to shift demand in the domestic setting has unsurprisingly been shown to be correlated with occupancy [87]. Moreover, this paper has demonstrated that flexibility is a characteristic theme of the practice of home working which is often performed in relation to the use of energy consuming appliances such as washing machines, dishwashers and cookers. Demand shifting is already undertaken in performances of home working, producing affective satisfaction and performatively delineating home working from 'normal' work and home activities. Whereas Friis and Haunstrup-Christensen [74] have highlighted how time shifting of energy demand can 'challenge the temporality of households' routinised everyday practices', this finding highlights the possibility of harmonising demand-response with naturally-occurring flexibility. Targeting home workers with demand response policies may therefore be a fruitful enterprise for electricity suppliers and grid operators, and warrants empirical research. Conclusions Despite the steady growth in working from home across developed countries, the practice has been under-researched within social scientific energy research since the 'practice turn'. Most research concerned with energy and home working has focussed on the difficult calculation of net consumption and associated emissions, finding myriad contingent variables. This paper has explicitly drawn on practice theory, using theoretical constructs to aid analysis of ethnographic data. By investigating the meanings, materials and competences associated with home working, and the mechanisms by which the constellation of these elements are configured, characteristics of the practice have been uncovered, with significance for energy consumption and the wider energy system. This is one of few studies to incorporate the notion of affect into practice-based analyses of energy demand, attending to the experiential dimensions of performance [43,46]. The analysis benefited from this approach in helping to deepen understandings of comfort, and explain 'non-rational behaviours'. Attending to affect necessitates methodological judiciousness, demanding close appreciation of the emergent nature of interview data and the processes of narrative sensemaking. This theoretical approach also highlights the mechanisms by which affect is produced, through the coordination and interweaving of practices for example, which in turn help to uncover the dynamics of practice and identify potential avenues for change. In combining ideas from practice theory and discourses of affect, this paper demonstrates the synergies between these conceptual frameworks. 
This theoretical synthesis has helped to identify three characteristic themes associated with working from home. Comfort, control and flexibility help to construct the practice as a distinct ontological entity, helping to demarcate home working from other forms of work. By analysing these, energy was found to be integral to the practice, being consumed and conserved as home workers tinker with the technologies and materials of their environment, or interweave the use of appliances such as washing machines with stretches of desk-based work. These findings challenge the idea that convenience trumps energy conservation, or that demand shifting necessarily comes at a cost to householders. With implications for contemporary energy system challenges, these insights cannot be captured by models of household energy consumption, rational economic decision-making or standardized definitions of comfort. Working from home has the potential to bring about net energy and emissions savings, and is often referred to with optimism in academic and grey literature concerned with sustainable business practice. Aspects of practitioners' performances of home working are crucial in determining the balance of energy consumption. Home workers' tolerance of lower temperatures, the degree of mobility 'rebound' and the incorporation of dispersed practices are key determinants of demand, as is the ability of employers to alter energy management practices. By identifying the nuances of practice which may help to realise this potential, social scientific studies such as this can offer insights to those in a position to steer future configurations of practice, such as policy makers, regulators and employers.
OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from only an image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for the correct training of computer vision algorithms based on learning. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We gather a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are made virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested as line extractions from dioptric and catadioptric central images, 3D Layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas. Introduction The great amount of information that can be obtained from omnidirectional and 360º images makes them very useful.Being able to obtain information from an environment using only one shot makes these kinds of images a good asset for computer vision algorithms.However, due to the distortions they present, it is necessary to adapt or create special algorithms to work with them.New computer vision and deep-learning-based algorithms have appeared to take advantage of the unique properties of omnidirectional images.Nevertheless, for a proper training of deep-learning algorithms, big datasets are needed.Existing datasets are quite limited in size due to the manual acquisition, labeling and post-processing of the images.To make faster and bigger datasets, previous works such as Dai et al. [2017], Song et al. [2015], Xiao et al. [2012], Armeni et al. [2017], Straub et al. [2019] use special equipment to obtain images, camera pose, and depth maps simultaneously from indoor scenes.These kinds of datasets are built from real environments, but need post-processing of the images to obtain semantic information or depth information.Tools like LabelMe Russell et al. [2008] and new neural networks such as SegNet Badrinarayanan et al. [2017] can be used to obtain automatic semantic segmentation from the real images obtained in the previously mentioned datasets, yet without pixel precision.Datasets such as Geiger et al. [2013], Zhang et al. 
[2010] use video sequences from outdoor scenes to obtain depth information for autonomous driving algorithms.In addition, for these outdoor datasets, neural networks are used to obtain semantic information from video sequences Cordts et al. [2016Cordts et al. [ , 2015]], in order to speed up and enlarge the few datasets available. Due to the fast development of graphic engines such as Unreal Engine EpicGames [2020], virtual environments with realistic quality have appeared.To take advantage of this interesting property, simulators such as CARLA Dosovitskiy et al. [2017] and SYNTHIA Ros et al. [2016] recreate outdoor scenarios in different weather conditions to create synthetic datasets with labeled information.If we can define all the objects in the virtual environment, it is easier to create a semantic segmentation and object labeling, setting the camera pose through time and computing the depth for each pixel.These virtual realistic environments have helped to create large datasets of images and videos, mainly from outdoor scenarios, dedicated to autonomous driving.Other approaches use photorealistic video games to generate the datasets.Since these games already have realistic environments designed by professionals, many different scenarios are recreated, with pseudo-realistic behaviors of vehicles and people in the scene.Works such as Doan et al. [2018] use the video game Grand Theft Auto V (GTA V) to obtain images from different weather conditions with total knowledge of the camera pose, while Richter et al. [2017Richter et al. [ , 2016] ] also obtaining semantic information and object detection for tracking applications.In the same vein, Johnson-Roberson et al. [2016], Angus et al. [2018] obtain video sequences with semantic and depth information for the generation of autonomous driving datasets in different weather conditions and through different scenarios, from rural roads to city streets.New approaches such as the OmniScape dataset ARSekkat [February, 2020] uses virtual environments such as CARLA or GTA V to obtain omnidirectional images with semantic and depth information in order to create datasets for autonomous driving. However, most of the existing datasets have only outdoors images.There are very few synthetic indoor datasets McCormac et al. [2017] and most of them only have perspective images or equirectangular panoramas.Fast development of computer vision algorithms demands ever more omnidirectional images and that is the gap between the resources that we want to fill in this work.In this work we present a tool to generate image datasets from a huge diversity of omnidirectional projection models. We focus not only on panoramas, but also on other central projections, such as fish-eye lenses Schneider et al. [2009], Kingslake [1989], catadioptric systems Baker and Nayar [1999] and empiric models such as Scaramuzza's Scaramuzza et al. [2006] or Kannala-Brandt's Kannala and Brandt [2006].Our novelty resides in the implementation of different non-central-projection models, such as non-central panoramas Menem and Pajdla [2004] or spherical Agrawal and Ramalingam [2013] and conical Lin and Bajcsy [2006] catadioptric systems in the same tool. The composition of the images is made in a virtual environment from Unreal Engine, making camera calibration and image labeling easier.Moreover, we implement several tools to obtain ground-truth information for deep-learning applications, for example layout recovery or object detection. 
The main contributions of this work can be summarized as follows: • Integrating in a single framework several central-projection models from different omnidirectional cameras as panoramas, fish-eyes, catadioptric systems, and empiric models.• Creating the first photorealistic non-central-projection image generator, including non-central panoramas and non-central catadioptric systems.• Devise a tool to create datasets with automatic labeled images from photorealistic virtual environments. • Develop automatic ground-truth generation for 3D layout recovery algorithms and object detection. The next section of this work is divided in 4 main parts.In the first one, Section 2.1, we introduce the virtual environment in which we have worked.Section 2.2 presents the mathematical background of the projection models implemented and in Sections 2.3 and 2.4 we explain how the models are implemented. Materials and Methods The objective of this work is to develop a tool to create omnidirectional images enlarging existing datasets or making new ones to be exploited by computer vision algorithms under development.For this purpose, we use virtual environments, such as Unreal Engine 4 EpicGames [2020], from where we can get perspective images to compose 360º and omnidirectional projections.In these environments, we can define the camera (pose, orientation and calibration), the layout, and the objects arranged in the scene, making it easier to obtain ground-truth information. The proposed tool includes the acquisition of images from a virtual environment created with Unreal Engine 4 and the composition of omnidirectional and 360 images from a set of central and non-central camera systems.Moreover, we can acquire photorealistic images, semantic segmentation on the objects of the scene or depth information from each camera proposed.Furthermore, given that we can select the pose and orientation of the camera, we have enough information for 3D-reconstruction methods. Virtual Environments Virtual environments present a new asset in the computer vision field.These environments allow the generation of customized scenes for specific purposes.Moreover, great development of computer graphics has increased the quality and quantity of graphic software, obtaining even realistic renderings.A complex modeling of the light transport and its interaction with objects is essential to obtain realistic images.Virtual environments such as POV-Ray POV-Ray [2020] and Unity Unity [2020] allow the generation of customized virtual environments and obtain images from them.However, they do not have the realism or flexibility in the acquisition of images we are looking for.A comparative of images obtained from acquisitions in POV-Ray and acquisitions in Unreal Engine 4 is presented in Figure 1.The virtual environment we have chosen is Unreal Engine 4, (UE4) EpicGames [2020], which is a graphic engine developed by EpicGames (https://www.epicgames.com).Being an open-source code has allowed the development of a great variety of plugins for specific purposes.Furthermore, realistic graphics in real time allows the creation of simulations and generation of synthetic image datasets that can be used in computer vision algorithms.Working on virtual environments makes easier and faster the data acquisition than working on the field. In this work, we use UE4 with UnrealCV Qiu et al. 
[2017], which is a plugin designed for computer vision purposes.This plugin allows client-server communication with UE4 from external Python scripts (see Figure 2) which is used to automatically obtain many images.The set of available functions includes commands for defining and operating virtual cameras; i.e., fixing the position and orientation of the cameras and acquiring images.As can be seen in Figure 3, the acquisition can obtain different kinds of information from the environment (RGB, semantic, depth or normals). However, the combination of UE4+UnrealCV only allows perspective images, so it is necessary to find a way to obtain enough information about the environment to obtain omnidirectional images and in particular to build non-central images.For central omnidirectional images, the classical adopted solution is the creation of a cube map Greene [1986].This proposal consists of taking 6 perspective images from one position so we can capture the whole environment around that point.We show that this solution only works for central projections, where we have a single optical center that matches with the point where the cube map has been taken.Due to the characteristics of non-central-projection systems, we make acquisitions in different locations, which depend on the projection model, to compose the final image. Projection Models In this section, we introduce the projection models for the different cameras that are implemented in the proposed tool.We are going to explain the relationship between image-plane coordinates and the coordinates of the projection ray in the camera reference.We distinguish two types of camera models: central-projection camera models and non-central-projection camera models.Among the central projection cameras, we consider: • Panoramic images: Equirectangular and Cylindrical • Fish-eye cameras, where we distinguish diverse lenses: Equi-angular, Stereographic, Equi-solid angle, Orthogonal • Catadioptric systems, where we distinguish different mirrors: Parabolic and Hyperbolic • Scaramuzza model for revolution symmetry systems • Kannala-Brandt model for fish-eye lenses Among the non-central projection cameras, we consider: • Non-central panoramas • Catadioptric systems, where we distinguish different mirrors: Spherical and Conical Central-Projection Cameras Central-projection cameras are characterized by having a unique optical center.That means that every ray coming from the environment goes through the optical center to the image.Among omnidirectional systems, panoramas are the most used in computer vision.Equirectangular panoramas are 360º-field-of-view images that show the whole environment around the camera.This kind of image is useful to obtain a complete projection of the environment from only one shot.However, this representation presents heavy distortions in the upper and lower part of the image.That is because the equirectangular panorama is based on spherical coordinates.If we take the center of the sphere as the optical center, we can define the ray that comes from the environment in spherical coordinates (θ , φ ).Moreover, since the image plane is an unfolded sphere, each pixel can be represented in the same spherical coordinates, giving a direct relationship between the image plane and the ray that comes from the environment.This relationship is described by: where (x, y) are pixel coordinates and (x max , y max ) the maximum value, i.e., the image resolution. 
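As an illustration of this relationship, one common convention (an assumption here, not necessarily the exact form used in the tool) maps pixel coordinates linearly to the spherical angles:

θ = 2π (x / x_max − 1/2),   φ = π (1/2 − y / y_max)

so that θ spans (−π, π) across the image width and φ spans (−π/2, π/2) across its height.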
In the case of cylindrical panoramas, the environment is projected into the lateral surface of a cylinder.This panorama does not have a 360º practical field of view, since the perpendicular projection to the lateral surface of the environment cannot be projected.However, we can achieve up to 360º on the horizontal field of view, FOV h , and theoretical 180º on the vertical field of view, FOV v , that usually is reduced from 90º to 150º for real applications.We can describe the relationship between the ray that comes from the environment and the image plane as: Next, we introduce the fish-eye cameras Schneider et al. [2009].The main characteristic of this kind of camera is the wide field of view.The projection model for this camera system has been obtained in Bermudez-Cameo et al. [2015], where a unified model for revolution symmetry cameras is defined.This method consists of the projection of the environment rays into a unit radius sphere.The intersection between the sphere and the ray is projected into the image plane through a non-linear function r = h(φ ), which depends on the angle of the ray and the modeled fish-eye lens.In Table 1 h(φ ) for the lenses implemented in this work is defined. Table 1: Definition of h(φ ) = r for different fish-eye lenses. Equi-angular Stereographic Orthogonal Equi-solid angle For catadioptric systems, we use the sphere model, presented in Baker and Nayar [1999].As in fish-eye cameras, we have the intersection of the environment's ray with the unit radius sphere.Then, through a non-linear function, we project the intersected point into a normalized plane.The non-linear function, h(x), depends on the mirror we are modeling.The final step of this model projects the point in the normalized plane into the image plane with the calibration matrix H c , defined as , where K c is the calibration matrix of the perspective camera, R c is the rotation matrix of the catadioptric system and M c defines the behavior of the mirror (see Equation ( 3)). where ( f x , f y ) are the focal length of the camera, and (u 0 , v 0 ) the coordinates of the optical center in the image, the parameters Ψ and ξ represent the geometry of the mirror and are defined in Table 2, d is the distance between the camera and the mirror and 2p is the semi-latus rectum of the mirror. The last central-projection models presented in this work are the Scaramuzza and Kannala-Brandt models.Summarizing Scaramuzza et al. [2006] and Kannala and Brandt [2006], these empiric models represent the projection of a 3D point into the image plane through non-lineal functions. In the Scaramuzza model, the projection is represented by , is defined by the image coordinates and a n-grade polynomial function , where ρ is defined as the distance of the pixel to the optical center in the image plane, and [a 0 , a 1 , . . ., a N ] are calibration parameters of the modeled camera. Table 2: Definition of Ψ and ξ for different mirrors. Catadioptric system ξ Ψ In the Kannala-Brandt camera model, the forward projection model is represented as: where r = x 2 + y 2 , (c x , c y ) are the coordinates of the optical center and d(θ ) is the non-linear function which is defined as are the parameters that characterize the modeled camera. 
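To make the lens models above concrete, the following minimal sketch implements the radial mapping r = h(φ) for the four fish-eye lenses of Table 1, assuming their standard textbook forms; the function name, the focal length f and the exact expressions are illustrative assumptions rather than the tool's own code:

```python
import numpy as np

# Minimal sketch of the radial forward projection r = h(phi) for the four
# fish-eye lens models named in Table 1, assuming their standard forms.
def fisheye_radius(phi, f, lens="equi-angular"):
    """Map the incidence angle phi (rad) to the image-plane radius r."""
    if lens == "equi-angular":        # a.k.a. equidistant projection
        return f * phi
    if lens == "stereographic":
        return 2.0 * f * np.tan(phi / 2.0)
    if lens == "orthogonal":
        return f * np.sin(phi)
    if lens == "equi-solid":
        return 2.0 * f * np.sin(phi / 2.0)
    raise ValueError(f"unknown lens model: {lens}")

# Example: compare the image radius of a ray entering at 60 degrees.
for lens in ("equi-angular", "stereographic", "orthogonal", "equi-solid"):
    print(lens, fisheye_radius(np.deg2rad(60.0), f=1.0, lens=lens))
```

Under these forms, the equi-angular lens grows linearly with the incidence angle, while the orthogonal lens saturates at r = f for φ = π/2, which is what gives each lens its characteristic distortion.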
Non-Central-Projection Cameras Central-projection cameras are characterized by the unique optical center.By contrast, in non-central-projection models, we do not have a single optical center for each image.For the definition of non-central-projection models, we use Plücker coordinates.In this work, we summarize the models; however, a full explanation of the models and Plücker coordinates is described in Bermudez-Cameo et al. [2018]. Non-central panoramas have similarities with equirectangular panoramas.The main difference with central panoramas is that each column of the non-central panorama shares a different optical center.Moreover, since central panoramas are the projection of the environment into a sphere, non-central panoramas are the projection into a semi-toroid, as can be seen in Figure 4a.The optical center of the image is distributed in the trajectory of centers of the semi-toroid.This trajectory is defined as a circle, dashed line in Figure 4a, whose center is the revolution axis.The definition of the rays that go to the environment from the non-central system is obtained in Menem and Pajdla [2004], given as the result of Equation ( 5).The parameter R c is the radius of the circle of optical centers and (θ , ϕ) are spherical coordinates in the coordinate system.Finally, spherical and conical catadioptric systems are also described by non-central-projection models.Just as with non-central panoramas, a full explanation of the model can be found on Agrawal and Ramalingam [2013], Lin and Bajcsy [2006]. Even though the basis of non-central and central catadioptric systems is the same, we take a picture of a mirror from a perspective camera, the mathematical model is quite different.As in non-central panoramas, for spherical and conical mirror we also use the Plücker coordinates to define projection rays; see Figure 4b.For conical catadioptric systems, we define Z r = Z c + R c cot φ , where Z c and R c are geometrical parameters and cot φ = (z + r tan 2τ)/(z tan 2τ − r), where τ is the aperture angle of the cone.When these parameters are known, the 3D ray in Plücker coordinates is defined by: For the spherical catadioptric system, we define the geometric parameters as Given the coordinates at a point of the image plane, the Equation ( 7) defines the ray that reflects on the mirror: where Central Cameras Simulator In this section, we describe the simulator, the interaction with UnrealCV and how are the projection models are implemented.The method to obtain omnidirectional images can be summarized in two steps: • Image acquisition: the first step is the interaction with UE4 through UnrealCV to obtain the cube map from the virtual environment. • Image composition: the second step is creating the final image.In this step we apply the projection models to select the information of the environment that has been acquainted in the first step. For central-projection images, the two steps are independent from each other.Once we have a cube map, we can build any central-projection image from that cube map.However, for non-central-projection images, the two steps are mixed.We need to compute where the optical center is for each pixel and make the acquisition for that pixel.Examples of the images obtained for each projection model can be found in the appendix A. 
Image Acquisition The image acquisition is the first step to build omnidirectional images.In this step we must interact with UE4 through UnrealCV using Python scripts.Camera pose and orientation, acquisition field of view and mode of acquisition are the main parameters that we must define in the scripts to give the commands to UnrealCV. In this work, we call cube map to the set of six perspective images that models the full 360º projection of the environment around a point; concept introduced in Greene [1986].Notice that 360º information around a point can be projected into a sphere centered in this point.Composing a sphere from perspective images requires a lot of time and memory. Simplifying the sphere into a cube, as seen in Figure 5a, we have a good approximation of the environment without losing information; see Figure 5b.We can make this affirmation since the defined cube is a smooth atlas of the spherical manifold S 2 embedded in R 3 . To create central-projection systems, the acquisition of each cube map must be done from a single location.Each cube map is the representation of the environment from one position-the optical center of the omnidirectional camera.That is why we use UE4 with UnrealCV, where we can define the camera pose easily.Moreover, the real-time renderings of the realistic environments allows fast acquisitions of the perspective images to build the cube maps.Nevertheless, other virtual environments can be used to create central-projection systems whenever the cube map can be built with these specifications. Going back to UnrealCV, the plugin gives us different kinds of capture modes.For our simulator, we have taken 3 of these modes: lit, object mask and depth. In the lit mode, UnrealCV gives a photorealistic RGB image of the virtual environment.The degree of realism must be created by the designer of the scenario.The second is the object mask mode.This mode gives us semantic information of the environment.The images obtained have a colored code to identify the different objects into the scene.The main advantage of this mode is the pixel precision for the semantic information, avoiding the human error in manual labeling.Moreover, from this capture mode, we can obtain ground-truth information of the scene and create specific functions to obtain ground-truth data for computer vision algorithms, as layout recovery or object detection.The third mode is depth.This mode gives a data file where we have depth information for each pixel of the image.For the implementation of this mode in the simulator, we keep the exact data information and compose a depth image in grayscale. Image Composition Images from central-projection cameras are composed from a cube map acquired in the scene.The composition of each image depends on the projection model of the camera, but they follow the same pattern. 
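As a minimal sketch of the acquisition step, the six faces of a cube map can be captured through the UnrealCV client as below. The command strings follow the usual UnrealCV conventions but may differ between plugin versions, and the camera field of view is assumed to be set to 90º so that the six views tile the sphere; this is an illustrative script, not the exact one used by the tool:

```python
from unrealcv import client  # UnrealCV Python client

# (pitch, yaw, roll) in degrees for the six faces, UE4 rotation convention assumed.
CUBE_FACES = {
    "front": (0, 0, 0),
    "right": (0, 90, 0),
    "back":  (0, 180, 0),
    "left":  (0, 270, 0),
    "up":    (90, 0, 0),
    "down":  (-90, 0, 0),
}

def acquire_cubemap(location, out_prefix, mode="lit"):
    """Capture the six faces of a cube map at `location` in the given capture
    mode (e.g. 'lit' or 'object_mask'). Assumes a 90-degree camera FOV."""
    x, y, z = location
    client.request(f"vset /camera/0/location {x} {y} {z}")
    for face, (pitch, yaw, roll) in CUBE_FACES.items():
        client.request(f"vset /camera/0/rotation {pitch} {yaw} {roll}")
        client.request(f"vget /camera/0/{mode} {out_prefix}_{face}.png")

if __name__ == "__main__":
    client.connect()  # UE4 must be running with the UnrealCV plugin loaded
    acquire_cubemap((0.0, 0.0, 150.0), out_prefix="cube", mode="lit")
```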
Algorithm 1 shows the steps in the composition of central-projection images. Initially, we get the pixel coordinates from the destination image that we want to build. Then, we compute the spherical coordinates for each pixel through the projection model of the camera we are modeling. With the spherical coordinates we build the vector that goes from the optical center of the camera to the environment. Then, we rotate this vector to match the orientation of the camera in the environment. The intersection between this rotated vector and the cube map gives the information of the pixel (color, label, or distance). In contrast to the coordinate system of UE4, which is left-handed and uses different rotations for each axis (see Figure 6a), our simulator uses a right-handed coordinate system and spherical coordinates, as shown in Figure 6b. Changing between the coordinate systems is computed internally in our tool to keep mathematical consistency between the projection models and the virtual environment.

Algorithm 1: Central-projection composition
Input: set-up of final image (camera, resolution, type, ...)
Load cube map
while going through the final image do
    get pixel coordinates
    compute spherical coordinates -> apply projection model of the omnidirectional camera
    compute vector/light ray -> apply orientation of the camera in the environment
    get pixel information -> intersection between light ray and cube map
end

(A minimal sketch of this composition loop for the equirectangular case is given below, after Table 4.) Equirectangular panoramas have an easy implementation in this simulator, since equirectangular images are the projection of the environment onto a sphere. The destination image can therefore be defined in spherical coordinates directly from the pixel coordinates, with the range of the spherical coordinates being −π < θ < π and −π/2 < ϕ < π/2. In the cylindrical model we also use spherical coordinates, but defined with the restrictions of the cylindrical projection. In the definition of Equation (9) two new parameters appear. The FOV_h parameter represents the horizontal field of view of the destination image, which can go up to 360º and can be changed by the simulator user. The FOV_v parameter models the height of the lateral surface of the cylinder as seen from the optical center, and bounds the range of the spherical coordinates accordingly. In fish-eye cameras we have a revolution symmetry, so we use polar coordinates. We transform the pixel coordinates into polar coordinates centred on (u_0, v_0) = (u_max/2, v_max/2). Given this definition, the range of the polar coordinates is 0 < r < sqrt(u_max^2 + v_max^2)/2 and −π < θ < π. However, we crop r_max = min(u_max/2, v_max/2) in order to constrain the rendered pixels to the camera field of view. After obtaining the polar coordinates, we get the spherical coordinates for each pixel through the projection model. Table 3 gives the relationship between the polar coordinate r and the spherical coordinate φ for the fish-eye projection models, where f is the focal length of the fish-eye camera.

Table 3: Relationship between r and φ from the fish-eye projection model. Equi-angular Stereographic Orthogonal Equi-Solid Angle

On catadioptric systems, we define the parameters ξ and η that model the mirror as shown in Table 4. We use polar coordinates to select which pixels of the image are rendered, but we apply the projection model on the pixel coordinates directly.

Table 4: Definition of ξ and η for central mirrors.
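A minimal sketch of the central composition loop of Algorithm 1, for the equirectangular case, is given below. The pixel-to-angle convention, the cube-face keys and the in-face axis conventions are illustrative assumptions and would have to match the acquisition rig used:

```python
import numpy as np

def equirect_ray(u, v, width, height):
    """Pixel (u, v) of an equirectangular image -> unit ray, assuming
    theta in (-pi, pi] along columns and phi in (-pi/2, pi/2) along rows."""
    theta = (u / width - 0.5) * 2.0 * np.pi
    phi = (0.5 - v / height) * np.pi
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def sample_cubemap(ray, faces):
    """Look up `ray` in a dict of six HxWx3 face images keyed '+x', '-x', ...
    The dominant component of the ray selects the face; the remaining two
    components, divided by it, give the in-face position (conventions assumed)."""
    axis = int(np.argmax(np.abs(ray)))
    sign = "+" if ray[axis] > 0 else "-"
    face = faces[f"{sign}{'xyz'[axis]}"]
    h, w, _ = face.shape
    a, b = [ray[i] / abs(ray[axis]) for i in range(3) if i != axis]
    col = int((a * 0.5 + 0.5) * (w - 1))
    row = int((b * 0.5 + 0.5) * (h - 1))
    return face[row, col]

def compose_equirectangular(faces, width=1024, height=512, R=np.eye(3)):
    """Sketch of Algorithm 1: rotate each per-pixel ray by the camera
    orientation R and colour the pixel from the cube map."""
    out = np.zeros((height, width, 3), dtype=np.uint8)
    for v in range(height):
        for u in range(width):
            out[v, u] = sample_cubemap(R @ equirect_ray(u, v, width, height), faces)
    return out
```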
For computing the set of rays corresponding to the image pixels, we use the back-projection model described in Puig et al. [2012].The first step is projecting the pixel coordinates into a 2D projective space: Then, we re-project the point into the normalized plane with the inverse calibration matrix of the catadioptric system as v = H −1 c v, (see section 2.2).Finally, through the inverse of the non-linear function h(x), shown in the equation ( 11), we can obtain the coordinates of the ray that goes from the catadioptric system to the environment. These projection models give us an oriented ray that comes out of the camera system.This ray is expressed in the camera reference.Since the user of the simulator can change the orientation of the camera in the environment in any direction, we need to change the reference system of the ray to the world reference.First, we rotate the ray from the camera reference to the world reference.Then we rotate again the ray in the direction of the camera inside the world reference.After these two rotations, we have changed the reference system of the ray from the camera to the world reference, taking into account the orientation of the camera in the environment.Once the rays are defined, we get the information from the environment computing the intersection of each ray with the cube map.The point of intersection has the pixel information of the corresponding ray. Non-central Camera Simulator The simulator for non-central cameras is quite different from the central camera one.In this case, we can neither use only a cube map to build the final images nor save all the acquisitions needed.The main structure proposed to obtain non-central-projection images is shown in algorithm 2. Since we have different optical centers for each pixel in the final image, we group the pixels sharing the same optical center, reducing the number of acquisitions needed to build it.The non-central systems considered group the pixel in different ways, so the implementations are different.From the projection model of non-central panoramas, we get that pixels with the same u coordinate share the same optical center.For each coordinate u of the image, the position for the image acquisition is computed.For a given center (X c ,Y c , Z c ) T , and radius R c , of the non-central system, we have to compute the optical center of each u coordinate. To obtain each optical center, we use equation ( 12), where θ is computed according to equation ( 13) and φ is the pitch angle of the non-central system.Once we have obtained the optical center, we make the acquisition in that location, obtaining a cube map.Notice that this cube map only allows the obtaining of information for the location it has acquired.This means that for every optical center of the non-central system, we must make a new acquisition. Once the acquisition is obtained from the correct optical center, we compute the spherical coordinates to cast the ray into the acquired images.From the equation ( 13) we already have one of the coordinates; the second is obtained from the equation ( 14). Table 5: Definition of cot φ and Z r for different mirrors. 
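As a minimal sketch of this step for non-central panoramas, the code below computes the per-column optical center on the circle of radius R_c and the Plücker coordinates (direction, moment = center × direction) of each pixel's ray. The angular conventions and the assumption of a horizontal circle of centers are illustrative simplifications; Equations (12) to (14) of the model, including any pitch of the system, would replace them in the actual tool:

```python
import numpy as np

def column_optical_center(u, width, Rc, center=(0.0, 0.0, 0.0)):
    """Optical center shared by all pixels of column u of a non-central
    panorama, assuming theta = (u/width - 1/2) * 2*pi and a horizontal
    circle of centers of radius Rc around `center`."""
    theta = (u / width - 0.5) * 2.0 * np.pi
    cx, cy, cz = center
    return np.array([cx + Rc * np.cos(theta),
                     cy + Rc * np.sin(theta),
                     cz]), theta

def pixel_ray_plucker(u, v, width, height, Rc):
    """Plücker coordinates (direction d, moment m = c x d) of the ray of
    pixel (u, v), assuming an equirectangular-style elevation angle phi."""
    c, theta = column_optical_center(u, width, Rc)
    phi = (0.5 - v / height) * np.pi
    d = np.array([np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    return d, np.cross(c, d)

# Each distinct column requires its own cube-map acquisition taken at
# column_optical_center(u, ...), after which its pixels are composed as in
# the central case.
```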
In non-central catadioptric systems, pixels sharing the same optical center are grouped in concentric circles.That means we go through the final image from the center to the edge, increasing the radius pixel by pixel.For each radius, we compute the parameters Z r and cot φ as in Table 5, which depend on the mirror we want to model (see section 2.2).The parameter Z r allows us to compute the optical center from where we acquire a cube map of the environment. Once the cube map for each optical center is obtained, we go through the image using polar coordinates.For a fixed radius, we change θ and compute the pixel to color, obtained from equation ( 15).Knowing θ and φ , from Table 5, we can cast a ray that goes from the catadioptric system to the cube map acquired.The intersection gives the information for the pixel. In these non-central systems, the number of acquisitions required depends on the resolution of the final non-central image.That means, the more resolution the image has, the more acquisitions are needed.For an efficient composition of images, we need to define as fast as possible the pose of the camera in the virtual environment for each optical center. That is one of the reasons we use Unreal Engine 4 as a virtual environment, where we can easily change the camera pose in the virtual environment making fast acquisitions, since the graphics work in real time. Results Working on an environment of Unreal Engine 4 EpicGames [2020] and the simulator presented in this paper, we have obtained a variety of photorealistic omnidirectional images from different systems.In the appendix A we have several examples of these images.To evaluate if our synthetic images can be used in computer vision algorithms, we compare the evaluation of four algorithms with our synthetic images and real ones.The algorithms selected are: • Corners For Layouts: CFL Fernandez-Labrador et al. [2020] is a neural network that recovers the 3D layout of a room from an equirectangular panorama.We have used a pre-trained network to evaluate our images. • Uncalibtoolbox: the algorithm presented in Bermudez-Cameo et al. [2015] is a MatLab toolbox for line extraction and camera calibration for different fish-eye and central catadioptric systems.We compare the calibration results from different images. • OpenVSLAM: a virtual Simultaneous Location and Mapping framework, presented in Sumikura et al. [2019], which allows use of omnidirectional central image sequences. • 3D line Reconstruction from single non-central image which was presented in Bermudez-Cameo et al. [2017, 2016] using non-central panoramas. 3D Layout Recovery, CFL Corners For layouts (CFL) is a neural network that recovers the layout of a room from one equirectangular panorama.This neural network provides two outputs: one is the intersection of walls or edges in the room and the second is the corners of the room.With those representations, we can build a 3D reconstruction of the layout of the room using Manhattan world assumptions. For our evaluation, we have used a pre-trained CFL (https://github.com/cfernandezlab/CFL)network with Equirectangular Convolutions (Equiconv).The training dataset was composed by equirectangular panoramas built from real images and the ground truth was made manually, which increases the probability of mistakes due to human error.The rooms that compose this dataset are 4-wall rooms (96%) and 6-wall rooms (6%). 
To compare the performance of CFL, the test images are divided into real images and synthetic images.In the set of real images, we have used the test images from the datasets STD2D3D Armeni et al. [2017], composed of 113 equirectangular panoramas of 4-wall rooms, and the SUN360 Xiao et al. [2012], composed of 69 equirectangular panoramas of 4-wall rooms and 3 of 6-wall rooms.The set of synthetic images are divided in panoramas from 4-wall rooms and from 6-walls rooms.Both sets are composed by 10 images taken on 5 different places in the environment in 2 orientations.Moreover, for the synthetic sets, the ground-truth information of the layout has been obtained automatically with pixel precision.The goal of these experiments is testing the performance of the neural network in the different situations and evaluate the results using our synthetic images and comparing with those obtained with the real ones.In the figures above the ground-truth generation can be seen, Figures 7a 7c 8a 8c, and the output of CFL, Figures 7b 7d 8b 8d, for a equirectangular panorama in the virtual environments recreated.On the 4-wall layout environment we can observe that the output of CFL is similar to the ground truth.This seems logical since most of the images from the training dataset have the same layout.On the other hand, the 6-wall layout environment presents poorer results.The output from CFL in this environment only fits four walls of the layout, probably due to the gaps in the training data. To quantitatively compare the results of CFL, in table 6 we present the results using real images from existing datasets with the results using our synthetic images.We compare five standard metrics: Intersection over union of predicted corner/edge pixels (IoU), accuracy Acc, precision P, Recall R and F1 Score F 1 . Uncalibtoolbox The uncalibtoolbox is a MatLab toolbox where we can compute a line extraction and calibration on fish-eye lenses and catadioptric systems.This toolbox makes the line extraction from the image and computes the calibration parameters from the distortion on these lines.The more distortion the lines present in the image, the more accurate the calibration parameters are computed.Presented in Bermudez-Cameo et al. [2015], this toolbox considers the projection models to obtain the main calibration parameter rvl of the projection system.This rvl parameter encapsulates the distortion of each projection and is related with the field of view of the camera. On this evaluation we want to know if our synthetic images can be processed as real images on computer vision algorithms.We take several dioptric and catadioptric images generated by our simulator and perform the line extraction on them.To compare the results of the line extraction, we compare with real images from Bermudez-Cameo et al.On the other hand, we compare the accuracy of the calibration process between the results presented in Bermudez-Cameo et al. [2015] and the obtained with our synthetic images.The calibration parameter has been obtained testing 5 images for each r vl and taking the mean value.Since we impose the calibration parameters in our simulator, we have selected 10 values of the parameter r vl , in the range 0.5 < r vl < 1.5, in order to compare our images with the results of Bermudez-Cameo et al. [2015].The calibration results are presented in the Figure 11. OpenVSLAM The algorithm presented in Sumikura et al. 
[2019] To evaluate the synthetic images generated with OmniSCV, we create a sequence in a virtual environment simulating the flight of a drone.Once the trajectory is defined, we generate the images where we have the ground truth of the pose of the drone camera. The evaluation has been made with equirectangular panoramas of 1920 × 960 pixels through a sequence of 28 seconds and 30 frames per second.The ground-truth trajectory as well as the scaled SLAM trajectory can be seen in Figure 12.In the appendix B we include several captures from the SLAM results as well as the corresponding frame obtained with OmniSCV.We evaluate quantitatively the precision of the SLAM algorithm computing the position and orientation error of each frame respect to the ground-truth trajectory.We consider error in rotation, ε θ , and in translation, ε t , as follows: where t gt , R gt are the position and rotation matrix of a frame in the ground-truth trajectory and t est , R est are the estimated position up to scale and rotation matrix of the SLAM algorithm in the same frame.The results of these errors are shown in Figure 13 3.4 3D Line Reconstruction from Single Non-Central Image One of the particularities of non-central images is that line projections contain more geometric information.In particular, the entire 3D information of the line is mapped on the line projection Teller and Hohmeyer [1999], Gasparini and Caglioti [2011]. For evaluating if synthetic non-central images generated by our tool conserve this property, we have tested the proposal presented in Bermudez-Cameo et al. [2016].This approach assumes that the direction of the gravity is known (this information could be recovered from an inertial measurement unit (IMU)) and lines are arranged in vertical and horizontal lines.Horizontal lines can follow any direction contained in any horizontal plane (soft-Manhattan constraint). The non-central camera captures a non-central panorama of 2048 × 1024 pixels with a radius of R c = 1m and an inclination of 10 degrees from the vertical direction.A non-central-depth synthetic image has been used as ground truth of the reconstructed points (see Figure 14b).In Figure 14a we show the extracted line projections and segments on the non-central panoramic image; meanwhile, Figure 15 presents the reconstructed 3D line segments.We have omitted the segments with low effective baseline in the 3D representation for visualization purposes. Discussion From our tool we have obtained RGB, depth and semantic images from a great amount of omnidirectional projection systems.These images have been obtained from a photorealistic virtual world where we can define every parameter.To validate the images obtained from our tool, we have made evaluations with computer vision algorithms that use real images. In the evaluation with CFL we have interesting results.On one hand, we have obtained results comparable to datasets with real images.This behavior shows that the synthetic images generated with our tool are as good as real images from the existing datasets.On the other hand, we have made different tests changing the layout of our scene, something that cannot be done in real scenarios.On these changes we have realized that CFL does not work properly with some layouts.This happens because existing datasets have mainly 4-wall rooms to use as training data and the panoramas have been taken in the middle of the room Armeni et al. [2017], Song et al. 
[2015].This makes it hard for the neural network to generalize for rooms with more than 4 walls or panoramas that have been taken in different places inside the room.Our tool can aid in solving this training problem.Since we can obtain images from every place in the room and we can change the layout, we can fill the gaps of the training dataset.With bigger and richer datasets for training, neural networks can improve their performance and make better generalizations. In the evaluation with uncalibtoolbox, we have tested catadioptric systems and fish-eye lenses.We have compared the precision of the toolbox for real and synthetic images.In the line extraction, the toolbox has no problems nor makes any distinction from one kind of images or the other.That encourages our assumptions that our synthetic images are photorealistic enough to be used as real images.When we compare the calibration results, we can see that the results of the images obtained from Bermudez-Cameo et al. [2015] and the results from our synthetic images are comparable.There are no big differences in precision.The only different values observed are in hyper-catadioptric systems.For the hyper-catadioptric systems presented in Bermudez-Cameo et al. [2015], the computed calibration parameters differ from the real ones while in the synthetic hyper-catadioptric systems, we have more accurate parameters.A possible conclusion of this effect is the absence of the reflection of the camera in the synthetic images.For those, we have more information of the environment in the synthetic images than in real ones, helping the toolbox to obtain better results for the calibration parameters.From the results shown, we can conclude that our tool can help to develop and test future calibration tools.Since we are the ones that set the calibration of the system in the tool, we have perfect knowledge of the calibration parameters of the image.However, real systems need to be calibrated a priori or we must trust the calibration parameters that the supplier of the system gives us. In the evaluation of the SLAM algorithm, we test if the synthetic images generated with our tool can be used in computer vision algorithms for tracking and mapping.If we compare the results obtained from the OpenVSLAM algorithm Sumikura et al. [2019], with the ground-truth information that provides our tool, we can conclude that the synthetic images generated with OmniSCV can be used for SLAM applications.The position error is computed in degrees due to the lack of scale in the SLAM algorithm.Moreover, we observe the little position and orientation error of the camera along the sequence (see Figure 13), keeping the estimated trajectory close to the real one.Both errors are less than 8 degrees and decrease along the trajectory.This correction of the position is the effect of the loop closure of the SLAM algorithm.On the other hand, we obtain ground-truth information of the camera pose for every frame.This behavior encourages the assumptions we have been referring to in this section: that synthetic images generated from our tool can be used as real ones in computer vision algorithms, obtaining more accurate ground-truth information too. Finally, in the evaluation of the non-central 3D line fitting from single view we can see how the non-central images generated with our tool conserve the full projection of the 3D lines of the scene.It is possible to recover the metric 3D reconstruction of the points composing these lines.As presented in Bermudez-Cameo et al. 
[2017] this is only possible when the set of projecting skew rays composing the projection surface of the segment have enough effective baseline. Conclusions In this work, we present a tool to create omnidirectional synthetic photorealistic images to be used in computer vision algorithms.We devise a tool to create a great variety of omnidirectional images, outnumbering the state of the art.We include in our tool different panoramas such as equirectangular, cylindrical and non-central; dioptric models based on fish-eye lenses (equi-angular, stereographic, orthogonal and equi-solid angle); catadioptric systems with different kinds of mirrors as spherical, conical, parabolic and hyperbolic; and two empiric models, Scaramuzza' and Kannala-Brandt's.Moreover, we get not only the photorealistic images but also labeled information.We obtain semantic and depth information for each of the omnidirectional systems proposed with pixel precision and can build specific functions to obtain ground truth for computer vision algorithms.Furthermore, the evaluations of our images show that we can use synthetic and real images equally.The synthetic images created by our tool are good enough to be used as real images in computer vision algorithms and deep-learning-based algorithms. A.2 Fish eye lenses Figure 2 :Figure 3 : Figure 2: Client-server communication between Unreal Engine 4 and an external program via UnrealCV. Figure 5: (a): Simplification of the sphere into the cube map; (b): Unfolded cube map from a scene. Figure 6: (a): Coordinate system used in graphic engines focused on first-person video games; (b): Coordinate system of our image simulator. Algorithm 2 : Non-central-projection composition Input: Set-up of final image (Camera, Resolution, type, ...) while Go through final image do get pixel coordinates compute optical center make image acquisition -> Captures from UnrealCV compute spherical coordinates -> Apply projection model get pixel information -> Intersection with acquired images end Figure 11 : Figure 11: Normalized result for the calibration parameters using different omnidirectional cameras.(a): Calibration results from Bermudez-Cameo et al. [2015]; (b): Calibration results using images from our simulator. Figure 12 : Figure 12: Visual odometry from SLAM algorithm.The red line is the ground-truth trajectory while the blue line is the scaled trajectory of the SLAM algorithm. Figure 13 : Figure 13: (a): Position error of the SLAM reconstruction.(b): Orientation error of the SLAM reconstruction.Both errors are measured in degrees. Figure 14: (a) Extracted line projections and segments on the non-central panorama.(b) Ground-truth point-cloud obtained from depth-map. Figure 15 : Figure 15: 3D line segments reconstructed from line extraction in non-central panorama.In red the reconstructed 3D line segments.In black the ground truth.In blue the circular location of the optical center and the Z axis.In green the axis of the vertical direction.(a) Orthographic view.(b) Top view.(c) Lateral view. Table 6 : Song et al. [2015]7]ts of images from different datasets.OmniSCV contains the images created with our tool on a 6-wall room and a 4-wall room.The real images have been obtained from the test dataset ofArmeni et al. [2017]andSong et al. [2015]. Schlegel et al. [2018]f a SLAM for different cameras, from perspective to omnidirectional central systems.This open-source algorithm is based on an indirect SLAM algorithm, such as ORB-SLAM Mur-Artal et al.[2015]and ProSLAMSchlegel et al. 
[2018].The main difference with other SLAM approaches is that the proposed framework allows definition of various types of central cameras other than perspective, such as fish-eye or equirectangular cameras.
2020-04-11T13:06:45.637Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "c2dadeb8ea2691c105c08e8d9ede06671faa645e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/20/7/2066/pdf?version=1586774397", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "96ada1444d93d19df276988e4f5a7b1bb721eb7e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
88478101
pes2o/s2orc
v3-fos-license
Phlegmasia cerulea dolens in the Emergency Department: A case report Background: Phlegmasia cerulea dolens is a rare form of deep venous thrombosis (DVT) characterised by massive thrombosis of the veins in a lower extremity. Presumptive diagnosis is made on clinical grounds and confirmed with a point-of-care ultrasound (POCUS). Diagnostic failure can lead to limb loss and even death. Case: We describe a case of a 38-year-old male who presented to the emergency department (ED) with painful cyanotic swelling of his right lower extremity. Clinical exam and POCUS confirmed the diagnosis of phlegmasia cerulea dolens. This patient was successfully treated with anticoagulation and endovascular thrombolytic therapy. Conclusion: This report emphasises the importance of early diagnosis of the disease with POCUS, which can confirm the diagnosis of phlegmasia within minutes of clinical suspicion. Additionally, it describes the successful therapy with anticoagulation and endovascular Introduction The incidence of DVT is increasing in developed nations, with a calculated incidence of about 1 per 1000 adults [1,2,10]. Significant outcomes of venous thrombosis include pulmonary embolism (PE), recurrence, post-thrombotic syndrome, phlegmasia, limb loss and even death. Phlegmasia is a rare, but severe form of DVT. It results from massive thrombosis that occurs within the deep veins of a lower limb causing ischemia and possible limb loss. The disease encompasses a spectrum beginning from phlegmasia alba dolens (PAD) to phlegmasia cerulea dolens (PCD). In the early stages of the disease (PAD), the significant burden of vein thrombosis occludes the deep veins of the limb, causing arterial ischemia, and slowing the venous drainage that remains somewhat patent through the spared collateral veins. This results in a milky white appearance of the limb. PCD occurs at a later stage due to the increased venous outflow pressures and occlusion of the collateral veins. This results in severe swelling, cyanosis and compartment syndrome. The final stage of the spectrum is venous gangrene [3]. Clinical symptoms include sudden onset of pain, swelling, cyanotic discolouration of the affected limb due to venous congestion. In more advanced cases, gangrene can be seen. Early diagnosis is critical as PCD is often reversible when treated promptly [2]. Clinical suspicion of PCD is confirmed with POCUS [4], a fast, affordable and noninvasive diagnostic imaging modality readily available in nearly all emergency departments. We present a rare case of phlegmasia cerulea dolens, which was quickly suspected in the ED on clinical grounds and confirmed with POCUS. This led to the prompt establishment of therapy and successful recovery of the patient. Case report A 38-year-old male presented to the emergency department complaining of painful swelling with purplish discolouration of his right lower extremity with associated weakness and numbness ( Fig. 1). Except for being an active smoker (8 pack-year smoking history), he had no other significant past medical history. Categorically, he denied any history of coagulation disorder or recent trauma. On physical exam, the right lower extremity was cyanotic and swollen. Both bilateral posterior tibial and dorsalis pedis artery pulses were palpable. PCD was clinically suspected, and the patient underwent a POCUS (Fig. 2), with a 2-point simplified compression test. The results showed a lack of compressibility of the right common femoral vein, right greater saphenous vein and right popliteal vein. 
Subcutaneous (SC) Fondaparinux was immediately administered, and a Doppler ultrasound of the right lower limb was requested as per protocol. The Radiology Department again confirmed the diagnosis of Doppler ultrasound. The patient was admitted to the hospital, and endovascular thrombolytic therapy was administered. The patient subsequently improved uneventfully. Discussion Phlegmasia cerulea dolens (PCD) is a rare, but extreme form of DVT that results in massive thrombosis of the deep venous system in a lower limb. Its incidence is not known. Risk factors include cancer, hypercoagulability, previous surgery, immobilisation, male gender or tobacco use. The early form of phlegmasia, PAD, is characterised by thrombosis of the major deep veins and arterial ischemia. It has a milky white appearance. The late form of phlegmasia, PCD, is due to increased venous outflow pressures causing occlusion of the deep and collateral veins, which result in a cyanotic and swollen appearance. If untreated, this can progress to the point of becoming an irreversible condition in the form of gangrene. Further complications include fluid sequestration and circulatory shock. Clinical findings and physical exam are key in the process of early diagnosis. However, it can be quickly diagnosed with POCUS [4,5] and if needed, confirmed with duplex ultrasonography. Imaging characteristics of DVT on ultrasound include Contrast-enhanced computed tomography (CT) is rarely used. However, if performed, it can show the extent of the thrombus, especially in the pelvis where ultrasound is less sensitive. For example, it can show the proximal extent of the thrombus, which can even reach the common iliac vein or go beyond the iliac bifurcation (Fig. 3). Treatment of phlegmasia should be started as soon as possible. Medical treatment includes anticoagulation with an elevation of the affected limb. Recent literature suggests that subcutaneous low-molecular-weight heparins such as Enoxaparin [6] or Fondaparinux [7] are safe and effective in the treatment of PCD. New oral anticoagulants, Factor Xa inhibitors (rivaroxaban, apixaban, edoxaban) and oral direct thrombin inhibitors (dabigatran) can be prescribed in DVT, but are not suitable for PCD [8]. Surgical treatment of PCD involves endovascular intervention or open surgical thrombectomy. Endovascular targeted thrombolytic therapy is the intervention of choice and has been proven to be effective, and safe (Fig. 4) [9]. In some cases, thrombolytic therapy can be combined with endovascular mechanical thrombectomy, depending on the extent of the thrombus burden and local experience. Absolute contraindications for endovascular thrombolysis include recent head trauma, recent cerebrovascular accident (less than two months), severe hypertension, allergy to thrombolytic agents or active bleeding in a noncompressible space. Open surgical thrombectomy is an alternative for patients who generally have contraindications for an endovascular approach. Conclusion This case report emphasizes the need for early clinical suspicion of phlegmasia cerulea dolens which can be quickly diagnosed with POCUS to expedite limb-saving or even life-saving treatment. Take home messages 1. Phlegmasia cerulea dolens is an extreme form of DVT that causes painful cyanotic swelling of a limb due to massive DVT. It can ultimately lead to irreversible gangrene, limb loss and even death. 2. Early suspicion of the disease is critical and can be quickly confirmed with Point of Care Ultrasound, performed in the ED. 3. 
Early diagnosis is critical to establish a prompt treatment. 4. Treatment should be started as soon as the diagnosis of PCD is made. Anticoagulation (unfractionated and lowmolecular-weight heparin) and endovascular thrombolysis are the mainstays of treatment.
2019-03-31T13:32:40.356Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "6004edd6c449bdc5232810c9e1a804899cd5fd57", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5455/ijmrcr.phlegmasia-cerulea-dolens", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ceb8ffc57757efccea99dc4516d48fd54fd18191", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16749581
pes2o/s2orc
v3-fos-license
Ethnicity and the Multicultural City: Living with Diversity In the wake of the race disturbances in Oldham, Burnley, and Bradford in Summer 2001, the author explores the possibilities for intercultural understanding and dialogue. He argues that, although the national frame of racial and ethnic relations remains important, much of the negotiation of difference occurs at the very local level, through everyday experiences and encounters. Against current policy emphasis on community cohesion and mixed housing, which also tends to assume fixed minority ethnic identities, the author focuses on prosaic sites of cultural exchange and transformation, plural and contested senses of place, an agonistic politics of ethnicity and identity, and the limitations of the White legacy of national belonging in Britain. and ingrained normsöare seen to matter in quite crucial ways. Second, the paper, in step with a perspective that takes ethnicity as a mobile and incomplete process, seeks to work withöagainst current stereotypesöthe very real cultural dynamism of minority ethnic (and White) communities. Accordingly, it also interprets progressive interethnic relations as fragile and temporary settlements springing from the vibrant clash of an empowered and democratic public, rather than as the product of policy fixes and community cohesion or consensus. The paper focuses on the problem of interethnic intolerance and conflict in urban contexts where mixture has failed to produce social cohesion and cultural interchange. There are many neighbourhoods in which multiethnicity has not resulted in social breakdown, so ethnic mixture itself does not offer a compelling explanation for failure (for that matter, race hatred is frequent in White deprived areas). The first part of the paper attempts to uncover the forces behind entrenched ethnic suspicion and conflicts through an analysis of the triggers and enduring factors behind the civil unrest that erupted in some areas of the northern English mill towns in mid-2001. It explores in particular the dynamics of deprivation, segregation, and changing youth cultures. Although the prime purpose of the study is not to dwell on the 2001 riots, but to use the issues raised by them as a springboard to explore what it takes to combat racism and to live with difference in a multicultural and multiethnic society, the riots inevitably cast a long shadow over the general discussion. Thus the study's inflection towards mixed working-class areas, Muslim Asians, and youth cultures, has squeezed the space for any serious analysis of racism in nonmixed areas, the ethnic cultures and race proclivities of the White and non-White middle classes, the practices and aspirations of minority ethnic women, trends within the African-Caribbean community, the trials of mixed-race marriages, and the successes of hybrid cultures (for example, in music). Notwithstanding these limitations, the second part of the paper provides a general outline of possibilities for urban interculturalism. It does so first by emphasising the negotiation of difference within local micropublics of everyday interaction, and second by highlighting the role of certain structural influences and national rules of citizenship and belonging that influence the ability of people to interact fruitfully as equals. The study concludes with a discussion of how action to strengthen micropublics of negotiation might be framed. 
It also argues that the achievement of a genuinely intercultural society requires a new language from which the strong overtones of Whiteness are removed from understandings of British citizenship and national belonging, so that citizens of different colour and culture can coexist with the same right of claim to the nation. Given that the aim of this study is to open new political ground as well as to recognise the dynamic nature of experiences of race and ethnicity, the final discussion deliberately avoids the understandable trend among policymakers and advisors in the face of serious problems such as race killings and ethnic riots to rush out fine-tuned and top-down prescriptions to legislate for urban ethnic harmony. Urban ethnic conflict ö race matters? The civil unrest that erupted in Oldham, Bradford, and Burnley during the Spring and Summer of 2001 was a palpable reminder of the geography of racism and cultural intolerance in Britain. It highlighted how only too often interethnic relations are played out as a neighbourhood phenomenon, linked to particular socioeconomic conditions and cultural practices that coalesce into a local way of life. It was a reminder that the temptation to associate`race trouble' with entire cities or particular types of city (for example, provincial or metropolitan) should be avoided. The research on areas of visible racial antagonism seems to identify two types of neighbourhood. The first are old White working-class areas with successive waves of non-White immigrant settlement, characterised by continued socioeconomic deprivation and cultural or physical isolation between White residents lamenting the loss of a golden ethnically undisturbed past, and non-Whites claiming a right of place, often against each other (Alexander, 1996;Back, 1996;Mac an Ghaill, 1999). Their cultural dynamics are quite different from those of many other mixed neighbourhoods where greater social and physical mobility, a local history of compromises, and a supportive institutional infrastructure have come to support cohabitation of some sort. The second are`White flight' suburbs and estates dominated by an aspirant working class or an inward-looking middle class repelled by what it sees as the replacement of a homely White nation by another land of foreign cultural contamination and ethnic mixture. Here, frightened families, White youths, and nationalist/fascist activists disturbed by the fear (rarely the experience) of Asian and Black contamination terrorise the few immigrants and asylum seekers who happen to settle there (Back and Nayak, 1999;Hewitt, 1996). The latest unrest exemplifies the processes at work in the first type of neighbourhood, but also the White fear and antagonism characteristic of the second type of neighbourhood. Some of these processes are place specific, whereas others bear an uncanny resemblance to other urban`race riots' in Britain since the early 1970s. The specific triggers that sparked the unrest in Burnley, Oldham, and Bradford have been extensively debated. They include the visibility and popular manipulations of the BNP, and the actions of a police force that rounded on Asian youths more vehemently than on White racism. 
Then, media reporting itself, which sensationalised the disturbances, sparked further anger through a highly racialised account of events as they unfolded (for example, by talking of`no-go areas for Whites',`local authorities taken over by Asians',`tradition-bound communities', all of which pushed the disenfranchised White working class towards the BNP as the voice of a new victim community). The most significant trigger was the frustration of young Pakistani and Bangladeshi workingclass men with social marginalisation, the paternalism of their so-called community elders, vilification in the media, heavy-handed or insensitive policing, and the incursion of`outsider' claimants such as the BNP. As Virinder Kalra observes, their actions told the story of years of victimisation including``the racist killing of Tahair Akram in 1989; the arrests of Asian school children for defending themselves against racist attack; the expulsion of a young woman from a local school for wearing a head dress; the false accusations of`conspiracy to commit racist crime' which is now routinely used by the Police against Asian young people'' (Kalra, 2001, page 6). Underlying these triggersöeach of which requires specific attention öthere are longer term factors that need to be grasped and tackled. These are factors which tended to escape media coverage, but are vital in explaining the long history of cultural tension and social conflict within parts of these northern English towns. The three factors that stand out from the available research, discussed in turn below, are socioeconomic deprivation, segregation, and new youth politics. All three, importantly, cut across the ethnic divide and blunt the power of ethnicity^based explanations alone. Deprivation The history of the Pakistani and Bangladeshi communities is intimately tied to the histories of the Lancashire and Yorkshire towns on both sides of the Pennines as mill towns. After the war they provided the cheap labour that allowed the mills to face the growing international competition in the textile industry. Although this was sustainable for a while, after the mid-1960s, the employment base, which included a large proportion of women workers, shrank unremittingly as a result of job displacement by new technologies and the closure of mills unable to compete with cheaper textiles from the developing countries. There remained few other alternatives in these one-industry towns. Kundnani (2001, page 106) explains the consequences for the Asians:`A s the mills declined, entire towns were left on the scrap-heap. White and black workers were united in their unemployment. The only future now for the Asian communities lay in the local service economy. A few brothers would pool their savings and set up shop, a restaurant or a take-way. Otherwise there was minicabbing, with long hours and the risk of violence, often racially motivated. With the end of the textile industry, the largest employers were now the public services but discrimination kept most of these jobs for Whites.'' Old divisions in labour-market outlets for Whites and Asians were swept aside by mass unemployment, intense competition for public sector or low-paid and precarious work, and economic insecurity in general. For over twenty-five years, large sections of the population in these towns have faced severe economic hardship and uncertainty, with more than a generation living with unemployment (around 50% among young Asians in Oldham). 
The string of Pakistani and Bangladeshi communities across the Pennines that count as among Britain's most impoverished 1% (Kundnani, 2001), have come to share with many White working-class estates acute problems of social stigmatisation, low educational achievements, unpleasant housing and urban amenities, elevated health and drug-abuse problems, and a pathology of social rejection that reinforces family and communalist bonds. Ethnic resentment has been fuelled by socioeconomic deprivation and a sense of desperation. Economic collapse removed the workplace as a central site of integration and common fate. As Kundnani (2001, page 106) notes, the``textile industry was the common thread binding the White and Asian working class into a single social fabric. But with its collapse, each community was forced to turn inwards on to itself.'' Competition for scarce local opportunities combined with economic marginalisation to fuel resentment, especially as stories grew of Whites getting better jobs and better housing estates, and of Asians receiving preferential welfare support. Social deprivation too exacerbated ethnic differences, for it removed part of the material well-being and social worth that can help in reducing jealousy and aggression towards others seen to be competing for the same resources. Much media analysis has ignored this factor in preference for cultural explanations, but the`violence of the violated' on all sides of the ethnic divide cannot be grasped without an understanding of the contributing material privations. Segregation Both the Ouseley Report (2001) on community fragmentation in Bradford and the Home Office (2001a) report Building Cohesive Communities on the disturbances in Oldham, Burnley, and Bradford have identified ethnic segregation as a major longterm cause of the disturbances. Both highlight the long drift towards self-segregation among working-class Asians and Whites, barricaded in their own neighbourhoods, socialised through enclave ethnic cultures (Muslim or White preservationist), and educated in local schools of virtually no ethnic mixture. The Ouseley Report, for example, condemns: the lack of communication between communities; a political structure bowing to community leaders and regeneration programmes forcing communities to bid against each other; a poor public image of the area and poor public services, exacerbating White and minority ethnic flight; and a segregated school system that has failed to challenge negative attitudes and stereotypes and that has played a marginal role in brokering cultural shifts between family, school, and public life. These trends are said to have bred intercultural intolerance of a highly ethnicised nature, within a public realm of diminished commitment to the commons. The dynamics of segregation, however, need to be unpacked. For example, in the public debate following these two reports rather too much has been made of Asian retreat into inner-urban wards to preserve diaspora traditions and Muslim values, while not enough has been said about White flight into the outer estates, which has been decisively ethno-cultural in character öin escaping Asian ethnic contamination and wanting to preserve White Englishness. 
This imbalance is unfortunate, since there is no shortage of recommendations to get Asians to step out of their cultural shell (by learning English, giving up faith schools, moving into White areas, embracing British liberalism, questioning traditional beliefs and practices), while the cultural exclusions associated with White Englishness pass without comment. In reality though, it is not clear who has wanted to be put into an ethnic cultural cage in these northern towns. The segregation of the Asians and their cultural isolation have been forced to a large degree. As Whites moved out of cramped and dilapidated houses in the inner-city areas to new housing estates with the help of discriminatory council housing policies, poor Asians had little choice other than to settle in the abandoned areas. As Kundnani (2001, page 107) explains:`T he fear of racial harassment meant that most Asians sought the safety of their own areas, in spite of the overcrowding, the damp and dingy houses, the claustrophobia of a community penned in. And with Whites in a rush to flee the ghettoes, property prices were kept low, giving further encouragement to Asians to seek to buy their own cheap homes in these areas.'' Segregation in housing led to segregation in education and a record of poor results in both White and Asian areas because of deprivation, and because of a schooling system`m ired in a culture of failure'' (Kundnani, 2001, page 107) and family/community dissatisfaction. In this context, all manner of ethnic accusations and myths flourished, one of whichöperhaps a self-fulfilling mythöwas that Asians, now a majority, did not want to mix with the Whites, now a beleaguered minority. Awareness of the historical link between discrimination and segregation in these northern mill towns provides a vantage point for judging the widespread opinion that, since cultural isolation lies at the heart of the disturbances, the way forward lies in greater ethnic mixing. The Home Office report has recommended that future public housing schemes should be ethnically mixed, while other policy advisors have suggested that existing council estates should be divided into mini-villages and encouraged to develop schemes to foster interaction between ethnic groups (Power, 2000). The Ouseley Report presses for social unity, by proposing citizenship education in schools, equality and fair treatment standards within the public sector, and workplace reforms to meet multicultural needs. These are genuinely well-meaning proposals for cultural dialogue, but underlying them is a worrying assumption of cultural fixity and homogeneity within both the majority and minority ethnic communities, one that as a result perhaps makes too much of the demons of segregation. Kalra (2001) notes a number of problems with the assumption. First, cities such as Leicester, now seen as an example of progressive urban ethnicity (after many years of conflict and negotiation, it has to be said) is as ethnically segregated as Bradford. Put differently, many mixed neighbourhoods in a number of British cities are riddled with prejudice and conflict between Asian, White, and African-Caribbean residents. 
Second, therefore, there are other processes cutting across the spatial patterns of residence that shape cultural practices, such as the inwardness produced by deprivation and inequality, the suspicion and fear aroused by generalised racism, the experience of sustained discrimination or exclusion along racial and ethnic lines, and the stories that communitiesöproximate, distanciated, and virtualöend up telling of themselves and others. For Kalra (2001, page 14), the anger of the young men in the streets of the mill towns had to do with the``defence of their territories from the incursion of racist groups and from police harassment'', not cultural closure. Third, Kalra contests the assumption of cultural homogeneity and closure within the Asian community. He notes (2001, pages 12^13):`A young Asian Muslim born in Oldham has a deeply different structural upbringing from his sister who lives with him as well as his brother in Mirpur, Azad Kashmir. From a young age this young man will be exposed to an English language media promoting the dominant values of the society. From the age of four compulsory schooling formalises the process of value transmission. ... Even in those schools where the hijaab is a norm, where there is a prayer room for daily prayer, where halal meat is served at lunch times, the history curriculum will still consist almost entirely of European subjects and particularly of the British monarchy. ... It is the case that White children know nothing of the values of other traditions but certainly Asian Muslim young people are educated into the operative dominant values of the wider society.'' It therefore needs to be asked who will benefit from initiatives to engineer physical mixture, if cultural practices cannot be reduced to the ethnic composition of neighbourhoods [see Back (2001) for an account of the shared masculinity between the 2001 Asian`rioters' and White racists, revealed in charged Internet exchanges]. Generational change and a new youth counterpublic Like most inner-city race riots in Britain since the 1970s, those in Oldham, Burnley, and Bradford involved young men, whose defiance in the streets earned them the reputationöas in the pastöof being criminals, militants, ungrateful immigrants, and cultural separatists. The media gathered snippets of fact and fiction to demonise them as drug dealers, addicts or petty criminals, school drop-outs, car-cruisers, gratuitous attackers of elderly Whites, beyond the control of their families, women, and elders, disloyal subjects, Islamic militants. They were seen to be as bad as the gangs of White racists and other violent marginals, and possibly worse, especially when cast as budding terrorists by the frenzied Islamophobia that has followed September 11. There is, however, another narrative that puts their actions in 2001 in context (without necessarily denying a social pathology that includes some of the demonic occurrences). Once again, Kundnani (2001, page 108) succinctly explains:`B y the 1990s, a new generation of young Asians, born and bred in Britain, was coming of age in the northern towns, unwilling to accept the second-class status foisted on their elders. When racists came to their streets for a fight, they would meet violence with violence. And with the continuing failure of the police to tackle racist gangs, violent confrontations between groups of Whites and Asians became more common. 
Inevitably, when the police did arrive to break up a melee¨, it was the young Asians who bore the brunt of police heavy-handedness. As such, Asian areas became increasingly targeted by the police as they decided that gangs of Asian youths were getting out of hand.'' The setting for the riots as`Asian gang trouble' was in place. But, the young men might also be seen as a counterpublic with distinctive citizenship claims that cannot be reduced to ethnic and religious moorings nor to a passing youth masculinity. Their action was a strong claim of ownership of particular bits of turf in these towns of racialised space allocation ö including public spaces such as streets, parks, and neighbourhoods, no longer just private or closed spaces. It questioned the ethnic assumptions of belonging in Britain. The Asian youths have challenged those who want to keep them in their own minority spaces, and they have unsettled the majority opinion that minorities should behave in a certain way in public (essentially by giving up all but their folkloristic cultural practices). It is this disruption of the racialised coding of British civic and public culture that has made these riots so politically significant. Crucially, as a counterpublic, this new generation has its differences with its own ethnic elders and self-appointed Asian community leaders. As Kundnani (2001) explains, the``state's response to earlier unrest had been to nurture a black elite which could manage and contain anger from within the ranks of black communities'' (page 108). Thus``a new class of`ethnic representatives' entered the town halls from the mid 1980s onwards, who would be the surrogate voice for their own ethnicallydefined fiefdoms. They entered into a pact with the authorities, they were to cover up and gloss over black community resistance in return for a free rein in preserving their own patriarchy'' (page 108). The result was the subtle retreat from a politics of combating racism and economic and social inequality to a politics of ethnic recognition and ethnic cultural preservation (around mosques, special schools, and the like) which kept the Asian patriarchs in place and the White leadership one remove from the violence of the violated (Black and White). The new politics, however, bottled up difficult problems such as gender inequality and a growing drug problem within the Asian community, it fragmented the Asian community as different ethnic groups were pressed into competing with each other for grants, and it allowed White communities and White activists to develop a language of victimhood based on special state deals for Asians. But, above all, it suppressed the voice of younger Asians öa voice mixing tradition and modernity, diaspora and English belongings. This is evident from the desire of young women for better and longer education and a choice over marriage partners, perhaps within a frame of commitment to Islam and kinship ties (Dwyer, 2000;Macey, 1999), and from the desire of young men to mix consumer cultures and meet racist insult with attitude, but also not to question existing gender inequalities and diaspora beliefs. There is a complexity to the cultural identity of the Asian youths that cannot be reduced to the stereotype of traditional Muslim, Hindu, Sikh lives, to the bad masculinities of gang life (although the masculinity of the rioters cannot be denied), to the all too frequently repeated idea of their entrapment between two cultures. 
These are young people who have grown up in Britain routinely mixing`Eastern' and`Western' markers of identity, through language, bodily expression, music, and consumer habits, who are not confused about their identities and values as cultural`hybrids', and who, partly because of racial and ethnic labelling and the rejection that comes with deprivation, have developed strong affiliations based on kinship and religious ties. Their frustration and public anger cannot be detached from their identities as a new generation of British Asians claiming in full the right to belong to Oldham or Burnley and the nation, but whose Britishness includes Islam, halal meat, family honour, and cultural resources located in diaspora networks (Dwyer, 2000;Qureshi and Moores, 1999). They want more than the ethnic cultural recognition that was sought by their community leaders in recent decades. Their actions in Summer 2001 were about claiming the public space as bona fide British subjects, without qualification, and unhinged from the politics of community practised by their so-called representatives. This connection between multiple and mobile youth`ethnicities' and a new politics of turf is widespread. There is a sophisticated literature on the anthropology of young British Bengalis and Pakistanis (Alexander, 2000;Alibhai-Brown, 2000), British African Caribbeans (Alexander, 1996;Back, 1996) and British Whites (Back and Nayak, 1999;Hewitt, 1996) living in poor mixed urban neighbourhoods. Claire Alexander (2000), in a subtle and compassionate study shows that the young Bengalis she worked with in a London neighbourhood are both far more and far less than their typecasting as violent and criminal`Asian gangs'. Their acts of violence are shown to be contradictory and spontaneous, the product of racist name calling, group rivalries, insensitive school exclusions, rejecting but also playing up to easy labelling (by the police, by community elders, by teachers), strong friendship loyalties, and, above all, pretty miserable socioeconomic circumstances. Such contextualisation is not meant to diminish the significance of any acts of violence, but to puncture the reduction to ethnic characteristics of the youths behind the acts, to grant them a multiple and evolving identity that could take them in different directions:`. .. these are the same young men who are now three-quarters of the way through their Duke of Edinburgh bronze award; who pored over books about Bangladeshi history, religion and language for their cultural display [a fashion event called Style and culture 96 ]; who practised routines for nearly two months; and who turned up on the day with white boxer shorts, a neat row of shirts from the dry cleaners andöthe biggest sacrifice of all öno hair gel. And if at times none of them felt they would make it, the motivation to show what they were capable of, given the chance, overrode everything else. On the night they were foot-perfect, acne-free and, when they walked on in traditional Bengali dress, they brought the house down'' (Alexander, 2000, page 22). How these complex identities mingle with the everyday local public culture to shape youth race politics is tellingly revealed in Les Back's (1996) ethnography of White and Black youth identities in two adjacent South London neighbourhoods ö`Riverview', a run-down area of White flight and marked racism, and`Southgate', a`no-go Black' area that, in fact, is consciously less racist and more open to cultural exchange. 
Southgate, with its higher number and street power of Black people, its Black cultural institutions, its history of steady ethnic mixture, its relatively higher social and geographical mobility, and its sense of place shared by White and Black people, has produced an inclusive`our area' local semantic system (as opposed to Riverview's local semantic system based on`White flight') that does not tolerate popular racism (though institutional racism remains a problem). For Back, the young people's negotiations through Southgate's inclusive social semantics have opened up the possibility of genuine cultural syncretism, resulting in``a new ethnicity that contains a high degree of egalitarianism and anti-racism'' (page 123) and reorients meanings of race and belonging. He explains that these everyday negotiations have nudged White youths to vacate concepts of Whiteness and Englishness, creating``a cultural vacuum into which a host of Black idioms of speech and vernacular culture were drawn'' (page 241), while Black youths have developed a nondefensive notion of Blackness based on diaspora connections, a local vernacular, a reworking of Britishness by claiming a Black aspect to it, new hybrid musical forms, and mixed-race identities. Identities and attitudes on the move on different sides of the ethnic divide, and in this case, towards each other. To conclude this first part of the study, the analysis has emphasised the role of three sets of factors behind the 2001 protests ö deprivation, segregation, and new generational demands. The Home Office (2001a) report, which was published as this paper was being drafted, identified nine specific factors: (1) the lack of a strong civic identity or shared values; (2) the fragmentation and polarisation of communities on a scale that amounts to segregation; (3) disengagement of young people from the local decisionmaking process, intergenerational tensions, and an increasingly territorial mentality in asserting identities; (4) weak political and community leadership; (5) inadequate provision of youth facilities; (6) high levels of unemployment; (7) activities of extremist groups; (8) weaknesses and disparity in the police response to community issues; and (9) irresponsible coverage of race stories by sections of the local media. Although my analysis overlaps with some of these factors, its tone and emphasis are different, especially regarding four aspects: the implications of physical segregation; the so-called culturally homogeneity of the Asian community; the complexities of Asian youth identity; and minority citizenship seen as a struggle over rights and claims rather than as questions of civic identity or shared social values. Rights to the multicultural city The issues raised by the 2001 riotsöand other examples of marked racial and ethnic antagonism in Britainöare not unique. They are part of the broader question of what it takes to combat racism and live with difference and cultural exchange in a multiethnic society. This question too is influenced by the extent and depth of racism (popular, organised, and institutional), differentials of inequality and deprivation, discourses of immigration and minority rights, and patterns of cultural contact. This section discusses the possibilities for urban`interculturalism' at this more general level. 
(The term`intercultured' is used to stress cultural dialogue, to contrast with versions of multiculturalism that either stress cultural difference without resolving the problem of communication between cultures, or versions of cosmopolitanism that speculate on the gradual erosion of cultural difference through interethnic mixture and hybridisation.) The literature on race, multiculturalism, and citizenship has tended to discuss this question at the level of national rights and obligations, individual or collective. My emphasis, in contrast, falls on everyday lived experiences and local negotiations of difference, on microcultures of place through which abstract rights and obligations, together with local structures and resources, meaningfully interact with distinctive individual and interpersonal experiences. This focus on the microcultures of place is not meant to privilege bottom-up or local influences over top-down or general influences, because both sets make up the grain of places. It is intended to privilege everyday enactment as the central site of identity and attitude formation. The section begins with a discussion of the nature of the local spaces in which intercultural exchange can occur, and then goes on to discuss the aspects of national belonging and citizenship which sustain a democratic everyday urbanism. From mixed spaces to prosaic negotiations How interethnic understanding and engagement might be achieved is a matter of considerable contemporary debate. There is an emerging consensus that a crucial factor is the daily negotiation of difference in sites where people can come to terms with ethnic difference and where the voicing of racism can be muted (Allen and Cars, 2001). What is the nature of these sites, and what kind of engagement or outcome can be expected? This is where the debate is on less firm ground. One line of thought, with roots in republican urban theory, has long looked to visibility and encounter between strangers in the open spaces of the city. The freedom to associate and mingle in cafe¨s, parks, streets, shopping malls, and squares is linked to the development of an urban civic culture based on the freedom and pleasure to linger, the serendipity of casual encounter and mixture, and public awareness that these are shared spaces. Diversity is thought to be negotiated in the city's public spaces. The depressing reality, however, is that in contemporary life, urban public spaces are often territorialised by particular groups (and therefore steeped in surveillance) or they are spaces of transit with very little contact between strangers (Amin and Thrift, 2002;Amin et al, 2000;Rosaldo, 1999). The city's public spaces are not natural servants of multicultural engagement. This is not to claim the futility of action attempting to make public spaces inclusive, safe, and pleasant. It is not to diminish the significance of efforts in cities such as Singapore, Vancouver, Leicester, or Birmingham to publicise multiculturalism by using public sites to support world cultures, minority voices, ethnic pluralism, and alternative local histories. For example, Birmingham officially supports a history of the city as one of global connections and layers of White and non-White migration. In Leicester the year``is punctuated with events that are celebrated especially by one community but enjoyed by all'' (Winstone, 1996, page 39). 
These include council-supported celebrations for Eid, Hannuka, the Leicester Caribbean carnival, Diwali, an Asian`Mela' or fair, and the City of Leicester Show, which``includes Asian and African music and food as well as traditional English pastimes such as horse racing'' (Winstone, 1996, page 39). These are important signals of a shifting urban public culture. However, there is a limit to uses of public space for intercultural dialogue and understanding, for even in the most carefully designed and inclusive spaces, the marginalised and the prejudiced stay away, while many of those who participate carry the deeper imprint of personal experience that can include negative racial attitudes [see, for example, Parker's (2000) ethnography revealing the uneven and racialised power geometry of the Chinese takeaway]. In the hands of urban planners and designers, the public domain is all too easily reduced to improvements to public spaces, with modest achievements in race and ethnic relations. A similarly ambiguous space is mixed housing. As already discussed, housing segregation has been blamed for the legacy of`parallel lives' (Home Office, 2001a) in the northern mill towns. There is now much policy interest in mixed housing, as a site where people from diverse backgrounds can engage as a community with shared interests (Power, 2000). It is worth noting, however, that many mixed estates are riddled with racism, interethnic tension, and cultural isolation. They too contaiǹ parallel lives'. In addition, many neighbourhoods that are dominated by a single ethnic group are not trouble spots and do manage to maintain a fragile social pact, as Baumann (1996) has shown in the case of Southall. The colour composition of an area is a poor guide to what goes on in it. Engineering ethnic mixture through housing is problematic. Past attempts have resulted in White flight and deep resentment or violence from the older settled White community [see Wrench et al (1993) for evidence on a New Town], or in reinforcing a pathology of`neighbourhood nationalism' among incomers and older residents (Allen, 2000;Back, 1996;Back and Keith, 1999). Then, a worrying political implication of the current interest in mixed housing estates is that it is the working classöthe usual target of public housing schemesöthat is asked to do all the mixing, while the middle class, equally implicated in racial and ethnic discrimination (see final section), escapes any such obligation and can``push off elsewhere, pretend not to be racist'' (Doreen Massey, personal communication). This is not to deny the significance of imaginative attempts to break down ethnic barriers in mixed estates. For example, in a comparative study of`estates on the edge' in different European cities, Anne Power (1999) describes change in Taastrupgaard, a once unattractive and dehumanised mixed-ethnic estate on the outskirts of Copenhagen. In the mid-1980s, a redevelopment project was launched called the Environmental Project, based on``tenant involvement, local responsiveness and community development ... a central focus of the initiative'' (page 225). The initiative galvanised a considerable level of involvement from residents of different ethnicity in redesigning the estate, deciding on the uses of communal areas, and actual regeneration work. For example,``all the garden work was done by the tenants. On some blocks, 40 or 50 people joined in. 
The Turkish families, many of whom were of recent peasant origin, knew a lot more about gardening than the Danish households, who usually came from inner Copenhagen'' (page 127). While Power admits that at the end of the project``formal relations continued to be strained between ethnic communities'' (page 231), she suggests that the estate has become more attractive, and possesses greater resident confidence in the estate's viability, perhaps even as a multicultural venture. The contact spaces of housing estates and urban public spaces, in the end, seem to fall short of inculcating interethnic understanding, because they are not structured as spaces of interdependence and habitual engagement. Les Back (personal communication) has suggested that the ideal sites for coming to terms with ethnic difference are where`prosaic negotiations' are compulsory, in`micropublics' such as the workplace, schools, colleges, youth centres, sports clubs, and other spaces of association. If these spaces come segregated at the start, the very possibility of everyday contact with difference is cut out, as highlighted by the current debate on the implications of faith-based schools and by the cultural closure to be found in predominantly White or Asian schools in so many inner-city and outer-estate schools in Britain. Here too, however, contact is a necessary but not sufficient condition for multicultural understanding, for these are sites of mercurial social interaction, divided allegiances, and cultural practices shaped also beyond the school gates. Mairtin Mac an Ghaill's (1999) study of multiethnic urban schools, for example, tells a story of multiple and segregated ethnicities involving White English working-class children resentful of Asian students seen as`successful' and beneficiaries of special`race' treatment; other White students proud to be English and in the context of a multiethnic Britain, but disapproving of White girls who step out with Asian boys; English-born Asian boys dismissive of`tradition-bound' recent arrivals from Pakistan or Bangladesh; street-wise African-Caribbean boys mocking clever Asians; and so on [see also Alexander (2000) and Back (1996) for a similar anthropology of urban youth centres]. The political implication is that the gains of interaction need to be worked at in local sites of everyday encounter. But there is no formula here other than perhaps the engineering of endless talk and interaction between adversaries or provision for individuals to broaden horizons, because any intervention needs to work through, and is only meaningful in, a situated social dynamic. In one youth project, for example, a tough stance against racist language and behaviour might maintain the peace, while in another one the imagination and persistence of committed youth workers to garner friendships and sociability across ethnic boundaries might yield a positive result. In one housing estate, the enforcement of strict rules on antisocial behaviour and tough action against racial harassment might be effective for some families and individuals. In another one, action on flash points of conflict such as rubbish dumping and nighttime noise might be effective, while elsewhere, carefully managed resident meetings that are able to steer discussion without stifling views (with the help of effective conflictresolution methods) might garner understanding (Allen, 2000;Norman, 1998). 
Similarly, in one school, discussions of national identity, citizenship, and multiculturalism through the curriculum, or twinning with a school of different ethnic composition (as suggested by the government in the aftermath of the 2001 riots) may reach the minds and hearts of some children, while in another school, efforts to involve children from different ethnic backgrounds in common ventures might prove more effective. The anthropology of everyday interaction in a given place at a given time plays a decisive role in influencing possibilities for intercultural understanding, and for this, undermines blanket policy prescriptions. Habitual contact in itself, is no guarantor of cultural exchange. It can entrench group animosities and identities, through repetitions of gender, class, race, and ethnic practices. Cultural change in these circumstances is likely if people are encouraged to step out of their routine environment, into other everyday spaces that function as sites of unnoticeable cultural questioning or transgression. Here too, interaction is of a prosaic nature, but these sites work as spaces of cultural displacement. Their effectiveness lies in placing people from different backgrounds in new settings where engagement with strangers in a common activity disrupts easy labelling of the stranger as enemy and initiates new attachments. They are moments of cultural destabilisation, offering individuals the chance to break out of fixed relations and fixed notions, and through this, to learn to become different through new patterns of social interaction. Cultural transgression potentially could be worked into a new urban politics of cultural innovation, around existing sites of prosaic interaction. Moments of mobility and transition could be exploited. For example, colleges of further education, usually located out of the residential areas which dominate the lives of the young people, are a critical threshold space between the habituation of home, school, and neighbourhood on the one hand, and that of work, family, class, and cultural group, on the other hand. For a short period in the lives of the young people, the colleges constitute a relatively unstable space, bringing together people from varied backgrounds engaged in a common venture, unsure of themselves and their own capabilities, potentially more receptive to new influences and new friendships. These openings do not automatically lead to cultural exchange (especially when past friendships and acquaintances carry over to reinforce strong herd instincts), but joint projects across ethnic divisions and the sheer contrast of the sociality of this space with that of home and neighbourhood can help. Similarly unsteady social spaces are some nighttime/weekend leisure spaces for young people. For example, sports associations and music clubs draw on a wide cross-section of the population, they are spaces of intense and passionate interaction, with success often dependent upon collaboration and group effort, their rhythms are different from those of daily habits, and they can disrupt racial and ethnic stereotypes as excellence often draws upon talents and skills that are not racially or ethnically confined. But, here too, the transformational element of interaction needs to be made explicit and worked at in efforts to make them intercultural spaces, through experiments that fit with local circumstances. 
They need to be made different from the many sports clubs and music clubs that are segregated on ethnic lines precisely as a means of preserving White and non-White communal traditions, often against a background of majority rejection of minority members. The potential for cultural transgression, based on multiethnic common ventures, could be explored within the heart of residential areas. Ventures run by residents and community organisations (for example, communal gardens, community centres, neighbourhood-watch schemes, child-care facilities, youth projects, regeneration of derelict spaces) are a good example. Often these initiatives are characterised by lack of involvement from all sections of the community, by long-standing racial and ethnic tensions within the experiments, and by being dominated by activists and intermediaries. But they could become sites of social inclusion and discursive negotiation, through the careful use of discursive strategies in order to build voice, help arbitrate over disputes, inculcate a sense of common fate or common benefit, publicise shared achievements, and develop confidence in proposals that emerge from open-ended discussion (Allen and Cars, 2001). Here too, cultural change is based on small practical accommodations that work their way around, or through, difference, rather than on any conscious attempt to shift the cultural identities and practices of local residents. The key lies in the terms of engagement: "We must ... come to processes of learning how to collaborate, how to be together, both in our difference and in our unity. There is work to be done in which we hold the cultural differences in community and communication as both basic problematics to be worked out and opportunities for enrichment. Groups and communities coming together can be seen as places of emergence, creation and transformation" (Grand, 1999, page 484). But, there are also other, more radical, options explicitly designed for cultural confrontation and change through interaction. One example is legislative theatre, based on audience participation and oriented towards raising consciousness through enactment and response to difficult issues in a community (Boal, 2000). The performances, which are engaging as they are run by professional artists, can be emotionally charged as they unravel controversial local issues and deeply held prejudices within the community. The theatrical event is a means of questioning entrenched views and altering opinions through enactment. This form of theatre has been used to tackle urban racism and ethnic relations. Sophie Body-Gendrot (2000) cites the example of the Theatre-Forum in Marseilles, which puts on plays written with residents of tough mixed neighbourhoods, based on their experiences. The plays encourage "role exchanges and audience participation during the play, thus de-dramatizing daily life problems" (page 207) and encouraging interethnic and inter-generational understanding. Similarly, some organisations in South Yorkshire have become involved in a project called Race to Train, which explores issues of race and diversity within the workplace. In the project, "volunteers from the organisations work with writers and directors talk about their experiences, which are then presented to an audience of employees in a play entitled Crossing the Line" (Housing Today 22 November 2001, page 19). 
Then, "the audience is split into workshop groups where the issues raised in the play are investigated further through a series of mini plays, and general discussion." The plays highlight problems in a direct and poignant way, helping not only to shake opinions and attitudes, but also to suggest solutions based on employee participation. Legislative theatre has an important role to play in an imaginative urban policy. The principle highlighted by legislative theatre is that prosaic cultural shifts rely upon displacement, more precisely, the practice of negotiating diversity and difference, an intercultural ethics based on 'wisdoms' of social engagement (Varela, 1999). There are many other examples that could be pursued through bold urban policy initiatives, including, as Body-Gendrot (2000) describes in the case of St Denis near Paris, hiring youths bent on writing graffiti to create urban murals, establishing auto-écoles ('self-schools') that use a loose curriculum and ad hoc methods to reintegrate youths who have dropped out of the school system, organising adolescents from around the world to come and play in an international football tournament, holding regular public debates on themes of relevance to residents, and bringing live music to a hospital to break down ethnic and cultural barriers. The politics of community? The discussion so far, with its emphasis on prosaic negotiations and transgressions, raises some important questions about the normative pitch of a politics of local cultural interchange. As noted earlier, in the aftermath of the 2001 riots, a consensus that has grown among politicians, policy advisors, and media commentators is that civic agreement and shared values are needed to reconcile intercultural differences. The spotlight has come to shine on local community and a shared sense of place as solutions. This is certainly the tenor of the Cantle Report (Home Office, 2001b) that led to the Home Office Report on the 2001 riots. It offers the term 'community cohesion' as the foundation for positive multicultural engagement: "Community cohesion ... is about helping micro-communities to gel or mesh into an integrated whole. These divided communities would need to develop common goals and a shared vision. This would seem to imply that such groups should occupy a common sense of place as well" (Home Office, 2001b, page 70). Cantle identifies five domains of community cohesion: (1) common values and a civic culture, based in common moral principles and codes of behaviour; (2) social networks and social capital, based on a high degree of social interaction within communities and families, voluntary and associational activity, and civic engagement; (3) place attachment and an intertwining of personal and place identity; (4) social order and social control, based in absence of general conflict, effective informal social control, tolerance, and respect for differences; and (5) social solidarity and reductions in wealth disparities, based in equal access to services and welfare benefits, redistribution of public finances and opportunities, and ready acknowledgement of social obligations. Whereas the last two domains are clearly matters of national social standards and policies, the first three can be read as an attempt to (re)engineer localities as 'integrated communities' and, in turn, to mobilise community bonds for social progress. 
The idea of a cohesive local society, that makes the most of diversity by inculcating trust, reciprocity, and collective commitments, has come to the centre of a new policy discourse supported by influential US academic literature on communitarian values or social capital rooted in local networks of interpersonal connections and ties (Putnam, 1993). But is community cohesion, thus defined, the key resource for cultural understanding and cohabitation in neighbourhoods marked by strong ethnic polarities, decades of neglect, and socioeconomic deprivation? Indeed, are community cohesion and community coherence feasible in these circumstances? The work on urban youth anthropologies that I have referred to actually confirms the existence of a strong sense of place among both White and non-White ethnic groups, but one based on turf claims, or when shared, defended in exclusionary ways. This suggests, instead of the pursuit of a unitary sense of place, the need for initiatives that exploit the potential for overlap and cross-fertilisation within spaces that in reality support multiple publics. The distinctive feature of mixed neighbourhoods is that they are communities without community, each marked by multiple and hybrid affiliations of varying social and geographical reach, and each intersecting momentarily (or not) with another one for common local resources and amenities. They are not homogeneous or primarily place-based communities (especially for residents with strong diaspora connections and those with virtual and/or mobile lifestyles). They are simply mixtures of social groups with varying intensities of local affiliation, varying reasons for local attachment, and varying values and cultural practices. This blunts any idea of an integrated community with substantial overlap, mutuality, and common interest between its resident groups. Mixed neighbourhoods need to be accepted as the spatially open, culturally heterogeneous, and socially variegated spaces that they are, not imagined as future cohesive or integrated communities. There are limits to how far community cohesion, rooted in common values, a shared sense of place, and local networks of trust, can become the basis of living with difference in such neighbourhoods. The examples of prosaic negotiation and transgression discussed earlier suggest a different vocabulary of local 'accommodation': a vocabulary of rights of presence, bridging difference, getting along. They mark places as process, as meeting places, as open ended, not as sites of single or fixed identities (Massey, 1999). What goes on in them are not achievements of community or consensus, but openings for contact and dialogue with others as equals, so that mutual fear and misunderstanding may be overcome and so that new attitudes and identities can arise from engagement. If common values, trust, or a shared sense of place emerge, they do so as accidents of engagement, not from an ethos of community. The decisive factor is the nature of the local public sphere, more specifically the micropolitics that make up a place and determine the terms of social engagement. A progressive place politics is one that draws on an 'agonistic' political culture, that is, a culture that values participatory and open-ended engagement based on the "vibrant clash of democratic political positions" (Mouffe, 2000, page 104) between free and empowered citizens respectful of each other's claims. This is a politics of emergent solutions and directions based on the process of democratic engagement. 
Open and critical debate, mutual awareness, and a continually altering subjectivity through engagement are the watchwords of agonistic politics, replacing the watchwords of trust, consensus, and cohesion that dominate the communitarian position. Agonism may well leave conflicts and disagreements unresolved, which is the nature of bringing distant and inimical subjects together, but its strength lies in making transparent reasons for resentment and misunderstanding as well as the pathos of the aggrieved, so that future encounters (essential in an agonistic public culture) can build on a better foundation. Local multicultures are born out of the continual renewal of an equal and discursive public, so that the contest between claimants can become one between friendly enemies (agonism) rather than antagonists. A good example of the always ambivalent/unresolved politics of such engagement is provided by Engin Isin and Myer Siemiatycki's (2002) study of disputes surrounding applications in the mid-1990s to establish mosques in Toronto. The study shows that, for all the official multiculturalism in Canada that supports the practices of a variegated citizenship, the proposals were hotly contested because for many, Islam and its visible signs on the landscape were somehow`non-Canadian', requiring proof of the right of public presence. It also reveals, however, that after many compromises, the proposals were eventually approved, as the product of open and frank debate at hearings and in the media, supported by democratic and fair planning procedures, channels for minority ethnic representation, permissive legislation, and sensitive mediation between the local authorities and other stakeholder organisations. All these factors combined to form a civic space of vibrant opposition and negotiationöwithout question one full of power play and jostling between vested interestsöbut open to the discursive clashes of distributed citizenship. Such a politics of active citizenshipöirreducible to a politics of communityöcomes without guarantees, but it can flourish under certain conditions to ensure that minority interests can be advanced and to maximise the scope for new meanings through engagement. Much of this, as already argued, has to do with the practice of citizenship, but it is also intricately linked to the structures that define the terms on which people see themselves and others as citizens. The process failsöas confirmed by the 2001 riotsöif the social context supports or tolerates racism or inequality along ethnic lines, because in such a context rights are perceived to be unevenly distributed and ethnically coded, bracketing people from a minority ethnic background as second-class citizens. In this sense, the Cantle report is right to identify what it chooses to call``social order and social control'', and``social solidarity and reductions in wealth disparities'' as two of its five domains of community cohesion. Without effective policing of racism, without strong legal, institutional, and informal sanctions against racial and cultural hatred, without a public culture that stops bracketing minorities as`guests' or worse in Britain, and without better minority ethnic representation and influence in mainstream organisations, the ethnic inequality that flows from a national culture assuming White supremacy will not be tackled. 
Similarly, a democracy of a universal commons (Amin and Thrift, 2002) based on more widely distributed economic prosperity (through the enlargement of opportunity, the redistribution of income, and reductions in wealth disparities) and the guarantee of high-quality public and welfare services for all, can help to contain the politics of envy between excluded groups as well as strengthen social solidarity and loyalty to a national project based on universal rights. Reforms to the structures of citizenship and belonging that might improve racial and ethnic relations have been discussed in detail in the much publicised Parekh Report (Runnymede Trust, 2000) and in ways that can both support cultural autonomy and strengthen intercultural solidarity in a multiethnic Britain. There is little gained from repeating the recommendations here. In a democratic multiethnic society, if community cohesion remains elusive, the key challenge is to strike a balance between cultural autonomy and social solidarity, so that the former does not lapse into separatist and essentialised identities and so that the latter does not slide into minority cultural assimilation and Western conformity. This question has come to the fore in the contemporary debate on the strengths and limitations of multiculturalism. Bhikhu Parekh (2000) has suggested that the political structure of a multicultural society based on a strong sense of unity but also ingrained respect for diversity, should draw on the two political philosophies ö liberalism, with its emphasis on the rights and freedoms of the individual, and multiculturalism, with its emphasis on the rights and freedoms of group identities and cultures. Its purpose should be to inculcate a sense of belonging to a common political community:`[ the] sense of belonging cannot be ethnic or based on shared cultural, ethnic and other characteristics, for a multicultural society is too diverse for that, but political in nature and based on a shared commitment to the political community. ... The commitment to a political community... does not involve sharing common substantive goals, for its members might deeply disagree about these, nor a common view of its history which they may read differently, nor a particular economic or social system about which they might entertain different views. Decocted to its barest essentials, commitment to the political community involves commitment to its continuing existence and well being'' (Parekh, 2000, page 341). 
Parekh proposes a binding national framework to support a multiculturalism based on political community, including: (a) a collectively agreed constitution based around fundamental rights (along the lines of the Canadian Charter of Rights and Freedoms), and backed up by a Supreme Court; (b) impartial justice by the state in policing, employment, education, public services, and the law, within a frame of equal rights and opportunities (including cultural ones) for all citizens; (c) recognition of collective or group rights (for example, the right of Sikh men to wear a turban or the right of Muslims to pray at work), but measured against the standard of contribution to human well-being; (d) realms for equal cultural interaction (for example, via measures to ensure equal interaction, provision of opportunities for groups and cultures to meet, and explicit official celebration of multiculturalism); (e) multicultural education based on a mixed and open curriculum that reflects the nation's historical and contemporary cultural diversity and its place in the wider world; and (f) a shared national identity based on politico-institutional values (for example, human rights, universal welfare) rather than ethno-cultural ones, so that national belonging can be based on multiple identities and cultural affiliations. In the context of urban questions, the idea of a political commons steers us away from community consensus and a unitary sense of place. For example, Steven Vertovec (1996), drawing on the experience of multiculturalism in Leicester, has suggested that a "renegotiated political culture of the public domain" can be achieved through local "facilitation of multiple modes of minority representation and local government interface" (page 66). This is a democracy based on widespread bottom-up organisation that, in addition to supporting multicultures, yields checks, balances, and overlaps between associations and a local state, thus nourishing a common public culture. Leicester has a long history of antiracist organisation and affirmative action, self-organisation and civic activism within the minority ethnic communities, and an official policy of pride in cultural diversity and support for minority ethnic associations, harnessed to a commitment that cultural events and services should benefit all residents (Winstone, 1996). In the mid-1990s there were over 400 minority ethnic associations in Leicester, many possessing contracts with the city council to carry out particular services. This institutional structure has made local authority consultation with the associations "an essential element in the management of change", but on the basis of "a complex mixture of organizations including separate groups of women, youth and older people" (Winstone, 1996, page 38), rather than reliance on a small group of 'community leaders' speaking for everybody. In turn, through public incorporation, political office, the experience of self-organisation, and frequent contact with other minority and nonminority bodies, the ethnic associations "have been able to champion also [the needs] of the majority who are disadvantaged through poverty, homelessness and low pay - problems shared by all" (Winstone, 1996, page 38). 
For Vertovec (1996), such a model of multiculturalism, involving "a variety of modes of incorporation", works because: "it can (a) promote more democratic functions surrounding 'community leaders' (by recognising a breadth and depth of leadership through effective neighbourhood groups, umbrella organizations, and civic representatives all democratically elected); (b) stimulate more active civil participation among minority group members (who have come to realize that they can, indeed, successfully elect and interact with, important public figures from their own ranks); (c) publicize more positive images of minorities (by it being shown that they can produce effective organizations and leaders who contribute in many ways to various civic activities and decisions), and (d) generally foster, among members of the 'majority' population as well as among ethnic groups, a more open and malleable understanding of 'culture' (through being seen to be able to perpetuate a variety of practices, meanings and values drawn from complex and varying backgrounds and seen to be open to hybridized forms without threat to collective identities)" (pages 66-67). These four elements of a 'renegotiated public culture' have obvious implications for places like Bradford and Oldham, steeped as they are in a politics of elitist, segregated, and exclusionary democracy that has failed to bind group interests into a local commons. Local questions, national questions The emphasis of this study has fallen on the microcultures of place both as routes into racism or discrimination and as routes of escape. The underlying argument in the first part of the study was that although factors such as deprivation and social exclusion, Islamophobia, popular and institutional racism, and media stereotyping cast a long shadow across the nation, additional local factors and the particularities of place explain spatial variation in the form and intensity of racial and ethnic inequalities. Bradford, Oldham, and Burnley too have been marked by processes common to other flash points of urban civic and ethnic unrest in Britain in the last three decades: from ethnic isolation along ethnic lines and the hopelessness or resentment caused by poverty and marginalisation (White and non-White), to insensitive policing, the provocations of racists, institutional ignorance, and youth anger. But each situation has been the product of unique combinations, new forces (for example, the role of community leaders and of segregation in the latest disturbances) and a layered local history of resentments and accommodations. Every combination highlights the powers of situated everyday life in neighbourhoods, workplaces, and public spaces, through which historical, global, and local processes intersect to give meaning to living with diversity. The significance of the microcultures of place is highlighted by the achievements of prosaic negotiation and transgression in dealing with racism and ethnic diversity. The second part of the study argued that, ultimately, coming to terms with difference is a matter of everyday practices and strategies of cultural contact and exchange with others who are different from us. For such interchange to be effective and lasting, it needs to be inculcated as a habit of practice (not just copresence) in mixed sites of everyday contact such as schools, the workplace, and other public spaces. 
Alternatively, it can be organised as an experience of cultural displacement in transitory sites such as colleges of further education, youth leisure spaces, communal gardens, urban murals, legislative theatre, and initiatives inculcating civic duty. The policy implication of this argument is that, although the micropublics can be identified (through, for example, case studies of good practice around the world), as can the general principles of effective communication and constructive dialogue (for example, conflict resolution techniques, stakeholder empowerment, deliberative strategies, effective leadership and intermediation), success remains the product of local context and local energies. This is why a search for national and international examples of best practice, seeking to implant them in different settings or to derive a common standard from them is futile, because it removes the site-specific circumstances and social relations that made a local solution workable. The exercise also loses sight of the national public culture that structures the rights and obligations that guide local practices, such as immigration and citizenship rules, national and local integration policies, attitudes to minorities, and sanctions against racism and ethnic discrimination. These are two reasons why the approach of social and urban regeneration policies needs to shift towards attending to``the best in the worst'' (Judith Allen, personal communication), that is, to possibilities that spring out of, and resonate with, the dynamics of social engagement in particular places. Another shift in policy approach implied by the discussion on agonism concerns the problematic nature of attempts to build community and local consensus, and the limitations of seeing`difficult' areas as places of fixed identities and social relations. I have suggested that the problems of interactionöand therefore also their resolutionöare fundamentally related to the political culture of the public domain, more specifically, to the scope there is for vigorous but democratic disagreement between citizens constituted as equals. This shift in register from the language of policy fixes to that of democratic politics is important, first because it highlights the significance of questions of empowerment, rights, citizenship, and belonging in shaping interethnic relations; second, because it shows that an open public realm helps to disrupt fixed cultural assumptions and to shift identities through cultural exchange; and, third, because it reveals that living with diversity is a matter of constant negotiation, trial and error, and sustained effort, with possibilities crucially shaped by the many strands that feed into the political culture of the public realmöfrom the entanglements of local institutional conflict, civic mobilisation, and interpersonal engagement, to national debates on who counts as a citizen, what constitutes the good society, and who can claim the nation. These latter intimations of citizenship and national belonging ö and the general idea of a relationally defined public sphere ö question the adequacy of framing the problems of a multicultural society through the language of race and minority ethnicity alone. This is not to gloss over the very real and distinctive problems faced by minority ethnic groups in Britain or to imply that their subjectivity and place in British society is not influenced by ethnic and racial markers which function to separate them from the mainstream. 
It is not an excuse for not tackling racism and ethnic discrimination, or failing to recognise the legitimacy of minority or subaltern cultures (Modood, 2000; Solomos, 1993). But, the ethnicisation/racialisation of the identities of non-White people is also part of the problem. It stifles recognition of the many other sources of their identity formation based on experiences of gender, age, education, class, and consumption, which are shared with other groups and which cut across ethnic lines. These crossings also disrupt assumptions of intraethnic homology, notably those concerning gender practices and identities (Brah, 1996; Mirza, 1997). Cultural complexity is amply illustrated by the affiliations of young Black and Asian people, whose anthropology reveals mixtures that cross and subvert ethnic boundaries and stereotypes, and whose politics of resistance gather around ethnic exclusions as well as other cleavages (for example, generational and gender conflicts, youth nonconformity, gang masculinities). But, cast in a racialised frame of belonging, they are not conceded the multiple and shifting identities that are assumed to be normal for White people. This kind of simplification on grounds of ethnicity also brackets them as people whose claims can only ever be minor within a national culture and frame of national belonging that is seen to be defined by others and their 'majority' histories, read as histories of White belonging and White supremacy (Hage, 1998; Parekh, 2000). Not for them the history of Englishness/Britishness based on centuries of ethnic mixture and considerable cultural interchange with the colonies and beyond (Alibhai-Brown, 2001; Cannadine, 2001; C Hall, 1996; Ware, 1996). The claims of the Asian youths of the northern mill towns and those of Black Britons (S Hall, 1998), however, amount to more than a desire for minority recognition in Britain. Theirs is a bid for the centre and the mainstream, both in terms of the right of visibility and the right to shape it. It is a claim of full citizenship: a rejection of the assumption that to be British/English is to be White or part of White culture. But, as long as this assumption remains intact, the status of minority ethnic people as British citizens will remain of a different order to that of White Britons: to be proven, under question, inferior, incomplete, reluctant (Alibhai-Brown, 1999). The latest manifestation is the government's proposal that new immigrants should be required to take an oath of allegiance to British cultural norms (such as fair play) and citizenship norms (presumably liberal). This kind of act perpetuates the idea that immigrants (subtly also those born and brought up in Britain) need to prove their loyalty and their national cultural credentials, while the identity and affiliations of White Britons, who presumably also include racists, internationalists, anticapitalists, socialists, Muslims, antinationalists, cosmopolitans, and eco-globalists, remain unproblematic. The political implication, one of fundamental importance, is that in order to enable all citizens in Britain, regardless of colour and cultural preference, to lay claim to the nation and contribute to an evolving national identity, the ethnic moorings of national belonging need to be exposed and replaced by criteria that have nothing to do with Whiteness. 
This imperative will also remain even if Britain more consciously adopts the multicultural model of nationhood, seen by many as the most progressive solution for multiethnic societies, through its offer of special rights and measures for minorities and its official state endorsement of cultural diversity. Ghassan Hage (1998), in an excoriating critique of the Australian model of multiculturalism, has argued that, underlying the opposing ethics and politics of multiculturalists and White Australians who have become anxious about ethnic mixture, there is a common fantasy of White nation. For Hage,``many of those who position themselves as`multicultural' and`anti-racists' are merely deploying a more sophisticated fantasy of White supremacy'' (page 23), because buried under the language of tolerance, welcome, and positive action for immigrants is a benign White nationalist governmentality:``those who tolerate are the ones who fantasize that it is up to them whether people speak Arabic on the streets or not, whether more migrants come or not ... .Such people are claiming a dominant form of governmental belonging and are inevitably White Australians ... . Those in a dominated position do not tolerate, they just endure'' (page 88). The (non-White) immigrantsödespite their Australian nationalityöare placed in a national``space that is not naturally theirs'' (page 90) and their subjectivity as citizens is determined by others. Hage suggests that this``nationalist practice of inclusion'' (page 90) is simply the mirror opposite of the``nationalist practice of exclusion'' (page 91) manifest in the White backlash against state multiculturalism and immigration, and epitomised by the now-familiar language of White victimhood (for example, complaints that Whites are downtrodden and neglected), cultural pollution and incompatibility, and nostalgia for a halcyon pre-immigration White culture of national cohesion and prosperity. Both responses, suggests Hage,`a re rituals of White empowermentöseasonal festivities where White Australians renew the belief in their possession of the power to talk and make decisions about Third World-looking Australians'' (page 241). The issues alluded to by Hage are exactly those confronting a multiethnic society such as Britain, with its national imaginary steeped in memories of colonial rule and racialised assumptions of national identity and belonging (from Whiteness to village cricket and British fair play). The objections and practices of those caught up in the tide of White backlash are exactly those of their Antipodean counterparts, perhaps worse because of the stronger legacy of White rule and White nostalgia and because of the more pronounced overt racism and ethnic discrimination that exists in Britain. Similarly, the discourse of multiculturalism in Britain masks a White`nationalist practice of inclusion', possibly of a much cruder nature, given that the national debate is at an earlier stage and that policy practices fall short of those in Australia and Canada. This is all too well illustrated by the frequent reference to people of a non-White colour purely in terms of their ethnicity, the endless public talk about the rights, obligations, and allegiances of new and settled immigrants, the constant questioning of the Englishness or Britishness of non-Whitesöwith none of this asked of White Britons. 
But, at a more subtle level, benign multicultural attitudes have allowed the liberal middle classes to pretend not to be racially or ethnically blinkered, thereby passing the burden of guilt and reform on to others, most notably the urban working class: White and, when troublesome, non-White. Such racial and ethnic coding of national belonging, benign and malign, needs to be revealed and publicly debated so that the "racial ontology of sovereign territory" (Gilroy, 2000, page 328) can be recognised and contested, perhaps by thinking "postnationally" (Anderson, 2000). Without such moves, there will be little in the armoury to deal with the increasingly sophisticated and popular claim of racists and White worriers that, for reasons of cultural incompatibility, the majority and the minority should remain separate. Nor will there be an end to the treatment of minority ethnic people as a different sort of British subject. Race and ethnicity need to be taken out of the definition of national identity and national belonging and replaced by ideals of citizenship, democracy, and political community (in the sense suggested by Parekh, 2000) as the basis upon which nationhood is constructed. This is not the place to discuss the strands of this politically (rather than culturally or racially) defined sense of national citizenship, but the principle is clear that it has to construct citizens as empowered subjects (so that genuine agonism is made possible), as equals in the right to claim the nation, and as members of an open and plural political community. It requires imagination of the nation as something other than a racial territorial space, perhaps via a "planetary humanism" (Gilroy, 2000) that returns the nation as a space of travelling cultures and peoples with varying geographies of attachment. Then, the problems faced by the ethnic minorities and the anxieties of marginalised White working-class communities can be tackled as problems of citizenship and social justice in a country for all, with differences of ethnicity not overblown or played up for exclusionary political gain.
2014-10-01T00:00:00.000Z
2002-06-01T00:00:00.000
{ "year": 2002, "sha1": "76744b991fea705bf03398582b8a448cd9608221", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1068/a3537", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "76744b991fea705bf03398582b8a448cd9608221", "s2fieldsofstudy": [ "Political Science", "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
119450015
pes2o/s2orc
v3-fos-license
Vacuum stress around a topological defect We show that a dispiration (a disclination plus a screw dislocation) polarizes the vacuum of a scalar field, giving rise to an energy momentum tensor which, as seen from a local inertial frame, presents nonvanishing off-diagonal components. The results may have applications in cosmology (chiral cosmic strings) and condensed matter physics (materials with linear defects). It is fairly well known that a needle solenoid carrying a magnetic flux causes virtual charged particles to run around the solenoid, inducing a nonvanishing current density (see, e.g., Ref. [1]). We wish to consider what seems to be a gravitational (geometric) analogue of this Aharonov-Bohm effect, by computing the vacuum expectation value of the energy momentum tensor of a massless and neutral scalar field far away from a dispiration. Let us begin by presenting the geometry of the background, given by the line element in Eq. (1) (units are such that c = ℏ = 1), where the points labeled by (t, r, θ, z) and (t, r, θ + 2π, z) are identified [2,3]. When α = 1 and κ = 0, Eq. (1) becomes the line element of flat spacetime written in cylindrical coordinates. Borrowing terminology from condensed matter physics, the parameters α and κ correspond to a disclination and a screw dislocation, respectively. We should remark that Eq. (1) may be associated with the gravitational background of certain chiral cosmic strings [4] (as has been suggested in Ref. [2]), and can also describe (in the continuum limit) the effective geometry around a dispiration in an elastic solid (see Ref. [5] and references therein). The definitions ϕ := αθ and Z := z + κθ lead to the locally flat form in Eq. (2), which should be considered together with the peculiar identification in Eq. (3). Although Eq. (2) expresses the fact that the background is locally flat, due to Eq. (3) we cannot use Eq. (2) (which is a local statement) to infer that the global symmetries of the background are the same as those of Minkowski spacetime (in this sense Eq. (2) is singular). In fact, Eq. (2) disguises a curvature singularity on the symmetry axis [2] (when κ ≠ 0, in the context of the Einstein-Cartan theory, there is also a torsion singularity at r = 0 [3,6]). The vacuum expectation value of the energy momentum tensor is obtained by applying a differential operator, Eq. (4), to the renormalized scalar propagator around a dispiration (see, e.g., Ref. [7]). We have recently obtained D^(α,κ)(x, x′) (classical propagators have been considered in Ref. [8]) by using the Schwinger proper time prescription combined with the completeness relation of the eigenfunctions of the d'Alembertian operator [9]. Such eigenfunctions have the form R(r)χ(ϕ) exp{i(νZ − ωt)}, which, by observing Eq. (3), leads to a quasi-periodic boundary condition on χ(ϕ). This boundary condition is typical of the Aharonov-Bohm set-up, where νκ is identified with the flux parameter eΦ/2π. If we carry over to the four-dimensional context lessons from gravity in three dimensions [10,11], it follows that the charge e and the magnetic flux Φ should be identified with the longitudinal linear momentum ν and 2πκ, respectively [2]. When κ/r → 0, Eq. 
(4) yields for the diagonal components the expressions of the vacuum fluctuations around an ordinary cosmic string (κ = 0) [12]. Regarding the other components, the prescription in Eq. (4) kills off the dominant contribution in the renormalized propagator [9], with the result that the subleading contribution yields two nonvanishing off-diagonal components, T^ϕ_Z and T^Z_ϕ, where B(α) depends on the disclination parameter only [9]. Unlike the diagonal components, T^ϕ_Z and T^Z_ϕ do not depend on the coupling parameter ξ. When α = 1, B = 1/60π²; this value of α corresponds approximately to the one arising in the physics of formation of ordinary cosmic strings [13]. It is instructive to display both disclination and screw dislocation effects in the same array: when ξ = 1/6 (conformal coupling), for example, the components of T^µ_ν with respect to the local inertial frame [cf. Eq. (2)] can be collected into a single matrix. Acknowledgments This work was partially supported by the Brazilian research agencies CNPq and FAPEMIG.
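The displayed equations labelled (1)-(3), and the boundary condition that follows them, do not survive in this extraction. The LaTeX sketch below is a reconstruction inferred only from what the surviving text states (the definitions ϕ := αθ and Z := z + κθ, the flat limit α = 1, κ = 0, the 2π periodicity of θ, and the stated mode form); it is an assumption about the likely form of these equations, not a quotation of the original paper.

    % (1) line element with disclination alpha and screw dislocation kappa (assumed form):
    ds^{2} = -dt^{2} + dr^{2} + \alpha^{2} r^{2}\, d\theta^{2} + (dz + \kappa\, d\theta)^{2}
    % (2) locally flat form obtained with phi := alpha*theta and Z := z + kappa*theta:
    ds^{2} = -dt^{2} + dr^{2} + r^{2}\, d\phi^{2} + dZ^{2}
    % (3) the "peculiar identification" inherited from theta ~ theta + 2*pi:
    (t,\, r,\, \phi,\, Z) \sim (t,\, r,\, \phi + 2\pi\alpha,\, Z + 2\pi\kappa)
    % Quasi-periodic (Aharonov-Bohm-like) boundary condition on chi(phi) implied by (3)
    % for modes of the form R(r) chi(phi) exp{i(nu Z - omega t)}:
    \chi(\phi + 2\pi\alpha) = e^{-2\pi i\, \nu \kappa}\, \chi(\phi)

Under this reading, νκ plays the role of the flux parameter eΦ/2π, consistent with the Aharonov-Bohm analogy drawn in the text.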
2014-10-01T00:00:00.000Z
2003-01-27T00:00:00.000
{ "year": 2003, "sha1": "1a0be9569a29a875c4b6a20d60163f136095ef3b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0301219", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1a0be9569a29a875c4b6a20d60163f136095ef3b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
13786058
pes2o/s2orc
v3-fos-license
Effects of Intensive Speech Treatment for an Individual with Spastic Dysarthria Secondary to Stroke Objective: This study investigated the impact of an intensive speech treatment on listener-rated communication success and functional outcome measures of communication for an individual with spastic dysarthria secondary to stroke. Method: A single-subject A-B-A-A experimental design was used to measure the effects of an intensive speech treatment that incorporated principles of motor learning to drive activity-dependent changes in neural plasticity. The primary dependent variables were listener-rated communication success (comprehensibility transcription in two conditions and listener perceptual ratings of speech and voice), and functional outcome measures as rated by the participant and his spouse. Secondary dependent variables included acoustic factors: vowel space area, phonatory stability, and vocal dB SPL during speech tasks. Results: Multiple comparisons with t-tests were used to determine statistically significant changes in primary and secondary dependent variables. Statistically significant changes (p<0.05) were present immediately post-treatment in listener perceptual ratings for speech naturalness in sentences (p=0.00), but listeners demonstrated a preference for pre-treatment sustained vowel phonation (p=0.04). All functional outcome measures reflected the participant's perception of increased communicative effectiveness, decreased psychosocial impacts of dysarthria, and increased social participation. There were statistically significant changes in secondary variables at post-treatment, including phonatory stability in amplitude perturbation quotient (p=0.02) and vocal dB SPL during sustained vowel phonation (p=0.01) and sentence reading (p=0.03). Vowel space area increased by 13% at post-treatment. Three months following treatment, there were statistically significant changes in listener comprehensibility at the single word length (p=0.02) and sentence length (p=0.03), and listener perceptual ratings of speech naturalness (p=0.02). All functional outcome measures displayed a maintained post-treatment effect. Vowel space area increased by 25% compared to pre-treatment. There were no statistically significant changes in phonatory stability or vocal dB SPL three months following treatment. Conclusions: Treatment outcomes were specific to the research participant's individual characteristics. The improvements measured immediately post- and three months following treatment cannot be generalized beyond this individual with dysarthria secondary to stroke. However, the positive treatment effects for STR03 indicated that individuals with dysarthria in the chronic stages of recovery can improve and maintain speech comprehensibility as well as increase communication effectiveness and reduce some of the negative emotional and social components of chronic dysarthria, even four years post-onset, warranting further investigation. This thesis examined the impact of an intensive behavioral speech treatment that targeted clear speech with an adult who had spastic dysarthria secondary to a stroke. This first chapter of the thesis presents the background of the study, specifies the problem of the study, describes its significance and presents an overview of the methodology used. Background It is reported that approximately 795,000 individuals experience a new or recurrent stroke each year, but the fatality rate is in decline (Go et al., 2014). 
Therefore, stroke is one of the leading causes of long-term disability in the United States. An estimate of the incidence of dysarthria post-stroke is around 40% (Flowers, 2013). Dysarthria is the collective term for a neurological speech disorder resulting from changes in strength, speed, range, steadiness, tone, or accuracy of speech movements. Dysarthria is further categorized and defined by the location of damage to the nervous system. Spastic dysarthria results from bilateral damage to the direct and indirect activation pathways in the central nervous system, which can result in changes to speech components including respiration, resonation, articulation, phonation, and prosodic variation (Duffy, 2012). Very few studies have documented the efficacy of specific treatment approaches for individuals with dysarthria secondary to stroke (Sellars et al., 2005;Mackenzie, 2011). Even fewer studies describe specific treatment approaches for individuals over nine months post-onset . Many of the studies available emphasize the effects of treatment on acoustic factors of speech such as decibel sound pressure level (dB SPL), voice parameters, and vowel space area or the effects of treatment on listener intelligibility. A complete look at the effects of treatment should also include measurements of communication success and patient and/or family reported functional outcomes to determine the overall impact of treatment on activities of daily living. Significance Individuals with dysarthria secondary to stroke have reported feelings of marginalization and stigmatization, as well as emotional and social changes including changes in self-identity and relationships (Walshe et al., 2009). Social and emotional effects of dysarthria may be disproportionate to the severity of the communication disorder (Dickson et al., 2008) and can contribute to the negative impact of dysarthria on quality of life. Given the lack of research in this area and the significant social and emotional consequences associated with dysarthria after stroke, the purpose of this study was to determine the effect of a well-defined and intensive speech treatment for an individual with dysarthria secondary to stroke in the chronic stage of recovery with the goal of improving comprehensibility, and increasing participation in functional communication. Methodology Overview This Phase I study utilized a single-subject A-B-A-A experimental design (Robey, 2004). This design was selected because it was appropriate for making initial observations about the impact of an intensive speech treatment on an individual with spastic dysarthria secondary to stroke. The primary aim of the study was to determine the effect of treatment on listener-rated communication success and functional outcome measures. Changes in communication success from pre-to post-treatment and pre-to 3-months following treatment were assessed using listener comprehensibility ratings of the participant's speech in two conditions: 1) using the acoustic signal alone and 2) using the acoustic signal plus visual information as the participant spoke. Listeners also rated voice quality and speech to assess perceptual characteristics of voice and speech. 
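The thesis does not spell out the scoring rule applied to the listener transcriptions described above. A common convention in dysarthria research is percent words correctly transcribed relative to the target, averaged across listeners within a condition; the short Python sketch below illustrates that convention under that assumption. The function name, the normalisation choices (case-folding, punctuation stripping, order-insensitive matching), and the example transcriptions are illustrative, not taken from the study; only the target sentence is quoted from the stimulus material mentioned later in the document.

    import string

    def percent_words_correct(target: str, transcription: str) -> float:
        """Percentage of target words the listener transcribed correctly (one common convention)."""
        def clean(s):
            # Lowercase, strip punctuation, drop empty tokens.
            return [w.strip(string.punctuation).lower() for w in s.split() if w.strip(string.punctuation)]
        target_words = clean(target)
        heard = clean(transcription)
        correct = 0
        for word in target_words:
            if word in heard:
                heard.remove(word)  # each transcribed word may credit only one target word
                correct += 1
        return 100.0 * correct / len(target_words) if target_words else 0.0

    # Averaging over a listener group for one condition (e.g. audio-only vs audio-plus-video):
    target = "the boot on top is packed to keep"
    transcripts = ["the boot on top is packed to keep",
                   "the boot on top his pact to keep"]
    scores = [percent_words_correct(target, t) for t in transcripts]
    print(round(sum(scores) / len(scores), 1))  # mean percent words correct for this condition

Comparing the condition means pre-, post-, and at follow-up would then feed the group comparisons described for the primary aim.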
The impact of treatment on functional outcome measures, including participation in functional communication and communicative effectiveness, was assessed using two patient and spouse-reported outcome measures, the Communicative Effectiveness Index-Modified (CETI-M; Yorkston et al., 1999) and the Dysarthria Impact Profile (DIP; Walshe et al., 2009). Additional qualitative input was obtained from the participant's and spouse's interviews pre-, post-, and 3 months following treatment, and field notes taken during treatment. The following were the study hypotheses:
Listener-rated Communication Success:
1) Listener comprehensibility ratings will increase following treatment using the acoustic signal alone.
2) Listener comprehensibility ratings will increase to a greater extent following treatment using the acoustic signal plus visual information.
3) Listeners will rate perceptual characteristics of voice and speech better following treatment when compared to pre-treatment.
Functional Outcome Measures:
4) The participant and his spouse will rate communicative effectiveness higher following treatment.
5) The participant will rate psychosocial impacts of dysarthria lower following treatment.
6) The participant and his spouse will describe overall increases in social participation following treatment.
A secondary aim of the study was to evaluate the impact of treatment on acoustic variables of speech, including the first two formants (F1 and F2) of the corner vowels /i/, /u/, and /a/, measures of phonatory stability, and vocal dB SPL during speaking tasks.
METHODOLOGY The methods section of this thesis provides a study overview, information about the participant, the protocol for the specific treatment approach, a rationale for the dependent variables of the study, an explanation of assessment procedures, a description of data analyses and statistical analyses, and a discussion of reliability in this study.
Study Overview This study examined the administration of an intensive behavioral speech treatment that incorporates principles of motor learning to drive activity-dependent changes in neural plasticity, which can contribute to our understanding of how motor learning theory applies to treatment of dysarthria and how we can administer effective treatment efficiently. The primary dependent variables of interest were speech comprehensibility in two conditions, listener perceptual ratings of voice and speech, and changes in communicative effectiveness, the psychosocial impact of dysarthria, and social participation based on questionnaire responses and interviews. Speech comprehensibility was measured by listener transcriptions of phonetically balanced single word and sentence length materials (Kent et al., 1989; Nilsson, 1994) using an audio recording of the participant alone, and using audio and video recordings of the participant. Perceptual voice quality and speech naturalness were measured with listener ratings of sustained vowel phonation and sentence reading samples comparing pre- and post-treatment and pre- and 3-month follow-up (FU). Communicative effectiveness was measured using the CETI-M. The participant and his spouse rated communicative effectiveness in 10 different scenarios. An individual stores past experiences and learns new behaviors through a process of neural plasticity (Doyon and Benali, 2005). Neural plasticity has also been identified as the mechanism by which an individual rehabilitates and relearns processes following brain injury (Kleim & Jones, 2008). 
There are ten principles of experience-dependent neural plasticity defined by Kleim and Jones (2008). These principles include "use it or lose it", "use it and improve it", "specificity", "repetition matters", "intensity matters", "time matters", "salience matters", "age matters", "transference", and "interference". These principles were translated to serve as guidelines for behavioral treatment of motor systems, defined as principles of motor learning. Implicit learning was utilized through the use of a single cue for "clear speech" throughout the treatment, which minimized the cognitive load for the individual while allowing the clinician to change the way this is modeled based on the client's specific speech patterns. Augmented feedback was provided based on the needs of the client and decreased systematically throughout the treatment course to support generalization and increased independence (Duffy, 2012; Maas et al., 2008; Kleim & Jones, 2008). Increasing intelligibility and naturalness are common goals of speech treatment for individuals with dysarthria. Providing cues for loudness, reducing rate of speech, and cueing for clear speech have been studied as ways to improve intelligibility for neurologically normal individuals as well as individuals with dysarthria secondary to multiple sclerosis and Parkinson's disease (Smiljanic & Bradlow, 2009; Uchanski, 2005; Tjaden, 2014). Tjaden et al. (2014) established that speakers' intelligibility ratings increased with cues for either increased loudness or clear speech. A cue for clear speech may be more effective for individuals with spastic dysarthria, who may not benefit from a cue to "speak loud" or to "slow down" speech due to the speech components and patterns that these individuals present with. Despite this evidence, there are very few studies reporting on the impact of a clear speech treatment protocol for individuals with dysarthria, or more specifically spastic dysarthria. Cueing and modeling were important components of the treatment process. Direct modeling can provide the participant with an understanding of what is meant by the cue for "clear speech". Cueing during non-speech tasks emphasized increasing or maintaining effort level. Appropriate cues for non-speech tasks with the Iowa Oral Performance Instrument (IOPI) include "Push, push, push!" or "Go, go, go". Examples of appropriate cueing during speech tasks include "Remember to use your clear speech" and "Speak clearly". The participant received positive reinforcement following speech tasks such as "Great clear speech" and "That's the speech that people will understand". Cueing and modeling were decreased throughout the course of treatment to promote independence and increase carry-over outside of the clinic setting. Data were collected during each session, including kPa (pressure measurement) during lip and tongue IOPI exercises, duration of sustained vowel phonation, percentage of accurate articulation in minimal pair repetition, and the loudness of sustained vowel phonation, salient sentence reading, and the hierarchy reading task. The consistent speech sound errors noted during pre-treatment evaluations included voicing errors, deletion errors, and vowel errors. STR03's speech sound errors were targeted through minimal pair tasks (i.e., pairs of words which differ by only one phoneme; e.g., bad and pad). 
Particular emphasis during the minimal pair task was placed on voicing errors due to their frequency in STR03's speech. The frequency of voiced/voiceless cognates in typical speech interfered with STR03's communication success in the pre-treatment evaluation. A list of minimal pair sets used during treatment is displayed in Appendix C. Homework consisting of treatment tasks and a carryover task (e.g., using clear speech to order movie tickets) was assigned each day to increase treatment intensity and promote generalization of clear speech to activities of daily living.

Dependent Variables

Primary Aim, Hypotheses 1-3: Listener-rated Communication Success
Listener-rated communication success was measured using listener transcriptions of comprehensibility at the single word and sentence level in two conditions, and using listener perceptual ratings of speech and voice. The goal of speech treatment is to increase communication success in functional conversation, so outcome variables need to capture these functional changes. Comprehensibility is differentiated from intelligibility in that the listener is provided with the communication context of the utterance (Barefoot et al., 1993). Measuring comprehensibility entails providing the listener with information other than the acoustic signal; this information may be in the form of semantic, syntactic, or physical context (Yorkston et al., 1996). Lindblom (1991) suggests that speech and listener perceptions of speech are adaptive to the needs of the situation. Therefore, speech perception is not always simply signal-dependent; listener perception may require background knowledge or shared context when speech is disordered or distorted. Comprehensibility was selected as a primary variable because it provides the listener with some context for determining whether the participant was successful in conveying his message. We compared listener transcriptions of single word and sentence length materials when the listener was given visual information through video in addition to audio input, versus audio input alone. Several other studies have used both audio-only and audio plus video listener conditions for transcriptions (Keintz et al., 2007; Hunter et al., 1991; Garcia and Cannito, 1996). Addressing the concerns of the individual receiving treatment is an essential component of the treatment process (Threats, 2012). Qualitative measurement of the participant's personal experience is critical for evaluating a treatment (Kovarsky, 2008). The participant's perceptions of treatment outcomes are particularly important due to the impact of acquired dysarthria on social participation and psychosocial factors (Dickson et al., 2008). Communicative effectiveness was measured using the Communicative Effectiveness Index-Modified (CETI-M) in this study. Lomas et al. (1989) introduced the CETI as a measure of functional communication for adults with aphasia. The authors of the CETI demonstrated the measure's internal reliability (split-half r=0.90), inter-rater reliability (r=0.73), test-retest reliability (r=0.94), and construct validity using an n of 22 (Lomas et al., 1989).

Secondary Aim: Acoustic Factors
Acoustic measurements in this study included vowel space area, phonatory stability, and vocal dB SPL. The selected acoustic measurements were analyzed for the purpose of understanding potential factors contributing to changes in listener comprehensibility ratings.
There is no direct correlation between perceptual features and acoustic variables, but acoustic analysis can be informative and supportive of perceptual findings (Kent et al., 1999). Vowel formants are important measurements in the analysis of speech production, as they have been linked to articulatory precision. Vowel space area was determined by measurement of the first and second formants (F1 and F2) of three corner vowels, /a/, /i/, and /u/, in the sentence "The boot on top is packed to keep". These three corner vowels were selected because they represent extreme articulatory movements of the tongue. Previous research demonstrated that lower intelligibility ratings were associated with greater overlap among vowel formants, relating to a "reduced articulatory working space" (p. 192). Vowel space area analysis helps to determine the impact of treatment on articulatory precision in speech production. Kent et al. (2003) validated the use of the Multidimensional Voice Profile (MDVP Advanced; CSL 4500) to assess voice data collected from individuals with dysarthria secondary to hemispheric and brainstem stroke. This study identified several potentially deviant acoustic measurements associated with this population, such as variation in fundamental frequency (vF0), smoothed pitch perturbation quotient (sPPQ), absolute shimmer (ShdB), relative shimmer (Shim), smoothed amplitude perturbation quotient (sAPQ), peak amplitude variation (vAm), and amplitude perturbation quotient (APQ). All of these acoustic measurements fall into categories of either frequency perturbation or amplitude parameters and are considered measures of phonatory stability. Vocal loudness is determined by the intensity of the sound signal, which is measured in dB SPL. The speaker's vocal loudness impacts the listener's understanding of the message.

Assessment Procedures
Dependent variables were assessed three times during the study. Each of the three evaluations included four consecutive days of testing. Initial data collection took place immediately prior to treatment (Pre), the second occurred during the week immediately following completion of treatment (Post), and the third was a follow-up evaluation that took place three months following treatment (FU).

Data Analyses

Primary Aim, Hypotheses 1-3: Listener-rated Communication Success
A total of sixty listeners with normal hearing and no history of neurological disorder or head injury assessed comprehensibility by transcribing single word and sentence length materials. One group of thirty listeners transcribed words and sentences from audio input only, and one group of thirty listeners transcribed using both audio and visual input, to measure and compare comprehensibility conditions. Ten listeners from each group transcribed pre-treatment samples, ten listeners from each group transcribed post-treatment samples, and ten listeners from each group transcribed FU-treatment samples. Five samples from the group were randomly selected and repeated for determination of intra-rater reliability.

Primary Aim, Hypotheses 4-6: Functional Outcome Measures
The impact of treatment on the functional outcomes was measured in three ways. The CETI-M was used to provide a quantitative measure of change in the level of communicative effectiveness in daily living situations over the treatment course (Lomas et al., 1989; Yorkston et al., 1999). Voice dysfunction and targeted acoustic parameters of voice were assessed using the acoustic software MDVP.
MDVP was used to analyze phonatory stability measures during sustained vowel phonations. Vocal sound pressure level (dB SPL) during speech tasks was collected throughout the evaluation sessions. Vocal sound pressure level was also measured during each treatment session using a sound level meter (SLM).

Statistical Analyses
Multiple comparisons with t-tests determined the significance of any changes in the dependent variables following treatment at the Post or FU evaluations. Effect size using Cohen's d determined the magnitude of treatment effect. The average percentage and standard deviation of listener ratings for sustained vowel phonation and sentence reading were calculated to determine overall listener preference and the magnitude of preference for samples. The means of F1, F2, and vowel duration from 20 corner vowels repeated in "The boot on top is packed to keep" were used to create pre-, post-, and 3-month follow-up mean vowel space areas, calculate vowel space area change, and determine changes in vowel duration.

Measurement Reliability
The clinician who administered the treatment (CP) did not participate in evaluations, to limit potential bias. Intra-rater reliability was calculated using percent agreement for vowel space area analysis on 25% of the data at 2-4 months following the initial analysis. The literature generally treats percent agreement above 70% as acceptable (Stemler, 2004). Intra-rater reliability for vowel space area using PRAAT formant analysis was 87.5%, with disagreement defined as formant differences greater than 50 Hz in the second analysis. Intra-rater reliability for vowel duration using PRAAT was 75%, with disagreement defined as duration differences greater than 50 ms in the second analysis. Listener studies were conducted in an IAC sound-treated booth, and participants listened to samples at a consistent volume. A random number generator was used to randomize the HINT sentences repeated during evaluation tasks and presented to listeners during the transcription task. Individual rater variability for each component of the listener transcription task is displayed in Appendix E. Any individual listener percentage that was two standard deviations below or above the mean was removed from the data set to reduce the effects of inter-rater variability. Listeners participating in the perceptual rating task evaluated a randomized selection of 20 pairs of sustained vowel phonations and 20 pairs of sentence readings ("The boot on top is packed to keep") collected during evaluations. Twenty percent of the sentence pair and sustained vowel phonation combinations were randomly selected and repeated to determine intra-rater reliability for this task. Intra-rater and inter-rater reliability for the listener preference study were calculated using ReCal 0.1 Alpha, an online statistics application (Freelon, 2010; Freelon, 2013), which computed average pairwise percent agreement and Cohen's Kappa (Dewey, 1983). Cohen's Kappa was designed as a reliability measurement that corrects for the amount of agreement expected by chance alone (a worked example of this computation is sketched below). Landis and Koch (1977) suggested that Cohen's Kappa coefficients between 0.41 and 0.60 represent moderate agreement, and coefficients above 0.60 represent substantial agreement; however, other studies suggest greater stringency when interpreting inter-rater and intra-rater reliability coefficients. Listener intra-rater reliability for the sustained vowel phonation listener preference task was 74%, with an average pairwise Cohen's Kappa of r=0.61.
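To make the distinction between raw percent agreement and chance-corrected agreement concrete, the following minimal sketch computes both measures for two raters; the preference ratings are illustrative placeholders, not the listener data of this study.

```python
from collections import Counter

def percent_agreement(a, b):
    """Raw proportion of items on which two raters agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected if the raters responded
    independently according to their own marginal category frequencies.
    """
    n = len(a)
    p_o = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: each entry is a rater's preference for one
# pre/post sample pair (illustrative data only).
rater1 = ["post", "post", "pre", "post", "pre", "post", "post", "pre"]
rater2 = ["post", "pre",  "pre", "post", "pre", "post", "pre",  "pre"]

print(f"percent agreement: {percent_agreement(rater1, rater2):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(rater1, rater2):.2f}")
```

In this toy example the two raters agree on 75% of items, but the chance-corrected kappa is only about 0.53, which illustrates why kappa values run lower than raw agreement for the listener tasks reported here.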
Listener intra-rater reliability for the sentence reading listener preference task was 74%, with an average pairwise Cohen's Kappa of r=0.53. Listener inter-rater reliability for sustained vowel phonation preference at pre-post and pre-FU was 60.5%, with a Cohen's Kappa of r=0.40. Listener inter-rater reliability for sentence reading preference at pre-post and pre-FU was 74.8%, with a Cohen's Kappa of r=0.54. STR03 did not receive any co-occurring speech treatment during the treatment phase of this study. However, he received speech therapy, physical therapy, and occupational therapy two days per week for two months following treatment (between the post-treatment and 3-month follow-up evaluations).

RESULTS
The results of this study are presented in five categories: treatment data, biweekly probe data, listener-rated communication success (comprehensibility in audio-only and audio+visual conditions and listener perceptual ratings), functional outcome measures (CETI-M, DIP, and interview), and acoustic variables of speech and voice (vowel space area, phonatory stability, and vocal dB SPL).

Treatment Data
The data collected during each treatment session for vocal dB SPL and lip and tongue pressure (kPa) were averaged per week to determine trend changes from week 1 to week 6. Lip pressure increased by an average of 0.6 kPa.

Biweekly Probe Data
Vocal dB SPL data were taken during the sentence reading task and the picture description task and compared to the pre-treatment evaluation data for these tasks. These data demonstrate an overall decrease in vocal dB SPL during the biweekly probes as compared to the pre-treatment evaluation. The summary vocal dB SPL data for biweekly probes during sentence reading and picture description are displayed in Table 2. Sustained vowel phonation from the biweekly probes was analyzed through MDVP and compared to the pre-treatment evaluation data. The data collected at the biweekly probes displayed considerable variability; few patterns emerged, aside from a considerable decrease in vAm displayed at all three probes. Summary MDVP data from the biweekly probes are displayed in Table 3.

Hypothesis 1: Audio-Only Condition, Single Word Comprehensibility
There was not a statistically significant difference in single word comprehensibility measured during the audio-only condition from pre- to post-treatment (p=0.22). Single word comprehensibility, however, increased significantly from pre- to FU-treatment (p=0.02) with a medium effect size (r=0.58). Quantitative changes in single word percent comprehensibility in the audio-only condition across the pre-, post-, and follow-up-treatment evaluations are displayed in Table 4; corresponding sentence-level results for the audio-only condition are displayed in Table 5.

Hypothesis 2: Audio+Visual Condition, Single Word and Sentence Comprehensibility
There was not a statistically significant difference in single word comprehensibility in the video condition from pre- to post-treatment (p=0.32). The difference between the pre- and FU-treatment evaluations was also not statistically significant in this condition (p=0.38). Quantitative changes in single word percent comprehensibility in the video condition across the pre-, post-, and follow-up treatment evaluations are displayed in Table 6. There was no statistically significant difference in sentence comprehensibility in the video condition from pre- to post-treatment (p=0.25). The difference between the pre- and FU-treatment evaluations was also not statistically significant in this condition (p=0.25).
Quantitative changes in sentence percent comprehensibility in the video condition across the pre-, post-, and FU-treatment evaluations are displayed in Table 7.

Hypothesis 3: Listener Perceptual Ratings
There was a statistically significant preference for pre-treatment sustained vowel phonation compared with post-treatment sustained vowel phonation (p=0.04), with a large effect size (r=0.80). Table 8 illustrates the individual listener preference ratings, including the frequency and magnitude of preference, for the pre-treatment sustained vowel phonations. There was a statistically significant preference for post-treatment sentence reading compared with pre-treatment sentence reading (p=0.00), with a large effect size (r=0.98). Table 8 illustrates the individual listener preference ratings for post-treatment sentences. There was a statistically significant preference for sentence reading at FU-treatment (p=0.02), with a large effect size (r=0.86). Table 11 shows the individual listener preference ratings for FU-treatment samples of sentence readings.

Primary Aim: Functional Outcome Measures
Hypothesis 4 concerns the DIP section "Dysarthria relative to other worries and concerns", in which the participant is asked to rank his dysarthria among four other personal and health-related concerns. STR03 reported that speech was a primary concern during the pre- and post-treatment evaluations. Throughout the treatment, STR03 reported that he required a high level of effort to speak; discussion during treatment indicated that he felt he would speak more frequently if it did not require so much effort. His wife also commented during the FU evaluation that the effort level for speaking was STR03's biggest complaint in the three years following his stroke. STR03 located a substantial change in the amount of effort required at his FU evaluation: "the effort has gone away…since we finished."

Vowel Space Analysis
Pre-, post-, and follow-up vowel triangles were obtained by analyzing F1 and F2 values of the vowels /u, a, i/ to calculate vowel space area. Vowel space area was 111,645 Hz² at pre-treatment and 120,150 Hz² at post-treatment, an increase of 8,505 Hz² (8%). Vowel space area continued to increase at the 3-month follow-up evaluation, to 139,933 Hz², an increase of 28,288 Hz² (25%) from pre-treatment. The pre-, post-, and follow-up vowel space areas are depicted visually in Figure 3 (Vowel Space Area at Pre-, Post-, and Follow-Up Evaluations). There were statistically significant changes in F1 and F2 values at the post- and FU evaluations; all F1 and F2 values for /u, a, i/ changed significantly at FU. Quantitative changes in F1 and F2 for /u/, /a/, and /i/ are illustrated in Table 12.

DISCUSSION

Primary Aim, Hypotheses 1-3: Listener-rated Communication Success
The six-week intensive treatment appeared to be a feasible intervention for increasing listener-rated communication success for the individual in this study. There were increases in the audio-only comprehensibility condition with sentences and single words at post-treatment, but the changes were not statistically significant. The increase in comprehensibility was supported by the listener perceptual study at post-treatment, which revealed a statistically significant preference for post-treatment sentences. However, both sentence length and single word comprehensibility increased significantly from the pre-treatment level to the 3-month follow-up, demonstrating that the participant continued to make progress following treatment.
In the audio+visual condition, the high baseline comprehensibility percentage may also have made it more difficult to measure a statistically significant improvement. The measurement of comprehensibility using visual plus audio information was therefore a less sensitive measure of treatment effectiveness than the audio-only condition at the sentence level in this study. The single word comprehensibility measurement taken at follow-up in the audio+visual condition was 2% greater than that taken in the audio-only condition, and sentence-level comprehensibility in the audio+visual condition was approximately 0.6% greater at FU than in the audio-only condition. The treatment therefore had a clinically meaningful effect of increasing understandability from acoustic information alone to a level consistent with communication supported by visual information. This could have a meaningful effect on functional communication and conversation in situations where visual information is not consistently available, such as conversation while driving in a car, conversation on the phone, or conversation while walking or pushing a wheelchair. Listener perceptual ratings identified a significant preference for post- and FU-treatment speech samples when compared to pre-treatment. This indicated that the participant's speech was perceived as more natural following treatment, which was reflected in the increase in listener comprehensibility ratings. However, listener preference for voice quality during sustained phonation was greater at pre-treatment than at post-treatment. Given STR03's elevated baseline vocal loudness, it is possible that listener preference ratings for voice quality were related to the statistically significant increases in loudness recorded during the post-treatment evaluations. The subsequent decrease in vocal loudness from post- to FU-treatment coincided with the preference for vocal quality in the FU sustained vowel phonation and with an increase in comprehensibility ratings at FU-treatment. Intra-rater and inter-rater reliability for the listener perceptual rating tasks were challenges in this study. Listeners demonstrated moderate to substantial intra-rater reliability for perception of voice in sustained vowel phonation and speech naturalness in sentences, but only weak to moderate inter-rater reliability for perceptual ratings of voice and speech, respectively. The listener perceptual rating task was subjective, and listeners demonstrated poorer reliability when rating voice quality in sustained vowel phonations than when rating speech naturalness in sentence reading. These challenges with reliability highlight the difficulty of using perceptual measures as treatment effectiveness variables.

Primary Aim, Hypotheses 4-6: Functional Outcome Measures
The participant and his spouse reported increases in communicative effectiveness and decreases in the psychosocial impacts associated with dysarthria following treatment. STR03 and his wife reported increases in the quantity of information he provided in conversation and in the frequency with which he contributed. The six-week intensive treatment appeared to provide social stimulation, practice with specific speech tasks, and a decreased level of effort necessary for speech, which may in turn have supported the reported increases in social participation.

4.3c Vocal dB SPL
The treatment had a statistically significant effect of increasing vocal dB SPL for speech tasks, including sentence reading and sustained vowel phonation, and these increases displayed large effect sizes.
This finding was unexpected because increased loudness was not directly trained during treatment. STR03 presented with loudness levels greater than normal limits at pre-treatment, which was consistent with his diagnosis of spastic dysarthria. Decreased loudness and easy onset of phonation during treatment tasks were modeled, but not directly stated throughout treatment, to preserve the singular cue for "clear speech". STR03 presented with laryngeal tension and a severe strained-strangled voice quality, which likely contributed to greater loudness during speech tasks. He received cues to bring the effort to his lips and tongue, and away from his throat, during treatment tasks. He frequently produced several utterances with reduced vocal loudness following cueing, but did not achieve independence from this cue during the treatment course. The increased vocal dB SPL level at post-treatment is consistent with continued dependence on cues for decreased loudness in the presence of the high-effort training necessary for clear speech. The return of loudness to baseline levels at the 3-month FU suggests that this dependence lessened once the intensive training ended.

Limitations
There are inherent limitations related to single-subject research designs. Findings are specific to the individual and therefore cannot be generalized to other individuals with the same disorder; however, positive treatment results can provide a rationale for future investigatory research. The inherently small sample sizes in data collection for a single-subject design also pose challenges to internal validity.

Evaluation Tasks
Task | Description | Data collected
Sentence reading ("The boot on top is packed to keep") | — | Vowel space analysis
Paragraph reading | Participant will read through the Farm Passage (Crystal & House, 1982) | Loudness data and specific sound errors
Picture description | Participant will describe the picnic scene from the Western Aphasia Battery (WAB; Kertesz, 1982) | —
Task description | Participant will describe how to do a stated task (e.g., "Describe how to make a peanut butter and jelly sandwich") | —
Hearing in Noise Test | Participant will repeat a series of sentences (Nilsson, 1994) | Sentence-level speaker intelligibility and comprehensibility
Single word reading | Participant will read through a series of 70 single words (Kent et al., 1989) | —
[Figure: STR03 IOPI lip and tongue pressure, with separate lip and tongue series]
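As a supplement to the vowel space analysis reported above, the following minimal sketch shows how a triangular vowel space area can be computed from corner-vowel formants using the shoelace formula; the formant values below are hypothetical placeholders, not STR03's measurements.

```python
def vowel_space_area(formants):
    """Area (Hz^2) of the polygon spanned by corner vowels in F1/F2 space.

    `formants` maps each corner vowel to an (F1, F2) pair in Hz.
    The shoelace formula gives the area of the polygon whose vertices
    are the corner vowels, here a triangle for /i/, /a/, /u/.
    """
    pts = list(formants.values())
    n = len(pts)
    area = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Illustrative corner-vowel formants (not measured data).
pre = {"i": (350, 2100), "a": (750, 1250), "u": (380, 1000)}
post = {"i": (330, 2250), "a": (780, 1220), "u": (360, 950)}

a_pre, a_post = vowel_space_area(pre), vowel_space_area(post)
print(f"pre:    {a_pre:,.0f} Hz^2")
print(f"post:   {a_post:,.0f} Hz^2")
print(f"change: {100 * (a_post - a_pre) / a_pre:+.0f}%")
```

An expansion of the triangle in F1/F2 space, as in the pre- to follow-up changes reported above, corresponds to a larger articulatory working space.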
The effects of workers' job stress on organizational commitment and leaving intention in commercial sports centers

The purpose of this study is to identify how the job stress of commercial sports center workers affects organizational commitment and turnover intention. The causal relationship between demographic characteristics and job stress was investigated for 261 of 300 workers employed at commercial sports centers in Seoul, Incheon, and Gyeonggi province. Job stress differed by work type, with a difference between regular and contract workers. Role-related stress and interpersonal relationships had negative effects on tenure commitment, normative commitment, and affective commitment. Job characteristics, role-related stress, interpersonal relationships, and the compensation system influenced turnover intention: role-related stress and interpersonal relationships had a positive effect on turnover intention, while job characteristics and the compensation system had a negative effect. Job stress thus affects both organizational commitment and turnover intention. The results indicate that job stress, organizational commitment, and turnover intention should be addressed directly in order to reduce the turnover intention of commercial sports center workers. In other words, clearer guidelines on the role of commercial sports center workers and various welfare programs for improving human relationships should be provided. Commercial sports centers should therefore continuously research and develop ways to maximize job satisfaction from the workers' perspective in order to reduce job stress and induce positive organizational commitment.

INTRODUCTION
As sports centers become saturated, management deteriorates and problems such as employee job stress and turnover multiply. Although professional workers are assigned to each field, the number of workers is significantly insufficient compared to the number of members, and as a result the working environment for workers is deteriorating. Most sports center workers are therefore under considerable stress due to excessive workloads. Ultimately, it is necessary to investigate the effect of job dissatisfaction caused by the excessive work of sports center workers on turnover. Through a study of job stress, we identified the factors causing job stress in commercial sports center workers and examined how job stress affects organizational commitment and turnover intention. An empirical study is required to establish a management plan for effective coping with the job stress of sports center service staff. Occupational stress occurs when a sports organization fails to provide a work environment that matches the motivation or ability of its members, or when a member's ability cannot handle the work environment required and provided by the organization. In this study, job stress was quantified using the scale developed by Parker and DeCotiis (1983). Job stress is composed of five factors: job characteristics, role-related stress, interpersonal relationships, compensation system, and organizational characteristics (a scoring sketch for scales of this kind follows below).
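To make the scale construction concrete, the sketch below scores one hypothetical five-item Likert factor and computes Cronbach's α, the internal-consistency statistic used later for reliability verification; the response matrix is illustrative only and does not come from this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: 5 respondents x 5 items on a 1-5 Likert scale,
# standing in for one factor such as role-related job stress.
role_related = np.array([
    [4, 4, 5, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 4, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 5, 4, 4, 4],
])

factor_scores = role_related.mean(axis=1)  # one factor score per respondent
print("factor scores:", factor_scores)
print(f"Cronbach's alpha: {cronbach_alpha(role_related):.2f}")
```

Each respondent's factor score is simply the mean of that factor's items, and α close to 1 indicates that the items measure the factor consistently.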
Organizational commitment is a term that describes the relationship between an individual and an organization, and is used to express attachment to the goals or values pursued by the organization, the will to work hard for the organization, and the will to remain a member of the organization. In this study, organizational commitment was classified into affective, tenure, and normative commitment, and it refers to organizational commitment measured by an instrument translated from the scale developed by Allen and Meyer (1990). Turnover intention is a deliberate and thoughtful intention to voluntarily leave the organization (Tett and Meyer, 1993) and a psychological state of readiness to leave the current job (Hom et al., 1992), which may culminate in actions that move the individual out of the organization. In this study, it means the intention of commercial sports center workers to leave their current job without settling in the organization. The purpose of this study is to empirically analyze and identify the relationship between job-related factors and turnover among members of commercial sports center organizations, which play a leading role in improving people's quality of life. The study therefore aims to provide basic data for creating more efficient working environments by examining the effects of job stress on organizational commitment and turnover intention according to the personal characteristics of commercial sports center workers.

Research subjects
Three hundred employees working in the Seoul, Incheon, and Gyeonggi regions were selected, using commercial sports centers with 10 or more workers as the standard. Of the 300 questionnaires recovered, 261 were analyzed after excluding 39 insincere responses. The demographic characteristics of the survey subjects are shown in Table 1.

Questionnaire composition
The composition of the questionnaire is shown in Table 2.

Job stress
Factors related to job stress were constructed based on the views of Parker and DeCotiis (1983). The instrument consists of five factors (job characteristics, role-related stress, human relations, compensation system, and organizational characteristics) with a total of 25 items. The scale used by Cooper and Davidson (1982) was modified and supplemented to match the purpose and subject of this study, and the 25 items were measured on a 5-point Likert scale ranging from 1 ("not at all") through 2 ("disagree"), 3 ("average"), and 4 ("agree") to 5 ("strongly agree").

Organizational commitment
Organizational commitment was measured using the Organizational Commitment Scale defined by Allen and Meyer (1990), corrected and supplemented to match the purpose and subject of this study; 24 items were measured on a 5-point Likert scale.

Intention to leave
Based on the items developed by Blau (1985), three items were measured after modification and supplementation to meet the purpose and subject of this study, using a 5-point Likert scale.

Validity and reliability
Validity verification was supported through the development of questionnaires based on the collection of literature and expert opinions, expert meetings, and preliminary examinations.
The draft questionnaire completed through this process was reviewed for content validity and item suitability by an expert group. In addition, exploratory factor analysis was performed to verify the validity of the questionnaire. For factor extraction, the maximum likelihood method was used, and the exploratory factor analysis employed principal component analysis with orthogonal varimax rotation. Cronbach's α was used to verify the reliability of the questionnaire.

Data processing
The collected questionnaires were coded for statistical processing, and statistics were processed using IBM SPSS ver. 18.0 (IBM Co., Armonk, NY, USA). Exploratory factor analysis and Cronbach's α were performed to verify the validity and reliability of the survey tool. The t-test and one-way analysis of variance were performed to verify differences between two or more groups according to demographic characteristics. The significance level was set at P<0.05.

Differences in job stress according to general characteristics
Overall job stress differed by work type, with regular workers reporting higher stress than contract workers. Job stress related to job characteristics differed by education level and occupation: university graduates were highest, followed by college graduates and high school graduates, and managers were higher than leaders. For job stress related to roles, interpersonal relationships, and organizational characteristics, there were no statistically significant differences by gender, age, education level, years of service, monthly salary, work type, or occupation. Job stress related to the compensation system differed by education level, with high school graduates highest, followed by college graduates and university graduates.

Effect of job stress on organizational commitment
Among the job stress factors (job characteristics, role-related stress, human relations, organizational characteristics, compensation system), role-related stress, human relations, organizational characteristics, and the compensation system affected tenure commitment: role-related stress and interpersonal relationships had a negative effect on tenure commitment, while organizational characteristics and the compensation system had a positive effect. Role-related stress, interpersonal relationships, organizational characteristics, and the compensation system also influenced normative commitment.
In other words, role-related stress and interpersonal relationships had a negative effect on normative commitment, while organizational characteristics and the reward system had a positive effect. The same four factors also influenced affective commitment: role-related stress and interpersonal relationships had a negative effect on affective commitment, while organizational characteristics and the compensation system had a positive effect.

Effect of job stress on turnover intention
Examining the effects of the job stress factors (job characteristics, role-related stress, human relations, organizational characteristics, compensation system) on turnover intention showed that job characteristics, role-related stress, interpersonal relationships, and the compensation system had an effect. Specifically, role-related stress and interpersonal relationships had a positive effect on turnover intention, while job characteristics and the compensation system had a negative effect.

The effect of organizational commitment on turnover intention
Organizational commitment was found to have a negative effect on turnover intention.

DISCUSSION
Job stress refers to all stress related to job performance, and the concept is expressed differently depending on the research approach, such as environmental characteristics, stimuli and responses, or interactions between individuals and the environment. Parker and DeCotiis (1983) defined job stress as a dysfunctional emotion or awareness that an individual feels as a result of perceived conditions or events at the workplace, accompanied by feelings of wanting to leave the workplace. Beehr and Newman (1978) conceptualized job stress as a mental and physical condition that causes an individual to deviate from normal functioning due to the interaction between work-related factors and the worker. Organizational commitment has been identified as a positive and active orientation toward the organization and divided into three categories: first, identification, a strong belief in and acceptance of the purpose and values of the organization; second, attachment, a willingness to exert considerable effort for the sake of the organization; and third, a strong desire to remain in the organization. Reichers (1986) pointed out a problem with viewing organizational commitment as an attitude under this definition, because the intention to put effort into the organization and the desire to stay in the organization refer to the individual's intention to act, not a psychological attitude. Among the factors of organizational commitment, emotional commitment is related to work autonomy and meaning, skill diversity, supervisor feedback, and participatory management; normative commitment is related to organizational dependence and participatory management; and sustaining commitment is related to age, tenure, and career satisfaction (Dunham et al., 1994).
Turnover, in a broad sense, includes all movement of members into and out of an organization; in a narrow sense, it means leaving one's current job and moving to another job or occupation. To investigate the effect of the job stress of commercial sports center workers on organizational commitment and turnover intention, this study selected 300 workers at commercial sports centers with 10 or more employees located in Seoul, Incheon, and Gyeonggi province. Job stress differed by work type, with regular workers higher than contract workers. The likely reason is that job stress is high when a work environment matching the individual's motivation or ability cannot be provided, or when the individual's ability cannot handle the work environment. Job stress (job characteristics, role-related stress, human relations, organizational characteristics, compensation system) affected organizational commitment (tenure, normative, and affective commitment), with role-related stress, human relations, organizational characteristics, and the compensation system all exerting effects: role-related stress and interpersonal relationships had a negative effect on tenure commitment, while organizational characteristics and the compensation system had a positive effect. Examining the effect of job stress on turnover intention showed that role-related stress and human relations had a positive effect, while job characteristics and the compensation system had a negative effect. Because many of the subjects were young, this suggests that the expectations and attractiveness of the current job and its role-related aspects carry more weight for commercial sports center workers than the compensation system. Correlation analysis of the relationship between organizational commitment and turnover intention found a negative correlation coefficient; specifically, the negative correlation between commitment and intention to leave was reported to strengthen with longer tenure. Therefore, in order to enhance organizational commitment through job stress reduction, measures addressing job autonomy and other job characteristics with a relatively strong influence on organizational commitment are needed. To block the negative effects of job stress on the organizational commitment of commercial sports center workers, a systematic effort is required to remove the factors that induce job stress through methods such as job redesign, clear job regulations, establishment of a smooth communication system, periodic interviews, and education and training to develop members' competence. If these various policies are prepared, the turnover rate of workers will naturally decrease.
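To illustrate the kind of multiple-regression analysis behind the influence relationships reported above, the following sketch fits an ordinary least squares model of turnover intention on the five job-stress factors. All values are simulated placeholders generated to mimic the reported sign pattern; they are not the study's responses, and the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 261  # matches the analyzed sample size

# Simulated 1-5 factor scores for the five job-stress factors.
X = rng.uniform(1, 5, size=(n, 5))
names = ["job_characteristics", "role_related", "interpersonal",
         "organizational", "compensation"]

# Simulated turnover intention with signs matching the reported pattern:
# positive for role-related and interpersonal stress, negative for
# job characteristics and the compensation system.
true_beta = np.array([-0.3, 0.5, 0.4, 0.0, -0.2])
y = 3.0 + X @ true_beta + rng.normal(0, 0.5, size=n)

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(f"intercept: {beta[0]:+.2f}")
for name, b in zip(names, beta[1:]):
    print(f"{name:>20}: {b:+.2f}")
```

The signs of the recovered coefficients correspond to the direction of each factor's effect on turnover intention, which is how the positive and negative influences described above would be read off a regression table.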
The evolution of genetic architectures underlying quantitative traits

In the classic view introduced by R. A. Fisher, a quantitative trait is encoded by many loci with small, additive effects. Recent advances in QTL mapping have begun to elucidate the genetic architectures underlying vast numbers of phenotypes across diverse taxa, producing observations that sometimes contrast with Fisher's blueprint. Despite these considerable empirical efforts to map the genetic determinants of traits, it remains poorly understood how the genetic architecture of a trait should evolve, or how it depends on the selection pressures on the trait. Here we develop a simple, population-genetic model for the evolution of genetic architectures. Our model predicts that traits under moderate selection should be encoded by many loci with highly variable effects, whereas traits under either weak or strong selection should be encoded by relatively few loci. We compare these theoretical predictions to qualitative trends in the genetics of human traits, and to systematic data on the genetics of gene expression levels in yeast. Our analysis provides an evolutionary explanation for broad empirical patterns in the genetic basis of traits, and it introduces a single framework that unifies the diversity of observed genetic architectures, ranging from Mendelian to Fisherian.

A quantitative trait is encoded by a set of genetic loci whose alleles contribute directly to the trait value, interact epistatically to modulate each other's contributions, and possibly contribute to other traits. The resulting genetic architecture of a trait (Hansen, 2006) influences its variational properties (Kroymann and Mitchell-Olds, 2005; Carlborg et al., 2006; Rockman and Kruglyak, 2006; Mackay et al., 2009) and therefore affects a population's capacity to adapt to new environmental conditions (Jones et al., 2004; Carter et al., 2005; Hansen, 2006). Over longer timescales, genetic architectures of traits have important consequences for the evolution of recombination (Azevedo et al., 2006) and of sex (de Visser and Elena, 2007), and even for reproductive isolation and speciation (Fierst and Hansen, 2010). Although scientists have studied the genetic basis of phenotypic variation for more than a century, recent technologies, as well as the promise of agricultural and medical applications, have stimulated tremendous efforts to map quantitative trait loci (QTL) in diverse taxa (Ungerer et al., 2002; Flint and Mackay, 2009; Visscher, 2008; Manolio et al., 2009; Rockman et al., 2010; Emilsson et al., 2008; Ehrenreich et al., 2012). These studies have revealed many traits that seem to rely on Fisherian architectures, with contributions from many loci (Orr, 2005) whose additive effects are often so small that QTL studies lack power to detect them individually (Rockman, 2012; Yang et al., 2010). Other traits, however, are encoded by a relatively small number of loci, including the large number of human phenotypes with known Mendelian inheritance. The subtle statistical issues of designing and interpreting QTL studies in order to accurately infer the molecular determinants of a trait are already actively studied (Rockman, 2012; Yang et al., 2010). Nevertheless, distinct from these statistical issues of inference from empirical data, we lack a theoretical framework for forming a priori expectations about the genetic architecture underlying a trait (Rockman and Kruglyak, 2006; Hansen, 2006).
For instance, what types of traits should we expect to be monogenic, and what traits should be highly polygenic? More generally, how does the genetic architecture underlying a trait evolve, and what features of a trait shape the evolution of its architecture? To address these questions we developed a mathematical model for the evolution of genetic architectures, and we compared its predictions to a large body of empirical data on quantitative traits.

Results and Discussion

Genetic architectures predicted by a population-genetic model
Our approach to understanding the evolution of genetic architectures combines standard models from quantitative genetics (Lande, 1976) with the Wright-Fisher model from population genetics (Ewens, 2004). In its simplest version, our model considers a continuous trait whose value, x, is influenced by L loci. Each locus i contributes additively an amount α_i, so that the trait value is defined as the mean of the α_i values across contributing loci, x = (1/L) Σ_i α_i. This trait definition means that a gene's contribution to a trait is diluted when L is large, which prevents direct selection on gene copy numbers when genes have similar contributions (Proulx and Phillips, 2006; Proulx, 2012). We discuss this definition below, along with alternatives such as the sum. The fitness of an individual with trait value x is assumed to be Gaussian with mean 0 and standard deviation σ_f, so that smaller values of σ_f correspond to stronger stabilizing selection on the trait (Lande, 1976). Individuals in a population of size N replicate according to their relative fitnesses. Upon replication, an offspring may acquire a point mutation that alters the direct effect of one locus, i, perturbing the value of α_i for the offspring by a normal deviate; or the offspring may experience a duplication or a deletion in a contributing locus, which changes the number of loci L that control the trait value in that individual (see Methods). Point mutations, duplications, and deletions occur at rates µ, r_dup, and r_del, which have comparable magnitudes in nature (table S1; Lynch et al., 2008; Watanabe et al., 2009; Lipinski et al., 2011; van Ommen, 2005). Finally, an offspring may also increase the number of loci that contribute to its trait value by recruitment, that is, by acquiring a recruitment mutation, with probability µ × r_rec, in some gene that did not previously contribute to the trait value (see Methods). Over successive generations in our model, the genetic architecture underlying the trait, that is, how many loci contribute to the trait's value and the extent of their contributions, varies among the individuals in the population, and evolves. The genetic architectures that evolve in our model represent the complete genetic determinants of a trait, which may include, but do not correspond precisely to, the genetic loci that would be detected based on polymorphisms segregating in a sample of individuals in a QTL study. We discuss this important distinction below, when we compare the predictions of our model to empirical QTL data. We studied the evolution of genetic architectures in sets of 500 replicate populations, simulated by Monte Carlo, with different amounts of selection on the trait; a minimal sketch of one generation of this process appears below. We ran each of these simulations for 50 million generations, in order to model the extensive evolutionary divergence over which genetic architectures are assembled in nature. The form of the genetic architecture that evolves in our model depends critically on the strength of selection on the trait.
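To make the simulation procedure concrete, the following minimal sketch implements one version of the generational update described above: Gaussian stabilizing selection on the trait x = mean(α_i), Wright-Fisher resampling, and point mutations, duplications, and deletions. All parameter values are illustrative placeholders, and recruitment and epistasis are omitted for brevity; this is a sketch under stated assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's exact values).
N, sigma_f = 100, 0.5           # population size, selection strength
mu, r_dup, r_del = 1e-3, 1e-3, 1e-3

def fitness(alphas):
    """Gaussian stabilizing selection on the trait x = mean(alpha_i)."""
    x = np.mean(alphas)
    return np.exp(-x**2 / (2 * sigma_f**2))

def next_generation(pop):
    """One Wright-Fisher generation: selection, then mutation."""
    w = np.array([fitness(ind) for ind in pop])
    parents = rng.choice(len(pop), size=len(pop), p=w / w.sum())
    offspring = []
    for p in parents:
        ind = list(pop[p])
        u = rng.random()
        if u < mu:                          # point mutation perturbs one locus
            i = rng.integers(len(ind))
            ind[i] += rng.normal(0, 0.1)
        elif u < mu + r_dup:                # duplication adds a copy of a locus
            ind.append(ind[rng.integers(len(ind))])
        elif u < mu + r_dup + r_del and len(ind) > 1:
            del ind[rng.integers(len(ind))]  # deletion removes a locus
        offspring.append(ind)
    return offspring

pop = [[0.0] * 5 for _ in range(N)]          # start with L = 5 loci per individual
for _ in range(1000):
    pop = next_generation(pop)
print("mean number of loci:", np.mean([len(ind) for ind in pop]))
```

Because the trait is the mean of locus contributions, a duplication perturbs the trait only to the extent that the copied locus differs from the current average, which is the dilution effect discussed above.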
In particular, we found a striking non-monotonic pattern: the equilibrium number of loci that influence a trait is greatest when the strength of selection on the trait is intermediate (Fig. 1). Moreover, the variability in the contributions of loci to the trait value (Fig. S1) and the effects of deleting or duplicating genes (Fig. S2) are also greatest for a trait under intermediate selection. In other words, our model predicts that traits under moderate selection will be encoded by many loci with highly divergent effects, whereas traits under strong or weak selection will be encoded by relatively few loci. We also studied how epistatic interactions among loci influence the evolution of genetic architecture. To incorporate the influence of locus j on the contribution of locus i, we introduced epistasis parameters β_ji, so that the trait value is now given by

x = (1/L) Σ_i α_i f_β(Σ_j β_ji),

where f_β is a standard sigmoidal filter function (Azevedo et al., 2006; see Methods and Fig. S4). As with the direct effects of loci, the epistatic effects were allowed to mutate, vary within the population, and evolve. Although significant epistatic interactions emerge in the evolved populations (Fig. S3B), the presence of epistasis does not strongly affect the average number of loci that control a trait (Figs. S3A and S4). Epistasis is not required for the evolution of large L, nor does it change the shape of its dependence on the strength of selection.

Intuition for the results
There is an intuitive explanation for the non-monotonic relationship between the selection pressure on a trait and the number of loci that control it. For a trait under weak selection (high σ_f), changes in the trait value have little effect on fitness. Thus, even if deletions, recruitments, and duplications change the trait value, these changes are nearly neutral (Fig. 2). As a result, the number of loci controlling the trait evolves to its neutral equilibrium, which is small because deletions are more frequent than duplications and recruitments (see Methods, Figs. 1 and S3). On the other hand, when selection on a trait is very strong (low σ_f), few point mutations, and only those with small effects on the trait, will fix in the population. As a result, all loci have similar contributions to the trait value (Fig. 2, row 1), and so duplications or deletions again have little effect on the trait or on fitness (Fig. 2, rows 2 and 3). In this case, the equilibrium number of loci is given by the value expected when deletions and duplications, but not recruitments, are neutral (Figs. 1 and S3). Only when selection on a trait is moderate can variation in the contributions across loci accrue and impact the fixation of deletions and duplications (Fig. 2, row 4), by a process called compensation: a slightly deleterious point mutation at one locus, which perturbs the trait value, segregates long enough to be compensated by point mutations at other loci (Rokyta et al., 2002; Meer et al., 2010; Kimura, 1985; Poon and Otto, 2000). Compensation increases the variance in the contributions among loci (Fig. 2, row 1), as has been observed for many phenotypes in plants and animals (Rieseberg et al., 1999). Finally, even though duplications and deletions are mildly deleterious in this regime, there is a bias favoring duplications over deletions (Fig. 2, row 3). This bias arises because duplications increase the number of loci in the architecture, which attenuates the effect of each locus on the trait (Fig. 2, row 2).
Thus, when selection is moderate, duplications and recruitments fix more often than deletions and drive the number of contributing loci above its neutral expectation (Fig. 2, rows 4 and 5). As the number of loci increases the bias is reduced (Fig. 2, rows 4 and 5), and so L equilibrates at a predictable value (Figs. 1 and S3). Duplications and recruitments might also be slightly favored over deletions under intermediate selection because architectures with more loci also have reduced genetic variation (Wagner et al., 1997). This effect, which would positively select for an increase in gene copy numbers, is likely weak in our model, as duplications and recruitments are deleterious on average under intermediate selection, only less so than deletions (Fig. 2, rows 4 and 5).

Robustness of results to model assumptions
The predictions of our model, notably that the number of loci in a genetic architecture is greatest for traits under intermediate selection, are robust to choices of population-genetic parameters. The non-monotonic relation between the selection pressure on a trait and the size of its genetic architecture, L, holds regardless of population size, but the location of maximum L is shifted towards weaker selection in larger populations (Fig. S5). This result is compatible with our explanation involving compensatory evolution: selection is more efficient in large populations, and so compensatory evolution occurs at smaller selection coefficients. Likewise, when the mutation rate is smaller, the resulting equilibrium number of controlling loci is reduced (Fig. S6). This result is again compatible with the explanation of compensatory evolution, which requires frequent mutations. Increasing the rate of deletions relative to duplications also reduces the equilibrium number of loci in the genetic architecture, but our qualitative results are not affected even when r_del is twice as large as r_dup (Fig. S7). Finally, increasing the rate of recruitment r_rec (or the genome size) increases the number of loci contributing to all traits except those under very strong selection, as expected from Fig. 2. Our prediction that traits under intermediate selection are encoded by the richest genetic architectures is insensitive to changes in this parameter, and it holds even in the absence of recruitment (Fig. S8). Our analysis has relied on several quantitative-genetic assumptions, which can be relaxed. First, we assumed that all effects of locus i (i.e., α_i and all β_ij and β_ji) are simultaneously perturbed by a point mutation. Relaxing this assumption, so that only a subset of the effects are perturbed, does not change our results qualitatively (Fig. S9). Second, we assumed that point mutations have unbounded effects, so that variation across loci can increase indefinitely. To relax this assumption we made mutations less perturbative to loci with large effects (see Methods); even a strong mutation bias of this type led to very small changes in the equilibrium behavior (Fig. S10). Third, we assumed no metabolic cost of additional loci, even though additional genes in Saccharomyces cerevisiae are known to decrease fitness slightly (Wagner, 2005, 2007). Nonetheless, including a metabolic cost proportional to L does not alter our qualitative predictions (Fig. S11). Finally, we defined the trait value as the average of the contributions α_i across loci, as opposed to their sum.
This definition reflects the intuitive notion that a gene product's contribution to a trait will generally depend on its abundance relative to all other contributing gene products. Moreover, the assumption that increasing the number of loci influencing a trait attenuates the effect of each one is supported by empirical data: changing a gene's copy number is known to have milder phenotypic effects when the gene has many duplicates (Gu et al., 2003; Conant and Wagner, 2004). Nonetheless, alternative definitions of the trait value, which span from the sum to the average of contributions across loci, generically exhibit the same qualitative results (text S1 and Fig. S12). Although robust to model formulation and parameter values, our results do depend in part on initial conditions. When selection is strong, the initial genetic architecture can affect the evolutionary dynamics of the number of loci (Fig. S14). This occurs because the initial architecture may set dependencies among loci that prevent a reduction of their number. This result indicates that only the architectures of traits under very strong selection should depend on historical contingencies. We have also studied a multitrait version of our model, where genes participating in other traits can be recruited or lost through mutation. Even though this model features pleiotropy, and the effects of recruitments evolve neutrally, our qualitative results remained unaffected (text S3 and Fig. S15).

Figure 2. The consequences of gene duplications, recruitments and deletions in a population-genetic model. Populations were initially evolved with a fixed number of controlling loci L (row 1), and we then measured the effects of recruitments, deletions and duplications on the trait value (row 2) and on fitness (row 3). From the latter, we calculated the rate at which deletions, recruitments and duplications enter and fix in the population (row 4), and the resulting rate of change in the number of loci contributing to the trait (row 5). Row 1: For L > 1, the variation in direct effects (α_i) and indirect effects among controlling loci (Σ_j β_ji) increases as selection on the trait is relaxed. Row 2: As a consequence of this variation among loci, the average change in the trait value following a duplication or a deletion also increases as selection on the trait is relaxed. Row 3: Changes in the trait value are not directly proportional to fitness costs, because the same change in x has milder fitness consequences when selection is weaker (larger σ_f); as a result, the average fitness detriment of duplications and deletions is highest for traits under intermediate selection. Row 4: Consequently, the fixation rates of duplications and deletions are smallest under intermediate selection. Row 5: The equilibrium number of loci controlling a trait under a given strength of selection is determined by the value of L for which duplications and recruitments on one side, and deletions on the other, enter and fix in the population at the same rate. For example, when σ_f = 10^(-1.5) these rates are equal when L is close to 12 (black arrow), so that the equilibrium genetic architecture contains approximately 12 loci on average (compare Fig. S3, black arrow).

The dynamics of copy number
Previous models related to genetic architecture have been used to study the evolutionary fate of gene duplicates.
These models typically assume that a gene has several sub-functions, which can be gained (neo-functionalization; Ohno, 1970) or lost (sub-functionalization; Force et al., 1999;Lynch and Force, 2000) in one of two copies of a gene. Such "fate-determining mutations" (Innan and Kondrashov, 2010) stabilize the two copies, as they make subsequent deletions deleterious. Such models complement our approach, by providing insight into the evolution of discrete, as opposed to continuous or quantitative, phenotypes. Yet there are several qualitative differences between our analysis and previous studies of gene duplication. Most important, our model considers the dynamics of both duplications and deletions, in the presence of point mutations that perturb the contributions of loci to a trait. This co-incidence of timescales is important in the light of empirical data (Lynch et al., 2008;Watanabe et al., 2009;Lipinski et al., 2011;van Ommen, 2005) showing that changes in copy numbers occur at similar rates as point mutations (table S1). Under these circumstances, a gene may be deleted or acquire a loss-of-function mutation before a new function is gained or lost. Our model includes these realistic rates, and accordingly we find that duplicates are very rarely stabilized by subsequent point mutations. Instead, the number of loci in a genetic architecture may increase, in our model, because compensatory point mutations introduce a bias towards the fixation of duplications as opposed to deletions. Comparison to empirical eQTL data Like most evolutionary models, our analysis greatly simplifies the mechanistic details of how specific traits influence fitness in specific organisms. As a result, our analysis explains only the broadest, qualitative features of how genetic architectures vary among phenotypic traits, leaving a large amount of variation unexplained. This remaining variation may be partly random (as predicted by the distributions of the number of evolving loci, see e.g. Fig. 1), and partly due to ecological and developmental details that our model neglects. Due to this variation, a quantitative comparison between our model and empirical data would require information about the genetic architectures for at least hundreds of traits (see below, for our analysis of expression QTLs). Nevertheless, the qualitative, non-monotonic predictions of our model ( Fig. 1) may help to explain some well-known trends in the genetics of human traits. For instance, in accordance with our predictions, human traits under moderate selection, such as stature or susceptibility to midlife diseases like diabetes, cancer, or heart-disease, are typically complex and highly polygenic; whereas traits under very strong selection, such as those (e.g. mucus composition or blood clotting) affected by childhood-lethal disease like Cystic fibrosis or Haemophilias are often Mendelian; and so too traits under very weak selection (such as handedness, bitter taste, or hitchhiker's thumb) are often Mendelian. Our analysis provides an evolutionary explanation for these differences, and it delineates the selective conditions under which we may expect a Mendelian, as opposed to Fisherian, architecture. We tested our evolutionary model of genetic architectures by comparison with empirical data on a large number of traits. 
Such a comparison must, of course, account for the fact that our model describes the true genetic architecture underlying a trait, whereas any QTL study has limited power and describes only the associations detected from polymorphisms segregating in a particular sample of individuals. Accounting for this discrepancy (see below), we compared our model to data from a study that measured mRNA expression levels and genetic markers in 112 recombinant strains produced from two divergent lines of S. cerevisiae. For each yeast transcript we computed the number of non-contiguous markers associated with transcript level, at a given false discovery rate (see Methods). We also calculated the codon adaptation index (CAI) of each transcript -an index that correlates with the gene's wildtype expression level and with its overall importance to cellular fitness (Sharp and Li, 1987). We found a striking, non-monotonic relationship between the CAI of a transcript and the number of loci linked to variation in its abundance (Fig. 3A). Thus, assuming that CAI correlates with the strength of selection on a transcript, more loci were detected regulating yeast transcripts under intermediate selection than transcripts under either strong or weak selection. We compared the empirical data on yeast eQTLs (Fig. 3A) to the predictions of our evolutionary model. In order to make this comparison, we first evolved genetic architectures for traits under various amounts of selection (Fig. S3), and for each architecture we then simulated a QTL study of the exact same type and power as the yeast eQTL study: that is, we generated 112 crosses from two divergent lines using the yeast genetic map (text S2). As expected, the simulated QTL studies based on these 112 segregants detected many fewer loci linked to a trait than in fact contribute to the trait in the true, underlying genetic architecture (Fig. 3B versus Fig. 1). This result is consistent with previous interpretations of empirical eQTL studies. The simulated QTL studies revealed another important bias: a locus that contributes to a trait under weak selection is more likely to be correctly identified in a QTL study than a locus that contributes to a trait under strong selection (Fig. S16). Furthermore, our simulations demonstrate that the number of associations detected in such a QTL study depends on the divergence time between the parental strains used to generate recombinant lines (Fig. S17). Finally, traits under weaker selection may be more prone to measurement noise, which we also simulated (Fig. S18). Despite these detection biases, which we have quantified, the relationship between the selection pressure on a trait and the number of detected QTLs in our model (Fig. 3B and Figs. S18 and S19) agrees with the relationship observed in the yeast eQTL data (Fig. 3A). Importantly, both of these relationships exhibit the same qualitative trend: traits under intermediate selection are encoded by the richest genetic architectures. Conclusion Many interesting developments lie ahead. Our model is far too simple to account for tissue- and time-specific gene expression, dominance, context-dependent effects, etc. (Ala-Korpela et al., 2011). How these complexities will change predictions for the evolution of genetic architectures remains an open question.
Nonetheless, our analysis shows that it is possible to study the evolution of genetic architecture from first principles, to form a priori expectations for the architectures underlying different traits, and to reconcile these theories with the expanding body of QTL studies on molecular, cellular, and organismal phenotypes.
Figure 3 caption (fragment): ...for traits within each bin of CAI. Greyscale indicates the number of transcripts in each bin (darker means more data). Mean numbers of detected eQTLs are represented by circles. B: For the simulated experiment, we evolved 100 populations of genetic architectures, using the parameters corresponding to Fig. S3. From each such population, we then evolved two lines independently for 25,000 generations in the absence of deletions, duplications and recruitment, to mimic the divergent strains used in the yeast cross. From these two divergent genotypes we then created 112 recombinant lines following the yeast genetic map. We then analyzed the resulting simulated data with R/qtl in the same way as we had analyzed the yeast data (text S2). The distribution of QTLs detected and their means are represented as in Fig. 1, for each value of selection strength σ f .
Model We described the evolution of genetic architectures using the Wright-Fisher model of a replicating population of size N, in which haploid individuals are chosen to reproduce each generation according to their relative fitnesses. The fitness of an individual with L loci encoding trait value x is ω = G(x; 0, σ f ) − c L, where G denotes the density at x of a Gaussian distribution with mean 0 and standard deviation σ f , and the second term denotes the metabolic cost of harboring L loci, which depends on a parameter c. The trait value of such an individual, given the direct contributions α i and epistatic terms β ji , is described by Eq. (1), x = (1/L) Σ i f β (Σ j β ji ) α i , where f β is a sigmoidal curve, so that the epistatic interactions either diminish or augment the direct contribution of locus i depending on whether Σ j β ji is positive or negative (Fig. S4). In general, loci do not influence themselves (β ii ≡ 0) and, in the model without epistasis, all β ji ≡ 0 and f β ≡ 1. If an individual chosen to reproduce experiences a duplication at locus i then the new duplicate, labelled k, inherits its direct effect (α k = α i ) and all interaction terms (β kj = β ij and β jk = β ji for all j ≠ i, k), with the interaction terms β ik and β ki initially set to zero. Recruitment occurs with probability r rec per mutation of one of the 6,000 genes not contributing to the trait. The initial direct contribution α i of recruited locus i is drawn from a normal distribution with mean zero and standard deviation σ m ; its interaction terms with other loci (k), β ik and β ki , are initially set to zero. Note that this assumption is relaxed in the multitrait version of our model, where the direct and indirect effects of recruitments evolve neutrally (text S3 and Fig. S15). In general a point mutation at locus i changes its contribution to the trait, α i , and all its epistatic interactions, β ij and β ji , each by an independent amount drawn from a normal distribution with mean zero and standard deviation σ m . The normal distribution satisfies the assumptions that small mutations are more frequent than large ones (Orr, 1999;Eyre-Walker and Keightley, 2007), and that there is no mutation pressure on the trait (Lande, 1976). We relaxed the former assumption by drawing mutational effects from a uniform distribution without qualitative changes to our results (Fig. S13).
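To make the model components above concrete, here is a minimal Python sketch of the trait value (a sigmoid-weighted average of direct effects, following Eq. (1) as reconstructed above) and of the fitness function (a Gaussian density minus a metabolic cost). This is an illustration only, not the authors' code: the exact shape of the sigmoid f β is not given in the extracted text, so the form used below (value 1 at zero, range 0 to 2) is an assumption, as are the function names.

```python
import numpy as np

def trait_value(alpha, beta):
    """Average over loci of the direct effects alpha_i, each modulated by a
    sigmoid of the summed incoming epistatic terms (sum_j beta_ji)."""
    incoming = beta.sum(axis=0)                # sum_j beta_ji for each locus i
    f_beta = 2.0 / (1.0 + np.exp(-incoming))   # assumed sigmoid: f(0) = 1, range (0, 2)
    return np.mean(f_beta * alpha)

def fitness(x, L, sigma_f, c):
    """Gaussian stabilizing selection on the trait value x (optimum at 0,
    width sigma_f) minus a metabolic cost proportional to the number of loci L."""
    gaussian_density = np.exp(-x**2 / (2 * sigma_f**2)) / (sigma_f * np.sqrt(2 * np.pi))
    return gaussian_density - c * L

# Example: a 3-locus architecture with no epistasis behaves like a plain average.
alpha = np.array([0.2, -0.1, 0.05])
beta = np.zeros((3, 3))
x = trait_value(alpha, beta)
print(x, fitness(x, L=3, sigma_f=0.1, c=1e-4))
```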
In order to relax the latter assumption we included a bias towards smaller mutations in loci with large effects, so that the mean effect of a mutation at locus i now equals −b α × α i and −b β × β ij , respectively for α i and β ij (Rajon and Masel, 2011). We also considered a model in which a mutation at locus i affects only a proportion p em of the values α i , β ij , and β ji . By default, simulations were initialized with L = 1 and α 1 = 0; alternative initial conditions were also studied, as shown in Fig. S14. Markov chain for neutral changes in copy number When deletions and duplications are neutral, and recruitments strongly deleterious, the evolution of the number of loci L in the genetic architecture is described by a Markov-chain on the positive integers. The probability of a transition from L = i to L = i + 1 equals r dup × i, and that of a transition from i to i − 1 is r del × i. We disallow transitions to L = 0, assuming that some regulation of the trait is required. We obtained the stationary distribution of L by setting the density d 1 of individuals in stage 1 to 1 and calculating the density d i of individuals in the following stages as d i+1 = d i × (r dup × i) / (r del × (i + 1)). The equilibrium probability of being in state i was calculated as p i = d i / Σ j d j , and the expected value of L was calculated as Σ ∞ i=1 p i × i. With r dup = 10 −6 and r del = 1.25 × 10 −6 , we found an equilibrium expected L of 2.485. When deletions, duplications and recruitments are all neutral, equation (4) can be replaced by a version that also incorporates the recruitment rate. This equation illustrates the fact that the rates of deletions (which include loss of function mutations) and duplication depend on the number of loci in the architecture, whereas the rate of recruitments does not. With µ = 3 × 10 −6 and r rec = 5 × 10 −5 , we found an equilibrium expected L of 4.705. Calculation of s and p fix We first evolved populations to equilibrium with a fixed number of controlling loci L, and we then measured the effects of deletions, duplications or recruitments introduced randomly into the population. We simulated the evolution of the genetic architecture with L fixed in 500 replicate populations, over 8 × 10 6 generations for deletions and 10 × 10 6 generations for duplications, reflecting the unequal waiting time before the two kinds of events. We used 10 × 10 6 generations for recruitment as well, although different durations did not affect our results. For each genotype k in each evolved population, we calculated the fitness ω k (i) of mutants with locus i deleted or duplicated. We calculated the corresponding selection coefficients as s k (i) = ω k (i) / < ω > − 1, where < ω > denotes mean fitness in the population. We calculated s as the mean across loci and genotypes of s k (i), weighted by the number of individuals with each genotype. We calculated the probability of fixation of a duplication, deletion or recruitment from its selection coefficient, and obtained the mean p fix using the same method as for s. Rates of deletions and duplications fixing were calculated per locus (Fig. 2) as r del or r dup times p fix . The total probability of a duplication or a deletion entering the population and fixing is, of course, also multiplied by L. However, recruitment rates remain constant as L changes. Therefore, we divided the rate of recruitments by L in Fig. 2, for comparison to the per-locus duplication and deletion rates.
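The neutral copy-number calculation just described is easy to check numerically. The short Python sketch below (an illustration, not code from the paper) iterates the recursion for d i, normalizes to obtain p i, and recovers the expected L of roughly 2.485 quoted above for the stated duplication and deletion rates.

```python
import numpy as np

# Neutral birth-death chain on L = 1, 2, ...: transitions i -> i+1 at rate
# r_dup * i and i -> i-1 at rate r_del * i, with L = 0 disallowed.
r_dup, r_del = 1e-6, 1.25e-6
i_max = 500                      # truncation; densities decay quickly for these rates

d = np.zeros(i_max + 1)
d[1] = 1.0                       # density of state 1 set to 1, as in the text
for i in range(1, i_max):
    d[i + 1] = d[i] * (r_dup * i) / (r_del * (i + 1))

p = d / d.sum()                  # equilibrium probabilities p_i
expected_L = (np.arange(i_max + 1) * p).sum()
print(round(expected_L, 3))      # ~2.485, matching the value reported above
```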
Number of loci influencing yeast transcript abundance We used the R/qtl (Broman et al., 2003;R Development Core Team, 2011) package to calculate LOD scores for a set of 1226 observed markers and 3223 uniformly distributed pseudomarkers separated by 2 cM, by Haley-Knott regression. We calculated the LOD significance threshold for a false discovery rate (FDR) of 0.2 as the corresponding quantile in the distribution of the maximum LOD after 500 permutations (an FDR of 0.01 and a fixed LOD threshold of 3 produced qualitatively similar results). The number of detected loci linked to the expression of a transcript was calculated as the number of non-consecutive genomic regions with a LOD score above the threshold. We downloaded S. cerevisiae coding sequences from the Ensembl database (EF3 release), and calculated CAI values with the seqinr (Charif and Lobry, 2007) package, using codon weights from a set of 134 ribosomal genes. Supplementary material Text S1: Alternative definitions of the trait As described in the main text, the non-monotonic relationship between the strength of selection on a trait and the number of loci in its underlying genetic architecture depends on the unequal fitness consequences of deletions and duplications. This behavior should therefore be absent when the trait value x is the sum, rather than the average, of the contributions across loci. To explore this issue, we generalized our definition of the trait by introducing an additional parameter ǫ: x = L^−ǫ Σ i f β (Σ j β ji ) α i . As the parameter ǫ ranges from 0 to 1 the trait definition ranges from a sum to an average. As expected when ǫ = 0, the number of loci in the equilibrium genetic architecture is greatly reduced under intermediate selection as compared to the results in the main text (Fig. S15). Interestingly, the mean number of loci also shows a non-monotonic trend in this situation, with a small peak at log 10 (σ f ) = −0.5. This trend is likely driven by the fixation rate of recruitment mutations (as seen in Fig. 2). This explains why it is much less pronounced than those in Figs. 1 and S3-A, which involve a higher fixation rate of duplications over deletions. For any other value of ǫ < 1, we found a non-monotonic relationship similar to the one reported in the main text. Thus, our qualitative results hold for all models provided the trait is not defined strictly as the sum of contributions across loci. When the trait value equals the sum of the contributions of all loci (ǫ = 0), the effect of a gene deletion, knock-out or knock-down is independent of the number of copies of the gene. Conversely when the trait value is the mean of the contributions (ǫ = 1), or is some function between the mean and the sum (0 < ǫ < 1), the effect of a deletion decreases with the number of loci in the genetic architecture. As shown by Conant and Wagner (2004) in C. elegans, the number of detectable knock-down phenotypes decreases with the number of copies of genes in a gene family, suggesting that ǫ does indeed exceed 0 in this species. A similar stronger effect of the deletion of a singleton compared to that of a duplicate has also been observed in S. cerevisiae (Gu et al., 2003). Text S2: QTL detection in simulated populations We analyzed the genetic architectures that evolved under our population-genetic model using a simulated QTL study of the exact same type and power as the yeast eQTL study. Specifically, 100 evolved populations were taken from simulations with parameters corresponding to Fig. S2 for the model with epistasis.
From each population, we evolved two lines independently for T generations in the absence of deletions, duplications and recruitment. We then used the most abundant genotype from each line to create parental strains, mimicking the diverged BY and RM parental strains. A few populations were polymorphic for the number of loci initially, sometimes resulting in two lines with different values of L, which we discarded. In each parent, we assigned the L contributing loci randomly among 1226 simulated marker sites, and also assigned their associated α i values and the interactions β ij between loci. We constructed 112 recombinant haploid offspring by mating these two parents according to the genetic map inferred from Brem et al. Each offspring inherited each α i value, and the set of interactions towards other loci (β ij ∀j), from either one or the other parent. The trait value in each offspring was calculated as in eq. (1) and then was perturbed by adding a small amount of noise (normally distributed with mean 0 and standard deviation σ n ), to simulate measurement noise. We then analyzed these artificial genotype and phenotype data following the same protocol we used for the real yeast eQTL data (i.e. using R/qtl). We repeated this entire process with 100 different pairs of parents for each value of σ f . Fig. S18 shows the relationship between the selection pressure σ f and the number of linked loci detected in this simulated QTL study, for different divergence times between the two lines and different values of σ n . In Fig. S19, we increased σ n proportionally to log 10 (σ f ), from 0.0001 to 0.001. We also calculated the probability that a locus known to influence the trait in the true architecture (Fig. S3) is in fact detected in the QTL study. This probability is plotted as a function of σ f for different values of the noise σ n (Fig. S16) and of the time of divergence (Fig. S17). Text S3: Multitrait model We simulated the evolution of the genetic architecture underlying multiple traits with a model slightly modified from the single-trait version. In this model, the phenotype consists of 10 traits, each trait k under a different selection pressure σ f (k) (the values of σ f (k) are those used in independent simulations of the single-trait model; see the x-axis of Fig. S15). In the multiple traits version, L denotes the total number of loci forming the architecture of the 10 traits. L can change when loci are duplicated at rate r dup and deleted at rate r del . Each locus participates in a set of traits. The direct effect of locus i on trait t is now denoted α it and the indirect effect of locus i on the part of locus j that contributes to t is denoted β ijt . To allow for partial gains and losses of function, we define two new matrices A and B, which have the same dimensions as α and β. The functions corresponding to α it and β ijt are 'on' when A it = 1 or B ijt = 1, respectively, and are 'off' otherwise. Similarly to eq. (1) in our single-trait model, we calculate the value of trait t from the 'on' contributions, where f β is the sigmoidal function defined in eq (3). Point mutations of locus i alter all α it and β ijt by a normal deviate. Moreover, a mutation can change A it and B ijt to 0 with probability 0.1 and to 1 with probability 0.005. Over successive generations, the genetic architecture underlying each trait evolves through gene deletions and duplications, and through recruitments and losses of new functions. In this model, only the L genes in the simulated architecture can be recruited, i.e.
we do not assume a fixed number of genes that can be recruited at any time. Therefore, the phenotypic effects of recruitment evolve during our simulation, instead of being sampled from a given distribution. If Σ i A it = 0 for any trait t, the individual is considered non-viable and fitness ω k equals 0. Otherwise, fitness is the product of Gaussian functions for each trait, times the cost associated with the number of loci. We simulated the evolution of the genetic architecture through a Wright-Fisher process, with population genetics parameters identical to the default values in table S2, except c = 10 −4.5 (Wagner, 2005, 2007). The results of 200 simulations are represented in Fig. S15. Additional reference: Xu L, et al. 2006. Average gene length is highly conserved in prokaryotes and eukaryotes and diverges only between the two kingdoms. Mol Biol Evol 23:1107-8.
Table S1. Estimates of rates of mutations µ, gene duplications r dup and deletions r del . All rates are per gene per generation. µ is the rate of non-silent mutations (Lynch et al., 2008) (0.75× the per-nucleotide mutation rate). When the mutation rate was given per nucleotide, we multiplied it by the average gene length in Eukaryotes (Xu et al., 2006) (1346). For D. melanogaster (Watanabe et al., 2009), the rate of detectable mutations was used, after correcting for the length of the 3 loci in the study. The scale of analysis can be the whole genome (WG), or a specific set of loci, in which case the number of loci is denoted in the table.
Species | µ | r dup | r del | Scale | Refs
S. cerevisiae | 3.33 × 10 −7 | 3.4 × 10 −6 | 2.1 × 10 −6 | WG | (Lynch et al., 2008)
D. melanogaster | 9.18 × 10 −7 | 4 × 10 −7 | 4 × 10 −7 | 3 | (Watanabe et al., 2009)
C. elegans | 2.02 × 10 −6 | 1.25 × 10 −7 | 1.36 × 10 −7 | WG | (Lipinski et al., 2011)
H. sapiens | 1.5 × 10 −5 | 10 −5 | 6.67 × 10 −5 | 1 | (van Ommen, 2005)
Truncated figure caption: ... (table S2). A mutation rate of 3 × 10 −6 was used to sample recruitment events, so the overall probability of recruitment remains constant.
Figure S13. The evolution of L is not strongly affected by mutation biases in α or β. A strong bias (bα = b β = 0.4) reduces the maximum variation across loci and therefore reduces L when log 10 (σ f ) > −2.5. All values represent the ensemble average of 500 replicate simulations run for 5 × 10 7 generations. All other parameters are set to their default values (Table S2). One data point was omitted: L ≈ 61 at log 10 (σ f ) = −1.5 and bα = b β = 0.2.
Figure S15. The number of loci contributing to a trait is a non-monotonic function of σ f whenever ǫ is higher than 0 (Text S1). All values represent the ensemble average of 500 replicate simulations run for 5 × 10 7 generations. All other parameters are set to their default values (table S2).
Truncated figure caption: ... In this model the overall rate of recruitments of new loci is reduced, and therefore so too the equilibrium number of loci per trait. Nevertheless, the qualitative relationship between selection pressure and number of loci is similar to that in the single-trait version of our model.
Figure S19. The probability of detecting a locus in the true architecture increases as selection becomes weaker (σ f increases). Detection is more accurate as the noise decreases. Error bars represent the mean ± one standard error, calculated over 100 replicate QTL simulations.
Figure S20. The probability of detecting a locus in the true architecture increases as selection becomes weaker (σ f increases). Detection is more accurate as the divergence time increases.
Error bars represent the mean ± one standard error, calculated over 100 replicate QTL simulations.
Truncated figure caption: As in Fig. 3B in the main text, but we changed the time of divergence between the two lines in the experiment. The noise in trait measurements increases proportionally to log 10 (σ f ), from 0.0001 to 0.001.
2013-01-22T19:27:19.000Z
2012-10-31T00:00:00.000
{ "year": 2012, "sha1": "940babf085dd534eee2a978a030395205a5331df", "oa_license": null, "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rspb.2013.1552", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "b194c50684cfa4b6462167f58cdf9e142329402f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
56342345
pes2o/s2orc
v3-fos-license
Cognitive and emotional control in college drinkers and the relationship to comorbid disorders Introduction: Alcohol abuse during university years is associated with long term deficits and higher rates of alcohol use disorders, a pervasive psychiatric problem. Due to the ongoing neuromaturation and cognitive development youth drinking may impact and be impacted by disordered thinking; factors which may relate to comorbid psychiatric disorders. Participants: One hundred and ninety seven university students were recruited and categorized in to different levels of alcohol consumption based on two self-report measures. Method: Cognitive performance was assessed through six tasks: Wisconsin Card Sorting Test, Delay Discounting Task, One Touch Stockings of Cambridge, Trail Making Task (A and B), the Behavioral Rating Inventory of Executive Function, and the Dysexecutive Questionnaire. Results/Conclusions: Significant findings were noted in two MANOVAs comparing various drinking groups and nondrinkers; both p < .05. Primary differences were noted in subscales of the Behavioral Rating Inventory of Executive Function and the Dysexecutive Questionnaire related to metacognition and self-regulation. Disparities in Wisconsin Card Sorting Task performance were also significant. Though the deficits were not as vast as hypothesized, the inability for binge drinkers to complete an equal number of categories in the WCST as their nondrinking peers holds interesting conclusions. Those which are discussed relate to binge drinkers’ similarities in dysfunction between drinkers and mental health disorders. Correspondence to: Barbara C. Banz, Yale University School of Medicine, Church Street, 7th Floor, New Haven, CT 06519, USA, Tel: 973 879-9931; Fax: 203 737-3591; E-mail: barbara.banz@yale.edu Introduction Alcohol use disorders (AUD) are widespread, and possibly the most preventable psychiatric disorder [1]. Alcohol abuse is particularly pervasive during college [2]. College drinkers are at higher risk for deleterious behavior [3] and AUDs [4]. Therapies exist for AUD and reducing university drinking, though widespread comorbidity with other mental health disorders such as depression and anxiety reduces efficacy of these methods [5,6]. Depression and anxiety present with altered executive functioning and emotion regulation [7,8]. In typical college-aged individuals, the regions which facilitate executive functioning and emotion regulation, the prefrontal cortex (PFC) and limbic system, are maturing [9]. During maturation, these regions are particularly sensitive to the neurotoxic effects of alcohol [10,11]. This is evident through executive and cognitive functioning measures [12]. As such, a multifaceted understanding of executive functioning in college drinkers is needed to understand potential confounding comorbidities to develop more effective interventions. Our study aimed to develop our understanding of the neurocognitive profile of college drinkers that may elucidate functioning similarities between college drinking and other mental illnesses that are prevalent in college students. The current measures allowed for the evaluation of interactions between separate processes, cognitive/executive functioning, and behavioral/psychological factors. These interactions were thought to be evident due to differences in drinking risk factors associated with the deficits in interpersonal awareness, constructs which are relate to various cognitive deficits and the PFC [13]. 
Methods and Materials Participants: One hundred ninety-seven (75 males) participants were recruited from Introduction to Psychology courses at a midsize university. Individuals received extra credit or class credit for attending the study in order to fulfill course research requirements. Of this group, those that were over the age of 25 or did not report an age were excluded from further analyses (17 total). In accordance with previously used practices, those individuals with a history of neurological or psychological diagnoses (68) were not included in the initial data analyses. Additionally, in order to control for confounds due to familial drug or alcohol addiction or abuse, those with this history were excluded (24 total). Procedure: Each participant was asked to read and complete an approved consent form, and a demographics form which included questions regarding sex-specific binge-drinking (e.g. "If you are a female, please answer the following: Have you consumed 4 or more drinks on at least one occasion during the 2 weeks before survey? If so, how many drinks in one sitting?"). This specific questioning was included as binge-drinking is the most prevalent form [2,14] and most deleterious pattern of drinking in a maturing population [11]. The demographic form and the following tasks were administered in a counterbalanced order across participants. Alcohol Use Disorder Identification Test The Alcohol Use Disorder Identification Test (AUDIT) [15,16] was used to assess a more detailed view of drinking and consequential behavior within these individuals in order to more appropriately categorize individuals into drinking groups. Executive Functioning Measures Wisconsin Card Sorting Test (WCST): The WCST is a computerized card sorting task during which participants sorted cards based on three categories: color, form, or number. Four stimulus cards allowed individuals to see representations of the categories as one red circle, two green stars, three blue squares, and four yellow plus signs (+). Participants were told that they would be informed whether their categorization was "correct" or "wrong"; however, no direction was given as to which category was correct, and the correct category changed after ten trials [Strauss, Sherman, & Spreen, 2006]. The number of categories completed was the parameter used for analysis. Trail Making Task (TMT): This paper-and-pencil neuropsychological measure requires participants to connect dots in a sequential manner. Two versions were administered to each participant; version A asked participants to connect dots labeled 1-25, while version B required connecting in an alphanumeric manner, A-1 through L-13. The ratio calculated from the time to successfully complete trials A and B was used for analyses in the current study; greater impairment is reflected through larger ratios [17]. One Touch Stockings of Cambridge (OTS): During OTS administration, individuals were asked to report how many moves it would take to arrange a given set of billiard balls in stockings to mirror an example on a computer tablet. Unlike the SOC, one does not get the opportunity to move the billiard balls; items are rearranged mentally [11]. The two outcome measures used in this investigation were mean latency to correct choice and mean choices to correct choice. Delay-Discounting Task (DDT): The DDT was developed by [18] to assess individual choices. Over 27 items, participants were asked to choose if they would like to receive a smaller, immediate reward (SIR) or a larger, delayed reward (LDR).
The LDRs were divided into three categories: S: $25-35, M: $50-60, L: $75-85. Behavioral Rating Inventory of Executive Function - Adult Version (BRIEF-A): Comprised of 75 items, this measure is effective at evaluating the everyday aspects of executive function [16]. Dysexecutive Questionnaire (DEX): The DEX is a 20-item questionnaire that assesses behavior, cognition, motivation, and emotion and personality, i.e., cognitive regulation [19]. Each item is scored on a 5-point Likert scale, 0-4 for "Never" to "Very Often", with higher scores implying greater dysexecutive function. Analytic Approach A multivariate analysis of variance (MANOVA) was used in order to evaluate differences between three groups (ND, BD, PBD) and the potential effects the current variables had on the other outcome measures. In order to explore differences within binge-drinkers further, a second MANOVA was used (groups: ND, LBD, HBD, PBD). Significance was set at an alpha level of .05 using Wilks' Lambda effects. Results and Discussion After consideration of the demographic exclusion factors, 88 individuals remained for continued analysis. Our initial groupings were non-drinkers (37 total; ND), binge-drinkers (30 total; BD), and problematic binge-drinkers (21 total; PBD). Individuals categorized in these groups met the following criteria: ND (no binge drinking reported through the 4/5 questionnaire and zero scores on the AUDIT), BD (binge drinking reported through the 4/5 questionnaire and AUDIT scores between one and seven), and PBD (binge drinkers who also scored an eight or above on the AUDIT, with binge drinking reported through the 4/5 demographic question). To evaluate potential differences in very low and moderate drinkers we reevaluated our BD categorization. These individuals were categorized as low binge-drinkers (12 total; LBD) and high binge-drinkers (18 total; HBD). Individuals categorized in these groups met the following criteria: LBD (binge drinking reported through the 4/5 questionnaire and AUDIT scores between one and four), HBD (binge drinking reported through the 4/5 questionnaire and AUDIT scores between five and seven). These divided groups allowed for a more detailed view of individuals who still binge drink but at divergent rates. The initial MANOVA model was significant, F(28, 144) = 1.64, p = .03. Significant differences were found in the DEX subscales Metacognition (F(2, 85) = 3.10, p = .03; PBD scores were significantly greater than ND (p < .01)) and Behavioral-Emotional Self-Regulation (F(2, 85) = 3.68, p = .03; PBD significantly higher than ND (p < .01) and BD (p < .05)), and the BRIEF-A subscales MI (F(2, 85) = 3.55, p = .03; scores were significantly higher for PBD than ND (p < .05) and BD (p < .05)) and BRI (F(2, 85) = 4.43, p = .02; scores were significantly higher for PBD than ND (p < .05) and BD (p < .01)). Comparisons can be found in Figure 1. The nonlinear relationship between alcohol consumption and the current measures suggests potential similarities between ND and HBD, and between LBD and PBD. Patterns in WCST performance may relate to task completion through trial and error rather than in an evaluative, planned manner, suggestive of anterior cingulate cortex (ACC) and PFC impairment. These data also support similarities with depression- and anxiety-related populations [20,21].
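For readers who wish to reproduce this style of analysis, an omnibus MANOVA across drinking groups can be run with standard statistical software. The sketch below is a hypothetical Python illustration using statsmodels (the original analyses were not necessarily run this way, and the file and column names are assumptions); Wilks' lambda is reported by mv_test(), mirroring the criterion described above.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one row per participant with a drinking-group label
# (ND, BD, PBD) and the executive-function outcome scores as columns.
df = pd.read_csv("drinking_outcomes.csv")

# Omnibus MANOVA; mv_test() reports Wilks' lambda (among other multivariate
# statistics), evaluated against alpha = .05 as in the text.
fit = MANOVA.from_formula(
    "dex_metacognition + dex_self_regulation + brief_mi + brief_bri + "
    "wcst_categories ~ C(group)",
    data=df,
)
print(fit.mv_test())
```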
The ACC, part of the limbic system, has been related to impulse control, reward anticipation, decision making, social cognition, and emotion, all facets prevalent during neurodevelopment [9], risky decisions [22,23] and mental health disorders [7,8]. Differences in ACC activity have been suggested as a risk factor for alcohol use in adolescents [24,25]. This disordered thinking suggests problem drinking populations may have difficulty adjusting to changes in their social setting and physiological state. Further support is found within the emotion component of the DEX. This is emphasized by the high comorbidity of depression and anxiety in college drinkers [26]. Additionally, these data relate to motivation to drink in college students. Motivations to drink for college students are typically categorized as enhancement, social, conformity, and coping [27], with particular concern for those drinking to cope or enhance feelings. The motivations "drinking to cope" or "enhance feelings" are associated with underlying emotion regulation issues, long-term AUD, and other disorders [28] due to the development of faulty constructs. As such, the consideration of a relationship between monitoring deficits (metacognition), emotional regulation, and motivation to drink may help develop focused intervention methods. Though we feel our methodological and analytical methods were sound, certain limitations need to be noted. First, a formal diagnostic screen was not employed (e.g., a version of the SCID or MINI); as such, we cannot directly compare relationships with a formal clinical diagnosis. However, individuals who reported a previous history of a psychological disorder were excluded, strengthening our arguments regarding similarities between college alcohol users and mental health disorders. Additionally, due to the nature of our sample (volunteers, no pre-screen), our samples were not adequately proportioned to evaluate gender-related differences. This facet becomes increasingly important when considering motivation and trajectory differences [31]. Therefore, we feel that future studies should employ a recruited population with more equal representation of gender. College drinkers are a unique population. Alcohol consumption during these formative years is often thought of as "coming of age" behavior [29]. However, individuals who partake are at an increased risk of developing a long-term AUD [30,31] and higher depression and anxiety later in life [32]. Therefore, studies such as the current investigation are necessary in order to understand underlying dysfunction. This will lead to the development of intervention methods which would provide long-term reduction in behavior and reduce the negative impact [32]. In sum, through the combined use of self-report and task-based evaluation of executive functions, cognitive control, and neurocognition, we believe our data offer valuable insight into a pervasive early-life addiction precursor. With most AUD originating during this time [30,33], and high comorbidity with mental health disorders [34,35], disentangling factors which may prevent an addiction is imperative. The disordered thinking related to dysfunctional emotional processing, regulation, self-monitoring, and awareness noted in the current study offers important suggestions for future directions, experimentally and therapeutically. Authorship and Contributions Both authors declare substantive intellectual contributions to the current work.
BCB developed testing methods, performed data analysis, and majority manuscript preparation. DBD assisted in data interpretation and manuscript preparation. Funding Writing of this manuscript was supported in part by NIAAA and NIDA T32 Grants ((BCB) AA015496; DA007238). NIAAA and NIDA had no role in the study design, collection, analysis or interpretation of the data, writing of the manuscript, or the decision to submit the paper for publication. Declarations of Conflict Authors declare no conflicts of interest.
2019-05-10T13:07:12.114Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "4e8a9c0721c3737b5da5b86a59a4e050f42a07a0", "oa_license": "CCBY", "oa_url": "https://oatext.com/pdf/MHAR-1-108.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bc68497b48685365adb7c1ddf8d4d1ac931d71fc", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
265427704
pes2o/s2orc
v3-fos-license
Development and external validation of dual online tools for prognostic assessment in elderly patients with high-grade glioma: a comprehensive study using SEER and Chinese cohorts Background Elderly individuals diagnosed with high-grade gliomas frequently experience unfavorable outcomes. We aimed to design two web-based instruments for prognosis to predict overall survival (OS) and cancer-specific survival (CSS), assisting clinical decision-making. Methods We scrutinized data from the SEER database on 5,245 elderly patients diagnosed with high-grade glioma between 2000-2020, segmenting them into training (3,672) and validation (1,573) subsets. An additional external validation cohort was obtained from our institution. Prognostic determinants were pinpointed using Cox regression analyses, which facilitated the construction of the nomogram. The nomogram’s predictive precision for OS and CSS was gauged using calibration and ROC curves, the C-index, and decision curve analysis (DCA). Based on risk scores, patients were stratified into high or low-risk categories, and survival disparities were explored. Results Using multivariate Cox regression, we identified several prognostic factors for overall survival (OS) and cancer-specific survival (CSS) in elderly patients with high-grade gliomas, including age, tumor location, size, surgical technique, and therapies. Two digital nomograms were formulated anchored on these determinants. For OS, the C-index values in the training, internal, and external validation cohorts were 0.734, 0.729, and 0.701, respectively. We also derived AUC values for 3-, 6-, and 12-month periods. For CSS, the C-index values for the training and validation groups were 0.733 and 0.727, with analogous AUC metrics. The efficacy and clinical relevance of the nomograms were corroborated via ROC curves, calibration plots, and DCA for both cohorts. Conclusion Our investigation pinpointed pivotal risk factors in elderly glioma patients, leading to the development of an instrumental prognostic nomogram for OS and CSS. This instrument offers invaluable insights to optimize treatment strategies. 
Introduction High-grade gliomas rank as the predominant and most virulent primary brain tumors in adults, constituting a significant fraction of malignant gliomas (1). In individuals aged 65 and over, the occurrence of these tumors is 2.63 times that of their younger counterparts (2), presenting amplified challenges due to typically poorer prognoses in this older demographic (3). Given the dire survival statistics, it is imperative to dissect the prognostic factors for overall survival (OS) and cancer-specific survival (CSS) in the elderly to refine clinical decision-making and treatment modalities. Elderly patients with glioma encounter unique challenges compared to their younger counterparts. These challenges include systemic aging, multiple comorbidities which make tolerating the toxic effects of intensive treatments difficult, and a focus on treatment strategies that prioritize improving quality of life. Declines in cognitive and functional status can influence patient compliance, while the surgical risks and incidences of complications and adverse reactions are elevated. For glioma patients, age and overall health status significantly influence prognosis. Despite this, many existing prognostic models for glioma either overlook the nuances of elderly patients or exclude them based solely on age. Such models fail to offer accurate prognostic predictions for individual elderly patients, hindering effective clinical decision-making. Addressing this deficiency, our study aims to develop tailored prognostic assessment tools for the elderly, facilitating personalized outcome predictions and treatment choices. Clinical and tumor-centric prognostic models can be instrumental in predicting individual risk and outcomes for elderly glioma patients. Nomograms, statistical models that generate personalized probabilities of clinical outcomes like survival based on various predictors (4), have gained traction in oncological decision-making due to their enhanced prognostic precision over conventional staging systems (5). The prognostic nomogram integrates a range of clinical and pathological factors, assigning scores and weights to each based on regression analysis, to quantify a patient's prognostic risk. Unlike traditional staging systems, nomograms excel in offering individualized, quantitative outcome predictions. By generating risk predictions tailored to a patient's clinical and pathological profile, they equip physicians with vital insights for devising personalized treatment strategies. For instance, patients with favorable prognoses might be advised to undergo aggressive treatments, including surgery and chemoradiotherapy. Conversely, for those with unfavorable prognoses, considering the potential for tumor progression and complications, a more conservative approach may be recommended to prioritize quality of life. Yet, a conspicuous gap exists in the provision of nomograms specifically calibrated for OS and CSS predictions in elderly patients with high-grade gliomas. While a plethora of prognostic tools populate the academic landscape, only a scant few embrace the convenience and immediacy of web-based solutions. These digital platforms, with their intuitive interfaces, can revolutionize clinicians' decision-making, ensuring patient-centric, optimal care pathways. In the era of digital health ascendancy, a web-based prognostic tool tailored for this demographic is both timely and essential.
Thus, the crux of our study was twofold: to pinpoint the salient risk factors for elderly patients with high-grade glioma and to architect and validate a web-centric prognostic nomogram for OS and CSS. This nomogram is underpinned by established clinical prognostic markers discerned through multivariate regression analysis from the expansive Surveillance, Epidemiology, and End Results (SEER) database. We envisage that our nomogram will equip healthcare professionals with a tangible, pragmatic instrument to sharpen survival predictions and tailor treatment plans for the elderly glioma cohort. Validation was undertaken with external datasets to enhance its reliability and applicability. Patient selection and data source Elderly patients with glioblastoma multiforme were identified from the SEER database using SEER*Stat software (Version 8.4.2) through January 2023 (6). We employed the International Classification of Diseases for Oncology, third edition (ICD-O-3) codes to recognize glioblastoma (GBM) cases diagnosed between 2000 and 2020. The SEER cancer registry, established by the National Cancer Institute in 1973, captures standardized cancer data from diverse U.S. regions, covering 34.6% of the national population. Drawing from hospitals, physicians, laboratories, and vital statistics offices, SEER offers a rich dataset on patient demographics, tumor attributes, treatment, and outcomes. This valuable resource assists in monitoring national cancer statistics and trends, and aids cancer control initiatives. The publicly accessible SEER data facilitates in-depth cancer analysis to guide prevention, treatment, and research strategies. The following variables were extracted for each patient: age (coded as 65 to 69 years, 70 to 79 years, or ≥80 years), sex, race (white, black, or other), marital status (married, unmarried, or other), tumor grade (III or IV), primary tumor site (supratentorial, cerebellum/brainstem, overlap area, or unspecified), laterality (left, right, or other), tumor size (<4.5 cm or ≥4.5 cm), extent of lesion (localized, regional, or distant), type of surgery (none, subtotal, or total resection), and use of radiotherapy and chemotherapy (yes or no/unknown). We chose these variables because previous studies have shown that they may be prognostic factors for survival outcome in glioma patients. Age, extent of resection, and modalities such as radiotherapy and chemotherapy have long been considered important determinants of prognosis. Characteristics such as tumor location, size, and grade can also significantly affect clinical outcomes. The ICD-O-3, crafted by the World Health Organization, ensures precise classification of neoplasms based on anatomy and histology. It promotes standardization across over 1500 histological types, using four-digit codes for location and two-digit codes for microscopic composition. By ensuring consistent tumor categorization, the regularly updated ICD-O-3 bolsters cancer surveillance and research, enabling comparison of national and global incidence data. Given the SEER dataset's public accessibility, there was no need for ethics committee approval or informed consent.
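As an illustration of how the extracted variables can be recoded into the categories listed above, here is a small, hypothetical Python/pandas sketch. The actual extraction was performed with SEER*Stat, so the file name, column names, and recode labels below are assumptions, not the study's code.

```python
import pandas as pd

seer = pd.read_csv("seer_hgg_export.csv")   # hypothetical SEER*Stat export

# Age groups used in the study: 65-69, 70-79, and 80 years or older.
seer["age_group"] = pd.cut(seer["age"], bins=[64, 69, 79, 130],
                           labels=["65-69", "70-79", ">=80"])

# Tumor size dichotomized at 4.5 cm (SEER records size in millimetres).
seer["size_group"] = seer["tumor_size_mm"].ge(45).map(
    {True: ">=4.5 cm", False: "<4.5 cm"})

# Treatment indicators collapsed to yes versus no/unknown, as described above.
seer["radiotherapy"] = seer["radiation_recode"].eq("Yes").map(
    {True: "Yes", False: "No/Unknown"})
seer["chemotherapy"] = seer["chemo_recode"].eq("Yes").map(
    {True: "Yes", False: "No/Unknown"})
```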
Our primary focus was on high-grade gliomas in elderly patients. The inclusion criteria were: (1) First or primary malignant glioma, excluding other primary cancers; (2) Diagnostic confirmation by positive histology; (3) Grade III-IV glioma, excluding unclassified cases; (4) Age ≥ 65; (5) Predominant histological types of high-grade gliomas listed by specific codes; (6) Exclusion of ambiguous or invalid primary tumor dimensions; (7) Surgical type specifications, excluding unknown or diagnostic surgeries; (8) Exclusion of unknown or unspecified laterality records; (9) Exclusion of patients with unspecified demographic details. For external validation, we retrospectively sourced data from elderly high-grade glioma patients at the Fourth Affiliated Hospital of Harbin Medical University and Hulin People's Hospital between 2008 and 2023. This external cohort's inclusion and exclusion criteria mirrored those of the primary SEER dataset. All participants from the external validation group provided informed consent. The study received local ethics committee approval and conformed to the Declaration of Helsinki. Figure 1 depicts the patient selection flow. From our screening, 5245 glioma patients were shortlisted and randomly segmented into training (3672) and internal validation (1573) cohorts, with an additional external validation cohort of 63 patients. Variables and definitions We extracted twelve attributes from the SEER database, deemed potentially prognostic for elderly patients with high-grade glioma. The profile of geriatric glioblastoma patients encompassed the following factors: demographics (including age, sex, race, and marital status), tumor characteristics (such as grade, primary site, laterality, size, and extent of spread), and treatment modalities (surgery type, radiotherapy, and chemotherapy). Cox regression and nomogram development In the training cohort, potential prognostic factors were ascertained using univariate Cox regression analysis. Those with a P-value less than 0.05 in the univariate analysis underwent multivariate Cox regression to identify independent prognostic determinants. A prognostic nomogram was then formulated for predicting OS and CSS based on these independent factors. "Hazard Ratio (HR)" is used to denote hazard ratios derived from the Cox regression analyses. Model discrimination and performance were evaluated using Harrell's concordance index (C-index) and receiver operating characteristic (ROC) curves. The area under the curve (AUC) was calculated for the ROC curves to gauge the model's accuracy (7). C-index and AUC values range between 0.5 and 1.0, with higher values signifying better predictive precision. The R package "rms" facilitated the generation of calibration plots, providing insight into the nomogram's accuracy. For assessing the clinical relevance of the nomogram, decision curve analysis (DCA) was deployed (8). Upon identifying the optimal cutoff for risk scores, a risk stratification system was established. Based on this demarcation, patients in both cohorts were classified into high-risk or low-risk categories. Kaplan-Meier curves and log-rank tests were employed to discern survival differences between these risk groups. The nomogram development process can be summarized as follows: 1.
Assignment of Points for Each Variable: Points for each variable were determined using regression coefficients from our multivariable Cox proportional hazards model. A unit increase in a predictor variable leads to a proportional increase in the log hazard ratio, and thus risk, as defined by its regression coefficient. Setting one predictor (usually with the smallest coefficient) as a reference (e.g., 100 points) allowed for the scaling of other predictors' coefficients in relation to this benchmark, establishing their respective point values; 2. Rationale Behind Point Assignments: This method of point allocation provides a graphical simplification of the complex mathematical interplay between predictors and outcomes, aiding clinicians and researchers in calculating an aggregate point score that denotes a patient's specific risk or likelihood of an outcome; 3. Translating Total Points to Survival Probabilities: Aggregate points from predictors were linked to survival probabilities using the baseline survival function. The survival probability corresponding to a particular point score was determined by integrating the score into our cohort's derived baseline survival function. In this study, point scores in the nomogram were assigned based on the b-coefficients obtained from the Cox regression models. The prognostic factor with the largest absolute b-coefficient was allocated a score of 100 points. Subsequent prognostic factors were scored relative to this benchmark, according to their individual b-coefficients. There were no additional modifications or adjustments to the b-coefficients beyond this relative scoring process. Using these assigned point scores, the nomogram was developed by aligning each prognostic determinant with its corresponding point range. The cumulative points from all determinants were then mapped to the predicted probabilities of OS and CSS on the nomogram's outcome axis. Statistical analysis All statistical analyses were performed using R software (version 4.1.3). Continuous variables, such as OS presented in months, are depicted as medians with interquartile ranges (Q1, Q3). Categorical variables are conveyed through frequencies and percentages. Chi-squared tests evaluated categorical variables, whereas t-tests analyzed continuous variables. Kaplan-Meier curves, constructed to assess survival rates, were compared using log-rank tests. To discern independent prognostic factors, both univariate and multivariate Cox regression analyses were executed. R packages, including "survival", "rms", "timeROC", "ggplot2", "ggDCA", and "DynNom", facilitated the development, evaluation, and web-based deployment of the prognostic nomogram models. A P-value less than 0.05 (two-sided) was deemed statistically significant. Characteristics of baseline cohort In this study, 5,245 elderly patients diagnosed with high-grade glioma were selected from the SEER database based on specific inclusion and exclusion criteria. These patients were randomly divided into a training cohort (n = 3,672) and an internal validation cohort (n = 1,573) using a 7:3 ratio. Additionally, 63 elderly patients with high-grade glioma from the Fourth Affiliated Hospital of Harbin Medical University and Hulin People's Hospital were included as an external validation cohort, comprising 20 patients aged 65-69, 31 aged 70-79, and 12 aged 80 and above.
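The coefficient-to-points mapping described above can be made concrete with a short sketch. The fragment below is a hypothetical Python illustration (the study used R, where the "rms" package performs the equivalent scaling internally); the coefficient values are invented for demonstration only.

```python
# Hypothetical multivariate Cox coefficients (log hazard ratios) for the
# retained prognostic factors; values are illustrative, not from the study.
coefs = {
    "age >=80 (vs. 65-69)":      0.62,
    "no resection (vs. total)":  0.78,
    "no radiotherapy":           0.95,
    "no chemotherapy":           0.88,
    "tumor size >=4.5 cm":       0.21,
}

# The factor with the largest absolute coefficient anchors the scale at 100
# points; every other factor is scored proportionally to that benchmark.
anchor = max(abs(b) for b in coefs.values())
points = {name: round(100 * abs(b) / anchor, 1) for name, b in coefs.items()}

# A patient's total score is the sum of the points for the categories that
# apply; the nomogram's outcome axis maps this total to predicted OS and CSS.
example_total = points["no radiotherapy"] + points["no chemotherapy"]
print(points, example_total)
```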
Table 1 presents the baseline clinicopathological attributes of the participants. It is noteworthy that 48.5% of the patients were aged between 70 and 79 years, with a significant majority (over 90%) being white. Most had grade IV gliomas (over 90%). About 70% exhibited gliomas situated in the supratentorial lobes, and over 90% had tumors with localized extent. More than half of the patients (53.4%) had ... Identification of prognostic factors Univariate Cox regression analysis was performed on all variables within the training cohort to discern factors influencing overall survival. Variables significant at a P-value less than 0.05 included age, marital status, glioma's primary site, laterality, glioma extent, tumor size, surgical intervention, chemotherapy, and radiotherapy. These variables were subsequently incorporated into the multivariate Cox regression model. Upon multivariate analysis, age, glioma's primary site, laterality, glioma extent, tumor size, surgical approach, chemotherapy, and radiotherapy remained statistically significant predictors of overall survival for elderly patients with high-grade glioma, with P-values of <0.001, 0.016, <0.001, <0.001, 0.013, <0.001, <0.001, and <0.001, respectively. Notably, for CSS, significant factors included age, glioma's primary site, laterality, glioma extent, tumor size, surgical approach, chemotherapy, and radiotherapy, with corresponding P-values of <0.001, 0.022, <0.001, <0.001, 0.011, <0.001, <0.001, and <0.001. Collectively, these findings underscore that patient age, glioma location, laterality, extent, size, and treatment modalities significantly determine survival outcomes in this patient demographic (refer to Table 2 for details). Development and validation of the prognostic nomogram Using multivariate Cox regression analysis, eight independent risk factors were identified, and a nomogram was constructed to predict 3-, 6-, and 12-month OS and CSS in elderly patients with high-grade glioma (Figures 2E and 3E). Each variable was assigned a score from 0 to 100 based on its prognostic significance. The combined score, calculated from the sum of individual variable scores, reflected the projected 3-, 6-, and 12-month survival rates. Calibration curves revealed a strong alignment between the nomogram predictions and observed outcomes at 3, 6, and 12 months for both the training and internal validation cohorts, underscoring the nomogram's high predictive accuracy (Figures 2I-K, 3H, I). The C-index for OS was 0.734 (95% CI: 0.725-0.743) in the training cohort, 0.729 (95% CI: 0.715-0.743) in the internal validation cohort, and 0.701 (95% CI: 0.620-0.781) in the external validation cohort. AUC values for these cohorts were as follows: for the training cohort, they were 0.863 at 3 months, 0.819 at 6 months, and 0.780 at 12 months (Figure 2F); for the internal validation cohort, they were 0.850 at 3 months, 0.822 at 6 months, and 0.775 at 12 months (Figure 2G); and for the external validation cohort, they were 0.732 at 3 months, 0.838 at 6 months, and 0.763 at 12 months (Figure 2H). These metrics exhibit robust discriminative capacity, reinforcing the nomogram's predictive precision.
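The discrimination statistics reported here can be reproduced with standard survival-analysis tooling. The sketch below is a hypothetical Python illustration using lifelines (the original analyses were run in R); the file and column names are assumptions. It fits the multivariate Cox model on the training cohort and computes Harrell's C-index on a validation cohort.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical cohorts with dummy-coded predictors, follow-up in months,
# and an event indicator (1 = death, 0 = censored).
train = pd.read_csv("train_cohort.csv")
valid = pd.read_csv("internal_validation_cohort.csv")

cph = CoxPHFitter()
cph.fit(train, duration_col="months", event_col="death")

# Harrell's C-index: concordance between predicted risk and observed survival.
# Higher partial hazard means shorter expected survival, hence the sign flip.
risk = cph.predict_partial_hazard(valid)
c_index = concordance_index(valid["months"], -risk, valid["death"])
print(round(c_index, 3))
```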
Similarly, the C-index for CSS was 0.733 (95% CI: 0.724-0.742) in the training cohort and 0.727 (95% CI: 0.713-0.741) in the validation cohort. The associated AUC values were 0.864 at 3 months, 0.819 at 6 months, and 0.777 at 12 months (Figure 3F) in the training cohort, and 0.852 at 3 months, 0.820 at 6 months, and 0.770 at 12 months (Figure 3G) in the validation cohort. These metrics also showcase strong discriminative power, further affirming the nomogram's predictive accuracy. In summary, the proposed nomogram presents a reliable method for individualized outcome prediction in elderly patients with high-grade glioma.

Clinical application of the nomogram

We assessed the utility of our nomogram against the summary stage using decision curve analysis. This analysis demonstrated that our nomogram consistently provided a higher net clinical benefit, producing more precise 3-, 6-, and 12-month OS and CSS predictions compared to the summary stage. The external validation cohort further confirmed this advantage, underscoring the clinical efficacy of our nomogram (Figure 4).

To enhance the nomogram's clinical applicability, we developed an intuitive point scale for straightforward bedside use. As illustrated in Figures 2E and 3E, physicians can align a patient's prognostic indicators with the corresponding points. By summing the total points and referencing the total point scale, clinicians can directly ascertain the projected 3-, 6-, and 12-month OS and CSS. For each patient, a vertical line drawn from the variable value intersects the 'Points' axis to determine the corresponding score. The combined score is inferred from the 'Total Points' axis, and another vertical line from this total score indicates the predicted OS and CSS at 3, 6, and 12 months. This streamlined point system readily integrates the nomogram into clinical routines, offering tailored survival predictions that can inform patient discussions and risk-adapted treatment decisions for elderly glioma patients. Parameters such as the extent of resection can be adaptively modified to refresh prognostic estimates during patient follow-up.
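As a programmatic counterpart to reading the nomogram at the bedside, predicted 3-, 6-, and 12-month survival for an individual patient can also be obtained directly from the fitted Cox model. The sketch below reuses the `multi` fit from the earlier sketches; the covariate values and level labels for the hypothetical patient are invented for illustration and do not reflect any study record.

```r
# Predicted 3-, 6- and 12-month OS for one hypothetical patient profile.
library(survival)

new_pt <- data.frame(age_group = "70-79", primary_site = "Supratentorial lobes",
                     laterality = "Left", extent = "Localized",
                     tumor_size = "<4 cm", surgery = "GTR",
                     chemo = "Yes", radio = "Yes")     # invented example values

sf <- survfit(multi, newdata = new_pt)
summary(sf, times = c(3, 6, 12))$surv    # survival probabilities at 3, 6 and 12 months
```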
Application of risk stratification system

X-tile employs a data-driven approach complemented by statistical simulations and modeling to determine optimal cut points for biomarkers that maximize sensitivity and specificity for outcomes such as survival (9). The implemented algorithms include equal-width binning, equal-frequency binning, optimal data-driven binning, Monte Carlo simulations, Kaplan-Meier analysis, and bootstrapping (10). This rigorous approach enables optimal biomarker cut-point determination and has led to the frequent utilization of X-tile for survival analysis across various malignant tumors (11-13). In this study, the X-tile algorithms enabled reliable optimal cut-point analysis and the creation of survival-based risk stratification systems using nomogram scores for all patients. The entire cohort was divided into two distinct risk subgroups for OS: low-risk (N = 2,599, 49.55%, scores <107.8) and high-risk (N = 2,646, 50.45%, scores >107.8) (Figure 5A), which displayed substantial differences in Kaplan-Meier survival curves, validating the risk stratification system. A similar stratification was observed when the cohort was divided into low-risk (N = 2,618, 49.91%, scores <108.73) and high-risk (N = 2,627, 50.09%, scores >108.73) subgroups for CSS (Figure 5D), which also exhibited significant differences in Kaplan-Meier survival curves, further corroborating the validity of the risk stratification system. Analysis of survival using Kaplan-Meier curves and log-rank tests indicated that the high-risk subgroup exhibited decreased survival rates in comparison to the low-risk subgroup (Figures 5B, C, E).

Web-based nomogram

Web-based nomograms are interactive online prognostic tools that incorporate important predictive factors into graphical calculating devices to provide individualized and precise outcome predictions, beyond traditional staging systems, to guide clinical decision-making. Developed from multivariate analyses of datasets, nomograms allow users to obtain personally tailored risk assessments by entering patient parameters. Their user-friendly web interface facilitates dissemination and validation across clinical settings to aid evidence-based, personalized treatment decisions and counselling regarding recurrence risks, survival outcomes, or post-treatment complications. A user-friendly, web-based dynamic nomogram was developed that physicians and patients can access from any electronic device. As shown in Figures 2A-D and 3A-D, the web-based nomogram allows doctors and patients to input common clinical variables to visually assess individualized postoperative OS (https://prenom.shinyapps.io/DynNomapp_Glioma/) and CSS (https://prenom.shinyapps.io/DynNomapp_CSS/) for elderly patients with high-grade glioma. The legend demonstrates the specific usage method.
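The X-tile step described under "Application of risk stratification system" above can be approximated in R by a simple search for the score threshold that maximizes the log-rank statistic, as in the hedged sketch below. X-tile itself is standalone software with its own algorithms; here `score` is an assumed column holding each patient's total nomogram points rather than an actual study variable.

```r
# Illustrative analogue of the X-tile cut-point search on the total nomogram score.
library(survival)

cand  <- quantile(train$score, probs = seq(0.10, 0.90, by = 0.01))
chisq <- sapply(cand, function(ct) {
  d     <- train
  d$grp <- ifelse(d$score > ct, "high", "low")
  survdiff(Surv(os_months, os_event) ~ grp, data = d)$chisq
})
best_cut <- cand[which.max(chisq)]        # ~108 points in the cohort described above

train$risk <- ifelse(train$score > best_cut, "high", "low")
km <- survfit(Surv(os_months, os_event) ~ risk, data = train)
plot(km, col = c(2, 4), xlab = "Months", ylab = "OS probability")
survdiff(Surv(os_months, os_event) ~ risk, data = train)   # log-rank comparison
```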
Discussion

Clinical management of elderly patients with high-grade glioma is challenging given their frail health, multiple comorbidities, and heightened sensitivity to chemoradiotherapy toxicity (14). Most clinical studies, including randomized controlled trials (RCTs), often exclude elderly patients with high-grade glioma, leading to an absence of clear treatment guidelines and prognostic models for this demographic. In this study, we developed a prognostic scoring system based on multivariate analysis to provide individualized survival assessment and risk stratification for elderly patients with high-grade gliomas. By retrospectively analyzing data from 5,245 elderly patients in the SEER database, a comprehensive national cancer database, we found that age, primary tumor site, tumor laterality, tumor extent, tumor size, surgery, chemotherapy, and radiotherapy were independent prognostic factors. Based on these factors, we developed two web-based online prognostic scoring systems that can predict individualized survival rates based on patients' clinical characteristics. Our study provides a valuable tool for prognostic evaluation and risk stratification in elderly patients with high-grade gliomas.

Building upon existing literature, this study also had unique features compared to prior prognostic models developed for high-grade gliomas. A key novel aspect was the creation of an online, individualized prognostic scoring system, unlike many previous glioma prognostic tools, which utilize traditional scoring systems or are not web-accessible for immediate point-of-care use (15, 16). Our user-friendly nomogram provides a practical and comprehensive tool for clinicians to obtain real-time survival predictions tailored to individual patients' profiles. Compared to other similar nomograms, our study incorporated additional prognostic factors shown to be relevant in elderly glioma patients, including precise tumor location and lateralization rather than only broad categories (e.g., supra- vs. infratentorial) (17,18). However, consistent with previous findings, we also identified age and treatment modalities as significant independent predictors of survival (15).

In comparison to prior studies on the prognosis of elderly glioblastoma patients (19), our study demonstrated a moderately higher C-index for both OS and CSS. For OS, the C-index in our training cohort was 0.734, compared to 0.715 in previous studies, and 0.729 versus 0.726 in the validation cohort. Similarly, for CSS, the C-index in our training cohort was 0.733, compared to 0.700 in earlier research, and 0.727 versus 0.707 in the validation cohort. We refined the classification of the primary glioma site into four categories: supratentorial lobes, cerebellum and brainstem, overlapping regions, and unspecified locations. A more detailed classification of the primary glioma site enhances the predictive accuracy of the model. Our study broadened the scope by focusing on elderly patients with a range of high-grade gliomas (WHO grades III-IV), enhancing the clinical relevance and applicability of our findings. Unlike previous studies that primarily focused on glioblastoma, we incorporated a wider variety of high-grade glioma types, including glioblastoma, giant cell glioblastoma, gliosarcoma, IDH-mutant glioblastoma, anaplastic astrocytoma, anaplastic oligodendroglioma, and anaplastic ganglioglioma. This comprehensive inclusion improved the predictive accuracy of our model.
Different from the previous research (19), our study did not identify race as an independent predictor for either OS or CSS.This discrepancy might be attributed to differences in sample size and study time points.Moreover, the primary site of glioma emerged as an independent predictor for both OS and CSS in our analysis.This distinction may arise from our study's more granular classification, wherein the primary site of glioma was categorized into four groups as said above, thereby amplifying its prognostic impact.Additionally, our research recognized tumor extent as an independent predictor for both OS and CSS.We classified the extent of glioma into three categories: Localized, Regional, and Distant.'Localized' denotes a tumor confined to its primary site without distant metastasis.'Regional' signifies tumor invasion into surrounding tissues or regional lymph nodes without distant metastasis, while 'Distant' indicates tumors with distant metastases, such as in the cerebrospinal fluid, ventricles, or other body parts.This detailed classification enhances the predictive precision of our model.Consistent with the previous results (20), univariate and multivariate Cox regression analyses identified six prognostic factors: tumor site, laterality, histological type, extent of surgery, radiotherapy, and chemotherapy. The presence of comorbidities and concerns regarding treatment toxicity may contribute to elderly patients with highgrade glioma declining active therapy after diagnosis, leading to poorer survival prognoses (21).While surgery, radiotherapy, and chemotherapy are standard treatments for glioma, there is no consensus on the optimal approach for elderly patients with highgrade gliomas, as most clinical trials have excluded this older demographic (22).Due to the infiltrative growth pattern, total resection of gliomas is challenging.However, maximal safe surgical resection has been associated with improved prognosis in patients with high-grade gliomas, including elderly populations (23).Importantly, radiotherapy and chemotherapy may improve survival despite not directly improving general condition or quality of life.Treatment side effects should be weighed against potential survival benefit (24).The Web-based nomograms provide individualized risk assessment that can inform discussions around treatment intensity, such as whether aggressive multi-modality treatment is likely to provide meaningful survival benefit or if more conservative options may be more appropriate considering the patient's predicted prognosis. Consistent with previous studies (25), tumor extent (local or distant) and metastasis are important prognostic factors in gliomas, with patients having distant or metastatic disease demonstrating poorer survival prognosis.Local invasion or distant metastasis of gliomas has consistently been a key factor impacting prognosis.Studies have shown that gliomas with metastases tend to have a poor prognosis (26,27).The presence of distant metastases signifies that tumor cells have disseminated via vasculature or cerebrospinal fluid, indicative of advanced disease with heightened treatment challenge.Hence, distant metastasis represents a pivotal parameter for gauging malignancy grade and prognosis in the clinical staging of gliomas. 
This study demonstrates a poor prognosis for gliomas located in the cerebellum and brainstem, consistent with previous studies (28,29), possibly attributable to surgical challenges, disruption of critical functional regions, heightened tumor invasiveness, increased postoperative complications, and reduced efficacy of adjuvant therapies. The cerebellum and brainstem comprise critical functional hubs, conferring substantial surgical risks that often preclude total tumor resection. Residual neoplastic cells readily facilitate relapse and progression. As the cerebellum modulates balance and coordination (30) while the brainstem governs respiration and circulation (31), these areas are prone to irreparable neurological impairment from mass effect and operative trauma. Gliomas situated within these sites tend to be higher-grade lesions exhibiting robust invasive and regenerative potential, with enhanced dissemination and metastatic spread. Resection of such cerebellar and brainstem gliomas confers heightened surgical hazards, with increased postoperative complications like cerebral edema and infection that directly jeopardize patient survival.

Our external validation cohort comprised 63 patients from our institution. While smaller than the primary dataset from the SEER database, this cohort included all eligible patients available during the study period. The smaller sample size may introduce variability in the validation metrics. Specifically, the C-index, which measures discriminative ability, can exhibit instability with smaller samples. A larger cohort would provide more robust and generalizable results. However, even with the smaller size, our external validation provides a preliminary check on the nomogram's performance in a setting apart from the SEER database. Although the external cohort is smaller, the substantial SEER-derived internal validation cohort (1,573 patients) offers confidence in the model's accuracy and generalizability. We recognize the importance of validating our tools in larger, diverse cohorts. In future studies, we aim to collaborate with other institutions to assemble a larger external validation dataset, further establishing the reliability and generalizability of our nomograms.

Compared to traditional nomograms, the advantages of a web-based nomogram for analyzing glioma overall survival include: 1) intuitive visualization of prognostic factor effects; 2) straightforward comparisons between groups; 3) multifaceted presentation of results, with user-friendly operation and interpretable outputs; 4) clear depiction of the distribution and trends in survival time associated with various prognostic factors (e.g., age, grade) and of their impact on prognosis, together with dynamic risk prediction and the ability to update parameters at follow-up; 5) incorporation of a broader range of variables, ensuring a more holistic understanding of the factors influencing outcomes in elderly glioma patients; and 6) by targeting elderly glioma patients specifically, a model tailored to this demographic, ensuring higher relevance and accuracy. Juxtaposed nomograms readily facilitate comparison of survival time differences across strata of the same prognostic variable (such as age groups). Beyond survival curves, nomograms can also present median survival times, survival rates, and other statistics for enriched data representation. With simple website-based usage, the nomogram output is concise, uncluttered, and readily interpretable.
This study has several notable strengths. It developed a robust prognostic nomogram for elderly glioma patients that holds significant clinical implications. First, it provides individualized survival prediction to facilitate patient counseling and personalized treatment recommendations. Patients identified as high-risk could be considered for more aggressive therapies or clinical trials, while low-risk patients may benefit more from less intensive treatment. Second, this nomogram enables risk-based stratification for guiding management strategies. High-risk patients may warrant more frequent imaging surveillance or prophylactic interventions. Low-risk patients could avoid overtreatment and undue harms. Third, the model allows objective risk assessment to optimize clinical trial design. Patients could be assigned to trial arms or adaptive interventions according to their predicted prognosis. This tool supports dynamic risk prediction through recalibration of the model with updated parameters, allowing the tracking of evolving patient risk profiles over time. With further validation, it holds promise to improve prognostic accuracy, risk stratification, and ultimately, clinical outcomes for elderly glioma patients. Finally, various methods including the C-index, AUC, and calibration curves were used to comprehensively validate the predictive performance.

Despite the promising results, our study had some limitations that could be addressed in future research. First, prognostic biomarkers such as tumor mutational burden and DNA methylation profiles were not included and may further improve the predictive accuracy. Second, the dynamic change of prognostic factors during treatment and follow-up needs to be examined. Finally, immune status, comorbidities, and other factors that may influence elderly patient prognosis were not incorporated into the scoring system. Based on these limitations, future studies should focus on: (1) incorporation of emerging prognostic biomarkers to enhance individual risk prediction; (2) development of dynamic, longitudinal prognostic models that integrate serial measurements over time; and (3) collaboration with other institutions to assemble a larger external validation dataset and establish the reliability and generalizability of our nomograms.

Conclusion

Taking advantage of a substantial sample size, this study identified independent prognostic factors for OS and CSS in elderly patients with high-grade glioma and formulated a web-based prognostic nomogram. These nomograms offer predictions of survival probabilities and serve as a clinical reference for treatment strategies and prognosis.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This study was supported by "LncRNAs and miRNAs regulate the apoptosis of glioma cells" (201900007), "The roles of LncRNAs and PAK in glioblastoma cell proliferation and temozolomide-resistant glioblastoma", "Comparison of stereotactic intracranial hematoma puncture and aspiration combined with urokinase" (20220016), the General Project of the Torch Program of the Fourth Affiliated Hospital of Harbin Medical University (HYDSYHJ201902), and the Special Support Project of the Fourth Affiliated Hospital of Harbin Medical University (HYDSYTB202232).
FIGURE 1. Overview of the study. The flowchart illustrates the step-by-step progression from data extraction from the SEER database to validation of the prediction model. Each box represents a distinct phase, interconnected by arrows indicating the flow or sequence. C-index, concordance index; SEER, Surveillance, Epidemiology, and End Results; ROC, receiver operating characteristic; DCA, decision curve analysis.

FIGURE 2. Development and Validation of a Web-Based Nomogram to Predict 3-, 6-, and 12-Month Overall Survival in Elderly Patients with High-Grade Glioma. The web-based nomogram on overall survival (A). Curve depicting estimated survival probability over time for the input patient (B). 95% confidence intervals for selected predicted monthly survival probabilities (C). Numerical summary of predicted monthly survival probabilities (D). Nomogram on overall survival in elderly patients with high-grade glioma (E). ROC curves in the training group (F), the internal validation group (G) and the external validation group (H). Calibration curves were generated for the training cohort (I), the internal validation cohort (J) and the external validation cohort (K). User guide for the nomogram: for each patient, a vertical line from each variable value intersects the "Points" axis to determine its score. The cumulative score is determined on the axis labeled 'Total Points'. Next, a vertical line is drawn downwards from the sum of points to determine the predicted 3-, 6-, and 12-month overall survival. User guide for the web-based nomogram: log on to the website, enter the age, primary site, laterality, summary stage, tumor size, surgery, chemotherapy, and radiation according to the actual situation of the patient, select the predicted survival in months, and then click "Predict". If high traffic prevents normal use, click "Reload" in the bottom left corner to retry. STR, subtotal resection; GTR, gross total resection.

FIGURE 3. Development and Validation of a Web-Based Nomogram for Predicting 3-, 6-, and 12-Month Cancer-Specific Survival in Elderly Patients with High-Grade Glioma. The web-based nomogram on cancer-specific survival (A). Curve depicting estimated survival probability over time for the input patient (B). 95% confidence intervals for selected predicted monthly survival probabilities (C). Numerical summary of predicted monthly survival probabilities (D). Nomogram on overall survival in elderly patients with high-grade glioma (E). ROC curves in the training group (F) and validation group (G). Calibration curves were generated for the training cohort (H) and the validation cohort (I). The user guide is the same as in Figure 2.

FIGURE 4. The DCA of the nomogram. On OS, for predicting 3-month (A), 6-month (B), and 12-month (C) survival in the training cohort; 3-month (D), 6-month (E), and 12-month (F) survival in the internal validation cohort; and 3-month (G), 6-month (H), and 12-month (I) survival in the external validation cohort. The DCA of the nomogram on CSS, for predicting 3-month (J), 6-month (K), and 12-month (L) survival in the training cohort and 3-month (M), 6-month (N), and 12-month (O) survival in the validation cohort. Summary stage is equal to extent of glioma. DCA, decision curve analysis; OS, overall survival; CSS, cancer-specific survival.
FIGURE 5. Kaplan-Meier curves demonstrating Overall Survival (OS) and Cancer-Specific Survival (CSS) in low- and high-risk patient groups. The x-axis represents time and the y-axis shows the probability of survival. The drops in the curve represent observed events (deaths) at that time point. Histogram depicting the distribution of patients based on the optimal risk score cut-off point determined by X-tile software on OS (A) and CSS (D). Kaplan-Meier curves demonstrating the SEER cohort (B) and the external validation cohort (C) on OS, and the SEER cohort on CSS (E), in low- and high-risk groups.

TABLE 1. Characteristics of elderly patients with high-grade glioma.

TABLE 1 (Continued). OS, overall survival.

TABLE 2. Analyses of overall survival and cancer-specific survival in elderly patients with high-grade glioma using both univariate and multivariate regression.
Toxicity Assay of Methanolic Extract of Caryota No Seed using Drosophila melanogaster Model

Objectives: To investigate the effect of methanolic extract of Caryota no (CN) seeds on Drosophila melanogaster (DM) survival and life span. Study Design: Experimental design. Place and Duration: African Centre of Excellence for Phytomedicine Research and Development, University of Jos, Jos, Plateau State, Nigeria, between June 2018 and February 2019. Methods: The LC50 was determined by exposing 50 flies to concentrations ranging from 1 mg to 600 mg per 10 g diet; mortality of flies was scored every 24 hours for 14 days, and from the results five doses were chosen for the next assay. Survival assays were carried out by exposing 50 flies in each vial to the following concentrations: 300 mg, 350 mg, 400 mg, 500 mg and 600 mg of methanolic extract, in 5 replicates, for 28 days with daily recording of mortality, while the longevity assay continued from the survival assay until the last fly died. All three experiments were done as three independent trials. Results: The LC50 value of the methanolic extract was determined to be 6.533e+017 mg/10 g food in D. melanogaster. The result of the survival assay with methanolic extract of CN showed a slight but significant (P < .05) increase at the two lowest doses but no significant (P > .05) difference at the other, higher doses compared to the control. The longevity assay revealed that the extract significantly (P < .05) decreased longevity in Drosophila melanogaster. Conclusion: The results obtained from evaluating the methanolic extract of Caryota no indicate that the plant is relatively non-toxic and may be safe under acute and subacute exposures but may become deleterious during chronic exposure.

INTRODUCTION

Medicinal herbs have consistently been considered the leading source of pharmaceuticals, employed in the treatment of various human diseases due to their high chemical diversity and broad biological functionality [1]. Traditional medicines are mostly compounded from natural products; there is therefore a likelihood of them being accepted by the body better than synthetic substances [2], and they have been recognized to have convincing and credible curative effects [3]. Some researchers [4] illustrated by their work that mice infected with P. aeruginosa and treated with a garlic/tobramycin combination showed significantly improved clearing of their bacterial infections as compared to a placebo control group. In vitro analysis of P. aeruginosa biofilms showed considerable destruction of the biofilm when exposed to a combination of garlic extract and tobramycin. Exposure to either compound alone had little or no effect on the biofilm. Another group [5] also demonstrated that S-phenyl-L-cysteine sulfoxide and its breakdown product, diphenyl disulfide, significantly reduced the amount of biofilm formation by P. aeruginosa. It was also found that a tannin-rich component of Terminalia catappa leaves (TCF12) was able to inhibit the maturation of biofilms of P. aeruginosa to significant levels [6]. A large number of plant products have been proven to be very valuable for the treatment of a myriad of medical maladies. Natural toxicants present in human foods and animal feeds present a potential hazard to health.
The starting point in determination of the safety profile of any compound is in the determination of its lethal concentration (in acute or chronic conditions) and also to check other toxicological parameters using experimental animals or insects. Lethal Concentration 50 or LC 50 is a standard measure of toxicity to determine how much of a substance is needed to kill half of a group of experimental organisms in a given time [7]. These preclinical studies must be undertaken before any biologically active substance must be evaluated clinically for therapy. The common fruit fly, Drosophila melanogaster, has been extensively studied for decades. In effect, it was introduced as a decisive model in biology about a century ago. The fly shares several basic biological, biochemical, neurological and physiological similarities with mammals. It is documented that about 75 % of human disease-causing genes have functional homolog in DM [8]. The fly can effectively be maintained at low cost in the laboratory, and it has been recommended as an alternative model to vertebrate usage. Consequently, it has attracted the attention of toxicologists [8]. Determination of LC 50 of any substance in D. melanogaster is essential for selection of the concentration of the substance for further experiments. Caryota no palm is reported to be one of the largest species of the genus found in Borneo rainforests. The common name is the Giant Fishtail Palm [9]. In habitat, this palm can reach a height of 75 inches and stems measure 18-20 inches in diameter [9]. Caryota species, mostly found in Asia, are used traditionally in the treatment of gastric ulcer, migraine headaches, and snakebite poisoning and also rheumatic swellings by preparing porridge from the flowers [10]. What sets this palm apart from others in the genus is its upright growth habit. Although this palm is considered a giant, its footprint in the landscape is reduced by its fronds growing mostly upward and rarely ever extending horizontally from the stem. This palm would grow successfully anywhere a coconut palm thrives. CN is not wind resistant. Along with Arenga pinnata, it is one of the least wind resistant palms. These researchers recorded that Caryota urens (which is from the same family of palms) is suggested to treat seminal weakness and urinary disorders [11]. Very scanty information has been reported concerning research works on CN. The aim of this work is to do a preliminary screen of methanolic extract of C. no for LC 50 , and their effects on survival and longevity of D. melanogaster so as to determine its toxicity profile and to establish a base line for future studies on the plant. Reagents All chemicals used were of analytical grade. Methanol and Distilled water were obtained from Africa Centre of Excellence in Phytomedicine Research and Development, Jos, Plateau State, Nigeria. Plant Collection and Preparation The plant material was collected from Games Village, Abuja, Nigeria. The plant was identified by a taxonomist in the herbarium of the Federal college of Forestry Jos. The seeds were sorted, air-dried for several days and then pulverized to powder using a commercial grinding machine. The soxhlet extractor was used for extraction of the plant compound using analytical grade 80 % methanol as solvent following a method described by Virot et al., [12]. A rotary evaporator was employed to recover the solvent. 
The extract was further dried in a water bath regulated at 40 °C, then exposed to a freeze drier and kept in an airtight container.

D. melanogaster (Harwich strain) was obtained from the African Centre of Excellence in Phytomedicine Research and Development, University of Jos, and maintained at constant temperature and humidity (23 °C; 60% relative humidity) under a 12 h dark/light cycle. The flies were cultured on a standard medium of the following composition: 1700 ml of water, 16 g agar-agar, 20 g of baker's yeast, 100 g of corn flour, and 1 g of methyl paraben dissolved in 5 ml of absolute ethanol [13].

LC50 of Methanolic Seed Extract of CN

The 14-day LC50 was determined following the method described in [7] with slight modification. Fifty flies of both genders (1-3 days old) per vial were exposed to the following concentrations: 1 mg, 10 mg, 50 mg, 100 mg, 250 mg, 300 mg and 350 mg of methanolic extract of Caryota no seed per 10 g diet. Mortality of flies was scored every 24 hours for 14 days. During the experimental period, flies were transferred onto new vials containing fresh food every 2 days. Details are stated in Sections 2.5 and 2.6.

Survival Assay of Methanolic Seed Extract of CN-treated Flies

Fifty flies of both genders (1-3 days old) were exposed to selected concentrations of methanolic extract of CN seeds (300 mg, 350 mg, 400 mg, 500 mg and 600 mg, prepared in distilled water) in five replicates for 28 days [14,15]. The numbers of live and dead flies were scored daily until the end of the experiment and the survival rate was expressed as the percentage of live flies. The flies were divided into six groups containing 50 flies each. The control group was placed on the normal diet alone, while the treatment groups were placed on the basal diet containing methanolic seed extract of CN at the concentrations shown below:

Control group: basal diet
300 mg group: basal diet + 300 mg CN methanolic seed extract/10 g fly food
350 mg group: basal diet + 350 mg CN methanolic seed extract/10 g fly food
400 mg group: basal diet + 400 mg CN methanolic seed extract/10 g fly food
500 mg group: basal diet + 500 mg CN methanolic seed extract/10 g fly food
600 mg group: basal diet + 600 mg CN methanolic seed extract/10 g fly food

During the experimental period, flies were transferred onto new vials containing fresh food every 2 days. The flies were exposed to these treatments for 28 days, and the vials containing flies were maintained at room temperature. All experiments were carried out in triplicate (each experimental group was carried out in five independent vials). Survival analyses were calculated based on the number of deaths recorded and evaluated by the log-rank Mantel-Cox test.

Longevity Assay of Methanolic Seed Extract of CN-treated Flies

The longevity assay proceeded as a continuum from the survival assay [16,17], such that after 28 days the daily recording of the number of deaths continued until the last fly died. Survival analyses were calculated based on the number of deaths recorded and evaluated by the log-rank Mantel-Cox test.

Maintaining the experiment

The vials containing fresh food were kept at room temperature for each transfer. During the experimental period, flies were transferred onto new vials containing fresh food every 2 days. This step ensured that the feeding environment for young females was not disrupted by the presence of larvae.
These transfers were completed without anesthesia, which can induce acute mortality, particularly in older flies (Pletcher, personal observations). During each vial transfer, the dead flies in the old vial were counted, and the dead flies carried over to the new vial were also noted. This information was recorded separately in two columns in a spreadsheet, ensuring that carried flies were not double-counted. The total number of deaths (dead + carried) should at least equal the number of carried flies from the previous transfer; the number of previously carried flies was subtracted from the total number of deaths to determine the number of new deaths. A fly was considered right-censored if it left the experiment prior to natural death through escape or accidental death. Flies exiting the experiment in this way were entered into a separate column on the day the fly exited the experiment; censored flies were not recorded as dead. These transfer steps were repeated continuously until the last survivor was dead. As flies age, some may lie on their backs and appear dead due to inactivity. Therefore, when counting carried (dead) flies, the sides of the vials were tapped to check for leg movements; flies showing movement were still alive. Flies that remained stuck to the food in the old vial but were alive were not counted as dead; they were rescued by further tapping of the vial to dislodge them. Censoring such flies should be done with caution, as it may result in experimental bias.

Statistical Analysis

Data were analyzed for the determination of the LC50 of CN in adult D. melanogaster. The data are expressed as mean ± SEM (standard error of the mean) of five parallel measurements, and the statistical analysis was carried out using one-way analysis of variance (ANOVA) and, for comparisons, two-way ANOVA, with GraphPad Prism version 7.0 (GraphPad Software, San Diego, CA, USA). Results were considered statistically significant at P < 0.05.

LC50 of Crude Methanolic Seed Extract of CN

The 14-day LC50 of the methanolic extract of CN seeds, i.e., the concentration that kills 50% of flies, was found to be 6.533e+017 mg/10 g diet (Fig. 1). The LD50 of the methanolic extract in Swiss rats was also determined to be >5000 mg/kg by the oral route and >1000 mg/kg by the intraperitoneal route (unpublished). The LC50, the concentration that kills 50% of the test organism, agrees with the unpublished LD50 values obtained from animal studies. This high LC50 suggests the safety of this extract and also served as a baseline for selecting the concentrations of 350 mg, 400 mg and 500 mg per 10 g diet for the 28-day survival study. It can therefore be inferred that the methanolic extract of CN seeds is relatively safe.

Percentage Death of Flies Treated with Methanolic Seed Extract of CN

The survival result (Fig. 2) for the methanolic extract shows a statistically significant (P = .007) decrease in deaths in the treatment groups compared to the control. The lowest extract dose recorded the highest number of deaths, while the higher doses lowered the percentage of deaths. However, the difference was later found not to lie between the control and the two lowest-dose groups, but rather between those two dose groups themselves.
Specifically, there was a significant difference between the lowest extract dose (300 mg/10 g food) and the next dose (350 mg/10 g food); the difference observed (P = .009) was between the two lowest treatment doses. It can therefore be inferred that exposure to the methanolic extract of CN caused a significant (P < .05) effect on the percentage of deaths in DM.

Survival Assay of Methanolic Seed Extract of CN-treated Flies

Exposure to the methanolic extract of CN had a non-significant (P = .672) effect on survival in DM (Fig. 3). By the 28th day, the survival proportions for the control, 600, 500, 400, 350, and 300 mg/10 g diet groups were 23, 4.7, 1.7, 0, 10.0 and 2.1 percent, respectively. The numbers of subjects at risk, in the same order, on the 28th day of the assay were 10, 10, 10, 10, 10, and 10. The median survival was 9, 18.5, 20, 19, 11, and 18 days for the respective groups. It can be inferred that the methanolic extract of CN did not increase or decrease the survival of adult DM flies after 28 days of oral exposure.

Longevity Assay of Methanolic Seed Extract of CN-treated Flies

The graph (Fig. 4) illustrates that the methanolic extract caused a significantly (P < .0001) reduced life span in D. melanogaster, with a statistically significant difference between the treatment groups and the control. By the 38th day, the survival proportions for the control, 600, 500, 400, 350, and 300 mg/10 g diet groups were 34, 6.4, 0, 14.8, 0 and 7.2%, respectively. The numbers of subjects at risk, in the same order, on the 38th day of the assay were 87, 46, 8, 38, 52, and 20. The median survival was 29, 32, 30, 31, 34, and 25 days for the respective groups. The lowest extract dose also recorded the shortest median survival; the higher doses were better tolerated. The methanolic extract shortened the fly life span in comparison to the control (P < 0.0001). The last control fly died at day 54, while most of the treatment groups could not survive to the 45th day. One group had all died by day 39, an obvious shortening of life span on chronic exposure in comparison to the control. Exposure to the methanolic extract of CN seeds significantly (P < .05) decreased life span in D. melanogaster. The comparison of longevity data between the n-hexane extract of CN, which was also investigated by this author, and the methanolic extract of CN is shown in Table 1.
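As a computational companion to the assays reported above, the hedged R sketch below shows how the dose-mortality counts could be fitted with a probit model to obtain an LC50, and how the dietary groups could be compared with Kaplan-Meier curves and a log-rank (Mantel-Cox) test. The authors' actual analysis was performed in GraphPad Prism; all numbers, the `flies` data frame and its column names below are illustrative placeholders, not the study data.

```r
# Hedged sketch: probit LC50 estimate and log-rank comparison of dietary groups.
library(MASS)        # dose.p()
library(survival)

# (a) LC50 from dose-mortality counts (counts are invented for illustration)
lc <- data.frame(dose  = c(1, 10, 50, 100, 250, 300, 350),   # mg extract / 10 g diet
                 dead  = c(2, 3, 5, 6, 9, 11, 14),
                 total = 50)
probit <- glm(cbind(dead, total - dead) ~ log10(dose),
              family = binomial(link = "probit"), data = lc)
lc50 <- 10^as.numeric(dose.p(probit, p = 0.5))               # back-transform from log10

# (b) survival/longevity comparison; 'flies' is an assumed per-fly data frame with the
# day of death or censoring, a status flag (1 = died, 0 = censored) and the diet group
km <- survfit(Surv(day, status) ~ group, data = flies)
plot(km, col = 1:6, xlab = "Days", ylab = "Proportion surviving")
survdiff(Surv(day, status) ~ group, data = flies)            # log-rank (Mantel-Cox) test
```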
The results (Fig. 2) agrees with the review [14] which refers to [19] which have proven that the fruit fly tolerates only up to 1% of fat in the diet. It was observed [15] that higher dietary inclusions of Garcinia kola seed reduced the survival rate of D. melanogaster more significantly compared to control flies. This agrees with what was observed by this researcher that different doses of dietary inclusions could prolong or reduce survival. The phytochemical analysis of the methanolic extract of CN revealed a very high quantity of carbohydrates and some level of saponins (unpublished). This information is also supported by a research work [20] which concluded that diets rich in excess carbohydrates (saccharides) tend to lower the life span of flies and also [15] which showed that saponins are toxic to drosophila and decreased the levels of acetylcholinesterase (AChE) activity. Diet such as cornmeal prolongs the lifespan of the fly, while diets with high quantities of free available carbohydrates (saccharides) and cholesterol can reduce life expectancy [20]. In addition, overcrowding has been shown to reduce the longevity of the fly [21]. The similarities of molecular processes involved in the control of lifespan and aging between DM and humans, coupled with good degree of genetic homology between the two species, makes D. melanogaster an interesting model system for toxicologists. It can be inferred that on the overall that the methanolic extract of CN is safe. On the whole, is there really a reduction in longevity since the average life span of fruit fly is between 40-120 days? CONCLUSION From the findings it can be concluded that methanolic extract of CN seeds has high LC 50 = 6.533e+017 mg/10 g diet; caused a significant increase and decrease in survival with the 300mg/10g food and 350 mg/10g food respectively and no significant changes with higher doses of the extract; however it could only sustain fly life for only 40 days and could imply that it is safe in acute and subacute but maybe deleterious in chronic exposure.
Complex DNA Damage: A Route to Radiation-Induced Genomic Instability and Carcinogenesis

Cellular effects of ionizing radiation (IR) are of great variety and level, but they are mainly damaging, since radiation can perturb all important components of the cell, from the membrane to the nucleus, through alteration of different biological molecules ranging from lipids to proteins or DNA. Regarding DNA damage, which is the main focus of this review, as well as its repair, all current knowledge indicates that IR-induced DNA damage is always more complex than the corresponding endogenous damage resulting from endogenous oxidative stress. Specifically, it is expected that IR will create clusters of damage comprised of a diversity of DNA lesions, like double strand breaks (DSBs), single strand breaks (SSBs) and base lesions, within a short DNA region of up to 15-20 bp. Recent data from our groups and others support two main notions: that these damaged clusters (1) are repair resistant, increasing genomic instability (GI) and malignant transformation, and (2) can be considered persistent "danger" signals promoting chronic inflammation and immune response, causing detrimental effects to the organism (like radiation toxicity). Last but not least, the paradigm shift regarding the role of radiation-induced systemic effects is also incorporated in this picture of IR effects and the consequences of complex DNA damage induction and its erroneous repair.

Introduction

Many decades of experimental research in cellular and molecular radiation biology have provided evidence suggesting that DNA damage plays a critical role in a plethora of human pathologies, including cancer, premature aging and chronic inflammatory conditions [1]. In response to both endogenous and exogenous insults (approximately 10⁴-10⁵ lesions induced per cell per day), mammalian cells have evolved the DNA damage response and repair (DDR/R) pathway, which arouses the immune system, activating DNA damage checkpoints and facilitating the removal of DNA lesions [1]. Dysregulation of the DDR/R pathway is closely linked to several human disorders associated with cancer susceptibility, developmental abnormalities, neurodegenerative disorders and accelerated aging [2][3][4]. The DDR is triggered by a wide variety of physico-chemical aberrations in the genome. Depending on the source of damage, diverse lesions in the DNA can be induced, including nucleotide alterations (mutation, substitution, deletion and insertion), bulky adducts, single strand breaks (SSBs) and double strand breaks (DSBs) [5]. Genotoxic agents, such as ultraviolet light from the Sun and IR from, e.g., cosmic radiation and medical treatments utilizing X-rays or γ-radiation, mainly cause changes or losses of bases (abasic sites), crosslinks formed between two complementary DNA strands, SSBs and DSBs. Such types of DNA damage can occur separately or in conjunction with one another, resulting in complex DNA damage (clustered lesions). Chemical agents used in cancer therapy can also induce a diversity of DNA lesions, such as intrastrand or interstrand crosslinks [6]. Apart from these environmental agents and genotoxic chemicals, DNA aberrations can also arise from physiological processes such as base mismatches introduced during DNA replication [7] and from the release of reactive oxygen and nitrogen species (ROS/RNS) upon oxidative respiration or through redox-cycling events mediated by heavy metals [8].
Additionally, replication stress resulting from oncogenic signaling may cause genome instability [9]. It is well-accepted that IR can induce cancer even at clinically relevant doses and the relationship between radiation and formation of solid tumors is considered to be linear in the dose range of 0.15-1.5 Gy [10]. Epidemiological data from the Life Span Study of the Japanese Atomic Bomb survivor cohort has provided significant evidence on the causal relationship between IR exposure and carcinogenesis [11,12]. For low doses (<0.1 Gy), there is a heated debate on the actual relationship between dose and cancer incidence. Even recently the validity of the well-known linear no-threshold (LNT) model has been challenged and many questions are still open regarding if the radiosensitivity of a tissue to malignant transformation increases or decreases with dose and if the actual form of the curve, i.e., linear or curvilinear, etc. [13,14]. Our knowledge of the mechanistic basis of the strong link between IR and carcinogenesis has been based on early studies using various animal models and it is concluded that radiation tumorigenesis proceeds in a conventional multi-step mode following radiation-induced key gene losses from single-target cells (including possible stem cells) [15]. These genes can be DNA damage response, apoptotic and cell cycle control genes and others. This radiation-induced GI can be transmitted over many generations after irradiation via the progeny of surviving cells [16]. Complex DNA damage and the consequent less precise and/or delayed DNA repair certainly hold a pivotal role(s) in this association between IR and cancer [17,18]. Last but not least, in order to draw the current picture of the factors contributing to radiation-induced carcinogenesis one should also add the non-targeted effects and the release of clastogenic factors in non-irradiated cells and tissues [19,20], as well as the involvement of inflammation and constant triggering of the immune system [21,22]. Lesions formed in a close proximity (i.e., within a few nm) result in clustered types of DNA damage, also called multiply damaged sites (MDS) and are considered the fingerprint of IR. Clustered DNA lesions can comprise a DSB and several base damages and/or abasic sites in close vicinity. In the case of multiple DSBs, we refer to the idea of complex DSBs [23]. The biological significance of such lesions relates to the inability of cells to process them efficiently compared to isolated DNA damages and the outcome in case of erroneous repair can vary from mutations up to chromosomal instability [24][25][26]. Therefore one should wonder if there are any mutational signatures of IR. Only recent evidence, mostly due to availability and affordability of next generation sequencing technologies, indicates that such radiation signatures do exist [27]; yet previous studies have shown the lack of such associations [28]. Specifically, Behjati et al. have shown a significant increase in small chromosome deletions and balanced inversions in radiation-associated tumors which probably act as driver mutations and explain the carcinogenic potential of IR. More importantly, they suggest that these chromosomal abnormalities originate from the repair of radiation-induced DNA damage via the less accurate pathways of non-homologous (NHEJ) or microhomology mediated end-joining (MMEJ) [27]. 
Therefore, accepting the claim that IR induces complex DNA damage that is irreparable and leads to mutations or structural abnormalities, and subsequently to genomic instability (GI) and cancer, radiation-induced cancers should bear traces of the radiation-related origin of these mutations. Disruption of genome maintenance (i.e., GI) can occur through a variety of mechanisms and is now considered a key hallmark of cancer. Hence, there is a great need for improved detection techniques at the cellular and tissue level that will provide valuable information for understanding the cellular mechanisms that process clustered DNA lesions [29].

Biological Significance and Detection of Clustered DNA Damage

The complexity of DNA damage as discussed above refers to the idea of clustering of several different DNA lesions within a short DNA region of 10-15 bp. The two main categories of lesions appearing in a cluster are DSB and non-DSB lesions, the latter usually referred to as oxidatively-clustered DNA lesions (OCDLs). The reader can refer to several comprehensive reviews for the general description of clustered DNA damage [23,26,30], detection methodologies [23,29,31] and biological importance. More specifically, for the accepted repair resistance of these lesions, including experimental evidence on the increased possibility of generating mutations and chromosomal breaks after erroneous repair of clustered DNA lesions, please see [17,25,26,32,33]. Although within the context of this review we refer to bistranded DNA lesions appearing in both DNA strands, there is also the possibility of unistranded or tandem lesions appearing in the same DNA strand, and several groups have dealt with the processing and biological role of these complex DNA lesions, as described in recent reviews [31,34-36]. The biological significance of clustered DNA damage is based not only on the difficulty encountered by the different DNA repair proteins that process these closely spaced DNA lesions, but also on the fact that several of these OCDLs can be converted into de novo DSBs during repair [37,38]. There are still many open questions as to which types of clusters will be more prone to conversion into potentially dangerous DSBs, but some parameters have been found to be critical, such as the presence of an SSB in one strand that delays the simultaneous processing of other base lesions on the other strand, the nucleotide distance between the various lesions, and their 3′ or 5′ orientation relative to each other [25,39]. For example, some recent in vitro data using plasmid pUC18 DNA exposed to high-LET IR (He²⁺ or C⁶⁺ ions) or low-LET IR (X-rays), under varying radical-scavenging conditions, suggest that base lesion clusters appearing three or more base pairs apart are promptly converted to a DSB by a glycosylase, regardless of the order of enzymatic treatment [40]. These and other similar results are in good agreement with Monte Carlo (MC) track structure calculations, suggesting an increase of complexity with LET, a specific base-to-SSB ratio, etc. [41]. Additionally, one cannot disregard that the initial repair steps at clustered damage sites are a major parameter that directs whether an MDS is converted into a DSB or not [42]. Unrepaired clustered DNA lesions can lead to chromosomal breaks and significant GI, as primarily manifested during the induction of clustered DNA damage by high-LET radiations [43].
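To make the clustering notion concrete, the toy Monte Carlo sketch below scatters a fixed number of lesions over a DNA segment and counts how often two lesions fall within a 20 bp window, i.e., would be scored as a clustered site. This is only an illustration of the counting logic under arbitrary assumptions; it is not the MCDS/MCNP track-structure modelling cited above, and all numbers are invented.

```r
# Toy Monte Carlo sketch of lesion clustering (illustrative assumptions only).
set.seed(1)
segment_bp <- 1e6     # length of the simulated DNA segment
n_lesions  <- 40      # lesions deposited in the segment per simulated "track"
window_bp  <- 20      # clustering window in base pairs
n_tracks   <- 1e4     # Monte Carlo repetitions

clusters_per_track <- replicate(n_tracks, {
  pos <- sort(sample.int(segment_bp, n_lesions))
  sum(diff(pos) <= window_bp)     # adjacent lesion pairs closer than the window
})
mean(clusters_per_track)          # expected number of clustered sites per track
```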
The experimental validation of DNA damage clustering induction, as well as, the repair mechanisms involved have not been an easy task. There are some discrepancies that can still be found between experimental evidence or data and prediction models using Monte Carlo (MC)-based methodologies [41,44,45]. Significant advancement in the understanding of expected clustered DNA damage induction mechanisms has been achieved using a fast and cell-level MC code, the Monte Carlo Damage Simulation (MCDS) code, integrated into the general-purpose MC N-particle radiation transport code system (MCNP) [46]. At the same time, a better understanding of the processes and mechanisms involved in the repair of clustered DNA lesions has been provided by the development of analytical biochemical models for DSB and base lesion repair [47][48][49]. Towards the history and advances in the field of experimental detection of clustered DNA lesions, the reader can refer to the above mentioned references. Current research in this field is based on the idea that theory and predictions do not always coincide with experimental evidence. The major challenges towards the detection of clustered DNA damages have been: (1) the accurate measurement of DSBs and OCDLs levels and their types, especially at the cellular level and (2) theirin situ detection and reliable quantitative measurement. During the last decade and since its initial discovery in 1998, the application of the γH2AX methodology has provided a significant boost towards reliable measurements of DSBs at a cellular or tissue level [50][51][52][53][54][55][56][57]. On the other hand, for the measurement of non-DSB lesions at least in situ, significant advancements have been made using adaptations of fluorescence microscopy and foci colocalization as reviewed in [29], but still there is no reliable in situ-technique to detect closely spaced DNA lesions within 1-20 bp apart. The colocalization of two or more antibodies (corresponding to DNA repair proteins presumably working on a clustered damage site), certainly provides valuable information, but this only gives an idea on how many different proteins maybe present in a chromosome region of a few Mbp. In each case, measurement of DNA lesions is being performed indirectly by the use of usually two DNA damage/repair proteins specific primary antibodies (e.g., against γ-H2AX:DSB and OGG1:oxidized purines or NTH1:oxidized pyrimidines etc.) each detected by the appropriate fluorescent labeled secondary antibodies. The simultaneous use of more than three different antibodies requires highly advanced microscopic systems and it is considered to be highly challenging. This microscopy-based methodology however, is very distant from the original definition of clustered DNA damage located in a very small DNA region [29]. 
Based on the above, and in an attempt to make a rough comparison between the originally used adaptations of gel electrophoresis for measuring different types of DNA clusters (DSBs and non-DSBs), as introduced by Sutherland and colleagues [58-60] and afterwards by others [38,61,62], one can conclude that: (1) there are two main methodologies to measure complex DNA lesions at the cellular level, one based on DNA fragmentation measurement using gel electrophoresis with repair enzymes as damage probes, and the other on in situ immunofluorescence microscopic approaches using different antibodies to allow foci colocalization centered around the DSB focus (usually γH2AX/53BP1) (Figures 1 and 2); (2) both methodologies are necessary and useful, but they are complementary; when it comes to measuring the damage complexity of DSBs and non-DSBs, one should consider applying them both. A short description of the powerful γH2AX methodology follows.

The Epigenetic Biomarker γH2AX Detects DNA Double-Strand Breaks

To date, a large volume of studies supports the notion that γH2AX has been established as the most sensitive and specific epigenetic biomarker for DSB detection and quantification. H2AX is a mammalian histone variant that belongs to the H2A histone family and has a phosphorylation site at serine 139. This site becomes rapidly phosphorylated when DSBs are generated in DNA. It has been well documented that this phosphorylation is specific to DSBs [57]. This specific phosphorylation is denoted as "γ-phosphorylation", and the H2AX histone molecules that "carry" this phosphorylation are designated as "γH2AX" accordingly. One of the most intrinsic features of γH2AX is that γ-phosphorylation extends over megabase-long domains in chromatin. The γ-phosphorylation of H2AX is evident within minutes after the generation of DSBs. Nevertheless, γ-phosphorylation is not restricted to the vicinity of the site of the DSB, but extends to both sides of the damage, reaching megabase-long domains in chromatin [53,63]. This feature of γ-phosphorylation is very important; it represents a biological amplification mechanism whereby one DSB induces the γ-phosphorylation of thousands of H2AX molecules along megabase-long domains of chromatin adjacent to the sites of DSBs. The γ-phosphorylated megabase-long chromatin domains adjacent to the site of one DSB are the basis for a very important technological implication. As one DSB is surrounded by thousands of γ-phosphorylated H2AX nucleosomes, specific antibodies enable the microscopic observation of the site of one DSB by immunocytochemistry. When detected with epifluorescence or confocal microscopy, γH2AX foci appear as large, roughly spherical conformations in cells that are in the G0, G1, S, or G2 phase of the cell cycle [63].

Figure 1 (legend, continued). Pclc values less than 1 imply DSB foci localization on euchromatin DNA regions, where the DAPI intensity is expected to be lower. In each case, measurement of DNA lesions is performed indirectly by the use of primary antibodies specific to DNA damage/repair proteins (e.g., against γ-H2AX: DSB, or OGG1: oxidized purines, etc.), detected by the appropriate fluorescently labelled secondary antibodies, as described in the text.

Figure 2. Linking processing of clustered DNA damage and immune response. I. The challenge of repairing a clustered damaged DNA site: a task for real survivors.
Upon the induction of clustered DNA damage consisting, for example, of one double-strand break (DSB) and two oxidative DNA lesions, such as a damaged base (shown here with an asterisk) and an apurinic/apyrimidinic (AP) site, at least two DNA repair pathways and several DNA repair proteins will arrive at the same chromosome region. For the base damage, the primary pathway is base excision repair (BER), while for the DSB we consider here, for simplicity, only non-homologous end joining (NHEJ). In all cases, the principal proteins and enzymes involved are also described in the main text. Last but not least, as shown by advanced fluorescence microscopy and foci colocalization, each DSB is expected to be rapidly accompanied by the phosphorylation of thousands of H2AX histone molecules, generating γH2AX. The MRN complex functions primarily as a sensor of DNA ends and activates the ATM kinase; ATM phosphorylates substrates such as Chk2, p53, and H2AX in flanking chromosomal regions. II. Linkage to immune response. Processing of clustered DNA damage, and especially of unrepaired or persistent damage, is expected to lead to senescence or cell death, i.e., apoptosis, necrosis (accidental, non-programmed) or necroptosis (programmed). All these processes can trigger the extracellular release of diverse "danger" signals or damage-associated molecular patterns (DAMPs: ATP, short DNAs/RNAs, ROS, heat shock proteins (HSPs), high-mobility group box 1 (HMGB1), S100 proteins and others) [65]. DAMPs activate different pattern recognition receptors (PRRs), including, for example, Toll-like receptors (TLRs) and inflammasomes, a process that usually leads to inflammation and immune-related pathologies. Interestingly, recent evidence, as explained in the main text, suggests direct interactions between different PRRs and DNA repair proteins involved in DSB repair and other pathways (dashed arrow connecting DSB to PRRs). Cellular damage or death can also lead to the release of several cytokines and chemokines that can regulate immune responses. Activation of PRRs usually results in nuclear factor-κB (NF-κB)-mediated release of various proinflammatory cytokines such as IL-6 and IL-8. The activation of antigen-presenting cells (APCs), such as dendritic cells and macrophages, will primarily induce the innate immune response (activation of T cells) and, more rarely, via B cells, the adaptive immune response. In all cases, the continual triggering of the immune system is expected to generate a variety of systemic effects on the organism, and possibly pathophysiology, either close to the damaged cells (often called "bystander" effects) or at distant sites. Overall, for the final assessment of radiation effects and the return to the physiological state, the role of the immune response and the systemic nature of radiation effects are of enormous importance.
In contrast to the roughly spherical foci of interphase cells, γH2AX foci have been demonstrated to appear as band-like conformations [63] in deer mitotic cells, perhaps resembling the known bands of human mitotic chromosomes seen in routine karyotype tests. However, these conformations have not been detected in human mitotic cells, perhaps due to intrinsic features of human mitotic chromatin. Additionally, the ability to detect even a single DSB in the nucleus by γH2AX immunocytochemistry [66] renders this technology currently the most sensitive assay for the detection of DSBs.
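In practice, "counting γH2AX foci" usually means segmenting bright spots inside a nucleus in a fluorescence image. The following sketch is only a minimal, schematic illustration of that step, not the pipeline used in the cited studies; the thresholding rule (mean + 3 standard deviations) and the minimum focus size are arbitrary assumptions, and the synthetic image is invented for demonstration.

```python
import numpy as np
from scipy import ndimage

def count_foci(image, nucleus_mask, threshold=None, min_pixels=4):
    """Count bright gammaH2AX-like foci inside a segmented nucleus.

    image        : 2-D array of fluorescence intensities.
    nucleus_mask : boolean array of the same shape marking the nucleus.
    threshold    : intensity cut-off; defaults to mean + 3*std of nuclear signal.
    min_pixels   : discard connected specks smaller than this.
    """
    signal = image[nucleus_mask]
    if threshold is None:
        threshold = signal.mean() + 3 * signal.std()
    foci_mask = (image > threshold) & nucleus_mask
    labels, _ = ndimage.label(foci_mask)          # connected-component labelling
    sizes = np.bincount(labels.ravel())[1:]        # pixel count per focus label
    return int(np.sum(sizes >= min_pixels))

# Toy usage: a flat synthetic "nucleus" with two bright spots added.
img = np.random.normal(100, 5, (64, 64))
img[10:13, 10:13] += 200
img[40:43, 50:53] += 200
mask = np.ones_like(img, dtype=bool)
print(count_foci(img, mask))  # expected: 2
```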
Although the amount of H2AX, as well as the percentage of H2AX with respect to the total H2A histone family, is not the same between differentiated cell types, the percentage of chromatin that becomes phosphorylated per DSB has a roughly constant average. This permits the quantification of γH2AX foci [63] in cell lines, primary cells, and tissues. Assays based on specific antibodies against the characteristic γH2AX epitope (e.g., confocal and epifluorescence microscopy, flow cytometry, ELISA, immunoprecipitation) have been remarkably successful for the detection of DSBs [51,[67][68][69]. Among them, immunocytochemical detection of γH2AX has become the primary method of detection, as it is several orders of magnitude more sensitive than other methods and has the potential for quantification [56]. In addition, it has been shown that γH2AX foci are formed preferentially in euchromatin after IR exposure [70]. In general, γH2AX assays share four important technical features: (i) generally accepted specificity to DSBs, (ii) sensitivity, (iii) quantification of DSBs, and (iv) repeatability and reproducibility. Regarding the technical strengths of the methodology, the following can be noted: (i) specificity to DSBs: γH2AX has been shown to detect specifically DSBs rather than other types of DNA damage [57]; however, it has been reported that γH2AX can also be formed at other types of lesions, at high frequencies in S-phase cells undergoing replication [71], or in cell types undergoing, for example, chromatin remodeling [72]; (ii) sensitivity: immunoassays utilizing antibodies specific for γH2AX show the highest sensitivity, and even one DSB can be detected by anti-γH2AX immunocytochemistry [66]; the biology of γ-phosphorylation explains this remarkable sensitivity, since visualization of only one DSB in the whole nucleus is feasible because γ-phosphorylation spans megabase-long chromatin domains juxtaposed to the break; (iii) quantification: the presence of γH2AX detected by antibody-based techniques can be quantified by various methods, such as confocal and epifluorescence microscopy (scored manually or automatically), flow cytometry, western blot quantification, etc. [51,67]; and (iv) repeatability and reproducibility: to date, the repeatability and reproducibility of the method have been demonstrated by numerous research laboratories all over the world, as reflected in the number of scientific publications [50,[73][74][75][76][77][78][79][80][81][82]. At this point, it must be mentioned that a variety of tumor cells have been found to carry increased numbers of γH2AX foci, suggested to be related to the overall chromosomal instability of these cells [83]. Last but not least, it has also been indicated by Banath et al. that persistence of DNA damage-induced γH2AX foci can be suggestive of lethal DNA damage, so that it may be possible to predict tumor cell killing by different DNA-damaging therapeutic agents by measuring the fraction of cells that retain γH2AX signalling [84].
Using Fluorescence Microscopy for the in situ Detection of Complex DNA Damage. A Useful Tool
The study of complex DNA damage in terms of in situ detection involves the concept of DNA repair colocalization (DNA repair centers), as previously introduced for DSBs [43,85] and non-DSB damage [29,43,64]. The term "colocalization" refers to the spatiotemporal coexistence of two or more proteins of different types.
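One simple way to turn the colocalization concept elaborated in the next paragraph into a number is to ask what fraction of foci in one channel lie within a small distance of a focus in the other channel. The sketch below is a crude stand-in for such scoring, not the procedure used in the cited works; the distance threshold and the focus coordinates are purely hypothetical.

```python
import numpy as np

def colocalized_fraction(foci_a, foci_b, max_dist=0.4):
    """Fraction of channel-B foci lying within max_dist (e.g., microns)
    of at least one channel-A focus."""
    foci_a = np.asarray(foci_a, dtype=float)
    foci_b = np.asarray(foci_b, dtype=float)
    if len(foci_a) == 0 or len(foci_b) == 0:
        return 0.0
    # Pairwise distances between every B focus and every A focus.
    d = np.linalg.norm(foci_b[:, None, :] - foci_a[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= max_dist))

# Hypothetical (x, y) centroids for gammaH2AX (channel A) and OGG1 (channel B) foci.
gh2ax = [(1.0, 1.0), (3.2, 4.1), (5.0, 2.5)]
ogg1 = [(1.1, 0.9), (4.9, 2.6), (7.0, 7.0)]
print(colocalized_fraction(gh2ax, ogg1))  # 2 of 3 OGG1 foci colocalize -> ~0.67
```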
The detection of complex DNA lesions consisting of a variety of DSBs and OCDLs is made possible through the visualization of proteins participating in distinct DNA repair mechanisms, e.g., one protein participating in base excision repair (BER), which processes base lesions, and another participating in homologous recombination (HR) or non-homologous end joining (NHEJ) for the repair of DSBs. As shown in Figure 2, upon the induction of a cluster of DNA lesions, several DNA repair pathways and proteins will be involved. For short-patch BER, a DNA glycosylase will arrive and excise the damaged base, and the repair will presumably be completed by the human AP endonuclease 1 (APE1), a DNA polymerase, and ligase III, which seals the broken ends. In the nearby DSB area (a few bp away), the Ku heterodimer (Ku70/80) initiates NHEJ by binding to the free DNA ends and recruiting other NHEJ factors, such as DNA-dependent protein kinase (DNA-PK), XRCC4, and DNA ligase IV, to the site of the break. DNA-PK becomes activated upon DNA binding and phosphorylates a number of substrates, including p53, Ku, and the DNA ligase IV cofactor XRCC4. Phosphorylation of these factors is believed to further facilitate the processing of the break. Finally, in order for ligation to occur, partial processing of the ends takes place, involving the nuclease Artemis, the MRE11/Rad50/NBS1 complex, and FEN-1. Although in situ immunofluorescence has been extensively utilized for the detection of single/simple DNA damage involving one type of lesion [74,[86][87][88], the simultaneous detection of DSBs and non-DSB lesions has been reported in only a few studies [43,64,89]. The difficulty in visualizing base lesions as foci lies in the fact that only a few molecules of each specific DNA repair protein (e.g., OGG1, NTH1, APE1) take part in the repair of a single lesion, in contrast with DSB repair, where hundreds to thousands of molecules of the same DNA damage response protein (such as γH2AX or 53BP1) may contribute to the process, as discussed above. Moreover, unlike γH2AX, which appears mainly upon DSB formation, most non-DSB repair proteins are present at endogenous concentrations within the cell nucleus, resulting in an increased background signal. A pre-extraction step in the experimental procedure, as well as the introduction of the Pclc colocalization parameter in image analysis [64], have helped researchers overcome these obstacles (Figure 1). In Figure 1, the theoretical description of the Pclc parameter is given in detail (panel A), along with its application for the detection of complex DNA damage (panel B) and an additional application for the derivation of useful data regarding the localization of DNA repair proteins in euchromatin/heterochromatin regions (panel C).
Complex DNA Damage, Immune Signaling and Systemic Effects. A Puzzling Case of Triage for the Cell
Triage in medical situations refers to the assignment of degrees of urgency to wounds or illnesses to decide the order of treatment of a large number of patients or casualties. Radiation injury for the cell can be considered a major "wound to its crucial organs" and, in many cases, a matter of life or death. The delineation of how the DDR elicits immune responses can still be considered a puzzling topic [90].
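As a brief computational aside before continuing, the Pclc-style localization scoring mentioned above can be illustrated with a small sketch. The ratio below is only one plausible reading of the published parameter, not its exact definition in [64]; the DAPI image, masks and intensity values are invented for illustration.

```python
import numpy as np

def dapi_localization_ratio(dapi, foci_mask, nucleus_mask):
    """Mean DAPI intensity under the detected foci divided by the mean DAPI
    intensity over the whole nucleus.  Values below 1 would indicate foci
    sitting preferentially in DAPI-dim (euchromatic) regions; values above 1
    the opposite.  This is a simplified, assumed reading of the Pclc idea."""
    dapi = np.asarray(dapi, dtype=float)
    under_foci = dapi[foci_mask & nucleus_mask]
    in_nucleus = dapi[nucleus_mask]
    if under_foci.size == 0 or in_nucleus.size == 0:
        return float("nan")
    return float(under_foci.mean() / in_nucleus.mean())

# Toy example: a DAPI image with a bright (heterochromatin-like) corner and
# foci placed in the dim part of the nucleus -> ratio below 1.
dapi = np.full((32, 32), 50.0)
dapi[:8, :8] = 200.0
nucleus = np.ones_like(dapi, dtype=bool)
foci = np.zeros_like(dapi, dtype=bool)
foci[20:22, 20:22] = True
print(dapi_localization_ratio(dapi, foci, nucleus))  # < 1
```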
Based on the above ideas, it is generally accepted that once complex and/or persistent DNA damage, and most probably genomic instability (GI), is induced, immune signaling is initiated by different components of the DDR/R pathway, including DNA damage sensors, transducer kinases, effectors and repair proteins [3]. In general, an association between the innate immune response and persistent DNA damage has been shown in various cases, as reviewed in [91,92]. In the same direction, Ermolaeva et al. used the nematode Caenorhabditis elegans to show that DNA damage in germ cells induces an innate immune response that consequently leads to activation of the ubiquitin-proteasome system (UPS) in somatic tissues, which confers enhanced proteostasis and systemic stress resistance [93]. Rodier et al. have shown that X-ray-damaged human HCA2 fibroblasts develop persistent chromatin lesions bearing DSBs, detected using γH2AX/53BP1 foci as surrogate markers, which trigger the secretion of inflammatory cytokines such as interleukin-6 (IL-6) [94]. It is important to note that this cytokine secretion occurred only after the establishment of persistent and heavy DNA damage (10 Gy of X-rays), associated with senescence, and not after transient DNA damage responses (X-ray dose of 0.5 Gy). On the other hand, systemic DNA damage responses are part of the organism's defense system, securing the removal of damaged and malfunctioning cells and preserving tissue integrity and functionality, i.e., tissue homeostasis [95]. For example, it has been shown that in repair-deficient ataxia-telangiectasia (AT) patients, in whom the repair protein ATM is defective, small DNA fragments generated from the excessive DNA breaks accumulate in the cytoplasm of these patients' cells. The DNA fragments are consequently recognized by innate immune receptors that normally detect viral DNA. This "false alarm" of viral invasion results in the production of type I interferon, which in turn drives the innate immune system into an activated state [92]. Regarding IR exposure as a genuine genotoxic stress, accumulating experimental evidence suggests a diverse range of radiation effects in non-irradiated areas, often referred to as non-targeted effects (NTE) or placed under the general umbrella of systemic effects [65,[96][97][98]. The NTE can be separated into two major groups: near (bystander) effects, where non-irradiated cells exhibit a response similar to that of their neighboring irradiated cells, and distant effects (e.g., the clinically relevant abscopal effect), with different mechanisms implicated in each case, as discussed recently in [65,96]. The NTE usually involve the discharge of various chemical and biological mediators from the irradiated cells, thus communicating the radiation insult via the so-called damage-associated molecular patterns (DAMPs), a concept based on the originally introduced idea of "danger" signals [99]. Recent work by Redon et al. showed that growing tumors may act as a type of stress in the organism and induce complex DNA damage (DSBs and OCDLs) in distant proliferative tissues in vivo [100]. According to this study, rapidly growing normal tissues, such as colon and skin, were found to be particularly susceptible to remotely induced DNA damage, and a signaling molecule involved in inflammation, the chemokine CCL2 (monocyte chemoattractant protein-1; MCP-1), appeared to be a major player in promoting this distant effect.
Interestingly, later studies by the same groups showed that this systemic DNA damage accumulation under tumor growth can be inhibited by the antioxidant Tempol, suggesting the involvement of oxidative stress [101]. The involvement of CCL2 and macrophage activation in tumor-induced distant DNA damage suggests some resemblance to the chronic tissue stress responses usually referred to as para-inflammation [102], which rely mostly on alternatively activated macrophages (M2) rather than on the classically activated macrophages (M1) associated with the acute inflammatory response [103]. A CCL2-based mechanism has also been suggested for other types of stress, i.e., exposure to IR. Specifically, it was shown that single-dose whole-body γ-irradiation (8 Gy) induced DNA damage in the mouse neuronal retina, which was accompanied by a low-grade chronic inflammation (para-inflammation) characterized by upregulated expression of chemokines (CCL2, CXCL12, and CX3CL1) and microglial activation [104]. Recent patient studies also suggest an actual involvement of cytokines in the RT-induced systemic DNA damage observed in normal tissues distant from the irradiation site [105]. More specifically, in this study sixteen patients with non-small cell lung carcinoma (NSCLC) received 60 Gy in 30 fractions of definitive thoracic RT, with or without concurrent chemotherapy (chemoRT), and peripheral blood lymphocytes (PBL) and eyebrow hair samples were taken before, during, and after RT. The results showed an elevation of DSBs, manifested as γH2AX foci in PBL (representing normal tissues within the irradiated thorax volume), 1 hour after the first fraction, and γH2AX foci numbers returned to near-baseline values within 24 hours after treatment. Most importantly, unirradiated hair follicles exhibited a delayed systemic (abscopal) DDR, measured as γH2AX foci, which increased at 24 hours after the first fraction and remained elevated during treatment in a dose-independent manner. This distant radiation effect was related to changes in the plasma levels of the cytokines MDC/CCL22 and MIP-1α/CCL3. Interestingly, and consistent with the unifying model introduced by Georgakilas, which unites different types of stress, i.e., radiation and a growing tumor [106], MCP-1 blockade by neutralizing antibodies was found to inhibit lung cancer tumor growth by altering macrophage phenotype and activating cytotoxic CD8+ T lymphocytes (CTLs) [107]. The other side of the cytokine-inflammation coin is the reverse activity, i.e., cytokines themselves inducing DNA damage. Earlier studies have shown that ROS/RNS could be generated in vitro by a mixture of inflammatory cytokines (IL-1β, IFN-γ and tumor necrosis factor α) in three human cholangiocarcinoma cell lines through a nitric oxide (NO)-dependent response, with the resulting DNA damage assessed by the alkaline (denaturing) comet assay [108]. In addition, a parallel inhibition of global DNA repair activity by 70% was detected. These and later data indicate that activation of iNOS and excess production of NO in response to inflammatory cytokines can cause DNA damage and inhibit DNA repair, at least partially. Recent extensive bioinformatics-based meta-analysis studies have verified the interactions between mediators of systemic effects and DDR/R components, as well as interactions between pattern recognition receptors (PRRs) and DNA repair proteins such as BRCA1, XRCC1, DNA-PK, Ku70/80 and others [96,109]. Recently, Nikitaki et al.
produced a detailed list of proteins implicated in different categories of radiation-induced systemic effects, including the clinically relevant abscopal phenomenon, using improved text-mining and bioinformatics tools applied to the literature. Analyses of genes belonging to the DDR/R pathway, protein-protein interaction (PPI) networks and KEGG pathways have revealed that the main pathways participating in NTE are apoptosis and the Toll-like and NOD-like receptor signaling pathways [96]. In conclusion, one may wonder how cells triage this scenario of interaction between complex DNA damage, immune signaling and systemic effects, and which element is the most important in regulating the overall outcome of this complex crosstalk (Figure 2). It is rather safe to suggest that complex and persistent DNA damage constitutes a major "danger" signal for cells, probably alerting the whole cell or tissue that something "peculiar" is happening in the damaged area. If this complex form of damage is processed correctly and all problems are resolved, the alarm is switched off, but the "danger" signaling may already have generated an immune response. In this case, the outcome is uncertain. An immune response, manifested initially at least as innate and later as adaptive, together with inflammation, may be present, especially when specific "danger" signals are produced due to cell death or senescence. In that case, a continuing systemic effect of unknown severity and duration will be induced, resulting in a chronic state of immune response, a precursor of pathological evolution and disease, as presented in red in Figure 2. Recent evidence obtained using mice carrying an ERCC1-XPF DNA repair defect, either systemically or specifically in adipocytes, suggests that persistent DNA damage-driven autoinflammation plays a causative role in adipose tissue degeneration, with important implications for advanced lipodystrophies and aging [110]. In any case, knowledge of the exact mechanisms and mediators of systemic responses will be very useful in various applications that involve complex DNA damage formation, such as RT, chemotherapy and the early detection of tumor growth. As nicely presented in recent work by Pateras et al., continuous triggering of the DDR/R can lead to an excessive innate and adaptive immune response which, in turn, can lead to pathological conditions and disease [109].
Clinical Implications of Complex DNA Damage
As is well known, IR exposure can be considered a double-edged sword for humans, able either to harm or to heal. On the one hand, it can induce significant levels of complex and usually irreparable DNA damage that can lead to enhanced mutation levels, GI and cancer; on the other hand, it can be used as the ultimate weapon against tumors [39]. Treatment options for patients with various kinds of malignancies have expanded with discoveries of druggable targets as well as technological advances. Surgical resection, chemotherapy and RT are the three major available modalities for the treatment of most cancers and are utilized either in combination or separately, as deemed appropriate. In the case of chemotherapy and RT, the main aim is to spare normal cells while inducing sufficient, non-repairable DNA damage in tumor cells. Consequently, cancer cells may exit the cell cycle permanently, a phenomenon referred to as senescence, or be triggered into apoptosis. The mechanism of action of chemotherapeutic agents and the dose and type of RT determine the spectrum of DNA damage induced by treatment.
As discussed earlier, complex DNA lesions are the most challenging type of damage for a cell to repair. This section focuses on whether there is evidence linking the efficacy of chemotherapeutic drugs or RT to the type of DNA damage they induce. Additionally, evidence from the literature is discussed that highlights a drawback of using these agents for therapy: normal cells affected by these insults to their DNA can also give rise to a second primary cancer. The therapeutic index is high when molecular targets overexpressed specifically in tumor cells can be targeted by small-molecule inhibitors. Multi-kinase inhibitors have dramatically improved patient survival in hematologic malignancies, while drugs targeting cancer-specific mutations have improved survival in selected patient populations. In the mid-1970s, 5-year survival estimates were 41% for patients diagnosed with acute lymphocytic leukemia, whereas they are reported at 71% for patients diagnosed between 2006 and 2012. A similar improvement has been seen for chronic myeloid leukemia, from 22% to 66% over the same time intervals [111]. However, currently available standard chemotherapeutic agents and even the latest technologies in radiation physics fail to qualify as curative options for several cancer types. Commonly used chemotherapy regimens include platinum-based DNA alkylating agents, topoisomerase poisons, antimetabolites, microtubule inhibitors, antitumor antibiotics, proteasome inhibitors, etc. [112]. Antitumor antibiotics include a class of drugs called anthracyclines, which intercalate into DNA and poison topoisomerase II. Non-anthracycline drugs in this class include the compound bleomycin. Bleomycin provides the strongest evidence for clustered DNA damage as the mechanism of action of a chemotherapeutic agent. The mechanism of action of bleomycin and the similarity of the base damage it produces to that of IR make it a "radiomimetic" chemotherapeutic [113]. The drug creates reactive aldehyde groups at the sugar moiety that are capable of reacting with nearby cytosine residues, creating clustered DNA damage. Use of bleomycin has been limited by severe pulmonary toxicity [114] and the risk of pulmonary fibrosis, despite tolerable myelotoxicity. Since the clinical trials establishing the correlation between bleomycin use and pulmonary toxicity in the 1980s [115,116], years of research have indicated the importance of ROS at the site of action in the propagation of oxidative DNA damage. Even low levels of ROS have been reported to cause GI via NHEJ-mediated DNA repair [117]. RT is one of the major treatment modalities for several types of cancer, and ROS and clustered DNA damage are thought to be critical mediators of the effect of IR. Many decades of experimental research in cellular and molecular radiation biology provide evidence that nuclear DNA is the critical target of IR, that both the initial and residual levels of DSBs are linked to the biological effects of radiation, and that DNA damage and repair are relevant to carcinogenesis [118]. Precise delivery of radiation beams to the site of solid tumors has improved with advances in medical physics and engineering. Among these, the use of proton beams as an alternative to traditional high-energy photon beams has, at least in theory, improved targeting accuracy and reduced surrounding-tissue toxicity.
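The LET and RBE comparisons discussed in the next paragraph reduce to simple iso-effect arithmetic. Below is a minimal sketch under the standard linear-quadratic survival model; the alpha and beta values are purely illustrative assumptions, not measured parameters for any particular beam or cell line.

```python
import math

def lq_effect(dose, alpha, beta):
    """Linear-quadratic effect E = alpha*D + beta*D^2 (surviving fraction = exp(-E))."""
    return alpha * dose + beta * dose ** 2

def iso_effect_dose(effect, alpha, beta):
    """Dose of a reference radiation producing the same LQ effect."""
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * effect)) / (2 * beta)

def rbe(test_dose, alpha_test, beta_test, alpha_ref, beta_ref):
    """RBE = reference dose / test dose at equal biological effect."""
    effect = lq_effect(test_dose, alpha_test, beta_test)
    return iso_effect_dose(effect, alpha_ref, beta_ref) / test_dose

# Illustrative values: a high-LET beam assumed to have a larger alpha than
# the reference photon beam, evaluated at a 2 Gy test dose.
print(round(rbe(2.0, alpha_test=0.6, beta_test=0.05,
                alpha_ref=0.2, beta_ref=0.05), 2))  # ~1.83 with these made-up numbers
```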
Long-term follow-up data for sizeable patient cohorts will enable us to compare the potential benefits of proton beam therapy. OCDLs are a hallmark of IR, although their endogenous levels are relatively low [119,120]. Radiation dose and quality dictate the complexity of the DNA damage induced by the particle. Increasing dose and LET (linear energy transfer) correlate with a higher accumulation of clustered lesions in cancer cells [25,64]. The recruitment kinetics of DNA repair proteins depend on the level of LET [121]. The fact that DNA repair capability is compromised with increasing complexity of damage underscores the importance of these lesions in therapy [24,122]. One of the significant therapeutic advantages of high-LET IR is the extensive amount of clustered damage it induces, leading to increased relative biological effectiveness (RBE) compared with both photon-based and even proton-based modalities [123]. As recently reviewed by Mohamad et al. [124], comparison of conventional photon-based external beam radiation with carbon ion radiotherapy reveals that carbon ions result in a better and more tumor-targeted dose distribution, higher LET and higher RBE. This improved RBE relates to the unique high-LET radiation-induced complex DNA damage that overpowers the DNA repair system of tumor cells, as also shown, for example, by earlier studies using human monocytes exposed to 56Fe ions (LET = 148 keV/µm) [125]. The use of carbon or other high-LET particles may be a solution in the case of difficult-to-treat tumors, including those that are hypoxic, radio-resistant, or located deeper in the body [124]. Regarding the history of carbon-ion-based RT, one of the pioneers was the National Institute of Radiological Sciences (NIRS), which started treating patients with beams from the Heavy Ion Medical Accelerator (HIMAC) in Chiba, Japan, in 1994. Following Japan, Germany treated its first patient in 1997 at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, and later at the Heidelberg Ion Therapy Center (HIT), which opened in 2009. Therefore, based on clinical evidence originating mostly from Japan and Germany, high-LET radiation may be a promising RT modality with limited radiation toxicity [124]. The precise contribution of clustered DNA lesions to the effects of proton treatment on cells is a matter of debate that remains to be studied in further detail [126][127][128]. The highest LET along the path of a proton beam, around the Bragg peak, has been reported to correlate with maximum complexity of DNA damage [129]. The rise in the population of cancer survivors has led to a better understanding of the effects of RT used to treat cancer patients [130]. A significant portion of cancer survivors are patients with a history of childhood cancer. According to the American Cancer Society (ACS), the 5-year survival rate for childhood cancer patients is now over 80%. However, exposure to radiation treatment can lead to the occurrence of second primary cancers (SPCs) later in life. An analysis of thyroid cancer in childhood cancer survivors showed that the relative risk of thyroid cancer in these patients increased linearly with treatment radiation dose up to 10 Gy [131]; at higher treatment doses, the relative risk of thyroid cancer after RT was lower. Another study, looking at chest RT to treat childhood cancers, showed an increased risk of breast cancer in these patients [132]; treatment involving whole-lung irradiation especially increased this risk.
This shows the importance of localized RT in reducing the risk of SPCs. The need for improvement in RT techniques that spare normal tissue is also highlighted by the incidence of metachronous cancers (multiple primary cancers developing at intervals) in adults. The incidence of second primary lung tumors increased by a stark 8.5% per Gy in women who had undergone RT for breast cancer [133]. An analysis of prostate cancer patients told a similar story: patients undergoing RT had an increased overall risk of developing hematologic, liver, esophageal, and urinary bladder cancers [134]. Certainly, the induction of DNA damage by RT is an important factor in the prediction of SPCs. Recent studies, for example, show that simulated radiation-induced persistent telomere-associated DNA damage foci can be used to predict the excess relative risk of developing secondary leukemia after fractionated radiotherapy [135]. In general, the incidence of secondary malignant neoplasms (SMN) depends on several factors, such as the patient's lifestyle, genetic susceptibility, DNA repair efficiency and the radiosensitivity of the patient or the specific organ [136]. The advent of proton beam therapy (PBT) brings new promise of reduced radiation-treatment-related morbidity by minimizing the dose to critical normal tissues [137]. Proton therapy has shown great therapeutic potential in treating various adult malignancies, including those of the central nervous system and gastrointestinal tract, but with uncertain benefits, for example, for lung cancers. At the same time, it has been estimated that excess fatal SPCs may be reduced with proton therapy by two-thirds compared with conventional photon therapy [138]. However, evidence that PBT reduces the occurrence of metachronous cancers is limited. A study of PBT for advanced cholangiocarcinomas showed that gastrointestinal toxicities and early metastatic progression still remain treatment obstacles [139]. Another study, looking at cardiac events after RT in patients with thymic malignancies, showed that the lower organ dose achieved with PBT reduced the occurrence of major cardiac events after treatment [140]. A long-term follow-up of patients with pediatric tumors showed fewer late adverse events and a reduced risk of metachronous malignancies with PBT [141]. Similar studies indicate that the reduced dose to normal structures with PBT, as opposed to intensity-modulated RT, is better tolerated by the patient population [142]. However, it may be too soon to draw conclusions on the benefits of PBT over traditional photon RT. A study comparing RT usage trends in men with localized prostate cancer pointed to differences in demographic and prognostic factors between patients treated with proton and photon RT [143]. Thus, although the underlying physics may indicate clear benefits, there is a need for more long-term assessments and, more generally, for more studies examining the potential benefits of PBT over traditional RT.
Concluding Remarks
In this mini-review, we present the idea of complex (clustered) DNA damage, the signature of IR, from a different perspective, that of its clinical implications and its involvement in the route to carcinogenesis. As recently discussed by Pateras et al. [109], compelling evidence supports the idea that the DNA damage response and repair (DDR/R) and immune response signaling networks work together towards the proper function of organisms and homeostasis.
We believe that there is a strong linkage between the induction of complex DNA damage, deficient or incomplete DNA repair, constant DDR/R triggering and the continuous activation of the immune system. This vicious cycle, which is usually accompanied by GI, can, without any doubt, be considered the major pathway leading to carcinogenesis [144,145]. Chronic inflammation, which is synonymous with activation of the innate immune system, can lead to the downregulation of DNA repair pathways and cell cycle checkpoints due to the release of inflammatory mediators and ROS, which can in turn lead to GI [146]. In this direction, Colotta et al. suggested a few years ago that cancer-related inflammation can promote GI through various inflammatory mediators, leading to the accumulation of random genetic alterations in cancer or healthy cells. According to the authors, this cancer-related inflammation represents the seventh hallmark of cancer [147], in addition to the six hallmarks initially suggested by Hanahan and Weinberg [148]. Understanding the mechanisms by which repair-resistant DNA damage is processed by cells will significantly benefit therapeutic applications, maximizing tumor killing while minimizing radiation toxicity for cancer patients undergoing RT. Therefore, one can easily understand the importance of correctly detecting not only DSBs but also all other forms of non-DSB clustered lesions (OCDLs), especially in the context of chromatin. A special effort must be made by the scientific community to optimize the specificity and accuracy of all current methodologies for the detection of complex DNA damage in situ, and ideally under live-cell imaging conditions.
Acknowledgments: This research has been financed by the "Research Projects for Excellence IKY/SIEMENS" awarded to Ifigeneia V. Mavragani and Alexandros G. Georgakilas.
Conflicts of Interest: The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
Host genetics and tumour metastasis
Metastasis, the spread and growth of tumours at secondary sites, is an extremely important clinical event, since a majority of cancer mortality is associated with metastatic tumours rather than the primary tumour. In spite of the importance of metastasis in the clinical setting, the actual process is extremely inefficient. Millions of tumour cells can be shed into the vasculature daily; yet, few secondary tumours are formed. The classical hypothesis explaining this inefficiency invoked a series of secondary events occurring in the tumour, resulting in a small subpopulation of cells capable of completing all of the steps required to successfully colonise a distant site. However, recent discoveries demonstrating the ability to predict metastatic propensity from gene expression profiles in bulk tumour tissue are not consistent with only a small subpopulation of cells in the primary tumour acquiring metastatic ability, suggesting that metastatic ability might be pre-programmed in tumours by the initiating oncogenic mutations. Data supporting both of these seemingly incompatible theories exist. Therefore, to reconcile the observed results, additional variables need to be added to the model of metastatic inefficiency. One possible variable that might explain the discrepancies is genetic background effects. Studies have demonstrated that the genetic background on which a tumour arises can have significant effects on the ability of the tumour to metastasise and on gene expression profiles. Thus, the observations could be reconciled by combining the theories, with genetic background influencing both metastatic efficiency and predictive gene expression profiles, upon which, subsequently, metastasis-promoting mutational and epigenetic events occur. If the genetic background is an important determinant of metastatic efficiency, it would have significant implications for the clinical prediction and treatment of metastatic disease, as well as for the design of potential prevention strategies. Metastasis is an extraordinarily complex process. To successfully colonise a secondary site, a cancer cell must complete a sequential series of steps before it becomes a clinically detectable lesion. These steps include separation from the primary tumour, invasion through surrounding tissues and basement membranes, entry and survival in the circulation, lymphatics or peritoneal space, arresting in a distant target organ, usually, but not always (Al-Mehdi et al, 2000), followed by extravasation into the surrounding tissue, survival in the foreign microenvironment, proliferation, and induction of angiogenesis, all the while evading apoptotic death or immunological response (reviewed in Liotta and Stetler-Stevenson, 1993). This process is of great importance to the clinical management of cancer, since the majority of cancer mortality is associated with metastatic disease rather than the primary tumour (Liotta and Stetler-Stevenson, 1993). In most cases, cancer patients with localised tumours have significantly better prognoses than those with disseminated tumours. Since recent evidence suggests that the first stages of metastasis can be an early event (Schmidt-Kittler et al, 2003) and that 60-70% of patients have initiated the metastatic process by the time of diagnosis, better understanding of the factors leading to tumour dissemination is of vital importance.
However, even patients who have no evidence of tumour dissemination at primary diagnosis are at risk for metastatic disease. Approximately one-third of women who are sentinel lymph node negative at the time of surgical resection of the primary breast tumour will subsequently develop clinically detectable secondary tumours. Even patients with small primary tumours and node-negative status (T1N0) at surgery have a significant (15-25%) chance of developing distant metastases. In spite of the prevalence of secondary tumours in cancer patients, the metastatic process is extremely inefficient. To successfully colonise a distant site, a cancer cell must complete all of the steps of the cascade; failure to complete any step results in the failure to colonise and proliferate. As a result, tumours can shed millions of cells into the bloodstream daily (Butler and Gullino, 1975); yet, very few clinically relevant metastases are formed (Tarin et al, 1984). Although many steps in the metastatic process are thought to contribute to metastatic inefficiency, our incomplete understanding of this process suggests that we are aware of some but not all of these key regulatory points. For instance, killing of intravasated cells by haemodynamic forces and shearing has been thought to be a major source of metastatic inefficiency (Weiss et al, 1992). However, recent evidence suggests that the destruction of tumour cells by haemodynamic force in the vasculature may not always be a major source of metastatic inefficiency. Cells in the bloodstream have been shown to arrest in capillary beds and extravasate with high efficiency, and to reside dormant in the secondary sites for long periods of time (Luzzi et al, 1998), sometimes for years (Riethmuller and Klein, 2001). Micrometastases may form, but the bulk of these preclinical lesions appear to regress (Luzzi et al, 1998), probably due to apoptosis (Wong et al, 2001).
GENETIC MODULATION OF METASTASIS
The first suggestion of the role of genetic background as a critical determinant of metastatic potential was derived from transfection experiments. Introduction of proto-oncogenes can induce tumorigenicity and metastatic potential when transfected into NIH-3T3 cells. However, when the same oncogenes were transfected into cell lines derived from different strains of mice, metastatic potential, but not tumorigenicity, was lost (Muschel et al, 1985; Tuck et al, 1990). These results suggested either that secondary mutations in metastasis-promoting or -suppressing genes were differentially present among the cell lines, or that allelic differences derived from the inbred strain progenitor were capable of modulating metastatic potential. More compelling evidence for the existence of allelic variation influencing metastatic efficiency comes from experiments from our laboratory. These studies are based on the use of a highly metastatic mouse mammary model, the FVB/N-TgN(MMTV-PyVT) 634Mul mouse (Guy et al, 1992). This animal carries the mouse polyoma virus middle T antigen expressed from the mouse mammary tumour virus enhancer and promoter. Expression of the transgene induces synchronous multi-focal mammary tumours in all of the mammary glands of virgin female animals, and greater than 85% of these animals develop pulmonary metastases by 100 days of age (Guy et al, 1992). To determine whether there was genetic modulation of metastatic progression, the genetic background that the tumour arose on was varied by a simple breeding strategy.
The PyVT mouse was bred to a variety of different inbred strains selected from different branches of the mouse phylogenetic tree (Beck et al, 2000) to survey a broad range of the allelic diversity captured in the inbred strains. The F1 progeny were aged to permit tumour induction and potential metastatic dissemination. Subsequently, the lungs were examined to determine whether introduction of allelic variation had an effect on the density of pulmonary metastases, and a wide variation in metastatic efficiency was observed (Lifsted et al, 1998). Since all of the tumours were induced by the same genetic event, expression of PyVT, the most likely explanation for this variation is that subtle genetic differences between the strains are affecting the metastasis process. Further evidence of the effect of background on metastatic efficiency was obtained by genetic mapping experiments. Using quantitative trait mapping strategies, three backcross mapping experiments and a recombinant inbred cross were analysed to identify chromosomal regions associated with metastatic efficiency. Two statistically significant associations were observed, one on chromosome 6 and the other on chromosome 19 (Hunter et al, 2001). In addition, suggestive associations were reproducibly observed for several other chromosomal regions. The ability to map metastasis efficiency loci within an inbred strain genome argues against random somatic mutations being the major determinant of metastatic efficiency, since each individual animal would retain a different set of alterations, precluding meiotic mapping. Understanding the events and factors that influence tumour dissemination is clearly of great importance for the development of more effective prevention or clinical interventions. Recent studies have sparked considerable debate in the literature on the subject. Several studies were published that demonstrated the ability to classify primary tumours as metastatic or nonmetastatic, based on gene expression from bulk tumour tissue (van 't Veer et al, 2002; Ramaswamy et al, 2003). Since a substantial portion of the tumour must exhibit a particular expression pattern to be detectable in microarray experiments, the authors interpret their data to suggest that metastatic capacity is likely to be encoded early in tumorigenesis by the particular collections of oncogenic events that initiate the tumour. As supporting evidence, the authors cite the clinical phenomenon of patients with metastatic disease, but unknown primary cancer (UPC). These patients, estimated at approximately 5% of cases, present with disseminated disease, but have no clinically detectable primary tumour or only a small well-differentiated lesion found at autopsy (Riethmuller and Klein, 2001). The lack of large primary tumour mass could suggest that there were insufficient numbers of cells to achieve the necessary sequence of events predicted by the stochastically driven progression model. In contrast, the generally accepted progression model predicts that only a small subpopulation of the tumour would attain metastatic capacity and would therefore be unlikely to dominate the average gene expression profile of bulk tumour tissue. However, compelling evidence for the progression model exists. For example, consistent reproducible chromosomal aberrations are often specifically associated with disseminated tumours rather than the primary tumours.
The rapidly growing collection of metastasis suppressors, those genes whose reintroduction into tumour cells specifically interferes with metastatic colonisation without affecting primary tumour initiation or growth kinetics, impacts virtually every known step in the metastatic process (Kauffman et al, 2003; Shevde and Welch, 2003; Steeg, 2003). The statistical likelihood of stochastic events predicted by the model resulting in the appropriate combination of metastasis-associated genomic alterations is small, consistent with the poor efficiency of the process. Although recent evidence suggests that some of these aberrations may occur subsequent to dissemination (Schmidt-Kittler et al, 2003), the fact that metastases are often clonal in nature (Fidler and Kripke, 1977) supports the hypothesis that there is a specific subpopulation within the heterogeneous primary tumour from which these cells originate. The truth is likely to be a blend of the models, with additional variables added in. One of these variables is likely the effect of genetic background as a determinant of metastasis. As previously mentioned, we demonstrated that the genetic background on which a cancer arises has a significant effect on the ability of mammary tumours to successfully colonise the lung. In addition, we and others (Eaves et al, 2002; Qiu et al, 2003) have demonstrated that genetic background significantly influences gene expression, including the metastasis signature genes. The expression of the 17-gene metastasis signature set described by Ramaswamy et al (2003) was compared between the highly metastatic FVB/NJ background and the low metastatic (NZB/B1NJ x FVB/NJ)F1 background. Of the 17 mouse orthologs, 16 were expressed in the PyVT tumour model used in our laboratory. Out of 16, 15 showed the same direction of expression as observed in the human primary vs metastasis comparison (Qiu et al, 2003; Ramaswamy et al, 2003). Similar results are observed comparing FVB/NJ tumours with another low metastatic genotype ((DBA/2J x FVB/NJ)F1; K Hunter, unpublished results). These observations suggest that the propensity of a tumour to metastasise, and the predictive gene expression profile, are at least in part set by the combination of subtle changes in gene function, mediated by polymorphisms in coding sequence, splice sites, promoters, and enhancers, before tumour initiation. Subsequently, progressive events such as translocations, deletions, etc., occur to produce rare cells that are capable of completing the metastatic process. The allelic background of the tumour would also likely influence what specific secondary events would be necessary in each individual host genotype to successfully complete the metastatic cascade. Importantly, the genetic efficiency determinant exerts its effects not only within the tumour cell itself, but also in the primary tumour stroma, as well as in the microenvironment at distant sites. Target organ microenvironment is known to play an important role in metastasis formation (Fidler, 2002). Tumour cells are known to require normal stroma for important signalling events (Alessandro and Kohn, 2002). Important metastasis-related genes have been shown to be expressed not only in the tumour cells but also in the target tissue (Muller et al, 2001).
As a result, polymorphisms that alter normal tissue functions (for example, promoter polymorphisms altering cytokine levels, missense polymorphisms affecting adhesion molecule function, or alterations in signaling cascades) may be as important a barrier to successful metastatic colonisation as alterations occurring within the tumour cell itself. Alternatively, relevant polymorphisms might indirectly affect important genes by altering epigenetic controls. Several metastasis suppressors have been shown to be epigenetically downregulated during dissemination (e.g. Domann et al, 2000), rather than inactivated by mutation or deletion. Since it has been shown that endogenous genes can be differentially imprinted in mouse strains (Jiang et al, 1998), polymorphisms that affect more global gene expression by modulating DNA methylation or histone modification must also be considered as potential metastasis-modulating factors. Growing evidence that the majority of tumour cells are capable of extravasating (Naumov et al, 2002) suggests that proliferation in the secondary sites may in fact be one of the most important determinants of whether cells grow into a secondary tumour or undergo apoptosis. Since the growth of disseminated cells to clinically relevant macroscopic lesions is dependent upon angiogenesis, the effect of genetics on this process might be another important source of metastatic efficiency modulation. Inbred strains of mice are known to differ in their angiogenic response to at least some growth factors (Rohan et al, 2000). Differences in the ability of the target stroma in different genotypes to support angiogenic conversion from microscopic to macroscopic secondary lesions in response to tumour-secreted growth factors might therefore play an important role in the efficiency of the development of clinically relevant secondary tumours. Furthermore, it is conceivable that allelic variation may affect escape from immune surveillance. Subtle variations in the ability of the host to mount an effective cytolytic defense, coupled with the ability of highly malignant cells to downregulate tumour-specific antigens (Schirrmacher et al, 1982), might also play an important role in metastatic efficiency. It is unclear at present which of these or other cellular or molecular processes, or a combination of them all, might be responsible for genetic modulation of metastasis. Clearly, this complex process will require a great deal of additional research to explore and characterise the critical interplay between inherited, somatic, and epigenetic interactions that influence metastatic progression.
IMPLICATIONS
These observations, particularly the microarray data, have important implications for metastasis detection and management. If genetic background is a major influence on metastatic potential, as measured by predictive gene expression patterns in normal and tumour tissue, it suggests that, like cancer susceptibility, there may be individuals or families present in the human population that are more susceptible to disseminated disease. It may therefore be possible to identify these individuals before they develop neoplastic disease, so that they might be more aggressively treated with neo-adjuvant therapies immediately upon diagnosis of the primary tumour.
Alternatively, since tumour dissemination often appears to be an early event, it is theoretically possible that a chemoprevention regime might be developed that would prevent tumour metastasis before the primary tumour was clinically apparent, enabling the bulk of human cancer to be cured by surgical resection. In conclusion, the identity of the genomic elements in the host background modifying metastatic efficiency is currently unknown. These elements clearly warrant further investigation, since the majority of the genetically defined regions are not associated with known metastasis-suppressor genes. The metastasis suppressors that are associated with our genetically defined regions do not show any apparent molecular defects or expression level differences between the high and low metastatic genotypes (Park et al, 2002; Qiu et al, 2003). Identification and characterisation of these metastasis efficiency-modifier genes may therefore yield novel targets to develop chemoprevention agents or antimetastatic therapies. Preliminary work demonstrating the feasibility of such a strategy is currently ongoing in our laboratory. Using a small-molecule agent, we have demonstrated a significant reduction in the efficiency of pulmonary colonisation, as well as modulation of the expression profile of an independent set of metastasis-associated genes (Yang, Lukes, Rouse, Lancaster, and Hunter, manuscript in preparation). New strategies could be developed either to kill occult metastases or to increase the inefficiency of the myriad tasks necessary to generate a clinically relevant metastasis, to the point where the odds of solitary, dispersed cancer cells successfully completing the metastatic cascade and becoming clinically relevant lesions approach zero.
Tailored communication methods as key to implementation of evidence-based solutions in primary child health care
Abstract Background Evidence-based policies should underpin successful implementation of innovations within child health care. The EU-funded Models of Child Health Appraised project enabled research into effective methods to communicate research evidence. The objective of this study was to identify and categorize methods to communicate evidence-based research recommendations and means to tailor this to stakeholder audiences. Methods We conducted an online survey among national stakeholders in child health. Analysis of the most effective strategies to communicate research evidence and reach the target audience was carried out in order to ensure implementation of optimal child health care models at a national level. Results Representatives of stakeholders from 21 of the then 30 EU MS and EEA countries responded to the questionnaire. Three main approaches in defining the strategies for effective communication of research recommendations were observed, namely: dissemination of information, involvement of stakeholders and an active attitude towards change expressed in actions. The target audience for communicating recommendations was divided into two layers: proximal, which includes those who remain in close contact with the child, and distal, which contains those who are institutionally responsible for the high quality of child health services. They should be recipients of evidence-based results communicated in different formats, such as scientific, administrative, popular and personal. Conclusions Influential stakeholders impact the process of effective research dissemination and guide necessary actions to strengthen the process of effective communication of recommendations. Communication of evidence-based results should be targeted to each audience's profile, both professional and non-professional, by choosing appropriate communication formats.
Introduction
Child-oriented health policies are universally important. Moreover, one of the priorities of the Universal Health Coverage strategy of the World Health Organization (WHO) is Primary Health Care, which includes actions on improvement of maternal, newborn, child and adolescent health. 1 The European Commission recognizes the need to protect the wellbeing of children, e.g. in terms of medicinal products 2 or the promotion of child rights. 3 Member States are also adapting their national policies along these lines, though the approaches to child-focussed health policy vary between two patterns: 'On the one hand the child-focussed policies are part of wider health care and policy context, on the other they are devoted to children as a stand-alone approach'. 4 However, an elusive aspect of child health service improvement is how new knowledge is adopted by policymakers nationally. Evidence-based policymaking has been defined as 'a set of rules and institutional arrangements designed to encourage transparent and balanced use of evidence in public policymaking', 5 but the literature shows 'the limited extent to which professionals utilize or draw upon research findings to determine or guide their actions'. 6 Also, 'a solid research infrastructure is facilitating but not sufficient for evidence use'. 7
Policy developments and service improvements do not happen by accident: they have to be created, accepted by stakeholders, and implemented. Examination of the policy cycle 8 shows that tailored communication between researchers, policymakers, professionals and the general population is crucial in enabling the transition from research evidence to policy adoption, and then to implementation and continuation. Implementation of evidence into new policies is seldom a stand-alone activity: policymakers frequently and wisely look to see what has been done in similar neighbouring countries, and a previous Horizon 2020 project, the Research Inventory of Child Health Europe, specifically focussed on cataloguing such evidence in Europe. 9 However, knowledge and effectiveness do not exist in a vacuum, and context is significant. Transferability based on context is therefore a key concept when planning to implement evidence-based policies found to work in one context in another country. The theory of transferability was developed in the Models of Child Health Appraised (MOCHA) study (as below), and comprises four key over-arching themes. In these themes, the population (P), the intervention (I) and the environment (E) represent a set of conditional transferability criteria, and the transfer of the intervention (T) represents process criteria for transferring the intervention to the target context, while overall transferability depends on their dynamic interaction. 10 There is also a distinction to be made between top-down policy implementation by instruction, and enthusiastic adoption of the practices at the delivery level, and this depends on making the underpinning evidence accessible and credible. This requires effective knowledge communication relevant to specific recipients. The goal of our research was to identify effective methods of communicating evidence to facilitate effective policy implementation, including identification of key audiences, drawing from practical experience in European Union (EU) and European Economic Area (EEA) countries.
Study design
This study was part of the EU-funded MOCHA project, which intended to assess various models of primary child health care across Europe, 11 and had already identified three patterns: paediatrician-led, GP-led or combined. 12 In this inquiry, relevant stakeholders were identified, and a questionnaire was developed to measure the best possible ways to communicate evidence to appropriate recipients.
Topic of inquiry
The types of stakeholder and relevant topics of inquiry were developed from consultation with fellow MOCHA researchers who were asked to identify key elements for the future of primary child health care. They focussed on domains such as prevention, mental health, chronic care and complex care. 13 This was then refined into specific activities in primary care reflecting these domains: (i) prevention of communicable diseases through vaccination of young children, (ii) treatment and monitoring of a chronic childhood condition and (iii) problem recognition/early diagnosis of mental health disorders in adolescents.
Participants
The stakeholder selection process was achieved via the MOCHA Country Agents (CAs), who were national experts from the study countries, recruited for the project in order to provide country-specific information.
CAs were asked to identify, and supply contact details of, at least three stakeholders in their country who would be willing to complete a questionnaire about three broad areas of primary care, and three broad age groups of children as users of primary care. We highlighted that these stakeholders might be policymakers, physicians, school health doctors, paediatricians, nurses or others, but they needed to be knowledgeable about the healthcare system in the country. We asked them to include at least one policymaker in the field of primary child health care on a national level. In addition, European Union for School and University Health and Medicine (EUSUHM) congress members provided the names of relevant national stakeholders in their countries. The stakeholders were asked to respond to the questionnaire based on their expert knowledge and experience, not their personal opinions.

Questionnaire
Stakeholders were asked to complete a digital questionnaire about communication modes to ensure implementation of evidence-based solutions in their countries: a. the most effective strategy for communicating recommendations, to ensure implementation of optimal models, b. the most effective target audience for promoting implementation of optimal models and c. the most effective format for communicating policy evidence. In line with the MOCHA project's established methodology, the questionnaire was designed by the topic researchers, approved by the project coordination team and validated by the project's External Advisory Board, comprising members nominated by European medical, paediatric and policy bodies, the WHO European Regional Office, the UNICEF Innocenti Research Centre and civil society groups, as published. 14 This ensured scientific and professional validity.

Data collection
The data collection was carried out between March and May 2018. Out of the 30 EU/EEA countries, the MOCHA CAs and EUSUHM congress members of 22 countries provided names of 161 stakeholders.

Data analysis
Our questions were open ended and were analyzed using thematic content analysis. The collected responses were coded by highlighting relevant parts of the answers. This facilitated further categorization, which led to the emergence of umbrella themes characterizing strategies to implement evidence-based research recommendations and means to tailor this to the audiences. In order to identify, analyze and report patterns (themes) within the data, the six-phase approach proposed by Braun and Clarke 15 was used. The analytical process led to identification of clusters of strategies for effective communication of evidence-based data, formats of recommendations and target audiences on which the stakeholders participating in the survey showed a convergence.

Ethics
The study was reviewed and approved by the ethical committee of the Faculty of Behavioural, Management and Social Sciences of the University of Twente under file number BCE17614, on 19 September 2017.

Results
In total, 99 (61.5%) of 161 nominated stakeholders started the questionnaire and 90 (55.9%) completed it; they were from 21 countries, comprising all EU Member States except Belgium, Cyprus, Estonia, France, Lithuania, Luxembourg, Malta, Slovenia and the UK, plus Norway and Iceland from the EEA (figure 1). Most respondents were experts in prevention of communicable diseases (vaccination as a tracer) and recognition of mental health problems in adolescents. The least numerous group comprised experts in treatment and monitoring of a chronic condition (figure 1).
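As an illustration of the aggregation step in the thematic content analysis described under Data analysis above, the short sketch below groups coded answer fragments under umbrella themes and reports each theme's percentage share, which is how figures such as those quoted later for the strategy themes can be derived. The theme labels follow the paper, but the respondent identifiers and coded fragments are invented placeholders, not actual survey data.

```python
from collections import Counter

# Hypothetical coded fragments: (respondent_id, umbrella_theme) pairs produced
# after the coding and categorization phases of the thematic analysis.
coded_responses = [
    (7, "dissemination of information"),
    (14, "influential stakeholders"),
    (19, "dissemination of information"),
    (22, "dissemination of information"),
    (32, "actions"),
    (49, "influential stakeholders"),
    (58, "actions"),
    (69, "dissemination of information"),
]

def theme_shares(coded):
    """Return the percentage of coded fragments falling under each umbrella theme."""
    counts = Counter(theme for _, theme in coded)
    total = sum(counts.values())
    return {theme: round(100 * n / total, 1) for theme, n in counts.items()}

if __name__ == "__main__":
    for theme, share in sorted(theme_shares(coded_responses).items()):
        print(f"{theme}: {share}%")
```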
A total of 62 out of 90 respondents answered the questions about the most effective strategy, target audience and format for communicating policy recommendations. They represented the three types of MOCHA primary health care system identified by the MOCHA project: GP-led (31.2%), paediatrician-led (33.3%) and combined (31.7%), along with others (4.8%) (figure 1). Some of the stakeholders declared expertise in more than one field: treatment and monitoring of a chronic condition (19 responses), prevention of communicable diseases (28 responses) and problem recognition and early diagnosis of mental health problems in adolescents (19 responses). Thus, we obtained 66 responses to the questions about evidence-based issues. Full statistical characteristics of respondents are available in the Supplementary tables S1-S5.

Strategies for effective communication of recommendations
We identified three over-arching approaches to effective communication of policies (figure 2): a. influential stakeholders' impact on communication processes regarding evidence-based recommendations for policies (36.2% of responses), b. dissemination of information in order to provide effective communication of evidence-based recommendations for policies (42.6% of responses) and c. necessary actions in order to strengthen the process of effective communication of evidence-based recommendations for policies (21.3% of responses). Consistently, dissemination played the most important role in each of the three groups of countries classified by system type (Supplementary table S6).

Influential stakeholders
The most influential stakeholders in disseminating the recommendations influencing optimal models of child health were authorities and policymakers, as those who are also responsible for further adoption and implementation of innovations. The significant role of health professionals and other associations (professional, medical and patient) was indicated as well (Supplementary table S7). The German respondent highlighted that 'community/public and associations should be also convinced of the new idea in order to put pressure on politicians who would make the decisions' (respondent 14, Germany).

Dissemination of information
Strategic tools for dissemination of information were identified regarding 'hard' law (legally binding) and 'soft' recommendations (not legally binding). 16 Respondents mentioned that not only new formal policies, but also soft guidelines and recommendations were important. Seminars, conferences and workshops are significant facilitators of the exchange of information, not only between countries but also between competent authorities. Experts highlighted the strategic role of media, including social media, in the dissemination of information about innovative solutions, facilitating the process of active implementation (Supplementary table S8). A Spanish respondent claimed that 'legislation, policies, standards, advice and guidance are necessary to provide the framework for addressing critical issues such as the provision of care of high quality, the improvement of access to care, the protection of rights' (respondent 49, Spain). It was stressed that strategies should be 'suited to the target audience's profile' (respondent 19), while media often determine what is visible for the public and politicians (respondent 22, Norway).
The media has power and can lead to the mobilization of societal action that creates the conditions and place for health issues on the national public agenda and can catalyse action at the national and local levels (respondent 49, Spain).

Actions
In the respondents' opinion, actions should be based on the implementation of long-term strategies or legislative changes, with the involvement of users at professional and non-professional levels. Promoting the model by spreading a positive message is key in the process of increasing awareness (Supplementary table S9). The regular renewal of the existing action plan and program of health care measures was said to be important (respondent 69, Croatia). In order to reach a broad scope of recipients who are aware of new child healthcare evidence recommendations, discussion among stakeholders about the pros and cons of a new model and a cost-benefit analysis of this model is recommended (respondent 72, Latvia). Importance was also given to meetings and personal encounters with authorities directly responsible for child health services (respondent 74, Sweden). The optimal strategy should be 'through well planned, sufficiently funded implementation work that targets service providers directly with content that appears to be useful in their everyday work' (respondent 7, Norway), emphasizing targeting to the particular needs of each stakeholder in their work context. The Austrian expert in problem recognition and early diagnosis proposed an education-based strategy, with activities oriented towards those who are working with the child in the field, towards children and towards parents (respondent 58, Austria).

Target audience
In order to identify the most appropriate recipients who should be informed about the development of a new model, we asked respondents to identify the most effective target audience for communicating recommendations, to ensure successful implementation of optimal models in their countries. Experts recognized the significant importance of both patients and their environment at the micro level as well as decision/policymakers and professional associations and organizations at the macro level. Many of them stressed that both the format of the recommendations and the strategy should be suited to the target audience. Observing the data, we divided the reported target audience for communicating recommendations into two layers: a. audiences in the proximal environment of the child/patient (42.2% of responses) and b. audiences in the distal environment of the child/patient (57.8% of responses). We noticed that experts from all three groups of MOCHA systems chose the distal audience as most relevant (Supplementary table S10).

Proximal audience
The proximal target audience consists of children/patients, families, parents, people supporting parents, self-help groups, health care workers, teachers and health professionals (Supplementary table S11). This group includes those who come into direct contact with the child and are the recipients of the implemented policies. In the opinion of respondents, they should also be included in the group of recipients of evidence-based solutions, as they can indirectly affect the policymaking process. The importance of health professionals who have the power to change the system was stressed (respondent 22, Norway). The Spanish respondent highlighted that parents should be informed by primary care professionals about new evidence-based solutions (respondent 24, Spain).
Distal audience
The audience in the distal environment of the child includes: decision makers, professional organizations and associations, politicians, child advocacy groups, patient associations, administration/civil servants, health insurers, governmental institutions, opinion leaders, authorities (including local authorities), knowledge centres, the general public/service users, journalists and health mediators (Supplementary table S12). This group includes specialists and decision makers who are directly involved in the policymaking process. The diversity of stakeholders mentioned by experts shows the need for adjusting the type of evidence to the various groups of recipients. The Latvian stakeholder stressed that 'politicians in particular in municipalities do not have a high level of health literacy, so they are definitely one target audience' (respondent 32, Latvia). Also, governmental institutions should be included in the group of primary target audiences (respondent 30, Finland). Policymakers and other stakeholders need to have the expertise to examine state-level data and differentiate specific risk sub-populations (respondent 49, Spain). On the one hand, the scientific approach is still relevant, popular and expected (Supplementary table S13), together with administrative and formalized reports, strategies and recommendations (Supplementary table S14). On the other hand, we noted that the data must be adapted to the general population and users, who are more aware of the emerging possibilities for improvement of the quality of care and services. Thus, a popular format, comprising media, social media and electronic media, was identified (Supplementary table S15). Additionally, there is a need for public involvement in the discussion of newly proposed solutions, which is correlated with health education activities at the primary care level/health personnel, meetings with parents and citizens, decision maker/citizen involvement, and public discussions including competent authorities and/or celebrities (Supplementary table S16).

Format of the recommendations
We also observed that the countries representing the combined and paediatrician-led MOCHA systems mostly chose the administrative format of recommendation as most relevant, whereas GP-led countries preferred the scientific format. However, the differences were minimal (Supplementary table S17).

Scientific and administrative format
The answers given by the respondents confirm that the format of advice 'should be suited to the target audience's profile, either individual or priority groups, i.e. peer-reviewed journal and/or seminar for stakeholders and professionals' (respondent 19). The Norwegian expert highlighted that 'reports, scientific publications, seminars and news items are either useless or make a temporary change. The format must appear useful for the person receiving it, and it must be followed up regularly to ensure actual implementation' (respondent 7, Norway). A Croatian respondent stressed that 'health professionals will like a peer-reviewed journal, politicians and decision makers would prefer EU report, and parents will react to the popular media' (respondent 69, Croatia).

Popular and personal format
The most relevant and effective format for patients is media, because 'patients should know what is possible and might be better' (respondent 44, Austria).
Public discussions with doctors should be facilitated by famous persons/celebrities (particularly in terms of immunization), supported by educational spots in the media, whereas scientific publications should be directed to medical professionals (respondent 66, Slovakia). Reports published in mass media and social networks, as well as innovative approaches, technology-based and peer-led approaches, may increase awareness amongst patients and also the general population (respondent 49, Spain). A Latvian expert stressed that 'evidence-based scientific publication in a peer-reviewed journal is good for scientists and writers but not for wider society'. He suggested that permanent and positive information in popular media and advocacy from the authorities could have the greatest benefit (respondent 32, Latvia). Others claimed that strong scientific evidence should be disseminated by social media (respondent 12, Italy).

Discussion
It is important to recognize that evidence-based policy, in order to be effective, needs to rely on appropriate strategies for the dissemination of scientific results. Based on the analysis of stakeholders' views, we characterized the strategies to communicate evidence-based research recommendations and the means to tailor this to the audiences, and we set it in the wider context of the recognized policy cycle 8 (figure 4). In our study, we identified three essential aspects that need to be taken into account when planning the introduction of an evidence-based innovation (model). Firstly, stakeholders representing the medical and patient environments should be considered a crucial component in the process of the dissemination of information. This is compatible with the definition of a stakeholder as 'any group or individual who can affect or is affected by the developed (. . .) system'. 17,18 Disseminated information about innovative solutions can take various forms and go through various channels. In particular, media impact was highlighted, consistent with views reported in the literature. These findings support communicating evidence in a way that appeals to specific complementary audiences. They make the difference between mechanical acceptance of a policy and enthusiastic adoption and successful implementation. Effective dissemination requires active circulation of the evidence/research and leads to positive local innovation. It fits the view of Greenhalgh et al. 19 that dissemination is an active, planned effort to persuade target groups to adopt an innovation, while implementation is an active and planned effort to mainstream it within an organization. 19 The consequence of appropriate dissemination and actions is adoption, which is the series of stages from first hearing about a product to finally applying it. 19 Successful transfer of the innovation requires tailoring the message to appropriate audiences. We identified two layers of the target audience: the proximal audience includes those who remain in close contact with the child and who indirectly influence the policymaking processes, and the distal audience comprises those who are institutionally responsible for child health services or play an advocacy role. This division is compatible with a classification identified by the MOCHA project, where two groups of children's agents were identified, agents of the proximal and distal child environment. 20
Finally, this study presented a cascade view of the most effective formats of recommendations (figure 3, based on Boere-Boonekamp et al. 21), which shows the baseline for the process of communicating and mainstreaming evidence-based policy, including publications or other academic reports. Even though scientific data are the main source of administrative recommendations and strategies, 'a solid research infrastructure is facilitating but not sufficient for evidence use'. 7 A powerful role is played by various kinds of media, which are significant channels for popularizing the research amongst a wider public. Messages and evidence that appear in popular media help to reach the recipient at a personal level. To conclude, influential stakeholders impact the process of effective research dissemination and guide necessary actions to strengthen the process of effective communication of recommendations. Communication of evidence-based results should be targeted to each audience's profile, both professional and non-professional, by adjusting appropriate communication formats.

Strengths and limitations
Our work drew on respondents from a large number of diverse European countries who are active in the functions of primary health care and working with different age groups. However, it was not possible to include stakeholders from all European countries and of all fields in the research. We are aware that the 61% response rate may limit representativeness, but collecting data from high-level decision makers is challenging. However, the stakeholders were carefully chosen by the CAs using criteria such as being knowledgeable and having a national-level view. We are aware that the proposed recommendations may have very different relevance for different interventions and for the choice of dissemination strategies. In applying the approaches emerging from the study at local level, national experts should adapt several approaches to communicating evidence, taking into account contextual determinants of child health policy, which we characterized in previous works. 22

Supplementary data
Supplementary data are available at EURPUB online.

Funding
The article is part of the work of WP9 within the project MOCHA (Models of Child Health Appraised), which is funded by the European Commission through the Horizon 2020 Framework (grant agreement number: 634201).

Conflicts of interest
None declared.

Data availability
The MOCHA data contain no patient information, but may contain other personal or institutional data, such as the source of a commentary. The MOCHA project has therefore resolved that source data will be curated on the MOCHA web site and be accessible via the Principal or other Partners through a curator function, so that data relevant to any enquiry can be supplied, and redaction effected, but also contextualization given.
Challenges in Pediatric Cardiac Anesthesia in Developing Countries

Introduction: Approximately 90% of the one million children worldwide born with a congenital heart defect do not have access to adequate pediatric cardiac care. The World Society for Pediatric and Congenital Heart Surgery, established in 2006, shifted the focus from providing individual pediatric cardiac care to developing global standards for the practice of pediatric cardiac surgery and professional education of the local teams. Materials and Methods: After recognizing the challenges of the local teams regarding providing safe anesthesia and functioning as a broader team, we focused our education on simplifying anesthetic procedures and advancing a structured team approach. The appropriate selection of patients and a simplified anesthetic technique should be the standard of care. We introduced a structured approach to daily education using just-in-time teaching, case-based discussions and simple skill-training simulation sessions. Furthermore, we enhanced the team-training approach by applying tools such as the WHO surgical safety checklist and implementation manual, SAFE communication, KDD with SMART aims, SCAMPs, advanced protocols of care and culture change tools. Results: Following a significant number of short missions to developing centers we have, within our NGO, succeeded in supporting the building and maintenance of several local pediatric cardiac centers with a structured approach to anesthesia and team building. Conclusion: The appropriate selection of patients is one of the most important contributing factors for decreasing morbidity and mortality rates in pediatric cardiac surgery patients. The anesthesia technique for pediatric cardiac procedures should be aimed at fast-track surgery, with early extubation as a goal. Regional blocks such as paravertebral and caudal blocks should be considered for perioperative pain control. By introducing a structured approach to daily education and by enhancing the team-training approach we have contributed to the development of sustainable pediatric cardiac centers in developing countries.

INTRODUCTION
It is striking that ∼90% of the one million children worldwide born with a congenital heart defect (CHD) do not have access to adequate pediatric cardiac care (1). The incidence of CHD ranges from 5 to 14 cases per 1,000 live births, with higher absolute numbers in developing countries (2)(3)(4). Acquired heart diseases such as rheumatic heart disease, endomyocardial fibrosis, Chagas disease, and Kawasaki disease are common in children in developing countries and frequently lead to premature death as a result of suboptimal medical care (1). According to the World Health Organization (WHO), a population of two million people requires a pediatric cardiac center performing 300-500 operations annually. That is not always the case in developing countries where, in specific areas, populations between 15 and 70 million are without a single pediatric cardiac center (5). In Asia, there is approximately one pediatric cardiac center for a population of 16 million. The distribution is even sparser in Africa, where one pediatric cardiac center covers a population of 33 million (6). Various non-governmental humanitarian organizations (NGOs) have been providing pediatric cardiac surgeries in developing countries for many years. The majority of these were short-term missions called "surgical safaris" (7).
The World Society for Pediatric and Congenital Heart Surgery, established in 2006, shifted the focus from providing individual pediatric cardiac care to developing global standards for the practice of pediatric cardiac surgery and professional education of the local teams (5). Furthermore, the Lancet Commission on Global Surgery, published in 2015, stated that all people should have access to safe, high-quality surgical and anesthesia care. The purpose of The Lancet Commission on Global Surgery is to make this vision a reality for the provision of quality surgical and anesthesia care for all (8).

"A journey of a thousand miles begins with a single step." - Chinese philosopher Laozi (circa 604 BCE - circa 531 BCE)

Many anesthesiologists join NGOs in various missions to developing countries and function as part of clinical, teaching, and research projects. Participating in NGO expeditions to Africa and Asia within a pediatric cardiac team, we have previously been exposed not only to the challenge of providing safe pediatric cardiac surgery but also to the challenge of providing safe general anesthesia to pediatric cardiac patients. Consequently, our NGO has identified existing local pediatric cardiac centers with the potential for growing and developing into sustainable pediatric cardiac centers. Currently, the primary aim of our team is no longer to provide pediatric cardiac care. Our primary aim is focused on providing training for the local team and advancing their ability to independently diagnose and treat pediatric cardiac patients.

Pathway
Since 2007, our NGO has visited India, Malaysia, Nigeria, Kenya, Tanzania, and Mauritius. The centers visited in developing countries were carefully identified. Our teams visited centers with an existing pediatric cardiac program where the required basic equipment and basic infrastructure were already in place. One team comprised a pediatric cardiac surgeon, a surgical fellow, a cardiologist, an anesthesiologist, a perfusionist, two intensivists, and two intensive care nurses. Our teams visited the local center for 1 week at a time. Continuation was provided for several months and occasionally for more than a year if required. Furthermore, our team served as a long-term, off-site collaborator for sustainable local centers.

Pediatric Cardiac Anesthesia and Team Approach
Since the visited centers had basic equipment and infrastructure in place, the major challenge for the visiting anesthesiologist was not a lack of the required equipment. In our experience, the major challenge was the lack of a dedicated and sufficiently educated pediatric cardiac anesthetic team. The local anesthetic teams were mainly adult trained and frequently required basic education about the anatomy, physiology and appropriate anesthetic agents used for induction and maintenance of anesthesia in pediatric cardiac patients. Nevertheless, support was required in the selection of adequate endotracheal tube (ETT) size, laryngoscope, intubation and ventilation techniques and ETT securing techniques, as well as in the selection of arterial and central line sizes, ultrasound-guided insertion techniques and securing techniques. A lack of knowledge regarding cardiopulmonary bypass cannula, circuit and oxygenator sizes (Figure 1), as well as an insufficient supply of blood and blood products, were often a supplementary challenge. In addition, a persistent safety threat was the risk of infection due to the reuse and recycling of disposable equipment by the local team (Figure 2).
Transthoracic and transoesophageal echocardiography machines were available in the majority of the centers. None of the local anesthesiologists performed echocardiography; echocardiography was performed by local cardiologists. Our team focused on improving basic pediatric cardiac anesthesiology techniques as the primary goal of our missions rather than introducing advanced echocardiography teaching for the local anesthetic team. Moreover, we identified a second considerable challenge for the local team: functioning as a broad team of experts. Common aims, team briefs, safety checks, and a structured, protocol-based approach to patient care were not in place. The local centers did not have a structured method of data collection related to anesthetic or surgical procedures in place prior to our visits. Therefore, our observations were limited to a descriptive rather than an objective study of the impact of our implementations.

Ways to Make It Better
After recognizing the challenges of the local team regarding providing safe anesthesia and functioning as a team, we focused our education on simplifying anesthetic procedures and advancing a structured team approach to patient care.

Teaching and Education
One of the goals of our team was introducing "just-in-time teaching" (9). Daily education in relation to intubation and ventilation techniques, ultrasound-guided line insertion (Figure 3), anesthetic agents and vasopressor support before and after cardiopulmonary bypass was provided. Weight- and age-related charts for ETT, laryngoscope, arterial and central line sizes, as well as cardiopulmonary bypass cannula, circuit and oxygenator sizes, were introduced together with securing techniques for the ETT and vascular lines. Furthermore, we implemented structured case-based discussions and basic simulation skill training according to current anesthetic guidelines 1. The chosen subjects of discussion reflected the majority of cases treated in the local center. Difficult airway training using difficult airway cards was conducted 2. In order to minimize the perioperative morbidity rate, infection prevention and control were introduced according to current standards 3. We provided a structured and simplified approach to the following areas.

Case Selection
• Appropriate selection of cases, including patients with simple cardiac defects (Figure 4)
• Patients with high morbidity and mortality risk, or at risk of complex surgical procedures, should be transferred to highly specialized centers
• Procedures with a high risk of major blood loss or risk of prolonged postoperative intensive care should not be undertaken

Preoperative Care
• Preoperative intravenous fluid resuscitation should be considered, as dehydration and malnutrition were recurrent patient-related issues
• Premedication should be considered
• Care providers should use universal precautions against possible exposure to infectious diseases (HIV, hepatitis) and as infection prevention

Type of Anesthesia
• Available appropriate anesthetic agents should be used in weight-related doses
• The anesthesia technique should be aimed at fast-track surgery with early extubation in the operating room (OR)
• Regional blocks such as paravertebral and caudal blocks should be considered for perioperative pain control

Cardiology, surgical, perfusion, and intensive care training was undertaken simultaneously by other team members.
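The weight- and age-related sizing charts mentioned above were introduced as bedside charts. Purely as an illustration of how such a chart can be generated, the sketch below encodes one widely taught age-based approximation (Cole's formula for uncuffed tube internal diameter, a cuffed variant, and an oral insertion depth estimate). These generic rules of thumb are an assumption for illustration only; they are not the specific charts used by the team and do not replace clinical judgement or local protocols.

```python
def ett_estimate(age_years: float) -> dict:
    """Rough age-based estimates for pediatric endotracheal tube selection.

    Uses the classic Cole approximation for uncuffed tubes (age/4 + 4 mm ID),
    a cuffed estimate (age/4 + 3.5 mm ID) and an oral insertion depth of
    age/2 + 12 cm. Intended only to illustrate how a sizing chart can be
    generated; real charts also account for weight and locally available sizes.
    """
    if age_years < 2:
        # Infants and very young children are usually sized from weight or
        # gestation rather than from this age-based formula.
        raise ValueError("Formula is usually applied from about 2 years of age")
    return {
        "uncuffed_ID_mm": round(age_years / 4 + 4.0, 1),
        "cuffed_ID_mm": round(age_years / 4 + 3.5, 1),
        "oral_depth_cm": round(age_years / 2 + 12.0, 1),
    }

if __name__ == "__main__":
    for age in (2, 4, 6, 8, 10):
        print(age, ett_estimate(age))
```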
Structured Daily Team Approach
In order to help the local team maintain the structure and competencies, we developed and introduced several standardized perioperative procedures tailored to the requirements of the local team:
• The WHO surgical safety checklist and implementation manual 4
• SAFE communication (situation awareness for everyone) 5
• Key Driver Diagrams (KDD) with SMART (specific, measurable, achievable, relevant, time-bound) aims (10)
• SCAMPs (Standardized Clinical Assessment and Management Plans) (11)
• Culture change tools (flat organizational structure) 6.

RESULTS
Supporting a pediatric cardiac center in a developing country so that it becomes self-sufficient and well-functioning requires time, individual enthusiasm, financial and personal investment, hard work, and the dedication of NGO members. After a significant number of short missions to selected centers we have, within our NGO, succeeded in supporting the building and maintenance of several local pediatric cardiac centers, using a structured approach to cardiology, surgery, anesthesia, perfusion, and intensive care education together with a team-building strategy. Sustained centers have developed designated cardiology, surgical, anesthetic, perfusion, and intensive care teams and advanced team-building skills. Currently, patient care is provided at a significantly higher level than prior to our visits, as rated by the local teams. Sustained centers report lower morbidity and mortality rates and high success in selected surgical procedures. One center successfully provides extracorporeal membrane oxygenation (ECMO) in selected cases after collaboration with our team. Our team still functions as a long-term, off-site collaborator for sustainable local centers. We are planning to provide overseas fellowships to local staff in order to advance their education and stimulate them to use the skills on return to their home country. Beyond that, we have established friendships for life. After several years of experience our motto became the famous phrase: "The success should not be measured by the number of successful operations of any given mission, but by the successful operations that our colleagues perform after we leave" (12). After establishing basic care for pediatric cardiac patients, we are currently aiming to establish data collection and objective measures for skill acquisition, success rate, team performance, morbidity, and mortality.

Correspondence Within Teams
Team interaction between the visiting and local teams is very important for successful collaboration. A friendly atmosphere with zero tolerance for judgmental or discriminating behavior is fundamental for team building. Well-educated, compliant members facing challenges with professionalism are the crucial element for successful correspondence within teams. Not long ago, somebody asked me what was absolutely essential to bring on the trip. I replied: firstly your smile, and then your ultrasound equipment.

DISCUSSION
The WHO supports the fact that "safe surgery saves lives" (13,14). Anesthesia is a specialty with low status in many developing countries, and anesthetic services are often underdeveloped (15). It is well known that the majority of anesthesia-related mortality in children is due to airway-related complications (16,17). Similarly, it is a recognized fact that the number of trained pediatric cardiac anesthesiologists in developing countries is very small (18). This leads to an increasing population of non-medical anesthetic providers trained without appropriate supervision (19).
Some anesthetic residents undertake their specialty training outside the country and frequently stay in developed countries (20). All of this contributes to a two- to three-fold increase in anesthesia-related morbidity and mortality in the developing world, compared with decreasing anesthesia-related complications in developed countries (21)(22)(23). Anesthesia is a technology-based specialty and relies on functioning monitoring equipment (2). Providing anesthesia in developing countries becomes highly challenging considering the fact that more than 19% of operating theaters worldwide have no pulse oximeter (23,24). According to the Millennium Development Goals program (Goal 4), an oxygen supply and a pulse oximeter should be provided to every healthcare facility, especially those involving pediatric patients (25). An ultrasound machine for line insertions and regional blocks is commonly not available, which further increases the risk of complications. Even well-established centers have an unreliable supply of basic utilities including electricity, water and oxygen (20,26,27), and more than 70% of developing countries lack a national blood transfusion service (1,16). In addition, there is frequently a shortage of resuscitative equipment, airway and suction devices and other intraoperative monitoring systems (24). Likewise, an increasing trend of corruption and neglect is related to the impaired healthcare systems in developing countries (25). The combination of these contributing factors has a negative impact on morbidity and mortality in developing countries (24,28). To address this concern, the main focus of the visiting anesthetic team should be to reduce total perioperative and anesthetic-related mortality with evidence-based best practice. Establishing local sustainable pediatric cardiac centers in developing countries that provide both initial and continued training has made the greatest impact on mortality rates in the last decade (18). It is worth remembering that adequate education of the local team requires the involvement of local and central government (28). Our NGO visited existing local pediatric cardiac centers with the potential for growing and developing into sustainable pediatric cardiac centers. In our experience, the major challenge of pediatric cardiac anesthesia was the lack of a dedicated and sufficiently educated team. The primary aim of our team was to provide training for the local team in order to advance their ability to independently diagnose and treat pediatric cardiac patients. A previous review highlights that visiting anesthesiologists frequently provide pediatric cardiac anesthesia with the aim of educating the local team (18). Several international Internet sites have been found to be helpful tools for local teams. The online tutorial of the week available on the World Federation of Societies of Anaesthesiologists (WFSA) website at http://www.anaesthesiologists.org and textbooks from the World Anesthesia Society are useful educational resources for the local team (29). Furthermore, the WFSA pediatric committee offers overseas fellowships and supports international Teach The Teachers courses (30). Overseas fellowships can provide a longer-term solution for the education of the local team (31). It is well known that the role of simulation is highly important in skill and team training of the local team. Mannequin-based resuscitation training has been found to be very effective (32,33). A significant mortality reduction in developing countries was achieved with simulation training in newborn resuscitation (34).
In general, most anesthesia-related cardiac events are preventable (35). Careful labeling of medications (Figure 5) and resuscitation equipment, including difficult airway carts, can improve patient safety. Our team has managed to introduce simple anesthetic protocols and charts for the local team, allowing easy interpretation and use. We have developed a structured approach to daily education, establishing just-in-time teaching, case-based discussions and simple skill-training simulation sessions. Furthermore, we have enhanced the team-training approach by applying tools such as the WHO surgical safety checklist and implementation manual, SAFE communication, KDD with SMART aims, SCAMPs, advanced protocols of care and culture change tools. By introducing a structured approach to daily education and by enhancing the team-training approach we have contributed to the development of sustainable pediatric cardiac centers in developing countries.

Limitations of the Study
The local centers did not have a structured method of data collection related to anesthetic or surgical procedures in place prior to our visits. Therefore, this study is subjective and observational, limited to a description of methods and techniques. Currently, the impact of our implementations is rated by the local teams. Sustained centers report lower morbidity and mortality rates and high success in selected surgical procedures.

CONCLUSION
Establishing local sustainable pediatric cardiac centers in developing countries that provide both initial and continued training has made the greatest impact on mortality rates in the last decade (18). It requires careful identification of an adequate center with the potential to grow into a sustainable pediatric cardiac center. The appropriate selection of cases, including patients with simple cardiac defects, is one of the most important contributing factors for decreasing morbidity and mortality rates in pediatric cardiac surgery patients. Anesthesia technique is a global challenge. The main focus of the visiting anesthetic team should be to reduce total perioperative and anesthetic-related mortality with evidence-based best practice. Simplification of care should guide the anesthetic technique for pediatric cardiac procedures, which should be aimed at fast-track surgery, with early extubation as a goal. Regional blocks such as paravertebral and caudal blocks should be considered for perioperative pain control. Correspondingly, team performance is a considerable challenge for the local team. By introducing a structured approach to daily education using just-in-time teaching, case-based discussions and simple skill-training simulation sessions, together with enhancing the team-training approach by applying tools such as the WHO surgical safety checklist and implementation manual, SAFE communication, KDD with SMART aims, SCAMPs, advanced protocols of care and culture change tools, we have contributed to the development of sustainable pediatric cardiac centers in developing countries.

AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
Thermal performance of a green roof based on CHAMPS model and experimental data during cold climatic weather

Green roofs are increasingly implemented in cities around the world. They have the potential to improve the thermal performance of building systems through evapotranspiration, thermal mass, insulation and shading. Several studies have analyzed the heat flow impact of green roofs in hot weather, but few studies have examined thermal performance during cold conditions. Roof membranes are known to fail in cold climates due to stress caused by large temperature fluctuations. A green roof can reduce the daily membrane temperature fluctuations (Tmax - Tmin) by an average of 7°C. This study presents an experimental investigation of a large extensive green roof on the Onondaga County Convention Center in Syracuse, NY, from November 2017 to March 2018. The model known as CHAMPS has been applied to simulate the temperature profile through the layers of the green roof. In early winter without snow, the temperatures of the growth medium and roof membrane follow the diurnal cycle of ambient air temperatures with smaller amplitude. An average seven-hour peak delay is observed. In extremely cold weather, snow acts as an insulator. The temperature of the growth medium on the Convention Center remains slightly above freezing and is relatively steady when there is significant snow, even during extremely cold temperatures. Heat flux is dominated by the temperature gradient between the interior space and the snow layer. On the basis of this work, it is shown that the CHAMPS model can play a valuable role in informing green roof design decisions.

INTRODUCTION
Green roofs normally consist of multiple layers, for example, vegetation, growth medium, drainage, waterproof membrane, and roof surface. They can vary from one design to another based on regional climates. While green roofs have been implemented in cities for years, interest in installing green roofs in both retrofit and new construction is still increasing. Thermal benefits of green roofs include saving energy for space heating and cooling, and mitigating urban heat island effects due to evapotranspiration, direct foliage shading, the insulation effect of the soil and other factors. Another benefit is that the green roof can block solar radiation, thus protecting the base roof membrane from temperature fluctuations. In winter, a green roof can shield the roof membrane from extreme cold and from sudden changes in ambient air temperatures. Daily temperature fluctuations create thermal stress in the roof membrane and reduce its longevity (Teemusk and Mander, 2010). The need for tools that designers and architects can use to assess the potential thermal benefits of green roofs is growing. Some studies investigate numerical models in DesignBuilder, PHOENICS, and EnergyPlus for green roof energy consumption simulations (Ran and Tang, 2017; Zhang et al., 2017; Lazzarin et al., 2005). In this paper, we present a new platform to inform the design process, the CHAMPS-BES model (combined heat, air, moisture and pollutant simulations in building envelope systems). This model is used to assess the long-term energy and durability performance of building envelope systems. Snow cover may provide a natural insulation layer in winter, which affects the cold-weather performance of green roofs. There are two objectives of this study, both related to minimizing the temperature variations that can damage the waterproof membrane on a roof.
The first objective is to determine the impact of adding a green roof as a retrofit to a traditional roof on a large building in a cold climate. The second objective is to determine the impact of a significant snowpack on the retrofitted green roof.

Study Site
This project focuses on the green roof on the Onondaga County Convention Center in Syracuse, NY. Syracuse is located at the northeast corner of the Finger Lakes region. It is known for its snowfall, in part due to the lake effect from nearby Lake Ontario. Based on data from the weather station at Hancock International Airport (National Weather Service, 1938-2016), there are on average 65.2 days with snow per year. Total yearly snowfall depth is increasing (Fig. 1). Snow mainly falls between the months of November and March. January is the coldest month and also has the most snowfall (Fig. 2). The Convention Center green roof was retrofitted in 2011. The roof consists of the following layers: a steel deck, a gypsum board, extruded polystyrene insulation, a second gypsum board, a drainage mat, a waterproof membrane and a coarse growth medium layer (Fig. 3). The insulation layer and the layers below are original to the building. Table 1 summarizes the main thermal properties of the layers of the green roof. The total area of the green roof is 5600 m². Plant species on the roof include Sedum album, Sedum sexangulare, Sedum rupestre, Sedum floriferum, and Phedimus takesimensis (Squier and Davidson, 2016).

Instrumentation and measurement
The thermal monitoring system of the green roof is equipped with CR1000 dataloggers and AM16/32B multiplexers (Campbell Scientific). A weather station on the roof measures air temperature, relative humidity, wind direction and wind speed. T109 temperature sensors (Campbell Scientific) are positioned at five different heights within the green roof (Fig. 3). This temperature profile is measured at five locations on the roof; only one location (station 1) is used in this study. Temperature data are averaged and reported at hourly intervals. Interior temperatures are controlled by an HVAC system. Temperature sensor Y is mounted on the ceiling of the Exhibit Hall of the Convention Center to measure the indoor temperature. Solar radiation data are obtained from the Syracuse University Weather Station, which is 1.9 kilometers from the green roof. The measurement period used for thermal analysis of the green roof is November 2017 to March 2018. Manual snow depth measurements have also been conducted.

CHAMPS simulation
Several input parameters are needed in the CHAMPS model. The weather data discussed above are used as inputs. Properties for the layers of the green roof are taken from manufacturer specifications. Snow cover is not considered in the simulation. The exchange coefficient of heat transfer is taken as 15 W/m²K, due to the large surface area of sedum plants. To assure the validity of the simulation results of CHAMPS, the model is validated using parameters for the green roof with experimental data from early November (11/1/2017-11/7/2017). After the validation, two case studies are performed based on the objectives:
• To determine the impact of the retrofit, CHAMPS is run for the case of the Convention Center traditional roof before the retrofit. The output of CHAMPS is compared with the experimental data from the green roof.
• To determine the impact of a snowpack on the green roof, CHAMPS is run for the green roof without snow. The output of CHAMPS is compared with experimental data from the green roof with a significant snowpack.
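To make the relative insulating roles of the roof layers (and of an added snow layer) concrete, the sketch below computes a steady-state series thermal resistance and the resulting conductive heat flux through a simplified layer stack. The layer thicknesses and conductivities are illustrative placeholders rather than the manufacturer values summarized in Table 1, and the calculation ignores evapotranspiration, moisture and wind effects.

```python
# Steady-state 1-D conduction through a layered roof: R_i = L_i / k_i,
# q = (T_in - T_out) / sum(R_i).  All property values below are illustrative only.
LAYERS = [
    # (name, thickness_m, conductivity_W_per_mK): placeholder values
    ("growth medium", 0.10, 0.50),
    ("drainage mat", 0.01, 0.20),
    ("waterproof membrane", 0.005, 0.20),
    ("gypsum board", 0.013, 0.16),
    ("extruded polystyrene", 0.10, 0.03),
    ("gypsum board", 0.013, 0.16),
]

def heat_flux(t_inside: float, t_outside: float, layers, snow_depth_m: float = 0.0):
    """Conductive heat flux (W/m^2) through the stack, optionally snow-covered."""
    resistances = [thickness / k for _, thickness, k in layers]
    if snow_depth_m > 0:
        # Snow conductivity varies widely with density; 0.2 W/mK is a rough guess.
        resistances.append(snow_depth_m / 0.2)
    return (t_inside - t_outside) / sum(resistances)

if __name__ == "__main__":
    print("no snow   :", round(heat_flux(20.0, -20.0, LAYERS), 1), "W/m2")
    print("0.1 m snow:", round(heat_flux(20.0, -20.0, LAYERS, snow_depth_m=0.1), 1), "W/m2")
```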
RESULTS

Winter thermal performance
During the experimental campaign period, there were 87 days with snowfall in Syracuse. The average ambient temperature on the roof was -0.24°C. Temperature profiles for nine days in early November and early January are shown in Fig. 4. The extruded polystyrene layer contributes most to the effective insulation across the roof layers; the largest temperature difference is between sensor A and sensor B. Early November represents a typical early winter without snow in Syracuse. The temperatures of the layers above the extruded polystyrene insulation (B, C, and G) follow the diurnal pattern of ambient air but with slightly smaller diurnal variation. An average delay of 7 hours in the peak temperature of the growth medium (G) relative to the ambient temperature is observed. Early January represents a typical snowy winter period in Syracuse, with a significant snowpack over the full nine-day period. Snow depth along the roof from west to east, measured on January 9, 2018, is highly variable (Fig. 5). The snow depth on top of station 1 was 0.1 m. Figure 4 shows that the temperature of the growth medium was roughly constant at around 0°C. This is true even when the ambient air temperature was -20°C on occasion. This shows that the impact of snow accumulation on roof temperature is significant. A similar finding was reported by Getter et al. (2011).

Model validation
The developed green roof model in CHAMPS has been validated using the data of the first week in November. The simulated growth medium temperature is compared with the measured data (temperature sensor G) in Fig. 6. On average, the CHAMPS model overpredicts the measured temperature by only 17%. The simulated green roof model appears to be reliable and can be used to simulate the traditional roof and the green roof without snow cover. (Figure 6: validation of the CHAMPS model using experimental data for layer G.)

Temperature fluctuations on the membrane
The daily fluctuation is defined as the difference between the daily maximum temperature and the daily minimum temperature. A traditional roof model was developed in CHAMPS by deleting the growth medium and drainage mat layers. The temperature fluctuations of the membrane on the traditional roof are far greater than those measured on the green roof in early winter (Fig. 7). The growth medium and drainage mat clearly play an important role in reducing temperature fluctuations on the membrane.

Insulation effect of the snow cover
The green roof is simulated without a snow cover using early January meteorological data in CHAMPS. The membrane temperatures without a snow cover are compared with the measured membrane temperatures under snow (Fig. 8). The role of snow accumulation in reducing temperature fluctuations is significant. Without snow cover, under the same weather conditions, the membrane temperature could range from -18°C to 0°C. Under the accumulation of snow, the protection provided by the growth medium becomes negligible compared with the protection provided by the snow. The benefit of having a green roof is decreased in cold weather. This result has been reported in other studies (Lundholm et al. 2014).
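A minimal sketch, assuming an hourly temperature series like the one logged by the CR1000 system, of how the two quantities used above can be computed: the daily fluctuation (daily maximum minus daily minimum) and the delay of a roof-layer temperature peak relative to ambient air, estimated here as the lag maximizing the cross-correlation (one reasonable choice, not necessarily the method used by the authors). The synthetic sine-wave series stand in for the real sensor data.

```python
import numpy as np

def daily_fluctuation(hourly_temps):
    """Daily max minus min for an hourly series whose length is a multiple of 24."""
    days = np.asarray(hourly_temps).reshape(-1, 24)
    return days.max(axis=1) - days.min(axis=1)

def peak_delay_hours(ambient, layer, max_lag=12):
    """Lag (in hours) at which the layer series best correlates with ambient air."""
    ambient = np.asarray(ambient) - np.mean(ambient)
    layer = np.asarray(layer) - np.mean(layer)
    corrs = [np.corrcoef(ambient[:-lag] if lag else ambient,
                         layer[lag:] if lag else layer)[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

if __name__ == "__main__":
    hours = np.arange(24 * 7)
    ambient = 5 * np.sin(2 * np.pi * (hours - 15) / 24) - 2   # synthetic air temperature
    medium = 2 * np.sin(2 * np.pi * (hours - 22) / 24) - 1    # damped, delayed growth medium
    print("mean daily fluctuation, ambient:", daily_fluctuation(ambient).mean())
    print("mean daily fluctuation, medium :", daily_fluctuation(medium).mean())
    print("estimated peak delay (h):", peak_delay_hours(ambient, medium))
```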
DISCUSSIONS
Green roofs modify membrane temperature fluctuations in the winter. During times without snow cover, exposed membranes absorb solar radiation and are subjected to moderate temperatures. At night, exposed membranes re-radiate the heat and their temperatures drop (Liu & Baskaran, 2003). Extreme temperature fluctuations are a major cause of membrane failure. The vegetation and growth media of green roofs improve the insulation properties of buildings and decrease the absorption of solar radiation, protecting the membranes. During snowy winter periods, snow cover promotes higher survival of perennial plants due to warmer soil temperatures. The snow cover also decreases temperature fluctuation at the membrane. In this study, the simulation using CHAMPS has been based on an energy balance through the layers. Wind effects and moisture properties have not been considered due to a lack of data, although these are crucial factors in assessing the overall energy performance of green roofs. Next, moisture and wind effects will be added to the simulation.

CONCLUSIONS
In this study, a large extensive green roof in Syracuse, NY, was monitored during the winter months to understand its thermal performance. Furthermore, a green roof model was developed and verified with the CHAMPS software. During the early winter months, the plants and growth medium add thermal mass that decreases the membrane temperature fluctuations. In very cold weather, snow accumulation acts as effective natural insulation, isolating the roof from the ambient environment. CHAMPS software enables the user to add a green roof to any roof design. It is a systems model accounting for heat, air, and moisture. CHAMPS is a useful tool for the quantitative evaluation of the energy benefits of green roofs under regional climates, and can be of value to designers when considering retrofit additions of green roofs on buildings.
Quorum and Light Signals Modulate Acetoin/Butanediol Catabolism in Acinetobacter spp.

Acinetobacter spp. are found in all environments on Earth due to their extraordinary capacity to survive in the presence of physical and chemical stressors. In this study, we analyzed global gene expression in airborne Acinetobacter sp. strain 5-2Ac02, isolated from a hospital environment, in response to quorum network modulators and found that they induced the expression of genes of acetoin/butanediol catabolism; acetoin and butanediol are volatile compounds shown to mediate interkingdom interactions. Interestingly, the acoN gene, annotated as a putative transcriptional regulator, was truncated in the downstream regulatory region of the induced acetoin/butanediol cluster in Acinetobacter sp. strain 5-2Ac02, and its functioning as a negative regulator of this cluster integrating quorum signals was confirmed in Acinetobacter baumannii ATCC 17978. Moreover, we show that acetoin catabolism is also induced by light and provide insights into the light transduction mechanism by showing that the photoreceptor BlsA interacts with and antagonizes the functioning of AcoN in A. baumannii, also integrating a temperature signal. The data support a model in which BlsA interacts with and likely sequesters AcoN under this condition, relieving acetoin catabolic genes from repression and leading to better growth under blue light. This photoregulation depends on temperature, occurring at 23°C but not at 30°C. BlsA is thus a dual regulator, modulating different transcriptional regulators in the dark but also under blue light, thus representing a novel concept. The overall data show that quorum modulators as well as light regulate the acetoin catabolic cluster, providing a better understanding of environmental as well as clinical bacteria.

INTRODUCTION
Acinetobacter baumannii has recently been recognized by the World Health Organization (WHO) as one of the most threatening pathogens deserving urgent action (Tacconelli et al., 2018). With the aid of new taxonomic tools and technological advancements, other members of the Acinetobacter genus have also been identified as causative agents of hospital-acquired infections and are gaining clinical relevance (Turton et al., 2010; Karah et al., 2011). Key factors determining their success as pathogens include their extraordinary ability to develop resistance to antimicrobials as well as to persist in the hospital environment despite adverse conditions such as desiccation, lack of nutrients, etc. (McConnell et al., 2013; Spellberg and Bonomo, 2014; Yakupogullari et al., 2016). It is known that some members of the genus can be transmitted by air. In fact, some genotypes of A. baumannii have been shown to survive for up to 4 weeks in the air in intensive care units (ICUs) (Yakupogullari et al., 2016). Although not very much studied, the importance of this kind of transmission is becoming increasingly clear, since it leads to recontamination of already decontaminated surfaces, transmission between patients, and airborne contamination of healthcare providers as well as of medical instruments (Spellberg and Bonomo, 2013). We have recently reported the genome sequence of Acinetobacter sp. strain 5-2Ac02 (closely related to Acinetobacter towneri), which was recovered from the air in an ICU of a hospital in Rio de Janeiro, Brazil (Barbosa et al., 2016). This strain was shown to harbor a much reduced genome and a higher content of insertion sequences than other Acinetobacter spp.
Moreover, four different toxin-antitoxin (TA) systems as well as heavy metal resistance operons were found encoded in its genome (Barbosa et al., 2016). Interestingly, some bacteria have been shown to produce and release a large diversity of small molecules, including organic and inorganic volatile compounds such as acetoin and 2,3-butanediol (BD), referred as bacterial volatile compounds (BVCs), which can mediate airborne bacterial interactions (Audrain et al., 2015). BVCs can mediate cross-kingdom interactions with fungi, plants, and animals, and can even modulate antibiotic resistance, biofilm formation, and virulence (Audrain et al., 2015). Several molecular mechanisms have been associated with the development of bacterial tolerance or persistence under stress conditions (environmental or drug-related) . Among these are included the general stress response (RpoS-mediated), tolerance to reactive oxygen species (ROS), energy metabolism, drug efflux pumps, the SOS response, and TA systems, with the quorum network (quorum sensing/quorum quenching) regulating many of them . The finding that many bacterial pathogens are able to sense and respond to light modulating diverse aspects related to bacterial virulence and persistence in the environment is particularly pertinent in this context. Indeed, light has been shown to modulate biofilm formation, motility, and virulence against C. albicans, a microorganism sharing habitat with A. baumannii, at environmental temperatures in this pathogen. Moreover, light modulates metabolic pathways including trehalose biosynthesis and the phenylacetic acid degradation pathway, antioxidant enzyme levels such as catalase, and susceptibility or tolerance to some antibiotics (Ramirez et al., 2015;Muller et al., 2017). In addition, light induced the expression of whole gene clusters and pathways, including those involved in modification of lipids, the complete type VI secretion system (T6SS), acetoin catabolism, and efflux pumps . Many of these processes are controlled by BlsA, the only canonical photoreceptor codified in the genome of A. baumannii, which is a short blue light using flavin (BLUF) protein. BlsA has been shown to function at moderate temperatures such as 23 • C but not at 37 • C by a mechanism that includes control of transcription as well as photoactivity by temperature (Mussi et al., 2010;Abatedaga et al., 2017;Tuttobene et al., 2018). Knowledge of these mechanisms will potentially enable the implementation of several clinical or industrial applications. In this study, we characterized the airborne Acinetobacter sp. strain 5-2Ac02, analyzing gene expression adjustments in response to environmental stressors such as mitomycin C and acyl-homoserine-lactones, which modulate the quorum network. The results showed that genes involved in the SOS response, TA systems, and heavy metal resistance were induced in response to mitomycin, while genes involved in acetoin and aromatic amino acid catabolism were modulated as a response to quorum sensing signals. The fact that acetoin catabolic genes were also found to be induced by light in A. baumannii prompted us to deepen the study on this metabolism. In bacteria, the butanediol fermentation is characterized by the production of BD and acetoin from pyruvate. The production of butanediol is favored under slightly acidic conditions and is a way for the bacteria to limit the decrease in external pH caused by the synthesis of organic acids from pyruvate. 
The catabolic α-acetolactate-forming enzyme (ALS) condenses two molecules of pyruvate to form one α-acetolactate, which is unstable and can be converted to acetoin by α-acetolactate decarboxylase (ALDC) or diacetyl as a minor by-product by non-enzymatic oxidative decarboxylation. Diacetyl can be irreversibly transformed into its reductive state acetoin, and acetoin can be reversibly transformed into its reductive state BD, both catalyzed by 2,3-butanediol dehydrogenase (BDH). The acetoin breakdown in many bacteria is catalyzed by the acetoin dehydrogenase enzyme system (AoDH ES), which consists of acetoin:2,6-dichlorophenolindophenol oxidoreductase, encoded by acoA and acoB; dihydrolipoamide acetyltransferase, encoded by acoC; and dihydrolipoamide dehydrogenase, encoded by acoL . Our results show that the acoN gene codes for a negative regulator of the acetoin/butanediol catabolic cluster and is involved in photoregulation of acetoin catabolism in A. baumannii through the BlsA photoreceptor. Most importantly, we provide strong evidence on the mechanism of light signal transduction, which is far from being understood for BlsA or other short BLUF photoreceptors, taking into account in addition that BlsA is a global regulator in A. baumannii. In this sense, we have recently shown that this photoreceptor binds to and antagonizes the functioning of the Fur repressor only in the dark at 23 • C, presumably by reducing its ability to bind to acinetobactin promoters, thus relieving repression at the transcriptional level as well as growth under iron limitation at this condition (Tuttobene et al., 2018). Here, we further show that BlsA directly interacts with the acetoin catabolism negative regulator AcoN at 23 • C but, in this case, in the presence of blue light rather than in the dark. In fact, growth on acetoin was much better supported under blue light than in the dark through BlsA and AcoN. Moreover, acetoin catabolic genes were induced at this condition in a BlsA-and AcoN-dependent manner. Opposite behavior was observed for blsA and acoN mutants, being BlsA necessary for the observed induction while AcoN for repression, thus indicating that BlsA antagonizes AcoN. Finally, yeast two-hybrid (Y2H) assays indicate that BlsA interacts with AcoN only under blue light but not in the dark. The results strongly suggest that BlsA interacts with and likely sequesters the acetoin repressor under blue light but not in the dark. Thus, in the presence of light, acetoin catabolic genes are relieved from repression resulting in much better bacterial growth in this condition. Here again, the phenomena depends on temperature, occurring at low-moderate temperatures such as 23 • C but not at 30 • C, consistent with previous findings of our group for BlsA functioning (Mussi et al., 2010;Abatedaga et al., 2017;Tuttobene et al., 2018). Bacterial Strains, Plasmids, and Media Bacterial strains and plasmids used in this work are listed in Table 1. Luria-Bertani (LB) broth (Difco) and agar (Difco) were used to grow and maintain bacterial strains. Broth cultures were Taylor et al. (2002) and Akinbowale et al. (2007). In bold are shown the resistance category. incubated at the indicated temperatures either statically or with shaking at 200 rpm. Y2H Plasmid Construction PCR amplification of blsA and acoN coding sequences was performed from A. baumannii ATCC 17978 genomic DNA using primers blsAdh and acoNdh (Supplementary Table S2). 
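For readers following the pathway description above, the short sketch below lays out the same steps as a lookup table — ALS, ALDC, the reversible BDH step, and the AoDH ES components (acoA, acoB, acoC, acoL) — exactly as named in the text. The breakdown products of acetoin are not listed because the text does not give them; the traversal helper is purely illustrative and is not part of any analysis pipeline used in the study.

```python
# Minimal sketch of the acetoin/2,3-butanediol pathway described above.
# Enzyme and gene assignments follow the text; the helper is illustrative only.

PATHWAY = [
    {"substrate": "pyruvate", "product": "alpha-acetolactate",
     "enzyme": "ALS (catabolic alpha-acetolactate-forming enzyme)", "genes": []},
    {"substrate": "alpha-acetolactate", "product": "acetoin",
     "enzyme": "ALDC (alpha-acetolactate decarboxylase)", "genes": []},
    {"substrate": "acetoin", "product": "2,3-butanediol",
     "enzyme": "BDH (2,3-butanediol dehydrogenase, reversible)", "genes": ["bdh"]},
    {"substrate": "acetoin", "product": "breakdown products (not specified in the text)",
     "enzyme": "AoDH ES (acetoin dehydrogenase enzyme system)",
     "genes": ["acoA", "acoB", "acoC", "acoL"]},
]

def steps_from(metabolite: str):
    """Return the pathway steps that consume a given metabolite."""
    return [s for s in PATHWAY if s["substrate"] == metabolite]

if __name__ == "__main__":
    for step in steps_from("acetoin"):
        genes = ", ".join(step["genes"]) or "n/a"
        print(f'{step["substrate"]} -> {step["product"]} [{step["enzyme"]}; genes: {genes}]')
```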
The amplification products were subsequently cloned into the BamHI and XhoI sites of Gateway entry vector pENTR3C (Invitrogen) (Supplementary Table S1). The cloned fragments were then transferred to pGBKT7-Gw and pGADT7-Gw Y2H vectors (Clontech) by using LR Clonase (Cribb and Serra, 2009;Tuttobene et al., 2018). In the yeast host, these plasmids express the cloned coding sequences as fusion proteins to the GAL4 DNA-binding domain (DB) or activation domain (AD), respectively, under the control of the constitutive ADH1 promoter. Automated DNA sequencing confirmed correct construction of each plasmid. Susceptibility to Antimicrobials and Heavy Metals (MICs) The antibiotic and heavy metal susceptibility profile by microdilution was determined according to CLSI recommendations ( Table 1). Heavy metal susceptibility was determined by broth microdilution following CLSI instructions for cobalt, chromium, copper, arsenic, and zinc (Akinbowale et al., 2007). The susceptibility to tellurite was determined by serial plate dilution, with concentrations ranging from 1 to 1024 µg/mL Escherichia coli K12 were used as reference strain (Akinbowale et al., 2007). The breakpoints adopted for resistance phenotype were as follows: ≥100 µg/mL for cadmium; ≥200 µg/mL for copper, arsenic, and zinc; ≥400 µg/mL for cobalt; ≥800 µg/mL for chromium; and >128 µg/mL for tellurite. Growth curves in the presence of heavy metals were performed as follow: one colony of Acinetobacter sp. strain 5-2Ac02 was grown overnight, diluted 1:100 in 20 mL of low nutrient LB broth, and incubated at 37 • C with shaking (180 rpm) . The cultures were grown for 4 h to the exponential phase; and then, the heavy metals were added. For each isolate, the proportion of survivors was determined: (i) in the control without heavy metals, (ii) in the presence of arsenic (0.50× MIC), (iii) in the presence of cupper (0.5× MIC). Bacterial concentrations (log 10 CFU/mL) were determined at 0, 2, 4, 24, and 48 h by serial dilution and plating on LB agar. All experiments were performed in duplicate. Gene Expression by Microarrays Under Stress Conditions: Mitomycin and AHLs Acinetobacter sp. strain 5-2Ac02 cells were grown in LB medium to an exponential phase about OD 600 = 0.5 before addition of 10 µg/mL of mitomycin C (SOS response) or a mixture of 1 µM each acyl-homoserine lactones composed by N-(butyl, heptanoyl, hexanoyl, β-ketocaproyl, octanoyl, and tetradecanoyl)-DL-homoserine lactones or 10 µM 3-oxo-dodecanoyl-HSL (3-oxo-C12-HSL) (Quorum Network). After incubation of the mixtures for 2 h, 1 mL of each culture was used for RNA extraction. RNA was purified using the High Pure RNA Isolation Kit (Roche, Germany). The microarrays were specifically designed for this strain using eArray (Agilent). The microarray assays were performed with 12,664 probes to study 2,795 genes. Labeling was carried out by two-color microarray-based prokaryote analysis and Fair Play III labeling, version 1.3 (Agilent). Three independent RNAs per condition (biological replicates) were used in each experiment. Statistical analysis was carried out using Bioconductor, implemented in the RankProd software package for the R computing environment. A gene was considered induced when the ratio of the treated to the untreated preparation was 1.5 and the p-value was <0.05 (Lopez et al., 2017b). Bacterial Killing Curves The MICs of ampicillin, ciprofloxacin, and mitomycin C were determined for Acinetobacter sp. strain 5-2Ac02 (0.5, 1, and 0.5 µg/mL) versus A. 
baumannii strain ATCC 17978 (8, ≤0.12, and 2 µg/mL). Briefly, an initial inoculum of 5 × 10 5 CFU/mL was incubated at 37 • C with shaking (250 rpm) in 20 mL of low nutrient LB broth (LN-LB; 2 g/L tryptone, 1 g/L yeast extract, and 5 g/L NaCl) (Lopez et al., 2017a,b). The cultures were grown for 4 h to the exponential phase; and then, the antibiotics were added. For each isolate, the proportion of survivors was determined: (i) in the control without antibiotic, (ii) in the presence of mitomycin C (0.25× MIC), (iii) in the presence of ampicillin (10× MIC), (iv) in the presence of ciprofloxacin (10× MIC), (v) in the presence of mitomycin C and ampicillin (0.25× MIC and 10× MIC), and (vi) in the presence of mitomycin C and ciprofloxacin (0.25× MIC and 10× MIC). Bacterial concentrations (log 10 CFU/mL) were determined at 0, 1, 2, 3, 4, 20, 24, 28, and 48 h by serial dilution and plating on Mueller-Hinton agar. All experiments were performed in triplicate. This protocol was performed following previously described indications (Hofsteenge et al., 2013). Finally, the persister sub-population was determined from the percentage of survivors. Gene Deletion in A. baumannii ATCC 17978 The negative regulator of the acetoin operon was deleted following the double recombination method, using the pMO-TelR plasmid and E. coli DH5α strain to multiply the plasmid with the construct (Hamad et al., 2009;Aranda et al., 2010). All primer sequences used were designed in this study and are listed in Supplementary Table S2. Isolation of RNA and Analyses of Genes Expression by qRT-PCR Acinetobacter baumannii cells were grown stagnantly in LN-LB at 37 • C with the addition of 10 µM of 3-oxo-C12-HSL or 10 µM of 3-hydroxy-dodecanoyl-HSL (3-OH-C12-HSL) when appropriate, or in M9 liquid medium supplemented with 15 mM acetoin as carbon source at 23 or 30 • C until an OD 600 of 0.4-0.6 was reached, as indicated. RNA extraction and qRT-PCR were performed following procedures described in Lopez et al. (2018) and Tuttobene et al. (2018). Results are informed as normalized relative quantities (NRQs) calculated using qBASE (Hellemans et al., 2007), with recA and rpoB genes as normalizers . The UPL Taqman Probes (Universal Probe Library-Roche, Germany) and primers used are listed in Supplementary Table S3. Growth in the Presence of Acetoin Wild-type and derivative strains A. baumannii ATCC 17978 were grown on acetoin as the sole carbon source. To test the ability of the A. baumannii strains used in this work to grow on acetoin as the sole carbon source, 1/100 dilutions of overnight cultures grown in LB Difco were washed and inoculated in M9 liquid medium supplemented with 5, 10, or 15 mM acetoin or in LB Difco medium and grown without shaking, under blue light or in the dark at 23 or 30 • C. Aliquots were removed at the times indicated in the figures in order to measure the A660 of the culture. Yeast Two-Hybrid (Y2H) Assays Yeast two-hybrid experiments were conducted following procedures described before (Cribb and Serra, 2009;Tuttobene et al., 2018). Saccharomyces cerevisiae Mav 203 strain (MATa, leu2-3,112, trp1-901, his3-D200, ade2-101, gal4D, gal80D, SPAL10::URA3, GAL1::lacZ, HIS3UAS GAL1::HIS3, LYS2, can1R, and cyh2R) was transformed with the different expression vectors. First, BlsA and AcoN were analyzed for self-activation. For this purpose, MaV203 yeast strain containing the pGAD-T7 empty vector was transformed with the DNA DB-fusion protein expressing vectors (pGBK-X) (X = BlsA or AcoN). 
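The qRT-PCR data in this work are reported as normalized relative quantities computed with qBASE, using recA and rpoB as reference genes. The sketch below illustrates that normalization scheme under simplifying assumptions: a common amplification efficiency of 2 and hypothetical Ct values. It is a minimal reimplementation of the idea, not the qBASE software itself.

```python
"""Sketch of qBASE-style normalization (Hellemans et al., 2007): relative
quantities are scaled to a calibrator sample and divided by the geometric mean
of the reference-gene quantities (recA, rpoB). Ct values are hypothetical."""
from statistics import geometric_mean

E = 2.0  # assumed amplification efficiency (100%); qBASE allows gene-specific values

def relative_quantity(ct_sample: float, ct_calibrator: float, e: float = E) -> float:
    """RQ = E ** (Ct_calibrator - Ct_sample); lower Ct means more transcript."""
    return e ** (ct_calibrator - ct_sample)

def nrq(ct: dict, ct_calibrator: dict, target: str, references=("recA", "rpoB")) -> float:
    """Normalized relative quantity of `target` for one sample."""
    rq_target = relative_quantity(ct[target], ct_calibrator[target])
    nf = geometric_mean(
        [relative_quantity(ct[r], ct_calibrator[r]) for r in references]
    )
    return rq_target / nf

# Hypothetical Ct values: one illuminated sample against a dark calibrator sample.
sample_light = {"acoA": 21.0, "recA": 18.2, "rpoB": 16.9}
calibrator_dark = {"acoA": 24.5, "recA": 18.0, "rpoB": 17.1}
print(f"acoA NRQ (light vs. dark calibrator): {nrq(sample_light, calibrator_dark, 'acoA'):.2f}")
```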
Conversely, MaV203 yeast strain containing the pGBK-T7 empty vector was then transformed with the AD-fusion protein expressing vectors (pGAD-Y) (Y = BlsA or AcoN). In addition, these strains were used for determination of the optimal 3-amino-1,2,4-triazole (3AT) concentration required to titrate basal HIS3 expression. MaV203/pGBK-X strains were afterward transformed with each pGAD-Y plasmids. Transformations using one or both Y2H plasmids were performed by the lithium acetate/single-stranded carrier DNA/polyethylene glycol method described in Gietz and Woods (2002), and plated in convenient minimal selective medium [synthetic complete (SC) medium without leucine (-leu) for pGAD-Y transformants, SC without tryptophan (-trp) for pGBK-X transformants, and SC-leu-trp transformants carrying both plasmids]. The plates were then incubated at 23 • C for 72 h to allow growth of transformants. A "Master Plate" was then prepared using SC-leu-trp media, in which we patched: four to six clones of each pGBK-X/pGAD-Y containing yeasts, four to six self-activation control clones pGBK-X/pGAD and pGBK/pGAD-Y (Y DNA-binding negative control), and two isolated colonies of each of the five yeast control strains (A-E). The plates were incubated for 48-72 h at 23 • C. This Master Plate was then replica plated to SC-leu-trp-his+3AT and to SC-leu-trp-ura to test for growth in the absence of histidine (his) and uracil (ura), respectively (his3 and ura3 reporter activation), under the different conditions analyzed, i.e., dark/light; 23/30 • C, for at least 72 h. For development of blue color as a result of β-galactosidase (β-Gal) expression, transformed yeasts were replica plated on a nitrocellulose filter on top of a YPAD medium plate and grown at the different conditions (dark/light; 23/30 • C). Then, the cells on the nitrocellulose filter were permeabilized with liquid nitrogen and soaked in X-Gal solution (5-bromo-4-chloro-3-indolyl-b-D-galactopyranoside in Z buffer (60 mM Na 2 HPO 4 , 40 mM NaH 2 PO 4 , 10 mM KCl, 1 mM MgSO 4 , pH 7.0) maintaining the different incubation conditions to be tested. Accession Numbers The genome of the Acinetobacter sp. 5-2Ac02 is deposited in GenBank database (GenBank accession number MKQS00000000; Bioproject PRJNA345289). The genome of A. baumannii ATCC17978 is deposited in GenBank (accession number CP018664.1). Finally, the gene expression microarray results are deposits in GEO database (GEO accession number GSE120392). Transcriptome Adjustments in Response to Mitomycin C Show Induction of Defense and Stress Response Systems in Acinetobacter sp. Strain 5-2Ac02 The airborne Acinetobacter sp. 5-2Ac02 isolate was first characterized to learn about its antibiotic as well as heavy metal susceptibility profiles, since its genome harbored genes of the ter (tellurite resistance) operon (terZABCDEF); klaA and klaB genes from the kil operon, which is associated with the previous one (O'Gara et al., 1997); as well as the arsenic-resistance operon arsC1-arsRarsC2-ACR3-arsH ( Table 1). The data presented in Table 1 show that Acinetobacter sp. 5-2Ac02 is susceptible to all antibiotic tested but resistant to copper as well as to arsenic, as previously reported (Barbosa et al., 2016). This information was confirmed by growth curves in the presence of these heavy metals (Supplementary Figure S1). 
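The array results reported below use the induction criterion stated in the Methods: a treated/untreated ratio of at least 1.5 with p < 0.05. The sketch applies only that final filtering step to hypothetical signal values; the underlying statistics in the study were computed with RankProd in R/Bioconductor, which is not reproduced here.

```python
"""Illustration of the induction call used for the microarray data: a gene is
considered induced when the treated/untreated ratio is >= 1.5 and p < 0.05.
Gene names, signals, and p-values below are hypothetical."""

FC_CUTOFF = 1.5
P_CUTOFF = 0.05

# (gene, mean treated signal, mean untreated signal, p-value) -- hypothetical numbers
array_results = [
    ("recA_like", 820.0, 210.0, 0.004),
    ("relB", 530.0, 105.0, 0.010),
    ("hypothetical_orf", 300.0, 290.0, 0.700),
]

def induced(treated: float, untreated: float, p: float) -> bool:
    return (treated / untreated) >= FC_CUTOFF and p < P_CUTOFF

for gene, t, u, p in array_results:
    status = "induced" if induced(t, u, p) else "not induced"
    print(f"{gene}: FC = {t / u:.2f}, p = {p:.3f} -> {status}")
```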
Arrays performed in the presence of the stressor mitomycin C revealed induction of SOS genes such as those coding for recombinases, polymerases, as well as DNA repair proteins, all with a fold change (FC) > 3 in Acinetobacter sp. strain 5-2Ac02. Also, genes coding for components of six TA systems were found to be induced with a FC > 4.9 in all cases: the RelBE systems (x2), the HigBA system, the ParDE system, and two new putative systems (x2). The data also showed induction of genes involved in heavy metal resistance genes, among which can be highlighted cobalt-zinc-cadmium, copper, and arsenic resistance genes. In addition, the gene coding for colicin V protein was induced with a FC of 3.716 (Table 2). Finally, many mobile element genes, which are extraordinarily abundant in the genome of Acinetobacter sp. 5-2Ac02 strain, were also induced (not shown). The TA systems have been shown to be involved both in tolerance and persistence . We next analyzed the fraction of tolerant or persister cells in populations of Acinetobacter sp. strain 5-2Ac02 by determining the time-kill responses in the presence of ampicillin, ciprofloxacin, mitomycin C, and combinations of these (Figure 1), following protocols described in Hofsteenge et al. (2013). The data show a large decrease in colonies of Acinetobacter sp. strain 5-2Ac02 during the first 24 h in the presence of ampicillin, ciprofloxacin, as well as in the presence of the combination of ampicillin and mitomycin C. Interestingly, the presence of a combination of mitomycin C with ciprofloxacin showed a tolerant population displaying slow growth at 4, 24, and 48 h (Figure 1) under this stress condition, which may result from activation of defense mechanisms such as the toxins and antitoxins systems as well as SOS response. Quorum Sensing Signals Modulate Expression of the Acetoin/Butanediol Catabolic Cluster in Acinetobacter spp., Being AcoN a Negative Regulator in A. baumannii Array expression studies of Acinetobacter sp. 5-2Ac02 in the presence of a mixture of N-acyl-homoserine lactones (AHLs) or 3-oxo-C12-HSL, which are modulators of the quorum network in A. baumannii , indicated induction of the acetoin/butanediol catabolic pathway genes, each with a FC > 1.5 (Tables 3, 4, respectively). We show the genomic arrangement of this cluster in the genomes of Acinetobacter sp. 5-2Ac02 and A. baumannii ATCC 17978 strain (Figures 2A,B). The same genomic configuration in A. baumannii strain ATCC 17978 was observed in 18 clinical A. baumannii strains isolated in the "II Spanish Study of A. baumannii GEIH-REIPI 2000-2010" which included 45 Spanish hospitals with 246 patients (GenBank Umbrella Bioproject PRJNA422585) (Supplementary Table S4). Ten genes were identified in the ATCC 17978 cluster, likely coding for a putative transcriptional regulator (gene 1) followed by a putative lipoyl synthase (gene 2), two oxidoreductases homologous to acoA and acoB (genes 3 and 4), a deaminase homologous to acoC (gene 5), a dehydrogenase homologous to acoD (gene 6), a BDH reductase (gene 7), and a BDH (gene 8), all of which are followed by a hypothetical protein (gene 9) and a putative transcriptional regulator (gene 10) (Figure 2A). Gene 2 is homologous to acoK (Figure 2A) and gene 1 is Genes showing FC >2 are indicated. a RAST server was used to identify the protein-coding genes, rRNA and tRNA genes, and to assign predictive functions to these genes. a RAST server was used to identify the protein-coding genes, rRNA and tRNA genes, and to assign functions to these genes. 
homologous to a positive transcriptional regulator (activator) homologous to acoR in different organisms (Figure 2A). The genomic configuration in Acinetobacter sp. strain 5-2Ac02 is similar to that of ATCC 17978 except that genes coding for the hypothetical protein and the putative transcriptional regulator (9 and 10 in ATCC 17978, respectively) are absent, while three genes coding for putative transposases were identified following gene 8 ( Figure 2B). Finally, in presence of the AHL mixture, the arrays also revealed increased expression (FC > 2) of genes involved in biodegradation of aromatic compounds (Table 4). We suspected that the absence of the putative transcriptional regulator in Acinetobacter sp. strain 5-2Ac02, designated as gene 10 in the genome locus of A. baumannii ATCC17978 (Figure 2A) and renamed here from now on as acoN, might be responsible for the induced expression of the acetoin catabolic genes in response to quorum network signals. We reasoned that whether this was the case, then a knockout mutant in acoN in A. baumannii ATCC 17978, which would resemble the situation in the so far genetically intractable Acinetobacter sp. strain 5-2Ac02, would result in induction of the acetoin catabolic genes in the presence of quorum sensing signals. As can be observed in Figure 3, the presence of quorum sensing signals resulted in induction of the transcript levels of BDH (bdh, acetoin/butanediol cluster) (RE > twofold) in the A. baumannii ATCC 17978 acoN mutant with respect to the wild-type strain. This provides the first clue that AcoN functions as a negative regulator of acetoin catabolic genes. Further studies showed that the acoN mutant grew much better in media supplemented with acetoin (5 mM) as sole carbon source than the wild-type strain in the dark at 23 • C (Figure 4A), which barely grew at this condition. The acoN mutant containing the pWHAcoN plasmid, which expresses acoN directed from its own promoter, behaved as the wild type showing a reduced ability to grow on acetoin as sole carbon source at 23 • C in the dark, restoring therefore the wild-type phenotype ( Figure 4B). Similar results were obtained at 30 • C and are discussed later in the manuscript. These results provide further evidence of the role of AcoN gene as a negative regulator of the acetoin catabolic cluster. Finally, expression of acetoin catabolic genes such as acoA, acoB, and acoC was induced approximately 150-folds in the acoN mutant with respect to the wild type at 23 • C in the dark (Figure 5). These results confirm the functioning of AcoN as a negative regulator of the acetoin catabolic pathway in A. baumannii. Light Modulates Acetoin Catabolism Through BlsA and AcoN at Moderate Temperatures in A. baumannii Acetoin catabolic genes such as acoA, acoB, acoC, and acoD have been previously shown to be induced by light at moderate temperatures in A. baumannii ATCC 19606 by RNA-seq studies . We thus studied whether light modulated acetoin catabolism in ATCC 17978 at 23 • C and found a differential ability of this strain to grow in the presence of acetoin as sole carbon source between light and dark conditions (Figure 4 and Supplementary Figure S2). Figure 4A shows that A. baumannii ATCC 17978 grows much poorer in 5 mM acetoin in the dark rather than under blue light at 23 • C. The blsA mutant, which lacks the only traditional photoreceptor encoded in the A. 
baumannii genome, behaved as the wild type in the dark both under blue light or in the dark (Figure 4A), as also did the mutant containing the empty vector pWH1266 (Figure 4B). In contrast, the blsA mutant containing pWHBlsA, which expresses blsA directed from its own promoter, grew better on acetoin under blue light than in the dark, restoring thus the wild-type phenotype ( Figure 4B). The acoN mutant, a RAST server was used to identify the protein-coding genes, rRNA and tRNA genes, and to assign functions to these genes. (2) hypothetical protein; (3) acoA, acetoin dehydrogenase E1 alpha-subunit; (4) acoB, acetoin dehydrogenase E1 beta-subunit; (5) acoC, dihydrolipoamide acetyltransferase (E2) acetoin; (6) acoD, dihydrolipoamide dehydrogenase subunit of acetoin dehydrogenase; (7) 2,3-BDH/2,3-butanediol dehydrogenase, S-alcohol forming, (S)-acetoin-specific; (8) 2,3-BDH/2,3-butanediol dehydrogenase, R-alcohol forming, (R)-and (S)-acetoin-specific; (9) hypothetical protein (A. baumannii ATCC17978) and transposases (Acinetobacter sp. 5-2Ac02 strain); (10) putative transcriptional regulator (AcoN, A. baumannii ATCC17978) and hypothetical protein (Acinetobacter sp. 5-2Ac02 strain). both under blue light and in the dark, behaved as the wild type under blue light, i.e., showed enhanced growth with respect to the wild type in the dark, congruent with the absence of the negative regulator ( Figure 4A); as also did the acoN mutant containing pWH1266 ( Figure 4B). The acoN mutant containing pWHAcoN, which expresses acoN directed from its FIGURE 3 | The BDH (bdh) gene is induced by quorum network signals in the acoN mutant. Estimation of the relative levels of the BDH mRNA by qRT-PCR in the presence of AHLs or 3-oxo-C12-HSL in the wild-type A. baumannii ATCC 17978 and acoN genetic backgrounds. The data shown are mean ± SD of the expression levels relative to the wild type from at least three biological replicates. Asterisks indicate significant differences in acoN compared to wild type as indicated by t-test (p < 0.01). own promoter, grew better on acetoin under blue light than in the dark, therefore restoring the wild-type phenotype ( Figure 4B). Similar results were obtained when acetoin 10 and 15 mM was used as sole carbon source (Supplementary Figure S2). These results show that light modulation of acetoin catabolism depends on the BlsA photoreceptor and the AcoN negative regulator in A. baumannii ATCC 17978. Opposite behavior is observed for blsA and acoN mutants regarding modulation of growth on acetoin by light, indicating that BlsA is necessary for the observed induction, while AcoN for repression. The overall evidence prompts us to postulate a model in which BlsA interacts with AcoN under blue light at 23 • C antagonizing this repressor, with the concomitant induction of acetoin catabolic genes' expression as well as growth on acetoin in this condition. It is important to mention that the viability of cells was not affected by light, as similar growth curves were obtained for the different strains in the complex media LB under blue light and in the dark (Figures 4C,D). Light Regulates Expression of the Acetoin Catabolic Pathway Through BlsA and AcoN at Moderate Temperatures in A. baumannii We then monitored AcoN functioning in response to light by measuring the expression of AcoN-regulated genes under different illumination conditions and genetic backgrounds. 
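The expression comparisons that follow contrast genotypes (wild type, blsA, acoN) and illumination conditions using one-way ANOVA with Tukey's multiple comparison test, as indicated in the figure legends. The sketch below reproduces that analysis pattern on hypothetical NRQ values; the use of SciPy and statsmodels here is an assumption for illustration, not a statement about the software used by the authors.

```python
"""One-way ANOVA followed by Tukey's test on hypothetical acoA NRQ values,
mimicking the genotype x light comparisons reported below (alpha = 0.01)."""
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical NRQs, three biological replicates per group.
groups = {
    "wt_dark": [1.0, 1.2, 0.9],
    "wt_light": [60.0, 75.0, 68.0],
    "acoN_dark": [130.0, 150.0, 141.0],
    "acoN_light": [128.0, 155.0, 139.0],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.01))
```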
To this end, the expression of the acetoin catabolic genes acoA, acoB, and acoC (Figures 5A-C respectively) was analyzed by qRT-PCR at different light conditions at moderate temperatures in A. baumannii strain ATCC 17978. Our results show that the expression levels of these genes were basal in the dark at 23 • C in M9 minimal medium with acetoin as sole carbon source. However, their expression was significantly induced in the presence of blue light (Figure 5). In blsA mutants, expression of acoA-C genes was basal and comparable between blue light and dark, and similar to that observed for the wild type in the dark at 23 • C (Figure 5). Thus, light modulates the expression of the acetoin catabolic genes, acoA-C through BlsA. On its side, the acoN mutant also lost photoregulation, i.e., expression levels of acoA-C genes were similar between the illuminated or dark conditions. However, for this mutant, expression levels were much higher even than those registered in the wild-type under blue light, i.e., in the induced condition ( Figure 5). Indeed, acoA expression levels in the acoN mutant were approximately twofold higher than in the wild type under blue light, while acoB and acoC expression levels were about threefold higher, and >100-folds higher than the wild type in the dark. Opposite behavior is observed for blsA and acoN mutants regarding modulation of acoA-C genes' expression, suggesting that BlsA is necessary for the observed induction while AcoN for repression. Altogether, BlsA antagonizes the functioning of AcoN under blue light at 23 • C, with the concomitant induction of the expression of AcoN-regulated genes at this condition. By analogy with a mechanism described previously for BlsA and Fur (Tuttobene et al., 2018), we hypothesized that BlsA might interact with the AcoN negative regulator, antagonizing its functioning. BlsA Interacts With the Acetoin Catabolic Negative Regulator AcoN Under Illumination at Moderate Temperatures in A. baumannii Yeast two-hybrid assay experiments were conducted to study if BlsA interacts with AcoN, using an adapted system from ProQuest TM Two-Hybrid System, as previously described (Tuttobene et al., 2018). The system includes strain Mav 203, which harbors three reporter genes with different promoters to avoid false positives: lacZ and two auxotrophic markers HIS3 and URA3. If the two proteins studied do interact, the appearance of blue color as well as growth in the absence of histidine or uracil would be observed. Gateway-system vectors pGAD-T7Gw and pGBK-T7Gw adapted to Y2H express each of the studied genes, blsA and acoN, as fusions to GAL4 DNA DB or AD. In each plate were also included self-activation controls (pGAD-T7Gw and pGBK-T7Gw empty vectors) as well as different strength interaction controls (A-E), to give an indication of the reporter genes' expression levels. In our previous report (Tuttobene et al., 2018), we observed that BlsA protein interactions depend on illumination and temperature conditions, so we decided to test its interaction with AcoN, the acetoin catabolism negative regulator, under different conditions. Figure 6 shows results of Y2H assay experiments at the different conditions analyzed. At 23 • C under blue light (Figure 6), the interaction between BlsA and AcoN was demonstrated by the appearance of blue color and growth in SC defined media without the supplementation of histidine or uracil, i.e., results were consistent for the three reporters analyzed. 
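The interaction call implied by the Y2H readout described above can be summarized as a simple decision rule: the reporters (lacZ, HIS3, URA3) must respond in the bait/prey combination while the self-activation controls remain negative. The following sketch encodes that rule with hypothetical data structures; it is a schematic of the scoring logic, not part of the experimental protocol.

```python
"""Schematic of the Y2H interaction call: positive only when the bait/prey
reporters respond and the self-activation controls stay negative."""
from dataclasses import dataclass

REPORTERS = ("lacZ", "HIS3", "URA3")

@dataclass
class Y2HResult:
    bait_prey: dict   # reporter -> bool (positive signal for bait + prey)
    bait_self: dict   # bait fusion + empty AD vector
    prey_self: dict   # prey fusion + empty DB vector

def call_interaction(res: Y2HResult) -> str:
    if any(res.bait_self.get(r) or res.prey_self.get(r) for r in REPORTERS):
        return "uninterpretable (self-activation)"
    positives = [r for r in REPORTERS if res.bait_prey.get(r)]
    if len(positives) == len(REPORTERS):
        return "interaction (all three reporters)"
    if positives:
        return f"weak/partial signal ({', '.join(positives)})"
    return "no interaction"

# Mimics the reported outcome at 23 °C: positive under blue light, negative in the dark.
light = Y2HResult({r: True for r in REPORTERS}, {r: False for r in REPORTERS}, {r: False for r in REPORTERS})
dark = Y2HResult({r: False for r in REPORTERS}, {r: False for r in REPORTERS}, {r: False for r in REPORTERS})
print("23 °C, blue light:", call_interaction(light))
print("23 °C, dark:", call_interaction(dark))
```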
The interactions occurred independently of the vector used, as both pGADblsA/pGBKacoN and pGADacoN/pGBKblsA combinations produced signals (Table 5). Growth on SC-Ura plates indicates a strong interaction between BlsA and AcoN in the conditions analyzed, since the URA3 reporter is the least sensitive (see http://www.invitrogen.com/content/sfs/manuals/10835031.pdf). Moreover, controls indicated absence of self-activation of each protein fused to DB or AD: (pGAD-T7/pGBKblsA or pGBKacoN) or (pGBK-T7/pGADblsA or pGADacoN) (Table 5). The overall data provide convincing evidence that BlsA interacts with AcoN at 23 °C under blue light. However, no positive signal was detected for the AcoN-BlsA interaction by Y2H assays for any of the reporters tested at 23 °C in the dark, while interaction controls behaved as expected (Figure 6 and Table 5). Altogether, the data indicate that BlsA interacts with AcoN in a light-dependent manner at moderate temperatures. Table 5 summarizes the results obtained for Y2H.

AcoN Does Not Modulate A1S_1697 Expression in Response to Light

We next analyzed the possibility that AcoN directly controls the expression of the other putative transcriptional regulator identified in this cluster (gene 1, A1S_1697) in A. baumannii (Figure 2), which by analogy with acoR from B. subtilis might be an activator of the acetoin cluster. If this hypothesis were correct, AcoN would modulate acoA-C in response to light indirectly, by modulating the functioning of the putative activator. For this purpose, we studied A1S_1697 expression at different illumination conditions and genetic backgrounds. If AcoN functions as a negative regulator of A1S_1697 expression in a light-dependent manner, then A1S_1697 transcript levels would vary between light and dark conditions. This variation would level off in the acoN mutant between light and dark, and reach higher expression levels than the wild type, had AcoN been the negative regulator. However, as seen in Figure 7, A1S_1697 transcript levels were similar between light and dark for all the genetic backgrounds analyzed, namely the wild-type strain and the blsA and acoN mutants. These results indicate that AcoN does not regulate A1S_1697 expression in response to light.

BlsA-AcoN Interaction Is Significantly Reduced at Higher Temperatures

Since BlsA and AcoN interact at 23 °C under blue light, we wondered whether this interaction is conserved at higher temperatures. Thus, BlsA-AcoN interactions were studied by Y2H at a temperature that supports yeast growth, such as 30 °C. A control at 23 °C under blue light was always included for each repetition. Figure 6 shows representative Y2H results indicating null or negligible BlsA-AcoN interactions at 30 °C, neither in the dark nor under blue light.

Light Does Not Modulate Acetoin Catabolism at Higher Temperatures

We next studied whether acetoin catabolic gene expression and growth were modulated by light at 30 °C, since no interaction between BlsA and AcoN was detected at this temperature. As expected, acoA, acoB, and acoC gene expression showed no differential modulation by light in the A. baumannii ATCC 17978 wild type, nor in the blsA or acoN mutants at this condition (Figure 8A). At 30 °C, acoA-C expression levels in the blsA mutant were similar to those of the wild-type strain both under blue light and in the dark, i.e., were repressed, while they were induced in the acoN mutant both under blue light and in the dark. This behavior was congruent with growth curves performed in M9 minimal media supplemented with acetoin as sole carbon source, which showed no significant difference between light and dark for any of the studied strains (Figures 8B,C). Here again, the acoN mutant showed enhanced growth consistent with the absence of the negative regulator, as also did the acoN mutant containing pWH1266 (Figures 8B,C). The overall data indicate that light does not influence acetoin catabolism at 30 °C or above, and are in agreement with available knowledge regarding BlsA functioning (Mussi et al., 2010; Golic et al., 2013; Abatedaga et al., 2017).

[Figure legend fragment (qRT-PCR data): values are mean ± SD of NRQs from samples grown in M9 minimal medium with acetoin as sole carbon source under blue light or in the dark at 23 °C, at least three biological replicates; different letters indicate significant differences by ANOVA followed by Tukey's multiple comparison test (p < 0.01).]

[Figure 6: BlsA interacts with AcoN only under blue light at moderate temperatures in A. baumannii. BlsA-AcoN interaction analyzed by Y2H at 23 °C and 30 °C, under blue light (L) or in the dark (D), following Cribb and Serra (2009) and Tuttobene et al. (2018); panels show the lacZ, HIS3 (histidine auxotrophy), and URA3 reporters, with reciprocal bait/prey combinations, self-activation controls, and interaction-strength control strains A-E; experiments performed in triplicate, representative results shown.]

[Table 5: AcoN-BlsA interaction determined by Y2H with GAL4 activation-domain (AD) and DNA-binding-domain (BD) fusion proteins; both combinations (AcoN_BD vs. BlsA_AD and BlsA_BD vs. AcoN_AD) gave the same result for the three reporters (β-Gal, HIS3, URA3); "+" indicates reporter expression induced by a positive interaction, "−" indicates no interaction (including empty-vector self-activation controls with pGAD-T7 and pGBK-T7), and ND indicates that self-interactions of AcoN and BlsA were not determined.]

[Figure 7: A1S_1697 expression does not depend on light nor on AcoN. RT-qPCR in A. baumannii ATCC 17978 wild-type, blsA, and acoN backgrounds at 23 °C under blue light (L) or in the dark (D); mean ± SD of NRQs from cultures grown in M9 with acetoin at 23 °C, at least three biological replicates; different letters indicate significant differences by ANOVA followed by Tukey's multiple comparison test (p < 0.01).]

DISCUSSION

Acinetobacter sp.
are extremely well adapted to different hostile environments thanks to several molecular mechanisms that enable survival under stress conditions. Here, we characterized the Acinetobacter sp. 5-2Ac02 strain isolated from the air in a hospital from Brazil. Acinetobacter sp. 5-2Ac02 showed an antibiotic susceptible profile. It includes a bla oxa−58 gene as well as tet genes, which have been related to resistance to tetracycline, coded in its genome. This susceptible strain carrying these cryptic genes hence represents a clinical threat as it may act as a reservoir of resistance genes. The high arsenic MIC for Acinetobacter sp. strain 5-2Ac02 may be attributed to the arsenic operon, arsC1-arsR-arsC2-ACR-arsH, which has only been described in the Pseudomonas stutzeri TS44 (Barbosa et al., 2016). We further analyzed the global gene expression adjustments in this strain in response to environmental stressors such as mitomycin C and found induction of genes coding for components of the SOS response, genes involved in numerous TA systems (RelBE, HigBA, parDE, and other two new TA systems) (Barbosa et al., 2016), and resistance to heavy metals and antioxidant enzymes. The TA systems have been shown to be involved both in tolerance and persistence, which presuppose the ability of the bacteria to grow slowly or enter into a dormant state, respectively, to cope with the presence of a stressor . It is thus not surprising that in the presence of mitomycin C and ciprofloxacin a tolerance phenotype was observed in killing curves (Figure 1). Furthermore, the ability of A. baumannii to survive for long periods of desiccation has been related to the achievement of dormant states, via mechanisms affecting control of cell cycling, DNA coiling, transcriptional and translational regulation, protein stabilization, antimicrobial resistance, and toxin synthesis (Gayoso et al., 2014). The fact that this airborne strain, in which desiccation is a common feature in its lifestyle, harbors and modulates numerous determinants leading to persistence in adverse environmental conditions is thus aligned with this notion. Under pressure from the quorum network, both AHLs and 3-oxo-C12-HSL compounds induced the expression of a cluster involved in acetoin/butanediol metabolism in Acinetobacter sp. 5-2Ac02, which was also shown to be induced by light in A. baumannii . Acetoin (3-hydroxy-2butanone) is a four carbon neutral molecule used as substrate by various microorganisms, with multiple usages in flavor, cosmetic, and chemical synthesis . In B. subtilis, acetoin is a significant product generated from glucose metabolism in aerobiosis. Given its neutral nature, acetoin allows the consumption of important quantities of glucose without acidification of the medium. It can also serve as a carbon reserve which can be expelled to the exterior and later re-internalized (Ali et al., 2001). Acetoin and BD are also BVCs, which can influence bacterial pathogenesis (Audrain et al., 2015) by altering the production of virulence factors (Venkataraman et al., 2014) or by affecting host cell functions (Kurita-Ochiai et al., 1995). In addition to the FIGURE 9 | Working model representing photoregulation of acetoin catabolism through AcoN and BlsA. At 23 • C in the dark, BlsA and AcoN do not interact, and AcoN represses expression of the acetoin catabolic genes acoA, acoB, and acoC (A). As a result, growth on acetoin as sole carbon source is severely affected. 
Under blue light, BlsA acquires an excited state now capable of interacting with AcoN, antagonizing its functioning, allowing expression from the acetoin catabolic operon, and supporting growth (B). Overall, BlsA finely tunes AcoN levels in response to light, modulating therefore acetoin catabolism. At 30 • C, both under blue light or in the dark, BlsA does not interact with AcoN maintaining therefore its functioning as a repressor (C,D), resulting growth severely affected at this condition. fundamental ecological interest, a better understanding of environmental bacteria and of the roles of BVCs (including BD), metabolic pathways, and mechanisms involved could provide new information about the bacterial response to the environment, thus potentially leading to clinical or industrial applications. Comparisons of the genetic organization of this cluster from Acinetobacter sp. 5-2Ac02 with that of A. baumannii ATCC 17978 guided us to further study a gene annotated as a putative transcriptional regulator, then designated AcoN by us. We show here that it behaves as a negative regulator of the acetoin/butanediol cluster in an A. baumannii and is involved in photoregulation of acetoin catabolism in A. baumannii through the photoreceptor BlsA. In this context, we have recently shown that BlsA binds to and antagonizes the functioning of the transcriptional repressor Fur only in the dark at 23 • C, likely by reducing its ability to bind to acinetobactin promoters with the concomitant enhanced gene expression and growth under iron deprivation at this condition (Tuttobene et al., 2018). In this work, we have broadened our understanding of BlsA functioning by showing that this photoreceptor can antagonize the functioning of other transcriptional regulators also under blue light such as AcoN. Our results support a model in which the system is at a basal level or repressed state in most conditions, for example in the dark at 23 • C as well as at 30 • C both in the dark or under blue light, i.e., AcoN is repressing acetoin catabolic genes' transcription resulting in basal gene expression levels as well as severely affected growth on acetoin (Figure 9). However, under blue light at 23 • C the system gets derepressed: BlsA binds to the acetoin repressor AcoN antagonizing its functioning, likely by reducing its ability to bind to acetoin catabolic genes' promoters, allowing thus their expression at this condition (Figure 9). Overall, the global regulator BlsA functions both under blue light and in the dark at low-moderate temperatures modulating different transcriptional regulators, such as Fur and AcoN, as well as the corresponding sets of regulated genes and the corresponding cellular processes. In this sense, BlsA probes to be unique among described photoreceptors regarding its dual activity under illumination and in the dark. Indeed, many photoreceptors have been shown to antagonize transcriptional repressors (Tuttobene et al., 2018), such as AppA from Rhodobacter sphaeroides (Pandey et al., 2017), PixD from Synechocystis sp. PCC6803 (Fujisawa and Masuda, 2017), and YcgF from E. coli (Tschowri et al., 2012). However, their functioning has been reported to occur in the dark for the first two or under blue light for the last one. This constitutes therefore the first report showing that a single photoreceptor can act both under blue light and in the dark for differential modulation by light of diverse cellular processes. 
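The working model discussed above can be condensed into a small truth table over the conditions actually tested. The function below is a hypothetical summary of that model — derepression of the acetoin operon only when BlsA is present, blue light is on, and the temperature is low-moderate (23 °C rather than 30 °C) — and is not a quantitative description of BlsA photochemistry.

```python
"""Compact restatement of the working model: BlsA sequesters AcoN only under
blue light at low-moderate temperature, derepressing the acetoin catabolic genes.
The 23 °C threshold reflects the conditions tested, not a general temperature model."""

def acetoin_operon_state(temperature_c: float, blue_light: bool,
                         blsA_present: bool = True, acoN_present: bool = True) -> str:
    if not acoN_present:
        return "derepressed (no repressor)"           # acoN mutant: high expression regardless of light
    blsa_active = blsA_present and blue_light and temperature_c <= 23
    if blsa_active:
        return "derepressed (BlsA antagonizes AcoN)"  # wild type, blue light, 23 °C
    return "repressed (AcoN bound)"                   # dark, 30 °C, or blsA mutant

for temp in (23, 30):
    for light in (False, True):
        print(f"{temp} °C, {'light' if light else 'dark '}: {acetoin_operon_state(temp, light)}")
```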
The fact that BlsA-Fur modulates photoregulation of iron uptake, while BlsA-AcoN modulates photoregulation of acetoin catabolism in A. baumannii at low-moderate temperatures such as 23 • C but not 30 • C, is consistent with previous findings of our group. In fact, we have previously showed that BlsA integrates a temperature signal in addition to light by mechanisms affecting different points of regulation. On the one side, blsA expression levels are very much reduced at 30 or 37 • C with respect to 23 • C, which correlates with negligible photoreceptor levels in the cells at 37 • C (Abatedaga et al., 2017;Tuttobene et al., 2018), while the other point of control by temperature affects BlsA photoactivity (Abatedaga et al., 2017). The mechanism by which BlsA perceives light and differentially binds to transcriptional regulators is not clear and could result from differential properties displayed by the photoreceptor at each condition, for example regarding the oligomerization state. In this sense, our results show that BlsA forms oligomers both under blue light or in the dark at 23 • C (Tuttobene et al., 2018). Yet, variations in the composition or order level of these oligomers at each condition could account for differential functioning, as is the case of Synechocystis sp. PCC6803 PixD (Fujisawa and Masuda, 2017). Many questions arise from our findings such as why photoregulation of acetoin catabolism at moderate temperatures has evolved in this pathogen. Likely, the answer lies in the lifestyle carried out by the microorganism at this condition. In this context, and as mentioned before, it has been shown that utilization of BD, a common fermentation product of P. aeruginosa co-habitant bacteria, significantly increases virulence and infection of the microorganism (Venkataraman et al., 2014;Nguyen et al., 2016;Liu et al., 2018). The activation of the pathway of BD utilization through acetoin by light observed could plausibly go in this same sense too in A. baumannii. Indeed, we have already seen that light induces factors related to virulence and/or persistence in the environment such as the type VI secretion system T6SS, the phenylacetic acid catabolic pathway, trehalose biosynthesis, tolerance to antibiotics, production of antioxidant enzymes, etc. , which could ultimately contribute to persistence and competition with other microorganisms in the habitat. Future experiments will be devoted to provide a detailed characterization of the mechanism of photoregulation directed by BlsA, AcoN, and their targets. First, we will conduct gel mobility assays (EMSA) to prove that AcoN is a DNA-binding transcriptional regulator, as is strongly suggested by BLAST sequence homology analyses, which show 97-100% identity with proteins annotated as sigma-54-dependent Fis family DNA-binding transcriptional regulators in A. baumannii. If BlsA interacts with AcoN under blue light avoiding or reducing its ability to bind to target promoter regions, as proposed by the evidence accumulated in this work, then the addition of BlsA to these EMSA assays should reduce the delay observed for the AcoN-DNA probe. DNase protection assays will further characterize the AcoN-DNA binding region. Furthermore, by solving the 3D structures and conducting ultrafast structural dynamic studies of BlsA alone as well as bound to AcoN under blue light, we expect to gain detailed knowledge on structural as well as photochemical aspects of the light signal transduction mechanism. 
Finally, we show in this work that quorum network modulators as well as light regulate the acetoin catabolic cluster. Whether these are independent signals or share all or part of the signal transduction cascade components is currently under study in our laboratories.

DATA AVAILABILITY

The datasets generated for this study can be found in GEO, accession GSE120392.

AUTHOR CONTRIBUTIONS

MRT and GLM performed the experiments. PC performed the experiments and collaborated in writing the manuscript. LF-G, LB, and AA performed the experiments and constructed the mutant strain. RER and RL-R analyzed the experiments. FF-C and IB analyzed the array studies. BB, RT, ML, and GB developed the RT-PCR experiments. MT designed the experiments. MAM and MT designed the experiments, wrote the manuscript, and provided funding.

FUNDING

This study was funded by grant PI16/01163 awarded to MT within the State Plan for R+D+I 2013-2016 (National Plan for Scientific Research, Technological Development and Innovation 2008-2011) and co-financed by the ISCIII-Deputy General Directorate for Evaluation and Promotion of Research - European Regional Development Fund "A way of Making Europe" and Instituto de Salud Carlos III FEDER, Spanish Network for the Research in Infectious Diseases (REIPI, RD16/0016/0001, RD16/0016/0006, and RD16/0016/0008), and by the Study Group on Mechanisms of Action and Resistance to Antimicrobials, GEMARA (SEIMC, http://www.seimc.org/). MT was financially supported by the Miguel Servet Research Program (SERGAS and ISCIII). RT and LF-G were financially supported by, respectively, a SEIMC grant and a predoctoral fellowship from the Xunta de Galicia (GAIN, Axencia de Innovación). BB was financially supported by CAPES, Process PDSE 99999.001069/2014-04. This work was also supported by grants from the Agencia Nacional de Promoción Científica y Tecnológica (PICT 2014-1161) and ASaCTeI (Ministerio de Ciencia, Tecnología e Innovación Productiva de la Provincia de Santa Fe, grant 2010-147-16) to MAM. PC, GLM, and MAM are career investigators of CONICET. MRT is a fellow of the same institution.
Supplementation with a mixture of whole rice bran and crude glycerin on metabolic responses and performance of primiparous beef cows This study investigated the effect of a supplement containing whole rice bran and crude glycerin for 21 days before mating on metabolic, productive, and reproductive responses of 28 primiparous suckling beef cows. Cows were randomly assigned to a control group (CON, n = 14), grazing on grasslands, and a supplemented group (SUP, n = 14), grazing on grasslands and supplemented daily individually with 1 kg dry matter (DM) of whole rice bran + 550 mL crude glycerin (224 g kg−1 DM of methanol) per cow. After 33 days of natural mating, cows that had not expressed estrus were subjected to a fixed-time artificial insemination protocol. Ten days after the insemination program, bulls were reintroduced for 21 days. Supplementation increased milk yield (SUP: 5.7±0.2 vs. CON: 5.0±0.2 kg d−1), milk protein content (SUP: 3.1±0.2 vs. CON: 2.8±0.2%), and body weight of cow (SUP: 379±2 vs. CON: 373±2 kg) and calf (SUP: 150±2 vs. CON: 142±2 kg). Supplementation improved the energy balance, increased plasma concentrations of cholesterol (SUP: 223.2±6.4 vs. CON: 202.1±6.4 mg dL−1) and glucose (SUP: 72.0±1.2 vs. CON: 68.6±1.2 mg dL−1), and reduced non-esterified fatty acids (SUP: 0.45±0.02 vs. CON: 0.56±0.02 mmol L−1). The percentage of cows on superficial anestrous after supplementation was greater in SUP than in CON group (57 vs. 21%, respectively); however, no difference in final pregnancy rate was found (SUP: 79 vs. CON: 64%). There was no evidence that the ingestion of crude glycerin with high content of methanol induced clinical or hepatic disorders. Supplementation of whole rice bran and crude glycerin is not toxic, and can improve the energy balance, reflecting in increase in milk yield and calf growth, with a slight effect on the reproductive activity. Introduction In extensive pastoral systems for meat production, primiparous suckled cows have the lowest reproductive efficiency and wean the lightest calves, which reduces the productivity of the herd (Bellows et al., 1982).The main cause of reproductive failure is the prolonged postpartum anestrus, induced by undernutrition (Short et al., 1990;Hess et al., 2005) and suckling (Williams, 1990).The nutrient supply of grasslands during the winter is insufficient to meet the requirements of the growing fetus in the last third of the pregnancy period, causing a negative energy balance that continues during early postpartum due to the demand for milk production (Bell, 1995;Astessiano et al., 2013).The negative energy balance is evidenced by a decrease in body condition score (BCS) and endocrine changes, such as an increase in non-esterified fatty acids (NEFA) and a decrease in glucose and insulin, which have a negative impact on follicle growth and ovulation (Wiltbank, 1970;Mulliniks et al., 2011). 
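As a quick orientation to the treatment contrasts quoted in the abstract, the sketch below computes the relative SUP versus CON differences for the main outcomes. The means are copied from the text; only the percentage calculation is added.

```python
"""Relative SUP vs. CON differences for the outcomes quoted in the abstract.
Means are taken directly from the text; only the percentage arithmetic is new."""

outcomes = {
    # outcome: (SUP mean, CON mean, unit)
    "milk yield": (5.7, 5.0, "kg/d"),
    "milk protein": (3.1, 2.8, "%"),
    "cow body weight": (379, 373, "kg"),
    "calf body weight": (150, 142, "kg"),
    "plasma cholesterol": (223.2, 202.1, "mg/dL"),
    "plasma glucose": (72.0, 68.6, "mg/dL"),
    "plasma NEFA": (0.45, 0.56, "mmol/L"),
}

for name, (sup, con, unit) in outcomes.items():
    rel = (sup - con) / con * 100
    print(f"{name}: SUP {sup} vs. CON {con} {unit} ({rel:+.1f}%)")
```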
Postpartum supplementation can overcome, at least partially, pre-partum undernutrition (Perry et al., 1991;Ciccioli et al., 2003).Short-term supplementations before or during the mating period, with or without association with temporary weaning, are alternatives to increase pregnancy rates in cows with sub-optimal BCS (Pérez-Clariget et al., 2007;Soca et al., 2013).The supplement most frequently used in these studies has been whole rice bran, an energy nutrient with 130-180 g kg −1 of crude protein (CP) (Wang et al., 2012).On the other hand, the biodiesel industry has increased the availability of crude glycerin that can be used in ruminant nutrition (Donkin, 2008).The main component of crude glycerin is glycerol, a powerful gluconeogenic alcohol (Alexander et al., 2010).However, the main disadvantage of crude glycerin is that its methanol content can impair liver function (Schröder and Südekum, 1999). The hypothesis of this study was that short-term supplementation before mating with whole rice bran and crude glycerin with high content of methanol improves the energy balance and the performance of primiparous beef cows grazing grasslands without impairing liver function.The aim of this study was to evaluate the effect of supplementation for 21 day before mating with whole rice bran and crude glycerin with high level of methanol on body weight, body condition, milk production, hormonal and metabolic profiles, ovarian activity, pregnancy rate, and liver function in primiparous beef cows and the growth of their calves grazing grasslands. Material and Methods The experiment was conducted in an experimental station in eastern Uruguay (32º S, 54º W) according to the experimental procedures approved by the Animal Experimental Committee of Universidad de la República (UdelaR). At the start of supplementation (day 0), BW and BCS were 371±7 kg and 3.8±0.1,respectively.All the cows were suckling and in deep anestrus confirmed by the absence of a corpus luteum and the presence of follicles <9 mm in diameter in the ovaries in two ultrasound studies nine days apart (Wiltbank et al., 2002). Cows were paired based on DPP, BCS, BW, genotype (crossbred vs. pure) and sex of the calf.One member of each pair was randomly assigned to one of the following treatments: Control group (CON, n = 14): grazing grasslands with no supplementation; and Supplemented group (SUP, n = 14): grazing grasslands and supplemented daily with 1 kg DM of whole rice bran + 550 mL crude glycerin per cow for 21 days before the mating period.Whole rice bran and crude glycerin were premixed before individual supplementation.The metabolizable energy (ME) available in the supplement was 5.50 Mcal d −1 , according to NRC (2001), and the CP content was 152 g d −1 . Calves were separated from their mothers for the first 14 days of the supplementation period while the cows were supplemented (30 min) to avoid interference, but they remained within visual, auditory, and olfactory contact.During the last 7 days of the supplementation period, calves of the CON and SUP groups with 61±1 days of age were separated from their mothers and visual, auditory, and olfactory contact was prevented.During the temporary weaning, calves were kept separated in a small paddock and supplemented daily with 0.9 kg DM of alfalfa (Medicago sativa) hay bales per animal and 1.1 kg DM of early weaning ration (Bioración, Melo, Uruguay), containing 180 g kg −1 of CP, per animal.Free access to water and shade was provided. 
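The supplement described above supplied 5.50 Mcal of metabolizable energy and 152 g of crude protein per cow per day. The sketch below shows how such totals are assembled from component amounts and nutrient densities; the per-ingredient densities and the dry-matter equivalent assumed for 550 mL of crude glycerin are illustrative placeholders, not values reported in the study.

```python
"""Assembling the daily nutrient supply of the supplement (reported as 5.50 Mcal ME
and 152 g CP per cow per day). Per-ingredient densities are illustrative placeholders
chosen to land near the reported totals; they are not values from the paper."""

ingredients = {
    #                   kg DM   ME (Mcal/kg DM)  CP (g/kg DM)
    "whole rice bran": (1.00, 3.0, 140),
    "crude glycerin":  (0.69, 3.6, 20),  # 0.69 kg DM for 550 mL is itself an assumption
}

me_total = sum(kg * me for kg, me, _ in ingredients.values())
cp_total = sum(kg * cp for kg, _, cp in ingredients.values())
print(f"ME supplied: {me_total:.2f} Mcal/d (paper reports 5.50 Mcal/d)")
print(f"CP supplied: {cp_total:.0f} g/d (paper reports 152 g/d)")
```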
The first mating period lasted 33 days and started when cows were at 68±1 DPP. The breeding soundness of the bulls was tested two months prior to the beginning of the breeding season. Estrus was detected 3 times per day (7.00 h, 13.00 h, and 19.00 h); a cow was considered in estrus when it accepted being mounted by the bull. Cows that did not show estrus during this period were subjected to a fixed-time artificial insemination program. The protocol started at 101±1 DPP in the morning; an inert silicone intravaginal device containing 1 g of progesterone (P4; DIB®, Syntex Laboratory, Buenos Aires, Argentina) was placed and 2 mg of estradiol benzoate (Syntex Laboratory, Buenos Aires, Argentina) were injected. At the moment of DIB® withdrawal, in the morning of 108±1 DPP, 500 mcg of cloprostenol (Ciclase D®, Syntex Laboratory, Buenos Aires, Argentina) and 400 IU of equine chorionic gonadotrophin (Novormon®, Syntex Laboratory, Buenos Aires, Argentina) were injected. On the following day, 1 mg of estradiol benzoate was applied. All hormones were injected intramuscularly. The fixed-time artificial insemination was performed 52-56 h after removal of the DIB®.

All cows were managed as a single group during the entire experiment; they grazed together in the same pens of native grass, with forage availability greater than 2000 kg DM ha−1 (minimum: 2121±515, maximum: 6757±969 kg DM ha−1). Every month, cows were weighed and forage availability was determined by the double sampling method (Haydock and Shaw, 1975) using a 50 cm × 50 cm square, with a five-point scale and two replicates, cutting the forage at ground level, and herbage allowance was estimated. Forage height was determined as described previously (Soca et al., 2007). The green/dry mass ratio was estimated by visual assessment in the sampling square. These determinations were performed before animals were placed in the pens. Forage height was always greater than 15 cm (minimum: 16±2, maximum: 26±4 cm). The green/dry mass ratio decreased towards the end of winter and increased in spring, from 48/52 in August to 84/16 in November. The predominant species were Axonopus sp., Paspalum dilatatum, Paspalum notatum, Paspalum quadrifarium, Stipa sp., Cynodon dactylon, Eryngium horridum, and Bothriochloa laguroides. The average herbage allowance during the entire experiment was 24 kg DM (100 kg BW)−1 (minimum: 16, maximum: 37 kg DM (100 kg BW)−1). During supplementation, the cows remained in a paddock with a forage availability of 2121±515 kg DM ha−1, 16±2 cm sward height, and 21 kg DM (100 kg BW)−1 herbage allowance. Ten representative samples of herbage were taken and pooled for chemical composition analysis (Table 1).

Cow BCS was estimated by two trained technicians every 20 days from 16±1 weeks pre-partum until calving and every 14 days from calving until the end of the first mating period. The correlation between technicians was 0.91, so the average of both values was used for the statistical analysis. Calf BW was recorded using an electronic scale (FX15, Iconix, Montevideo, Uruguay) at 47 (start of supplementation, or day 0), 61 (beginning of temporary weaning with separation, or day 14), 68 (end of temporary weaning with separation, or day 21), and 82±1 (day 35) days of age and at definitive weaning (186±1 days of age, or day 139).
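For clarity, the herbage allowance values reported above (kg DM per 100 kg of body weight) follow from dividing the available forage mass by the grazing liveweight. The minimal sketch below illustrates the calculation; the paddock area is a hypothetical value chosen only so the result is of the same order as the value reported for the supplementation paddock, since actual paddock areas are not given in the text:

```python
# Illustrative herbage-allowance calculation (kg DM per 100 kg of body weight).
# Forage mass, cow number, and approximate mean BW come from the text; the
# paddock area is an ASSUMPTION made only for this example.

forage_mass_kg_dm_per_ha = 2121.0   # forage availability of the supplementation paddock (text)
paddock_area_ha = 1.0               # assumed paddock size (not reported)
n_cows = 28                         # cows in the experiment (text)
mean_cow_bw_kg = 375.0              # approximate mean body weight (text)

total_forage_kg_dm = forage_mass_kg_dm_per_ha * paddock_area_ha
total_liveweight_kg = n_cows * mean_cow_bw_kg

herbage_allowance = total_forage_kg_dm / (total_liveweight_kg / 100.0)
print(f"Herbage allowance: {herbage_allowance:.0f} kg DM per 100 kg BW")
# With these inputs the result (~20 kg DM per 100 kg BW) is close to the
# 21 kg DM (100 kg BW)-1 reported for the supplementation paddock.
```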
Milk production was recorded on days 0, 14, 21, and 35 from the beginning of supplementation, using a portable milking machine according to the method described by Mondragon et al. (1983). In the morning, after cows received their meal, calves were separated and the udder was emptied using 20 IU of oxytocin i.m. (Neurofisin, Lab Fatro, Uruguay). Seven hours later, cows were milked again using the same methodology. The total milk was individually weighed on an electronic scale and 24 h production was estimated. On days 0 and 14, individual samples were taken and milk composition (fat and protein) was determined in the laboratory (COLAVECO; Colonia, Uruguay) using infrared radiation absorption.

Weekly, from day 0 to day 49, blood samples were collected by jugular venipuncture in Vacutainer® tubes with heparin (Becton, Dickinson and Company, Franklin Lakes, NJ, USA). Samples were centrifuged within the first hour of collection at 1530 g for 15 min, and the plasma was collected and stored at −20 °C until processing.

Cows were monitored daily by a veterinarian during the supplementation period and for one week after. During this period, attention was especially paid to any observable change in behavior, eye alterations or impaired vision, changes in respiratory rate and depth, frequency of the chewing motion, or signs of hypoesthesia (Coppock and Christian, 2012). To evaluate possible hepatic damage due to ingestion of the methanol contained in the crude glycerin, another blood sample was collected on day 110 after the beginning of the supplementation using tubes without anticoagulant. Samples were immediately centrifuged, and the serum was frozen and transported to the laboratory. Liver function was studied through the concentrations of total protein, albumin, globulin, total bilirubin, aspartate aminotransferase (ASAT), alkaline phosphatase (ALP), and gamma-glutamyl transpeptidase (GGT).

From day −9 to day 49, the ovaries were examined weekly by transrectal ultrasonography using a linear bimodal (5.0 to 7.5 MHz) transducer (Ambivision, Digital Notebook B mode, Model AV-3018V, AMBISEA Technology Corp., Ltd., China). Ovarian follicles and corpora lutea were identified according to the criteria described by Griffin and Ginther (1992). The size of the largest follicle was used to classify the type of anestrus. Cows with follicles >8 mm in diameter on two or more occasions without a corpus luteum were considered in shallow anestrus, and those with follicles ≤8 mm in diameter without a corpus luteum were considered in deep anestrus. Resumption of ovarian activity was monitored by the concentration of progesterone (P4), considering that cyclicity was reinitiated if a P4 concentration ≥1 ng mL−1 was found in two successive samples with a one-week interval (Meikle et al., 2004) and a corpus luteum was identified in two ultrasound scans with a 7-day interval. Pregnancy was diagnosed by transrectal ultrasonography at 46 and 66 days after fixed-time artificial insemination.
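The ovarian-status criteria described above (deep anestrus, shallow anestrus, and resumption of cyclicity) amount to a simple decision rule. The sketch below, with hypothetical field names, is one way to express it; it is an illustration only, not code from the study:

```python
# Minimal sketch of the ovarian-status classification used in the text.
# Thresholds follow the paper; argument names are hypothetical.

def classify_ovarian_status(scans_with_follicle_gt8_no_cl, scans_with_cl,
                            p4_ng_ml_two_weeks):
    """Return 'cyclic', 'shallow anestrus', or 'deep anestrus'.

    scans_with_follicle_gt8_no_cl -- scans with a follicle > 8 mm and no corpus luteum
    scans_with_cl                 -- consecutive weekly scans showing a corpus luteum
    p4_ng_ml_two_weeks            -- progesterone (ng/mL) in two successive weekly samples
    """
    # Cyclicity resumed: P4 >= 1 ng/mL in two successive samples and a CL in two scans 7 d apart
    if scans_with_cl >= 2 and all(p4 >= 1.0 for p4 in p4_ng_ml_two_weeks):
        return "cyclic"
    # Shallow anestrus: follicles > 8 mm on two or more occasions, without a CL
    if scans_with_follicle_gt8_no_cl >= 2:
        return "shallow anestrus"
    # Deep anestrus: only follicles <= 8 mm, without a CL
    return "deep anestrus"

print(classify_ovarian_status(0, 0, (0.3, 0.2)))   # -> deep anestrus
print(classify_ovarian_status(2, 0, (0.4, 0.6)))   # -> shallow anestrus
print(classify_ovarian_status(0, 2, (1.2, 2.5)))   # -> cyclic
```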
Progesterone concentration was determined in all cows in samples collected on days 35, 42, and 49 from the beginning of supplementation. If a corpus luteum was observed by ultrasonography on any of those days, blood samples collected from two weeks before to two weeks after were also analyzed. The P4 concentration was determined by solid-phase radioimmunoassay using commercial kits (DPC, Diagnostic Products Co., Los Angeles, CA, USA). All samples were analyzed in one assay, with the standard curve and controls in duplicate and the samples run singly. The assay sensitivity was 0.12 ng mL−1, and the intra-assay coefficients of variation for low (0.5 ng mL−1), medium (2 ng mL−1), and high (8 ng mL−1) controls were 3.5, 2.6, and 2.2%, respectively.

Concentrations of insulin and metabolites were determined in samples from days 0, 7, 14, 21, and 28. Glucose, total protein, albumin, urea, cholesterol, and NEFA concentrations were determined spectrophotometrically using commercial kits (Glucose Oxidase/Peroxidase, Biuret, Bromocresol Green, Urease/salicylate, and Cholesterol Oxidase/Peroxidase, BioSystems SA, Barcelona, Spain; Wako NEFA-HR (2), Wako Pure Chemical Industries Ltd., Osaka, Japan; respectively), with sample and reagent volumes adjusted to 96-well microplates and read in a Multiskan EX (Thermo Scientific, Waltham, Massachusetts, USA). The intra- and inter-assay coefficients of variation for the high and low controls were less than 15%. Insulin concentration was determined by immunoradiometric assay (IRMA; Diasource, Brussels, Belgium). All samples were analyzed in one assay, with the standard curve and controls in duplicate and the samples run singly. The sensitivity of the assay was 1.1 uIU mL−1, and the intra-assay coefficients of variation for low (24.7 uIU mL−1) and high (55.3 uIU mL−1) controls were 4.9 and 5.1%, respectively.

The data were analyzed using SAS (Statistical Analysis System, version 9.2). The experiment was a completely randomized design, and the individual cow was considered the experimental unit. Body condition score data were grouped in two different periods: a monitoring phase (from the last third of gestation to the beginning of supplementation) and the experimental period (from the beginning of supplementation to the fixed-time artificial insemination). Data on milk production and composition, cow and calf weight, and concentrations of metabolites and insulin were analyzed using repeated measures analysis (MIXED procedure) with date as the repeated factor. The following statistical model was applied: Yijk = µ + c1 + Ti + Dj + TDij + Eijk, in which µ = overall mean; c1 = covariate with the initial value of the variable; Ti = effect of treatment; Dj = effect of date; TDij = effect of the interaction between treatment and date; and Eijk = residual error. Treatment, date, and their interaction were included as fixed effects, and animal as the random effect. The first measurements were used as covariates in the respective analyses. Data on calf BW included the effect of sex, and birth weights were used as covariates. When the main effect was significant, differences among means were analyzed using the Tukey-Kramer test.

Reproductive variables were analyzed using generalized models (GENMOD procedure), specifying the binomial distribution with logit transformation of the data (anestrus and pregnancy) or the Poisson distribution (calving-conception interval). The model included the treatment effect: Yij = µ + Ti + Eij, in which µ = overall mean; Ti = effect of treatment; and Eij = residual error.

Correlation coefficients were estimated using the CORR procedure. Data are expressed as mean and standard error of the mean (mean ± SEM), and differences were considered statistically significant if P<0.05.
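As an illustration only, the repeated-measures model described above (fitted by the authors with the SAS MIXED procedure) has roughly the following form when expressed with Python's statsmodels. The data file and column names are hypothetical, and a random cow effect is used here as a simple stand-in for the repeated-measures covariance structure of the SAS analysis:

```python
# Sketch of a mixed model analogous in form to the one described above
# (Yijk = mu + covariate + treatment + date + treatment x date + animal + error).
# This is NOT the authors' SAS code; column names (cow, treatment, day, milk,
# milk_day0) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("milk_records.csv")  # hypothetical long-format data set, one row per cow x date

model = smf.mixedlm(
    "milk ~ milk_day0 + C(treatment) * C(day)",  # day-0 covariate + fixed effects and interaction
    data=df,
    groups=df["cow"],                            # cow as the random (repeated-subject) effect
)
result = model.fit()
print(result.summary())
```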
Results

During the monitoring phase, the BCS of the cows decreased (P<0.001) from 16±1 weeks pre-partum (early winter) to 47±1 DPP. Cows lost an average of 1.5±0.1 BCS units throughout this period, which corresponded to a loss of 1.2±0.1 units during late gestation and 0.3±0.1 units in the postpartum period. The nadir of BCS was reached in the 4th week postpartum and remained low until the beginning of the supplementation period.

Supplementation did not influence the BCS (SUP: 3.9±0.1 vs. CON: 3.9±0.1; P = 0.257), and no interaction was found between supplementation and date (P = 0.540). Body weight was affected by supplementation (P = 0.044). Cows from the SUP group were heavier than cows from the CON group (SUP: 379±2 kg vs. CON: 373±2 kg; Table 2). Supplementation affected milk production (P = 0.017). Cows of the SUP group (5.7±0.2 kg d−1) produced 14% more milk than cows of the CON group (5.0±0.2 kg d−1). The supplementation vs. date interaction was significant (P = 0.047; Table 2). Supplementation affected the BW of calves (P<0.001), and a treatment vs. date interaction was found (P<0.001). The calves from dams of the SUP group were heavier from days 14 to 35 than calves from the CON group (Table 2). From days 0 to 14 of supplementation, while calves were suckling, they gained 0.26±0.07 kg d−1 more than the calves from CON dams (CON: 0.48±0.07 vs. SUP: 0.74±0.07 kg d−1; P = 0.010). However, during the temporary weaning with separation from their mothers, daily gain did not differ between groups (P = 0.380) and was lower (P<0.001) than in the previous period (0.20±0.05 kg d−1 for both groups). As expected, from day 0 to day 35, a positive correlation was found between BW gain of the calves and milk production of their dams (r = 0.34; P<0.001). At definitive weaning, calves from supplemented dams were on average 8 kg heavier (P = 0.029) than calves from CON dams (Table 2).

The supplement did not affect the milk fat content, expressed either as a percentage (P = 0.225) or as total content (P = 0.794). No effect of date (P = 0.127) or treatment vs. date interaction (P = 0.178) was found. The average fat percentage and total content were 3.0±0.1% and 216±10 g d−1, respectively. On the contrary, supplementation increased (P<0.001) milk protein content (CON: 2.9±0.1 vs. SUP: 3.1±0.1%), and a treatment vs. date interaction was found (P<0.001). In cows from the SUP group, the milk protein content increased from day 0 to day 14 (2.9±0.1 to 3.3±0.1%; P<0.001), while in cows of the CON group it remained unchanged (2.9±0.1 to 2.9±0.1%; P = 0.820).

Insulin concentration in SUP cows was higher (P = 0.011) than in CON cows (8.3±0.4 vs. 7.0±0.3 uIU mL−1). Insulin increased in SUP cows and was higher than in CON cows during the first seven days of supplementation (P = 0.004; Figure 1d).
Independently of the BCS at calving, all cows were in deep anestrus when supplementation began. Twenty-one days after the introduction of bulls (day 42 after the start of supplementation), 36% more cows of the SUP group were in shallow anestrus than cows of the CON group (Table 3). In the first 33 days of the mating period, only two cows, one from each group, were detected in estrus, and both became pregnant. The pregnancy rate after fixed-time artificial insemination in SUP cows was twice that of CON cows; however, this difference was not statistically significant (Table 3). After the second period with the bulls, more cows became pregnant (8/17; 47%), and no differences between treatments were found (P = 0.410). The final pregnancy rate was not different between groups. The type of anestrus on day 21 of the mating period influenced the overall pregnancy rate (P = 0.026). Indeed, more cows showing shallow anestrus became pregnant (91%; 10/11) compared with cows showing deep anestrus (53%; 9/17).

No clinical signs of methanol intoxication were observed during the supplementation period or one week after. Hepatic functionality, evaluated through the concentrations of total protein, albumin, globulin, total bilirubin, ASAT, ALP, and GGT in all the cows 110 days after the start of supplementation, showed no sign of hepatic damage.

Discussion

In extensive grassland cow-calf systems, the last third of pregnancy and early postpartum occur during winter months, when forage availability and quality are lower than in fall or spring (Carámbula, 1991). Cows in these conditions show a negative energy balance and lose BCS (Soca et al., 2014a). As reported in other countries (Houghton et al., 1990; Perry et al., 1991; Stalker et al., 2006) and in our conditions (Quintans et al., 2010; Scarsi, 2012), in the present work the loss of BCS was greater during pre-partum than in early postpartum. As a consequence, BCS at calving was lower than that recommended (4.5 units; scale of 1-8) to obtain a pregnancy probability similar to or greater than 70% (Orcasberro et al., 1994). The nadir of BCS was observed in the 4th week postpartum and remained low until supplementation began. The postpartum supplementation had no effect on BCS and stimulated only a transient increase in BW. The effect of pre-mating supplementation on BCS is not consistent and seems to depend, at least partially, on the supplement used. Astessiano et al. (2013) and Soca et al. (2013) used a supplement based on whole rice bran and reported no effect on BCS. However, cows grazing pasture improved with Lotus subbiflorus cv. Rincón during the same period increased BCS (Astessiano et al., 2012).
Supplementation increased milk production, as has been reported previously in dairy (Reis and Combs, 2000; Bargo et al., 2002), dual-purpose (Aguilar-Pérez et al., 2009), and beef cows (Perry et al., 1991; Lalman et al., 2000). The observed increase in milk protein content has also been reported in dairy cows and is attributed to a higher energy intake by cows supplemented with concentrates (Dillon et al., 1997; Reis and Combs, 2000; Bargo et al., 2002) and with crude glycerin (Bodarski et al., 2005). At the beginning of the supplementation, cows were in a physiological period (47±1.4 DPP) in which the mammary gland is prioritized in the partitioning of nutrients (Bauman and Currie, 1980), so the supplement increased milk production and calf growth. Neville (1962) suggested that during the first 60 DPP, milk production and weight gain of calves are linked, and as calves begin to consume grass, this relationship weakens. The increased availability of milk with greater protein content for calves from the SUP cows led to an increase in their daily weight gain. Calf daily weight gain decreased during the temporary weaning with separation from their mothers and increased again after calves returned to them. These findings lend further support to the concept that the development of calves is milk-dependent up to 90 days of age (Grings et al., 2008; Quintans et al., 2010). The daily gains observed after temporary weaning with separation are in agreement with those reported by other authors (Beal et al., 1990). Calf daily gains during the evaluated period (0.54 and 0.60 kg d−1 for the CON and SUP groups, respectively) were similar to those reported by Quintans et al. (2010) (0.65 kg d−1) and Soca et al. (2014b) (0.50 kg d−1). These authors worked with multiparous and primiparous grazing cows, respectively, with BCS similar to those of the cows used in the present work. The greater BW shown by calves from SUP cows during the supplementation period remained until final weaning, which is in agreement with the results reported by Astessiano (2010). These results show that short-term supplementation before mating increases the productivity of primiparous cows.

Beef heifers in anestrus show the highest NEFA plasma concentrations, which reflect their negative energy balance (Bossis et al., 2000). The frequency of LH pulses is negatively correlated with the plasma concentration of NEFA in primiparous suckling beef cows (Grimard et al., 1995). In addition, an increase in NEFA plasma concentration could have a negative effect on ovarian function (Bossis et al., 1999). In the present work, the plasma concentration of NEFA was different between treatments and reflected a different rate of adipose tissue lipolysis (Lucy, 2003). These results suggest that SUP cows had a better energy balance than CON cows. Moreover, plasma cholesterol concentration was greater in SUP than in CON cows. The plasma cholesterol concentration increases in supplemented dairy cows as a consequence of an increase in energy intake (Cavestany et al., 2005). In summary, in the present work, the plasma concentrations of NEFA, cholesterol, and glucose suggest an improvement in the energy balance of SUP cows (Bossis et al., 1999; Lucy, 2003).
A high energy intake increases the size of the follicles and the number of large follicles (diameter >10 mm) in beef cows (Perry et al., 1991; Aguilar-Pérez et al., 2009) and dairy cows (Lucy et al., 1991). It is possible that nutrition has a direct effect on the ovary, rather than an indirect effect via the hypothalamic-pituitary axis (Khireddine et al., 1998). At the beginning of supplementation, all cows were in deep anestrus, but 21 days after the mating period had begun, more SUP cows were in shallow anestrus than CON cows. Taking into account that one of the most important criteria to classify anestrus is the size of the follicle (Wiltbank et al., 2002), it is conceivable that the extra energy consumed by SUP cows had a stimulatory effect on folliculogenesis. However, either because the temporary weaning with separation failed to stimulate LH pulsatility, or because the supplement did not reach the levels required for this event to occur, the cows remained in anestrus. There is a positive correlation between the concentration of insulin and the reproductive response (Sinclair, 2008) and between insulin concentration and the size of follicles (Khireddine et al., 1998). Although the number of SUP cows that became pregnant was double that of the CON group, this difference was not significant, possibly because of the low number of animals and the binomial nature of this variable. However, the better results obtained with cows in shallow than in deep anestrus after fixed-time artificial insemination reinforce the positive effect of the supplement on reproductive function (Khireddine et al., 1998). Considering that the cows were primiparous, it is plausible that energy partitioning followed the priorities described by Short et al. (1990): once maintenance requirements were met, milk production had the highest priority, followed by the cow's own growth, with reproductive activity last.

Cows of the SUP group consumed 110 g of methanol d−1 for 21 days. During this period, no clinical signs that could be associated with methanol intolerance were observed. Moreover, the concentrations of total protein and albumin during the entire monitored period did not differ between groups, indicating that liver protein synthesis did not seem to be affected. The study of liver function at 110 days after the beginning of supplementation did not show impaired liver function. These findings are in agreement with those reported by Winsco et al. (2011), who infused 0 to 210 g of methanol day−1 directly into the rumen of steers and did not observe adverse effects on intake, digestion, or ruminal fermentation. These authors suggested that cattle could tolerate methanol consumption that largely exceeds the current recommendation of the United States Pharmacopeia (150 ppm) or the European Pharmacopeia (2000 ppm). In agreement, Dasari (2007) and Elam et al. (2008) suggested that the maximum recommended levels of methanol should be revised, an issue that requires further research.

Conclusions

In primiparous beef cows grazing native grass, pre-mating supplementation with whole rice bran and crude glycerin with a high content of methanol improves energy balance and increases milk yield and calf growth without signs of toxicity, with only a slight effect on reproductive activity.
Figure 1 - Concentrations of insulin and metabolites in primiparous cows non-supplemented (■) and supplemented for 21 days with whole rice bran and crude glycerin (□). Day 0: start of supplementation, at 47±1.4 days postpartum. Differences between treatments are indicated by * when P<0.05.

Table 2 - Body condition score (BCS), body weight (BW), and milk production of primiparous cows and BW of the calves whose mothers were not supplemented (CON) or supplemented for 21 days with whole rice bran and crude glycerin (SUP).

Table 3 - Shallow anestrus was defined as the presence of follicles >8 mm in the absence of a corpus luteum in two or more ultrasound examinations at 7-day intervals. Cows that were in anestrus in the first 33 days of mating entered the fixed-time artificial insemination program. Ten days after fixed-time artificial insemination, cows were naturally rebred for 21 days. The entire mating period lasted 74 days. Numbers in brackets represent numbers of cows.
Convallatoxin Inhibits Cell Proliferation and Induces Cell Apoptosis by Attenuating the Akt-E2F1 Signaling Pathway in K562 Cells

Objective: To determine the effect of convallatoxin on K562 cell proliferation and apoptosis. Methods: CCK-8 assay was used to detect cell proliferation; PI staining, JC-1 staining, and Annexin V-FITC/PI double staining were used to analyze the cell cycle, mitochondrial membrane potential, and cell apoptosis; and Western blotting was used to detect cleaved caspase-9, cleaved caspase-3, Bcl-2, Bax, and E2F1 expression and Akt phosphorylation. Subsequently, AutoDock software was used to determine the interaction between convallatoxin and Akt1. Results: Upon treatment with convallatoxin, the proliferation of K562 cells was inhibited, the cells were arrested at the S and G2/M phases, and cell apoptosis was significantly induced. In addition, Akt phosphorylation and E2F1 expression were significantly decreased, whereas E2F1 overexpression rescued convallatoxin-induced cell proliferation and apoptosis. In addition, a molecular docking assay indicated that convallatoxin could bind to Akt1. Conclusion: Convallatoxin inhibited cell proliferation and induced mitochondrial-related apoptosis in K562 cells by reducing the Akt-E2F1 signaling pathway, indicating that it is a potential agent for treating leukemia.

Introduction

Over the past 10 years, the incidence of leukemia has increased by 2%, and its incidence and mortality are among the top 5 of all tumors. The five-year survival rate of chronic myelogenous leukemia (CML), the most common type of chronic leukemia in Asian countries, is 69%. 1 Currently, hematopoietic stem cell transplantation represents a possible cure for leukemia. Effective treatment with chemotherapeutic drugs is a prerequisite for the implementation of hematopoietic stem cell transplantation or other technologies that may cure leukemia. 2 Clinically, drug resistance and acute lesions are the main factors reducing the therapeutic effect on leukemia. Therefore, new treatments and targets must be identified to improve patient compliance and quality of life.

Cell cycle regulation is related to both cell proliferation and apoptosis, and abnormal regulation of the cell cycle is an important mechanism underlying tumor development. In cell cycle regulation, two key control points, G2/M and G1/S, are important for intracellular and extracellular signal transmission and integration. Programmed death or the stationary G0 phase is key to activating cell cycle regulation. Most of the cytotoxic drugs currently in clinical use are not specific to the cell cycle, such as alkylating agents, cisplatin, and nitrosoureas, which have serious toxic reactions. Therefore, the research and development of targeted cell cycle-specific drugs are of great significance.

The balance between cell proliferation, differentiation, and apoptosis is closely related to the occurrence and development of tumors. The mitochondrial pathway plays an important role in the process of cell apoptosis. 3 Stress- or apoptosis-inducing agents can cause mitochondrial destruction and increase mitochondrial membrane permeability, which manifests as a decrease in mitochondrial membrane potential (MMP). Once cytochrome C and apoptosis-inducing factors are released into the cytoplasm, caspase-3 is activated, thus leading to apoptosis.
The balance between anti-apoptotic molecules and pro-apoptotic molecules in the Bcl-2 family is of great importance in maintaining MMP stability, especially the "molecular switch" role of Bcl-2/Bax in cell apoptosis. 4 After translocating from the cytoplasm to the mitochondrial membrane, Bax can open molecular channels to increase the release of pro-apoptotic factors. Bcl-2 is located on the outer mitochondrial membrane, which stabilizes the mitochondrial membrane and inhibits the release of pro-apoptotic factors. By forming a dimer with Bax, Bcl-2 can reduce the permeability of the mitochondrial membrane to exert an anti-apoptotic effect. A variety of small molecule inhibitors targeting Bcl-2 have been developed, although most of them are still in clinical trials and have not been officially launched. 5 AT101 is the first compound that can inhibit Bcl-2, Bcl-XL, and Mcl-1, and its anti-leukemia effects, which are associated with Bcl-2 protein inhibition, have been verified by experiments in vitro. 6

Convallatoxin is a strong cardiac glycoside isolated from Adonis amurensis Regel et Radde that presents anti-inflammatory and antiproliferative activities. 7 Convallatoxin is a P-glycoprotein (P-gp) substrate, and an important amino acid involved in its transport is Val982. 8 Convallatoxin is an enhancer of ligand-induced MOR endocytosis that presents high potency and efficacy. 9 However, the effect of convallatoxin on the treatment of leukemia has not been revealed; therefore, exploring its therapeutic role in leukemia is of great significance. In this study, we evaluated the antitumor effect of convallatoxin on the erythroleukemia cell line K562. Convallatoxin inhibits the proliferation of K562 cells, promotes mitochondria-related apoptosis, and induces cell cycle arrest in the S and G2/M phases, which are related to the downregulation of Akt-E2F1 signal transduction. As mentioned above, convallatoxin may be a novel drug for leukemia treatment.

Convallatoxin Inhibits K562 Cell Proliferation

To identify the antitumor activity of convallatoxin, we performed a CCK-8 assay to detect the effect of convallatoxin on the proliferation of K562 cells (Figure 1). The cell viability of K562 cells treated with convallatoxin (0, 3, 10, and 30 μM) was reduced by 0%, 20%, 60%, and 65% compared with that of the non-convallatoxin group (IC50 = 4.88 μM; Figure 1A and B). Cell viability also decreased significantly with treatment time, by 12%, 48%, and 51% at 12, 24, and 48 h, respectively (Figure 1C). These results indicated that convallatoxin has a significant inhibitory effect on K562 cell proliferation in a dose- and time-dependent manner.

Convallatoxin Inhibits the Cell Cycle in K562 Cells

To explore the mechanism underlying the ability of convallatoxin to inhibit K562 cell proliferation, we examined the effect of the compound on the K562 cell cycle (Figure 2). The proportion of cells in phase G1 after treatment with 10 and 30 μM convallatoxin decreased by 5% and 12% compared with that of the control group, respectively, while the proportions of cells in phases S and G2/M increased by 6% and 12% after treatment with 30 μM convallatoxin, respectively (Figure 2). Tumor cell G2/M phase arrest is an important chemotherapeutic pathway; therefore, this finding suggested that convallatoxin could inhibit the proliferation of chronic myeloid leukemia cells through cell cycle arrest at phases S and G2/M.
Convallatoxin Induces K562 Cell Apoptosis

Proliferation inhibition and apoptosis induction in tumor cells are important strategies for anti-cancer drug research. Therefore, Annexin V-FITC/PI staining was used to detect the effect of convallatoxin on K562 cell apoptosis by flow cytometry (Figure 3A and B). The proportion of apoptotic cells after 24 h of convallatoxin treatment increased in a dose-dependent manner. Specifically, the number of apoptotic cells in the 10 and 30 μM convallatoxin groups increased significantly by 18% and 35% compared to the control group. This suggests that convallatoxin significantly induced apoptosis in K562 cells. To confirm the effect of convallatoxin on K562 cell apoptosis, we used Western blotting to detect the expression of apoptosis-related proteins, such as cleaved caspase-3 and cleaved caspase-9 (Figure 3C to E). The results showed that after treatment with 10 and 30 μM of convallatoxin, the pro-apoptotic protein cleaved caspase-3 was upregulated by 28% and 30% in K562 cells, respectively, while cleaved caspase-9 was upregulated by 20% and 35%, respectively. These results suggest that convallatoxin promotes the apoptosis of K562 cells in a dose-dependent manner (Figure 4).

Convallatoxin Induces Cell Apoptosis Through the Mitochondrial Pathway in K562 Cells

MMP is considered a hallmark event in early cell apoptosis. As a lipophilic cationic fluorescent dye, JC-1 is widely used for the detection of MMP. When MMP is high, JC-1 aggregates in the matrix of the mitochondria to form a polymer that produces red fluorescence; when MMP is low, JC-1 exists as a monomer that produces green fluorescence. In this experiment, a JC-1 fluorescent probe and flow cytometry were used to evaluate the early apoptosis of K562 cells. The ratio of JC-1 monomers after treatment with 10 and 30 μM convallatoxin for 24 h increased by 21% and 80%, respectively, indicating that convallatoxin increased mitochondrial membrane permeability and enhanced early apoptosis of K562 cells (Figure 5A and B). The Bcl-2 protein family is involved in mitochondrial membrane permeability and comprises important regulators of the release of cytochrome C from the mitochondria. The ratio of the pro-apoptotic protein Bax to the anti-apoptotic protein Bcl-2 is closely related to the sensitivity of the mitochondrial membrane. 4 After treatment with 10 and 30 μM of convallatoxin for 24 h, the expression of Bax was upregulated by 64% and 70% while the expression of Bcl-2 was downregulated by 68% and 70%, respectively, compared with the control group, indicating that convallatoxin induces apoptosis of K562 cells in a dose-dependent manner by increasing the permeability of the mitochondrial membrane (Figure 5C and D).

Convallatoxin Inhibits Akt-E2F1 Signaling Pathway in K562 Cells

Cell cycle-related transcription factor E2F1 (E2F1) is a member of the E2F family involved in a variety of cellular processes, including the cell cycle, DNA repair, DNA replication, and cell differentiation, proliferation, and apoptosis. To study further the specific mechanism of convallatoxin action, Western blotting was performed to detect the expression of p-Akt (Ser 308), t-Akt, and E2F1 (Figure 5A). The results showed that Akt phosphorylation and E2F1 expression in K562 cells were significantly decreased after treatment with 10 and 30 μM convallatoxin (Figure 5B and C). These results indicate that convallatoxin induces K562 cell apoptosis by inhibiting Akt phosphorylation and downregulating E2F1.
To confirm further the interaction between convallatoxin and Akt1, we performed a molecular docking assay using the molecular structure of convallatoxin (Figure 5D) and the crystal structure of Akt1, which contains a critical structural motif for the interactions. The results showed that convallatoxin is directly bound to Akt1 (Figure 5E) with a binding energy of −10.4 kcal/mol, suggesting that convallatoxin has a strong affinity with the target protein Akt1. The docked convallatoxin showed extensive interactions with Akt1, including those with the Tyr18, Thr82, Ile84, Leu264, Val270, and Tyr272 residues and the Arg273 side chain, with 3.9, 3.6, 3.6, 3.9, 3.8, 3.8, and 3.6 Å hydrophobic interactions, respectively. In addition, convallatoxin interacted with Tyr18 through a 3.0 Å hydrogen bond and with the Thr82 residue through 3.2 Å and 4.0 Å hydrogen bonds. Thus, our data demonstrate that convallatoxin directly targets the Akt-E2F1 signaling pathway by binding with Akt1.

E2F1 Overexpression Rescues the Effects of Convallatoxin in K562 Cells

To study further the role of E2F1 in convallatoxin effects, K562 cells overexpressing E2F1 were constructed. The expression of E2F1 was verified using Western blotting (Figure 6). After treatment with 10 μM convallatoxin, the viability of K562 cells was significantly reduced by 55%, while the viability of K562 cells overexpressing E2F1 was 20% higher compared with that of the 0 μM convallatoxin group (Figure 6A and B). Similarly, after treatment with 10 μM convallatoxin, the percentage of apoptotic cells was increased by 15%, while the percentage of apoptotic cells with E2F1 overexpression was 10% lower compared with that after treatment with 0 μM convallatoxin (Figure 6C and D). These results suggest that E2F1 overexpression can rescue the effects of convallatoxin.

Discussion

In this study, convallatoxin inhibited the proliferation of K562 cells in a time- and concentration-dependent manner. Concurrently, convallatoxin induced cell cycle arrest at the S and G2/M phases and promoted apoptosis in a mitochondrial-dependent manner. Furthermore, the anti-leukemic effect of convallatoxin was related to the attenuation of the Akt-E2F1 signaling pathway.

In recent years, drugs that target different signal transduction pathways of leukemia cells, including proliferation, differentiation, and apoptosis, have attracted much attention. 10 Representative anti-cancer drugs that specifically block the cell cycle include 5-Fu, pemetrexed, cytarabine, and tigeo, which act on the S phase; vinorelbine, paclitaxel, and etoposide, which act on the M phase; and bleomycin, which acts on the G2 phase. As an antimetabolite of pyrimidine that acts on the S stage, cytarabine (CY) can inhibit DNA synthesis and interfere with the proliferation of tumor cells, and it is the preferred chemotherapy drug for the treatment of CML. 11 However, CY is not a targeted therapy drug, and different myeloid leukemia cells respond differently to CY. In clinical treatment, high-dose CY is used to treat CML. However, high concentrations not only have neurotoxic effects but also increase the susceptibility to relapse after drug withdrawal; moreover, drug resistance leads to reduced sensitivity to CY. 12,13 In sequential chemotherapy, tumors with rapid proliferation, such as choriocarcinoma and leukemia, have more cells in the proliferative phase.
Cell cycle-specific drugs are usually used to kill cycle-sensitive cells first, and then cell cycle non-specific drugs are used to kill tumor cells at other stages, which makes the tumor treatment process more complicated. 14 Our study showed that convallatoxin treatments at different concentrations caused cell cycle arrest of K562 cells in both the S and G2/M phases, suggesting that convallatoxin may serve as a potential anti-leukemia drug.

Convallatoxin presents various pharmacological activities and has been reported to inhibit NF-κB activity, which decreases the expression of pro-inflammatory cytokines in macrophages, intestinal cells, and other immune cells through the activation of PPARγ, thereby reducing mucosal inflammation and improving DSS-induced colitis. 15 Moreover, convallatoxin inhibits the migration and invasion of lung cancer cells by inhibiting the expression of MMP-2, MMP-9, and P-FAK. 16 The anti-tumor effect of convallatoxin has also been verified in this study, which mainly affects cell proliferation and apoptosis.

The proliferation and apoptosis of leukemia cells are regulated by abnormal activation of various signaling pathways. Among them, the PI3K-Akt signaling pathway has been shown to regulate the occurrence, development, and prognosis of leukemia. 17-19 As a proto-oncogene, Akt plays an important role in regulating cell metabolism, growth, proliferation, survival, transcription, and protein synthesis. 20 In tumor cells, over-activated Akt activates the NF-κB and mTOR pathways, thus leading to an anti-apoptotic effect. 21 E2F1 is a member of the E2F family and presents high expression in a variety of tumor tissues and cells, and its upregulation is closely related to tumor occurrence, development, metastasis, and prognosis. 14,22,23 Reports have shown that E2F1 upregulates Akt activity through a transcription-dependent mechanism, 23-25 suggesting the existence of a negative feedback loop involving E2F and Akt related to cell apoptosis. 23 As the target of convallatoxin, the Akt-E2F1 signaling pathway was demonstrated to be related to the anti-leukemia mechanism.

In summary, this study confirmed that convallatoxin significantly inhibits the proliferation of K562 cells by cell cycle arrest, induces mitochondrial-dependent cell apoptosis, and exerts an anti-leukemia effect related to the attenuated Akt-E2F1 signaling pathway. Convallatoxin has potential anti-leukemic activity and can be further developed for clinical treatment, and the Akt-E2F1 signaling pathway may serve as an effective drug target.

Drug Configuration

Convallatoxin powder was weighed and placed in a 1.5 mL EP tube, and then a certain volume of DMSO, calculated according to the molecular weight, was added to obtain a 50 mM storage solution. An aliquot of this solution was stored at −80°C, and different working concentrations were diluted according to the experimental requirements.

Cell Culture

K562 cell culture was performed using IMDM medium containing 10% FBS and 1% penicillin-streptomycin. The cells were cultured in an incubator at 37°C and 5% CO2. The cells were passaged in a timely manner and cryopreserved when grown to the logarithmic growth phase.

Cell Proliferation Measurement

For the CCK-8 assay, K562 cells in the logarithmic growth phase were collected, and then 5 × 10³ cells per well were seeded in a 96-well plate. Each group contained five replicates.
After the cell density reached 80%, the cells were stimulated with 0, 3, 10, and 30 μM convallatoxin for 24 h. Then, 10 μL of CCK-8 solution was added to each well, and cells were cultured in an incubator at 37°C and 5% CO2 for another 2 h. The optical density value of each well at 450 nm was measured using a microplate reader, and the cell growth curve was plotted to obtain the IC50 value.

Cell Cycle Measurement

K562 cells in the logarithmic growth phase were collected, and 4 × 10⁵ cells per well were seeded in a 6-well plate. After the cell density reached 80%, the cells were stimulated with convallatoxin for 24 h. Then, the supernatant and cells were washed twice with PBS. The cells were collected by centrifugation (4°C, 1000 r/min, 4 min), resuspended in 1 mL pre-cooled 70% ethanol, and fixed at 4°C for 10 h. The supernatant was discarded after centrifugation (4°C, 1000 r/min, 4 min) and the cells were washed with pre-cooled PBS. The cell pellets were resuspended and incubated with propidium iodide (PI) staining solution (0.5 mL) at 37°C for 30 min in the dark. Finally, the cells were analyzed using a Beckman CytoFLEX flow cytometer (Beckman Coulter Biotechnology Co., Ltd). The proportion of cells at each phase (G0/G1, S, and G2/M) was determined, and a fluorescence density distribution graph was used to represent the results.

Cell Transfection

A total of 4 × 10⁵ K562 cells per well were seeded into a 6-well plate. After the cell density reached 80%, E2F1 overexpression lentiviral particles were transfected into K562 cells using Lipofectamine 2000 reagent, following the instructions of the manufacturer. The cells were then cultured with 10 μM convallatoxin for 24 h to detect the related indicators.

Western Blot

K562 cells in the logarithmic growth phase were collected, and 4 × 10⁵ cells per well were seeded in a 6-well plate. After the cell density reached 80%, the cells were stimulated with 0, 3, 10, and 30 μM convallatoxin for 24 h, washed gently with pre-cooled PBS, and lysed with 1 × loading buffer. SDS-PAGE gels were then prepared as previously described. Equal amounts of protein samples were added to the wells, and the voltage was set to 80 V. After 30 min, the voltage was set to 110 V. When the bands had migrated three-fourths of the way through the gel, the protein was transferred onto a nitrocellulose (NC) membrane at a voltage of 110 V for 90 min in an ice water bath. Then, the membranes were carefully blocked with 5% milk on a shaker at room temperature for 1 to 2 h. Membranes were incubated with the corresponding primary antibodies overnight at 4°C and washed with 1 × TNET buffer. After incubation with horseradish peroxidase (HRP)-linked secondary antibody for 2 h at room temperature, the membranes were washed with 1 × TNET buffer and protein bands were analyzed using a Tanon-5200 system (Shanghai Tianneng Technology Co., Ltd). ImageJ software was used to analyze the gray values of each protein band.

MMP Measurement

After treatment with 0, 3, 10, and 30 μM convallatoxin for 24 h, K562 cells were collected and washed twice with PBS, resuspended in 0.5 mL culture medium, and incubated with 0.5 mL JC-1 staining working solution at 37°C for 20 min. The supernatant was discarded, and the cells were washed twice and resuspended in staining buffer. Fluorescence intensity was detected using flow cytometry. Red and green fluorescence indicate normal and reduced MMP, respectively. The ratio of red to green fluorescence was used to reflect the change in cell MMP.
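For illustration only, an IC50 such as the one reported in this study can be estimated from CCK-8 optical-density readings by normalising viability to untreated and blank wells and fitting a logistic dose-response curve. The sketch below uses made-up OD values and a four-parameter logistic fit; it is not the analysis actually performed here (the authors obtained the IC50 from the plotted growth curve):

```python
# Sketch of viability normalisation and IC50 estimation from CCK-8 OD450 readings.
# All numeric values are illustrative, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0])              # convallatoxin, uM (illustrative)
od_treated = np.array([1.10, 0.95, 0.70, 0.45, 0.40])     # mean OD450 of treated wells (made up)
od_control, od_blank = 1.15, 0.10                          # untreated and medium-only wells (made up)

# Percentage viability relative to untreated cells, after blank subtraction
viability = (od_treated - od_blank) / (od_control - od_blank) * 100.0

def four_pl(x, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 0.0, 5.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.2f} uM")
```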
Molecular Docking Assay

Convallatoxin was constructed using ChemDraw software, and the three-dimensional (3D) conformation of convallatoxin was used for docking. The crystal structure of Akt1 was obtained from the Protein Data Bank RCSB (https://www.pdb.org/) and used for docking. This structure was prepared using the Protein Preparation Wizard as follows: AutoDock Vina was used as the molecular docking program in this study, applying a semi-flexible docking method; PyMOL software was used to separate the original ligands and protein structures, remove water, and remove organic matter; and AutoDock Tools was used to add hydrogens, check the charge, specify the atomic type as the AD4 type, calculate the Gasteiger charges, and construct the docking grid box of the protein structure. In addition, AutoDock Tools was used to determine the root of the convallatoxin ligand and to select its rotatable bonds. After docking with Vina, the binding scores of each protein-ligand combination were calculated, and the results were analyzed and visualized using PyMOL software.

Statistical Analysis

Experiments were independently performed at least three times in this study, and the results are presented as the mean ± SD. One-way ANOVA and Tukey's test were used to compare the statistical significance of differences between groups using Prism software (ver. 8; GraphPad, San Diego, CA). Statistical significance is displayed as *P < .05, **P < .01, and ***P < .001.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article. This work was supported by the Key Research and Development Projects in Anhui Province for A and Key Program of Natural Science Research of Anhui Provincial Education Department (grant numbers 202104j07020021 and KJ2020A0217).
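As a closing illustration of the workflow described under Molecular Docking Assay, a Vina run of this kind can be driven from Python roughly as follows. The file names, grid-box centre, and box size are hypothetical, and prepared PDBQT files for the receptor and ligand are assumed to already exist:

```python
# Minimal sketch of invoking an AutoDock Vina docking run of the kind described above.
# File names and grid-box parameters are hypothetical placeholders, not values from the study.
import subprocess

cmd = [
    "vina",
    "--receptor", "akt1.pdbqt",          # Akt1 prepared with AutoDock Tools (assumed to exist)
    "--ligand", "convallatoxin.pdbqt",   # ligand prepared from the 3D structure (assumed to exist)
    "--center_x", "6.0", "--center_y", "-8.0", "--center_z", "17.0",  # grid-box centre (made up)
    "--size_x", "24", "--size_y", "24", "--size_z", "24",             # grid-box edge lengths (made up)
    "--exhaustiveness", "8",
    "--out", "docked_poses.pdbqt",
]
subprocess.run(cmd, check=True)
# The resulting poses and their predicted binding energies (kcal/mol) can then be
# inspected and visualized in PyMOL, as described in the Molecular Docking Assay section.
```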
The Productivity of Power: Hannah Arendt's Renewal of the Classical Concept of Politics*

This essay traces the development of Arendt's conception of power. This development corresponds to Arendt's conviction that the advent of totalitarian forms of government set the idea of the modern nation-state, and of the rights of "man and citizen" associated with it, in an irrevocable crisis. To respond to this crisis, Arendt attempts to conceive of power as something separate from, and in tension with, any form of government. Power becomes characterized by its egalitarianism, dynamism, unpredictability, and capacity to innovate. The essay tries to show how these formal characteristics were originally ascribed a purely negative value by Arendt, who associated them with totalitarian power, and that only after her work on totalitarianism does she revaluate them and provide arguments as to why her new concept of power is the only possible response to totalitarian phenomena.

Whether they rule in the form of a democratic rule of law or in the form of a tyrannical despotism, power threatens those who rule with destruction; yet, at any moment, it can freeze back into a form of domination, bureaucracy, and despotism. But whereas in The Origins of Totalitarianism these features of power are associated with the highly dynamical movement of negativity, of destruction and self-destruction, a movement which is the characteristic of totalitarian power, in The Human Condition Arendt assigns to this mobile and unruly power a productive and foundational force which, from then onwards, will contain the heart of her political thought. In this later sense, power is conceived as an essentially inter-subjective capacity that not only destroys and annuls all order, state, and law, but simultaneously produces, founds, and establishes them.
arendt is renowned for her passionate critique of the european nation-state.however, she gained this reputation from the works that follow The Human Condition, primarily On Revolution.less well-known is that she began her investigation of the origins of totalitarianism as a decisive defender of the nation-state, which at the time she understood to be a republic, in the sense that the french revolution gave to this term.arendt's volte-face towards the nation-state occurs in the middle of her book on totalitarianism, first published in 1951.arendt begins by assuming that (1) the democratic nation-state such as it was brought forth by the french revolution allows for political freedom and protects it against the crass impositions of a highly dynamic bourgeois-capitalistic society.Before she takes leave for good of the concept of the nation-state during the 1950s, she defends for a moment (2) the standpoint of liberal rule of law according to which a government of laws is, above all, designed to protect citizens from the claims made on behalf of the idea of nation, understood in nationalistic terms.however, she quickly realizes that the idea of a pure rule of law regime (rechtsstaat), in which democracy is strictly delimited and can function without democracy in cases of necessity, is historically discredited.constitutional government alone is much too weak to prevail against the violent expansionism of capitalism, imperialism, and totalitarianism.in order to address this dilemma, arendt, now living in the united states, rejects the idea of the nation-state, aided by her study of the federalist papers and the writings of thomas Jefferson and James madison.she then connects (3) this american political theory and the everyday experience of democracy in america which, in effect, saved her life, to a renewed analysis of the concept of power. (1) in The Origins of Totalitarianism (arendt, 1979), the dreyfus affair is exemplary of public affairs.public life takes place on the stage provided by the republican-Jacobin nation-state: in the parliament, the courts, the press, and public assemblies.according to this model of the nationstate, written communication between absent persons mediates all processes of communication, and the nation is coeval with the declaration of the rights of man and citizen.consequently, arendt sees in "that Jacobin patriotism for which human rights were always part of its glory" the bond that unifies the nation (arendt, 1991: 170, trans. lemm).this patriotism is rooted in the "latinism" by which the "men of the french revolution identified mentally with rome" and needs to be strictly kept apart from all kinds of ethnic, populist or racist forms of nationalism (arendt, 1979: 164; arendt, 1991: 276; see also arendt, 1991: 370-372; arendt, 1979: 230-1). in the latinism of the Jacobins, arendt recognizes for the first time the origin of a power that can resist the totalitarian powers of society: she writes in 1948, "until now the greatest bulwark against the unlimited domination of bourgeois society, against the conquest of power through the mob and the introduction of imperialistic politics in the structure of western states has been the nation-state.the sovereignty of the nation-state, which once was supposed to express the sovereignty of the people, is now threatened from all sides" (arendt, 1976: 29, trans. 
lemm).the ideas of 1789 -as arendt puts it in the american edition of Origins-were to triumph for the last time in the first world-war, under clemenceau's government: "the first world war could still be won by the Jacobin appeal of clemenceau, france's last son of the revolution" (arendt, 1979: 79). the sovereignty of the people, the rule of law, and individual rights form a well-ordered unity in the republican idea of the nation-state.the nation-state as conceived in the 18 th century and realized in the 19 th century draws its anti-totalitarian force from this unity.in the republic's hour of destiny, clemenceau, Zola, picquart, labori and other dreyfusards -as arendt repeatedly emphasizes-drew their strength from two and only two principles: 1. the "Jacobin principle of the nation based on human rights", and 2. "the republican view of communal life which asserts (in the words of clemenceau) that by infringing on the rights of one you infringe on the rights of all" (ot, 1979: 106; arendt, 1991: 187).modern republicanism distinguishes itself from its ancient predecessors by founding the roman principle "potestas in populo" on the Jacobin idea of human rights.ancient republics did not legitimize particular civic solidarity by appealing to universal rights of freedom.for the universalistic claims of classical humanism, it sufficed that the universal form of the political community be realized in a few splendid exemplars at the top of the social hierarchy of merit and in a single city.the orientation towards the best political regime depended on an environment characterized by corrupt political forms. it excluded from the start the radical egalitarianism that essentially defines modern human rights.it is only once the modern nation-state begins to combine the ancient principle of the republic with the new idea of human rights that the socially exclusionary, classical paradigm of civic solidarity comes under normative pressure.the strict boundaries which separated the included from the excluded, the citizen from the human being, the property-owners from the property-less, the virtuous from the base, the communicated from the excommunicated, start to lose their contours.the new Jacobin paradigm of civic solidarity -more abstract, egalitarian, and inclusive-opens up the political system for the recognition of the basic rights of those who, like major dreyfus, were excluded from all civic rights to honor and fell outside the bounds of "good society," for those who, in foucault's sense as well as in the sense captured by arendt's book rahel varnhagen, were "infamous men" (arendt, 1981a: 20, 59, 108, 189).only within the confines of the republican nation-state could those majorities or minorities threatened by social exclusion -workers and women, as much as Jews or Blacks-have their rights recognized.for Jews, as arendt soberly notices: "the breakdown of the european system of nation-states was in all respects the greatest of catastrophes" (arendt, 1976: 46, trans. lemm). 
"society" and its human substratum, the "masses" and the "mob", are the counter-concepts to this republican idea of public life centered on the "state", the "nation" and "the people": "while the people in all great revolutions fight for true representation, the mob always will shout for the 'strong man', the 'great leader'.for the mob hates society from which it is excluded, as well as parliament where it is not represented" (arendt, 1979: 107; arendt, 1991: 188).this sounds almost like carl schmitt, but it isn't.in arendt's early schema, the republic, the sphere of public affairs, corresponds to her idea of a parliamentary democracy and runs counter to the plebisci-tary dictatorship as exemplified in the 18 th Brumaire of louis Bonaparte, under which public life degenerates.however, plebiscitary dictatorship is neither democratic nor liberal in contrast to democratic parliamentarianism and liberalism, which do not constitute an irreconcilable opposition, but rather complement each other.in any case, for arendt, it is not the mobilized and manipulable human masses, and even less the phenomenon of plebiscitary dictatorship, that stand at the origin of totalitarianism.rather, she identifies this origin with a highly abstract, reflexive mechanism: "expansion for the sake of expansion" is what drives the new, bourgeois-capitalistic society toward the imperialistic, boundless self-production and self-enlargement of capital and power.from an imperialistic perspective, power has uncoupled itself from national self-interest, has de-nationalized itself, and is driven to global expansion because imperialistic power produces itself, analogously to a differentiated money economy in the process of its progressive functional differentiation.this idea of society is essentially the counter-concept of the idea of public life.a rapidly expanding modern society eliminates the state as it appears in the republican nation-state ideal, with the state as separate and superior to society, and instead transforms it into a social organization.the classical political community, in which citizens standout in public self-representation and in which they perfect their political human essence, has become an economics-and success-oriented subsystem.power, like money, becomes the countable and functional capital of a politics that only retains its name in common with the ancient polis.imperialism, fascism, nazism and stalinism are nothing but the self-radicalizations of a social process of power accumulation that annihilates the state as the public affair of all citizens. 
the theory according to which the state ceases to be the object of politics and simultaneously gets swallowed up by society was first advanced in the studies on national socialism by ernst fraenkel in the late 1930s (fraenkel, 1999 and 2001) and by franz neumann in the early 1940s (neumann, 1993).the nazi-regime, represented by the hobbesian figure of the Behemoth, is conceived by them as the anti-state.But arendt, who had at first only applied it to her analysis of German fascism, progressively generalizes the thesis of a reflexively self-increasing and de-centered political power.the first step is to expand this theory to account for stalinism.in a second step, which she takes in The Human Condition, this thesis is applied in an almost systems-theoretical way to modern society in its entirety.in her account, "capital accumulation" and "power accumulation" are both social and anti-political processes that drive each other forward and strengthen each other.finally, they overflow all the dams and sluices of the nation-state, tearing down the well-ordered hierarchies of government and law, destroying the sovereign state by turning it into the totalitarian state, first from the inside and then from the outside: "a society which had entered the path of never-ending acquisition had to engineer a dynamic political organization capable of a corresponding never-ending process of power generation" (arendt, 1979: 146; arendt, 1991: 248).clearly, at this stage, arendt sees the expansionist dynamic of modern civil society as being exclusively negative, as an unbounded force of destruction.in her book on totalitarianism she recognizes only one, purely negative concept of reflexive power: "power appears as an immaterial mechanism which with its every movement produces more power" (arendt, 1991: 646, trans. lemm).power produces itself and ends up destroying itself in the "bad infinity" (hegel) of an endless and pointless motion: completely analogously to the motion of capital in marx's analysis.power that increases itself reflexively is unlimited power and as such is already totalitarian.the republican nation-state is incapable in the long run of resisting this uprising of power because the legally delimited public power at its disposal is oriented toward ends and interests.it is limited and finite and, therefore, cannot be increased in the way that the imperialistic-reflexive power can be. in the long run, the power of the state cannot withstand the continuously growing power of social imperialism. (2) this is the original thesis that underpins the first seven chapters of arendt's book on totalitarianism.But in the eighth and ninth chapters dedicated to "continental imperialism" and the "decline of the nation-state and the end of the rights of man" respectively, she surprises the reader with a second thesis that connects the origin of totalitarianism to the nation-state itself.her critique of modern society now reaches into its democratic state-constitution. 
in the sixth chapter she still denounced Burke's critique of the rights of man as the common source of "German and english race-thinking" (arendt, 1979: 175; arendt, 1991: 292).then, in the ninth chapter, she adopts without question his critique of the french revolution.human rights are not only worthless to a people without a state to whom they should actually apply, but these rights of the "naked savages" (arendt, 1979: 300) (Burke) carry the seeds of a new barbarism of europe within them.instead of elevating the "naked savages" into civic legal persons, the doctrine of human rights lowers the natural rights of this civic person to the status of the "naked savages".Just like the human rights of the subject are reduced to a world-less status, so the people, in the legal-constitutional concept of the nation and of popular sovereignty, are reduced to a socially retrograde and manipulable mob; the solitary masses. in the end, arendt explains the imperialistic outbreak of violence by appealing to the "horror" and "shock" that "overcame europeans when they got to meet the negroes (neger), not as individuals, exported exemplars, but as the population of an entire continent […]. the horror before the fact that even people like these were human beings, and the immediately following decision that such 'human beings' could under no circumstances be their equal.[…] what distinguished them from other peoples was not the color of their skin; what also made them physically frightening and repulsive was their catastrophic […] belonging to nature, against which they could not hold up a man-made world.their unreality and ghostly wandering is due to this lack of worldliness […].their unreality lies in the fact that they are human beings and, nevertheless, completely lack a specifically human reality.it is this given unreality of the aboriginal tribes together with their lack of worldliness that seduced europeans into murderous destruction and utter lawlessness they displayed in africa" (arendt, 1991: 322, trans. lemm). 
1 1 trans.note.arendt oversaw both the american and German editions of The Origins of Totalitarianism.the German edition often varies significantly from the american one.in this particular case, the corresponding passage in the american editions reads as follows: "the Boers were never able to forget their first horrible fright before a species of men whom human pride and a sense of human dignity could not allow them to accept as fellow men.this fright of something like oneself that still under no circumstances ought to be like oneself […].what made them different from other human beings was not at all the color of their skin but the fact that they behaved like a part of nature, that they treated nature as their undisputed master, that they had not created a human world, a human reality, and that therefore nature had remained, in all its majesty, the only overwhelming reality -compared to which they appeared to be phantoms, unreal and ghost-like.they were, as it were, 'natural' human beings who lacked the specifically human character, the specifically human reality, so that when european men massacred them they somehow were not aware that they had committed murder" (arendt, 1979: 192).contemporary readers will find it offensive that arendt depicts african peoples living in tribal societies as unhistorical raw nature and, simultaneously, as the original figure of the mob in order to keep them apart from the historically educated, west european bourgeoisie.But this gesture typified the european bourgeoisie's way of thinking well into the 1960s.it is thus not without good reason that adorno, who himself thought this way, drew this bourgeoisie together with fascism when he wrote that the bourgeois was a virtual nazi.only since the mid-1960s, with the eruption of cultural revolutions such as that of the american "rights revolution" and with the increase in global protest and minority movements, was this self-understanding of white europeans and americans, oriented by the eurocentric opposition between civilization and barbarism, shattered and rendered insecure.But arendt's point is not that the sight of blacks led to european imperialism: the latter, as we saw, is explained sociologically, as the consequence of the structural dynamics of modern european society.But the sight of "negroes," who appear "physically frightening and repulsive" due to "their belonging to nature, against which they could not hold up a man-made world," in short, the sight of inhuman humans, according to arendt, did away with the last civilized inhibitions of europeans, who already had to bear with these inhuman humans as an exported "surplus population" and as slaves, "seducing" them into "murderous destruction and utter lawlessness".this is the context in which arendt changes her mind, at the end of her book on totalitarianism, regarding Burke, the nation and the previously celebrated Jacobin human rights patriotism.without realizing the self-contradiction, she revises her position and denounces human rights as the anticivic and counter-civilizational (and therefore ineffective) "rights of naked savages".the nation, which rests its nature-given sovereignty and "national self-determination" (arendt, 1991: 434) on the basis of such human rights, now represents for her the irruption of raw nature into the civilized reign of a politics centered on the state (staatlich verfasster politik).popular sovereignty becomes the "sovereignty of naked savages," and the state that resists this transformation turns out to be the dusty 
old liberal constitutional state of 19 th century German jurisprudence. it is this state which finally had to collide against the "secret conflict between state and nation," breaking to pieces (arendt, 1979: 230).from this perspective, whenever the constitutional state unites with the nation in a democratic republic, the social enemy that wants its ruin has already been let into the house.the birth defect, the "tragedy of the nation-state," (arendt, 1979: 230; arendt, 1991: 370) is the democratic separation of powers.once universal suffrage has been introduced, allowing the masses to enter politics, the once primary parliamentary legislature must now face, in the short or long run, the "conquest" and "instrumentalization" of the state at the hands of the nation (arendt, 1979: 230; arendt, 1991: 372).for arendt, the nation is now the enemy of the state within the liberal state.later, in On Revolution, she will remark almost casually that "the birth of the nation-state is the downfall of the free republic" (arendt, 1974: 317, trans.lemm). (3) against the background experience of national socialist and stalinist terror, arendt draws the conclusion that on the basis of the nation and human rights, one cannot make a state.this means that no truly public life, no republic, can be built on those foundations.and yet, the bare constitutional state is much too abstract an entity for it had already demonstrated its fatal weakness to serve as such a foundation during its inglorious downfall under the nazi reich.during the 20 th century, where was one to find a way out of the totalitarian dilemma, a way that leads to the "light of humanity" (arendt, 1991: 362, trans.lemm)? at the end of The Origins of Totalitarianism, arendt retains only two formulas that could replace the failed attempt by the Jacobin nation-state to give a new form to the old idea of the republic by means of the unity of popular sovereignty and human rights.the first formula is that of "the right to have rights" (arendt, 1991: 462), which replaces the idea of human rights.thereby arendt clearly reduces human rights to the right to membership in a particular civilized community, whether the latter is democratically or autocratically constituted.the second formula is that of "natality," the augustinian-christian hope that at "every end in history," in this case the end of world war ii, "a beginning can be made" (arendt, 1979: 479; arendt, 1991: 730).how these two formulas relate to each other and how they can be understood as an alternative to the nation-state is shown by arendt's renewal of the classical positive-freedom conception of power and her return to the revolutionary origin of modern republicanism in the 18 th century. 
the concept of power found in The Human Condition and On Revolution is a surprising reinterpretation and extension of the concept of power found in The Origins of Totalitarianism.reflexively constituted and therefore unlimited power -"power that with each of its movements produces more power"-no longer appears as exclusively destructive.power is no longer just the totalitarian self-increasing power.through the reinterpretation of a notion of power, which originally was counterfeited for imperialism and totalitarianism, arendt succeeds in drawing out of the modern conception of reflexive power a productive, simultaneously modern and classical, republican feature that can compete with the complexity of an imperial notion of power.this surprising synthesis of a modern concept of highly mobile, infinitely increasable and completely reflexive power with a classical understanding of politics as the public affair (res publica) is an impressive innovation in political theory. it shows that arendt's sense for avant-garde modernism is as intense as her love for the ancient cradle of our political culture.the general features of reflexive power, namely, that power can only become more powerful through power and that the "separation of powers makes a community more powerful than the centralization of power" (arendt, 1974: 198), also apply to the power created by citizens in their public assemblies and common action.power can only be increased through reflexive differentiation and decentralization.arendt shares this thesis with luhmann, for whom it is clear that "absolute power" in a complex society means "small power" (luhmann, 1988: 30).although power can be destroyed through violence, it cannot be "realized" through violence (arendt, 1981: 193; arendt, 1974: 196).violence can be monopolized, but power cannot.power is not at the disposal of those who are in power.it is a public thing and not a private property: "power springs up between men when they act together and vanishes the moment they disperse" (arendt, 1958: 200; arendt, 1981: 194). By means of a federation that unites "separate and independently constituted bodies," common power is literally "predetermined" to "constant enlargement" (arendt, 1979: 168; arendt, 1974: 218).public networked power is -and here arendt's argument is amazingly similar to that of the otherwise despised american pragmatists-"destined to grow" and, since it is generated spontaneously and unpredictably out of common action, it favors the "desire for experimentation" (arendt, 1974: 222).the power of the many knows no membership.this is why it is "from the start open to all," that is, for those who are willing to mutually commit themselves to a new beginning.spontaneity, non-disposability and de-centered networking, transform the powerless public assembly, the bare talking and consulting with each other, into an "unlimited power" (arendt, 1979: 178; arendt, 1974: 228).not unlike the imperialistic power of "conquest" (arendt, 1974: 218), this power is almost infinitely augmentable. 
the experience of all revolutions is that a "popular revolt against materially strong rulers… may engender an almost irresistible power even if it foregoes the use of violence in the face of materially vastly superior forces" (arendt, 1958: 200-1; arendt, 1981: 194).for this reason, the imperial and dominating power that is violently established and predisposed to the use of violence can preserve itself within the political sphere of public affairs only as long as it is backed up by the "living power of the people": "all political institutions are manifestations and materializations of power; they petrify and decay as soon as the living power of the people ceases to uphold them" (arendt, 1972: 140; arendt, 1970: 42). the concept of the living power of the people in the previous passage from on violence is a literal reference to the american "people".this is why it should under no circumstances be mistaken with a concept of "people" understood as a homogeneous collective subject.this living power is the power (the capacity, potentia = potential) of action, that is, of acting in common, of "acting in concert" (Burke).for an adequate understanding of arendt's text it is important to understand that this capacity is in no way good as such.rather, like in the case of Jesus, who arendt likes to cite in this context, it is a deeply ambivalent capacity.acting is the power that has brought forth the roman republic, the republican nation state, the catholic church, the rule of the committee for public safety, napoleon's dictatorship, the Bolshevik dictatorship of the proletariat, and the islamic republic.arendt adopts Jesus' saying, "for they know not what they do," as the (paradoxical) condition of possibility of political (and aesthetically-innovative) action (arendt, 1994: 74).action is a miracle which gives rise to novelty (arendt, 1994: 192, 221) which can, in turn, lead towards the "light of humanity," but may also lead to terrible failure or horror without end.one cannot know beforehand, "for they know not what they do".the "men of the revolution" always stare into an "abyss" (arendt, 1979a: 30, 185).after all, one "cannot rely" on power and action is "the most dangerous of all human capacities and possibilities" (arendt, 1994: 363).republican power distinguishes itself from imperialistic power through the feature of action.as acting in common, this republican power is powerless over the will of others (arendt, 1981: 194).power as potentia is positive freedom (arendt, 1974: 194).those who have power can do whatever they want.common power is thereby not opposed to the power of the single individual.Just as in spinoza's, hegel's and dewey's theories of power, the power of the individual grows along with the power of the community but without surrendering itself to the community (Brunkhorst, 2000: 225).Just as in dewey's ideal democracy, arendt's ideal republic optimizes the chances and possibilities, that is, the power of the individual to realize itself in and through the increasing power of the community.in dewey's ideal democracy and in arendt's ideal republic, the power that minorities and marginal voices have to be heard and recognized is greater than in any other regime.in taking an oppositional stance, in recognizing the frustration of expectations, and in abnormal and transgressing behavior, there lies not only "the capacity to correct mistakes" (arendt, 1981: 236) but also the egalitarian origin of republican power.everyone who says "no" or whose expectations have been frustrated 
(arendt, 1979a: 67, 81, 86, 130) has this power.and the right to realize it is what arendt calls "the right to have rights," in contrast to the abstract conception of human rights.Because the former right is determined from the start by the capacity of innovative negation, which arendt considers the basic political capacity, namely, the capacity to act politically; it is thus a right to have rights.accordingly, such a right, understood as a right to partake in a common political world, is from the start not an abstract but a concrete political right: the right of human beings to belong to a civic association. Because individual action is always realized before others and always brings a new perspective into the light of the public sphere, arendt's concept of action corresponds to what evolutionary theory calls the negation-and variation-potential of all activity.in contrast, the power of the community consists of a process of selection that transforms the new into "conventional" action, into an "opinion upon which many have agreed in public" (arendt, 1970: 45; arendt, 1974: 96). the question that arendt addresses in On Revolution is how this power of acting in common, which only exists in its own realization and which, as we saw, cannot be relied upon, may nonetheless be stabilized into a permanent community without thereby fixating the "fleeting instant of acting in common" (arendt, 1981: 195) into a rigid order.the question is not how power can be limited but how "to found a new one" (arendt, 1965: 148; arendt, 1974: 193).arendt tries to conjure an answer to this question by considering an impressive but also irritating array of possibilities, first by working her way through the example of the united states constitution, and then by considering the utopia of the system of worker's councils (rätesystem).although it is now oriented towards robespierre instead of Jefferson, arendt's late republicanism, found in On Revolution, remains a modern republicanism not unlike her earlier Jacobin republicanism.rather, it is her modernism that has shifted its ground, moving from a belief in the autonomy of human rights to a belief in the innovative-creative potential of continuity-disrupting action.arendt is no longer concerned with the late roman question of how rome can be refounded in the hour of its downfall, but instead with the question that follows from the modern concept of revolution, namely, how can the power to found a "new rome," which had never existed before, be established and constitutionally stabilized (arendt, 1974: 273; arendt, 1979a: 185, 195, 197). 
in the terms of systems-theoretical jargon, arendt's question would be: how can variation generate stability?i think that is an interesting question that touches the nerve of modern society and its political self-organization.But the innovative, world-building potential of common action remains, even in the later arendt, essentially utopian.that is not an objection, but it betrays a weakness in her political thinking with regard to institutions.On Revolution describes the revolutionary founding of a modern constitutional regime out of a type of political constitution that is constituent of power.this type of constitution distinguishes the french and the american revolution (contrary to arendt's own restriction of this type of constitution to the american one) from the type of constitution that delimits power, developed in england and in prussia, as a result of the reform of absolutist state power.however, because arendt excludes from the start the possibility of founding a constitution that is constituent of power on the autonomy of human rights (although such autonomy characterizes the constitutional history of both the united states and france), she necessarily misses the ordinary revolutionary power that has migrated from the revolutionary constitution to the body of statutes and ordinances that define the state administered law of modern democracies.in the end, arendt's arguments against the foundational role played by the autonomy of rights are motivated by the affect of the cultivated bourgeois against democracy.this is why her interpretation of the united states constitution obscures the strictly egalitarian pattern of its division of powers, and instead highlights the role played by those organs of republican power -sometimes the supreme court, others the senate or the town hall meetings of the past-which are characterized by the relatively low "number" of citizens participating in them and whose property is relatively equally distributed among the few.rather than interpreting the division of powers of the united states constitution in terms of the different steps through which a process of democratic will-formation acquires concretion, she interprets these divided powers as institutions containing a clever elite of politically (and not only technocratic) thinking heads designed to tame and bind the will of the people.this is the reason for arendt's insistence on separating, as strictly as possible, the "seat of power" that lies in the people from the "source of law" that lies in the constitution (see also, arendt, 1974: 204, 229 f., 290 ff., 346 ff.).as a consequence, arendt reduces the entire legal apparatus of a parliamentary democracy to the "non-revolutionary" function of "delimitation," which, in turn, is supposedly "independent from the form of the state" (arendt, 1974: 186).what reappears here once again is nothing but the presumably neutral, government-invariant constitutional state imagined by the ideology of German public right which defines, in purely instrumental terms, the social function of law and of the constitution as the protection of the citizens against the state and against the people (Brunkhorst, 2003).arendt fails to see what contemporary democratic theory values about the type of constitution that is constituent of power.what made modern democracy possible was the constitutional innovation of both revolutions of the 18 th century, according to which, popular sovereignty is related to individual rights through a legal system characterized by a thoroughly 
democratically determined division of powers (habermas, 1992).the entire point of the division and coordination of political and legal powers in a democratic constitution is the guarantee for a free and equal will-formation of those who are subject to the normatively binding consequences of such a will. in this view, the doctrine of the division of powers, as expressed in the words of herrmann heller, "is nothing other than a technique […] that allows the law-making volonté générale found in the law to rule" (heller, 1928: 39 f., trans.lemm).the organizational norms of the constitution, the "entire system of checks and balances, such as elections, countersigning, parliamentarism, referendum, and popular initiatives," the juridification of the rights and duties of the president, the government, the legislative, etc. exist only for the sake of "legally guaranteeing that governmental power springs from the people" (heller, 1971: 98, trans. lemm).the problem of institutionalizing the spirit of revolution in a pluralistic and individualistic constitution of free citizenship has a solution that is compatible with the principles of democracy only when a world-building and world-renewing concept of power is united with the autonomy of human rights proclaimed by the revolutions of the 18 th century.such a unity depends on giving an interpretation of the concept of a people that lies at the basis of the principles of democracy and is inclusive and open to human rights.according to the interpretation offered by friedrich müller, the concept of a people, from which all state power flows and which is the only legitimate source of positive law, refers to the "whole which is subject to norms".the people therefore must be understood as an "open concept" whose "delimitation" must remain "the task of the political process" (müller, 1997: 24, trans. lemm). 
From this perspective, one function of Arendt's innovative and spontaneous power would be to call into question the actual "delimitation" of the concept of the people, recognizing those who have been excluded from the status activus. Constituent power would consist in a thematization of such exclusions by articulating them and protesting against them in the process of public will-formation. The political process that concretizes the principles of the constitution must allow for exactly that thematization of exclusion without thereby harming the equality of citizens. If social movements, such as those which triggered the great reforms of universal suffrage, the freedom of association and opinion, the worker's movement and the women's movement, are to transform themselves into public power, they stand in need of the innovative force of world-disclosing sentences such as, for example, those found in the Communist Manifesto. But without the principle of equal human rights one would lose the possibility to fight for the right to new freedoms: "the same equality of the Declaration of Independence," writes John Rawls, "which Lincoln invoked to condemn slavery can be invoked to condemn the inequality and oppression of women" (Rawls, 1993: xxxi). This, it seems, is what Jefferson meant when he gave an answer to the question of whether a constitution can be rendered permanent: "I think not," for "nothing is unchangeable but the inherent and unalienable rights of man" (Arendt, 1979: 233; as cited by Arendt, 1974: 299). Political praxis is nothing but the unfolding of this paradox. Political praxis resolves this paradox by getting caught up in it, over and over again, and therein consists the productivity of power.
References
Arendt, Hannah. 1965. On Revolution. London: Penguin Books.
Arendt, Hannah. 1970. Macht und Gewalt. München: Piper.
Arendt, Hannah. 1972. On Violence. In Crises of the Republic. San Diego, New York, London: Harcourt Brace and Company.
Arendt, Hannah. 1974. Über die Revolution. München: Piper.
Arendt, Hannah. 1976. Die verborgene Tradition. Frankfurt a. M.: Suhrkamp.
Arendt, Hannah. 1979. The Origins of Totalitarianism. San Diego, New York, London: Harcourt Brace and Company.
Arendt, Hannah. 1979a. Vom Leben des Geistes, Bd. 2: Das Wollen. München: Piper.
Arendt, Hannah. 1981.
Neumann, Franz. 1993. Behemoth. Struktur und Praxis des Nationalsozialismus 1933-1944. Frankfurt a. M.: Suhrkamp.
Fraenkel, Ernst. 1999. Gesammelte Schriften, Bd. 2: Nationalsozialismus und Widerstand. Baden-Baden: Nomos.
Fraenkel, Ernst. 2001. Doppelstaat. Hamburg: Europäische Verlagsanstalt.
Habermas, Jürgen. 1992. Faktizität und Geltung. Beiträge zur Diskurstheorie des Rechts und des demokratischen Rechtsstaats. Frankfurt a. M.: Suhrkamp.
Sensitivity Analysis of the Tetrapolar Electrical Impedance Measurement Systems Using COMSOL Multiphysics for the Non-uniform and Inhomogeneous Medium
One of the major problems with Electrical Impedance Tomography (EIT) is the lack of spatial sensitivity within the measured volume. In this paper, the sensitivity distribution of the tetrapolar impedance measurement system was visualized considering a cylindrical phantom consisting of homogeneous and inhomogeneous media. Previously, the sensitivity distribution was analysed analytically only for the homogeneous medium with simple geometries, and the distribution was found to be complex. For inhomogeneous volume conductors, however, the sensitivity analysis needs to be done using the finite element method (FEM). In this paper, the results of a sensitivity analysis based on the finite element method using the COMSOL Multiphysics simulation software are presented. A cylindrical, non-uniform, inhomogeneous phantom, which mimics the human upper arm, was chosen for the experiments, in which different parameters of interest were varied. A successful method was found for controlling the region of interest where the sensitivity is maximal. By refining the finite element mesh and introducing multifrequency input currents (up to 1 MHz), this simulation method can be further improved.
I. Introduction
Electrical Impedance Tomography (EIT) is a technique to visualize the spatial distribution of electrical conductivity inside an object. Electrical impedance measurements on the human body have found a variety of applications in clinical diagnosis and research, including the measurement of physiological function [3], tissue characterisation [4] and imaging [5]. In EIT, usually an alternating current of about 1 mA is injected into one pair of electrodes and voltages are measured from the other pairs. Current injection is then moved to another, commonly adjacent, pair of electrodes so that all electrode pairs are used (fig. 1.1). Several electrode configurations can be used in EIT; however, they are all based on tetrapolar measurements because of their ability to minimize the impact of the electrodes' contact impedance on the measurements. The tetrapolar electrode configuration has been used in a number of research areas such as the respiratory system [6], the cardiac system [7], cervical neoplasia, and tissue characterization [4]. However, there is very little information available on the sources of error when making tetrapolar impedance measurements. The spatial sensitivity of tetrapolar impedance measurements is complex [1], having regions of negative sensitivity, which may introduce large errors when measuring the impedance of heterogeneous materials. Earlier, much of the work on sensitivity analysis was done analytically, based on phantom experiments and simulations of simple geometries [1,2]. The availability of the COMSOL Multiphysics package has enabled us to obtain numerical solutions for complex geometries using the finite element method. Through finite element simulation we can obtain a large number of data points within a chosen range, which is impossible to obtain through experimental techniques. Our aim was to conduct a computer simulation study with COMSOL Multiphysics in order to investigate the sensitivity distributions in a tetrapolar measurement system by applying the Geselowitz lead field theory [8]. The purpose of the study was to gain further understanding of the problems in EIT measurement.
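To make the adjacent-pair switching scheme described above concrete, the following minimal sketch (plain Python, not taken from the study) simply enumerates the drive/measurement combinations of a 16-electrode ring; it reproduces the 256 readings mentioned in the figure caption below and deliberately ignores the common practice of skipping measurements on the driving electrodes, which the text does not discuss.

```python
# Hypothetical enumeration of an adjacent-pair EIT protocol (illustration only).
n_electrodes = 16

# Adjacent electrode pairs, wrapping around the ring: (0,1), (1,2), ..., (15,0).
adjacent_pairs = [(i, (i + 1) % n_electrodes) for i in range(n_electrodes)]

# Every combination of a drive pair with a measurement pair.
protocol = [(drive, measure) for drive in adjacent_pairs for measure in adjacent_pairs]

print(len(protocol))  # 256 voltage readings for a full rotation, as in Fig. 1.1
```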
In this paper, the simulation results obtained with a 3-D model mimicking the anatomy of the human upper arm are presented.
Fig. 1.1. Diagram of a 16-electrode EIT system. Current, I, is imposed across the core Ω through a pair of adjacent electrodes while the voltage distribution, V, is measured between each set of neighbouring adjacent electrodes. After the voltage measurements around the entire perimeter, the current drive electrodes are rotated to the neighbouring electrode pair and the voltages at all electrodes are measured once again. This process continues until the 256 sets of voltage data are obtained.
II. Sensitivity Analysis
Sensitivity can be defined as the fractional change of the transfer impedance (the ratio of the measured potential to the applied current) with the change of conductivity inside a particular region. Consider the divergence theorem of Gauss in an arbitrary closed bounded region V of conductivity σ, whose boundary Ω is a piecewise smooth surface (fig. 2.1),
∮_Ω (φ ∇ψ) · ds = ∫_V ∇ · (φ ∇ψ) dv,
where φ and ψ are scalar potential functions, ds is the unit vector directed outward normal to the boundary, and the volume integral is taken over the entire bounded region. Here A, B are the drive electrodes and C, D are the potential (receive) electrodes, and φ and ψ are the potentials set up by unit currents injected through A, B and through C, D, respectively. Because the region is surrounded by an insulating boundary Ω (fig. 2.1), the surface integral is zero except at the electrode sites where current passes into and out of the medium. It should be noted that Geselowitz's theorem is only valid for a small change in conductivity within a semi-infinite, homogeneous and isotropic volume conductor. Assuming the volume conductor consists of a number of discrete elements of uniform conductivity, the change in transfer impedance for unit current is then given by
ΔZ = −∫_V Δσ (∇φ · ∇ψ′) dv,   (2.7)
where ∇φ is the potential gradient at the point (x, y, z) before the conductivity change, due to the passing of unit current between the drive electrodes A, B, and ∇ψ′ is the field at this point after the change occurred, due to the injection of unit current between the receive electrodes C, D (fig. 2.1). Both potential gradients, ∇φ and ∇ψ, are the electrical fields induced by unit currents injected at electrodes A, B and at electrodes C, D, respectively. It appears that there is no analytical solution for the sensitivity S, which is the scalar product of two triple integrals over the coordinates x, y and z; a finite element model can be used to obtain the solution [9].
III. Material and Methods
In tetrapolar impedance measurement, it is intuitively understood that not all small sub-volumes in the material contribute equally to the measured impedance. Volumes between and close to the electrodes contribute more than volumes far away from the electrodes. Hence, a careful choice of electrode size and placement makes it possible to focus the measurement on the desired part of the material. Using finite element modelling, a plot of the sensitivity of a given material can easily be obtained, and this method provides a very valuable tool for experimental design. The sensitivity of a small volume dv within the biomaterial is a measure of how much this volume contributes to the total measured impedance [8]. If the resistivity varies within the material, the local resistivity must be multiplied by the sensitivity to give a measure of the volume's contribution to the total measured impedance. For the tetrapolar impedance measurement system, the sensitivity is computed in the following way: 1. A current, I, is injected between the two drive electrodes, and the current density J_d in each small volume element of the material is computed as a result of this current. 2. The same current is injected between the receive electrodes, and again the resulting current density J_r in each small volume element is computed. 3. The vector dot product of J_d and J_r in each volume element, divided by the current squared, is the sensitivity of that volume element; if it is multiplied by the resistivity ρ in the volume, this volume's contribution to the total measured impedance Z is directly obtained. Hence, the sensitivity, S, is as follows:
S = (J_d · J_r) / I².   (3.1)
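As a rough numerical illustration of equation (3.1), the sketch below evaluates the dot product of two current-density fields on a voxel grid and sums the resulting impedance contributions. It is an assumption-laden stand-in rather than the study's actual COMSOL workflow: the function name, array shapes and the synthetic random fields are hypothetical, and exported FEM current densities would replace them in practice.

```python
import numpy as np

def tetrapolar_sensitivity(j_drive, j_receive, current, resistivity, voxel_volume):
    """Pointwise sensitivity S = (J_d . J_r) / I**2 on a voxel grid, plus the
    total transfer impedance estimate Z = sum(rho * S * dv).

    j_drive, j_receive : arrays of shape (nx, ny, nz, 3), current densities [A/m^2]
    current            : drive/receive current magnitude I [A]
    resistivity        : array (nx, ny, nz) of local resistivity [ohm*m]
    voxel_volume       : volume of one voxel [m^3]
    """
    # Dot product of the two fields in every voxel (equation 3.1).
    sensitivity = np.einsum('...k,...k->...', j_drive, j_receive) / current**2
    # Each voxel's contribution to the measured transfer impedance.
    contribution = resistivity * sensitivity * voxel_volume
    return sensitivity, contribution.sum()

# Example with synthetic fields on a small grid (stand-ins for exported FEM data).
rng = np.random.default_rng(0)
shape = (20, 20, 20)
j_d = rng.normal(size=shape + (3,))
j_r = rng.normal(size=shape + (3,))
rho = np.full(shape, 2.0)                       # uniform resistivity, ohm*m
S, Z = tetrapolar_sensitivity(j_d, j_r, current=1.0, resistivity=rho,
                              voxel_volume=(1e-3) ** 3)
print("negative-sensitivity fraction:", np.mean(S < 0))
print("transfer impedance estimate  :", Z)
```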
Equation (3.1) also demonstrates the reciprocal nature of the tetrapolar system: under linear conditions the drive and receive electrodes can be interchanged without changing the measured values. In this work, a cylindrical 3-D model consisting of skin-dry, muscle, fat-average-infiltrated, bone-cortical and bone-marrow tissue layers was built using COMSOL Multiphysics 4.3 (fig. 3.1). These layers were assumed to be a replica of the human upper arm anatomy. The conductivity (S/m) and relative permittivity values of all these tissues were taken at 100 kHz from the literature. The solid cylindrical model could easily be converted to a homogeneous medium by assigning the same conductivity and relative permittivity values to all the layers. In COMSOL Multiphysics, the grey-scale values were replaced with tissue types or organs; this process is called segmentation.
Here, the data were segmented into the most important tissue types: muscle, blood, skin-dry, fat, bone-cortical, and bone-marrow. After segmentation, a 3-D data set was obtained in which each voxel has a name or number that represents a tissue type. When segmentation was complete, electrodes were added on the surface of the cylindrical model in a linear fashion (fig. 3.1). The modelling work started with electrodes of 5 mm radius and 5 mm height. An alternating current of magnitude 1 A was injected through the drive electrode pair using the 'electric current interface' in the 'AC/DC module' of COMSOL. The 3-D sensitivity distribution of the tetrapolar measurement was then computed using the expression of Fred-Johan and Jan [10], which has the same form as equation (3.1). The distribution of sensitivity throughout the structure was determined in COMSOL by the finite element method (FEM), which is based on a set of partial differential equations. The results are an approximate solution that numerically represents the distribution of sensitivity, which would be considerably difficult to obtain manually. The graphical representation of the sensitivity was produced after meshing the geometry for the finite element method (FEM). Hence, our 3-D cylindrical model was sectioned into a number of simple geometric elements (e.g. triangular, tetrahedral, brick, hexahedral, etc.). The collection of elements provides a discrete, piecewise approximation of the object's curves. The number of elements is finite, and each element has a set of known physical laws and finite parameters applied to it. The process therefore creates a set of partial differential equations that are solved simultaneously for the whole system. In this work, the continuous medium was subdivided into a mesh of triangular elements inside which the conductivity was assumed constant and the electric potential varied linearly. Triangular elements were chosen for this work because of their simplicity and their suitability for fitting the boundaries of the different conductivity regions (fig. 3.1b). In this method, the field pattern set up inside the arm and the current density in each region were analysed. This numerical technique was used for both the homogeneous and the inhomogeneous media. From the current density, the sensitivity was computed for both the homogeneous and the inhomogeneous medium, and the sensitivity distribution results for the two media were compared with each other. In addition, the effect of changing the electrode dimensions on the sensitivity distributions was also checked.
IV. Results
The simulation results for the homogeneous and the inhomogeneous medium are presented separately.
Homogeneous conductivity
Fig. 4.1 displays the sensitivity fields for our cylindrical model of homogeneous conductivity with an electrode spacing of 50 mm, at depths of 10 and 60 mm from the surface. The sensitivity fields showed highly localized areas of positive sensitivity between the receive electrode pair and negative sensitivity between the drive and receive electrode pairs at the lower depth (fig. 4.1a). These regions of positive sensitivity increased with increasing depth from the surface. However, the magnitude of the sensitivity decreased substantially with increasing depth (by almost 25% of the maximum for a 5 mm increase of depth).
However, these regions of positive sensitivity started diminishing, and a region of negative sensitivity became dominant between the receive electrode pair at greater depths below the surface (fig. 4.1b). The maximum sensitivity value on a plane was found to decrease exponentially with depth (fig. 4.2); at a depth of 90 mm from the surface the sensitivity had fallen by 90% of the maximum sensitivity at a depth of 1 mm. The maximum integrated sensitivity occurred on a plane at a depth of approximately one-third of the drive-receive electrode spacing (fig. 4.3). The change of sensitivity with the change of electrode dimensions was also observed. To do this, the drive-receive electrode spacing was set to 50 mm and the sensitivity was observed at a fixed depth of 15 mm (the depth at which the maximum integrated sensitivity occurred, 1/3 of 50 ≈ 15) while the diameter of the electrode was varied. Fig. 4.4 shows that the sensitivity changes linearly with the change of electrode dimensions.
Heterogeneous conductivity
Here the different cylinders represented different tissue layers with different conductivity values. The sensitivity distribution showed different results from those of the homogeneous medium. Fig. 4.7 shows the change of integrated sensitivity with depth for a drive-receive electrode separation of 80 mm. Here the integrated sensitivity over a plane is maximal at a depth of 40 mm. So, for the heterogeneous medium the maximum integrated or mean sensitivity occurred at half (1/2) of the drive-receive electrode spacing. The mean sensitivity is shown up to a depth of 100 mm; at depths above 140 mm it has fallen by 99% of the maximum value. The change of integrated sensitivity with depth for an electrode spacing of 50 mm is shown in fig. 4.6; the integrated sensitivity over a plane is found to be maximal at a depth of 25 mm. To confirm the use of sensitivity as an indicator of measurement depth, the tetrapolar configuration was again modelled with a range of electrode separations (drive-receive) and electrode dimensions (only the diameter was changed; the height was kept fixed). In fig. 4.8, the integrated sensitivity is shown against the drive-receive electrode spacing at a depth of 25 mm. The integrated sensitivity decreases almost linearly for smaller drive-receive separations and then approaches a constant value. In a plane, the overall sensitivity decreases because the negative-sensitivity region grows with the drive-receive spacing, while the positive-sensitivity region shrinks with increasing drive-receive separation. In fig. 4.9, the integrated sensitivity is shown against the electrode dimensions at a depth of 25 mm and a drive-receive spacing of 50 mm. Interestingly, the integrated sensitivity again increased with the electrode diameter, as in the homogeneous case.
V. Discussion
The simulation results presented in this paper can be used to predict the positive and negative sensitivity regions, which are correlated with higher and lower impedance regions of an object. The previous analytical works on the sensitivity analysis of a homogeneous medium considered a simple geometry; the sensitivity distribution at a point inside a volume conductor was calculated by a programme written in MATLAB using the Geselowitz lead theorem [1,2]. The previous works found a mean sensitivity of zero at the surface layer, a maximum average sensitivity on a plane at one-third of the electrode spacing, and regions of negative sensitivity down to half of the electrode spacing [1,2]. This work on sensitivity calculation using the finite element method (FEM) confirmed those findings for the homogeneous medium.
However, for the inhomogeneous medium the maximum average sensitivity was found on a plane at a depth of half the drive-receive electrode spacing. Beyond the maximum plane, the mean sensitivity falls more slowly in the inhomogeneous medium than it does in the homogeneous medium. The FEM-based solution of the sensitivity distribution of the tetrapolar measurement in this work has also shown that the change of sensitivity with electrode dimension gives similar results for both the homogeneous and the inhomogeneous medium: cylindrical electrodes with a larger diameter provide better results in the sensitivity measurements.
VI. Conclusion
The complex resistivity distributions of the body, coupled with the complex sensitivity distribution of the tetrapolar measurement technique, have the potential to produce unrealistic estimates of the transfer impedance. The sensitivity distributions obtained by the finite element method (FEM) for a complex-shaped object with a heterogeneous tissue structure can be considered more realistic than the previous works based on analytical methods and simple geometries by Brown et al. [1] and Islam et al. [2]. Moreover, if the number of elements and nodes is increased with a more refined finite element mesh, more accurate results could be obtained. In addition, multifrequency (up to 1 MHz) sensitivity analysis is necessary to study the complex nature of the human anatomy.
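As a small follow-up to the plane-integrated (mean) sensitivity used throughout the Results above, the sketch below shows one way such a depth profile could be extracted from a simulated 3-D sensitivity map and the depth of its maximum located. The array layout, the toy analytic profile and the 1 mm plane spacing are assumptions made for illustration, not values from the paper.

```python
import numpy as np

def mean_sensitivity_by_depth(sensitivity, depth_step_mm):
    """Average the sensitivity over each horizontal plane and report the
    depth at which the plane-averaged (integrated) sensitivity is largest.

    sensitivity   : array (nz, ny, nx), indexed from the surface downwards
    depth_step_mm : spacing between planes in millimetres
    """
    plane_mean = sensitivity.mean(axis=(1, 2))          # one value per depth plane
    depths = np.arange(sensitivity.shape[0]) * depth_step_mm
    best = int(np.argmax(plane_mean))
    return depths, plane_mean, depths[best]

# Example: a toy profile that rises and then decays with depth, loosely
# mimicking the reported behaviour; it stands in for a real FEM result.
z = np.arange(60)                          # 60 planes, 1 mm apart
toy_profile = np.exp(-z / 20.0) * (z / 15.0)
toy_map = toy_profile[:, None, None] * np.ones((60, 8, 8))
depths, profile, peak_depth = mean_sensitivity_by_depth(toy_map, depth_step_mm=1.0)
print("depth of maximum mean sensitivity [mm]:", peak_depth)
```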
Thyroid dysfunction in Chinese hepatitis C patients: Prevalence and correlation with TPOAb and CXCL10
AIM: To investigate the relationship among pretreatment serum CXC chemokine ligand 10 (CXCL10), thyroid peroxidase antibody (TPOAb) levels and thyroid dysfunction (TD) in Chinese hepatitis C patients.
METHODS: One hundred and thirty-nine treatment-naive genotype 1 chronic hepatitis C patients with no history of TD or treatment with thyroid hormones were enrolled in this study. Patients underwent peginterferon alfa-2a/ribavirin (PegIFNα-2a/RBV) treatment for 48 wk, followed by detection of clinical factors at each follow-up point. Hepatitis C virus (HCV) antibodies were analyzed using microsomal chemiluminescence, and serum HCV RNA was measured by real-time PCR assay at 0, 4, 12, 24 and 48 wk after the initiation of therapy and 24 wk after the end of therapy. To assess thyroid function, serum thyroid stimulating hormone (TSH), free thyroxine (FT4), free triiodothyronine (FT3) and TPOAb/thyroglobulin antibody (TGAb) levels were determined using chemiluminescent immunoassays every 3 mo. Serum CXCL10 levels were determined at baseline.
RESULTS: The prevalence of TD was 18.0%. Twenty-one (84.0%) out of twenty-five patients exhibited normal thyroid function at week 24 after therapy. The rate of sustained virological response to PegIFNα-2a/RBV in our study was 59.0% (82/139), independent of thyroid function. Pretreatment serum CXCL10 levels were significantly increased in patients who remained euthyroid compared with those who developed TD, and the risk of TD increases in female patients and patients who are positive for TPOAb at baseline.
Core tip: We present novel data on the influence of peginterferon alfa-2a/ribavirin (PegIFNα-2a/RBV) on thyroid function in Chinese genotype 1 hepatitis C virus (HCV)-infected patients over a 48-wk treatment period. The results demonstrate that the prevalence of thyroid dysfunction (TD) was 18.0%. Lower pretreatment serum CXCL10 levels were associated with PegIFNα-2a/RBV induced TD in genotype 1 HCV-infected patients, and female patients exhibited an increased risk for developing TD compared with male patients. Baseline TPOAb positivity may also be a risk factor for TD development. However, most (84%) of the TD cases were reversible. To our knowledge, this is the first study to investigate the association of CXCL10 levels with PegIFNα-2a/RBV induced TD in genotype 1 HCV-infected patients in China.
INTRODUCTION
Of the estimated 185 million people infected with hepatitis C virus (HCV) worldwide, 350000 die each year [1,2]. Currently, the standard treatment for chronic hepatitis C (CHC) patients in China is peginterferon and ribavirin in combination (PegIFNα-2a/RBV), with sustained virological response (SVR) rates of 54% to 80% [3,4]. Despite its success, interferon-alpha (IFN-α) has a well-documented side effect profile, including influenza-like symptoms and hematologic abnormalities, leading to dose reductions in up to 40% of patients and drug discontinuation in 14% of patients [5].
Thyroid diseases, such as the emergence of thyroid autoantibodies (TAs) and thyroid dysfunction (TD), are common in CHC patients and represent extrahepatic manifestations of HCV infection [6,7]. Subclinical thyroiditis occurs in 20% to 40% of CHC patients, and clinical thyroiditis occurs in 5% to 10% of CHC patients [8]. TD may result from IFN-based therapy. In some cases, IFN-induced TD may lead to the discontinuation of IFN therapy, thus representing a major clinical problem in hepatitis C patients receiving IFN-α therapy [8].
IFN-α-related TD has been widely investigated, and preliminary studies have suggested that there are at least two different models by which IFN-α may induce TD: immune-mediated effects or direct toxicity to the thyroid. IFN-α exerts various effects on the immune system, many of which may lead to the development of autoimmunity. Upon culture with human thyroid follicular cells, type I IFNs inhibit thyroid-stimulating hormone (TSH)-induced gene expression of thyroglobulin (TG), thyroperoxidase (TPO), and the sodium iodide symporter (NIS). A further study assessed TSH receptor, TG, and TPO gene expression levels in a rat thyroid cell line, and the results demonstrated that IFN-α has a direct toxic effect on the thyroid. Chronic HCV infection also appears to play a significant role in triggering thyroiditis among IFN-α-treated patients [8,9].
CXC chemokine ligand 10 (CXCL10 or IP-10), a member of the CXC chemokine family, is expressed in the liver of CHC patients and selectively recruits activated T cells to inflammatory sites [10]. Evidence also indicates that circulating CXCL10 levels increase in HCV-infected patients with autoimmune thyroiditis [11], potentially because CXCL10 recruits T-helper (Th) 1 lymphocytes. These cells secrete IFN-γ and tumor necrosis factor (TNF), promoting further CXCL10 secretion and perpetuating the autoimmune process [12,13].
Although most thyroid autoimmunity cases exhibit no clinical symptoms, they are often characterized by the expression of thyroid antibodies (TAs), including thyroperoxidase antibody (TPOAb) and thyroglobulin antibody (TGAb). Data from pooled studies revealed that the risk of developing TD in CHC patients with baseline TAs positivity was 46.1%, whereas this risk was only 5.4% in TAs-negative CHC patients [14]. Our preliminary results indicate that a positive TPOAb IgG2 subclass was a risk factor for TD in untreated HCV patients and may play an important role in TD development in CHC patients [15]. The appearance of TPOAb before treatment was a strong indicator of subsequent TD in CHC patients receiving PegIFNα-2a/RBV combination therapy. Female and TAs-positive patients were also more likely to develop TD during IFN-α/RBV therapy [9].
Previous investigations showed that the addition of RBV to IFN-α therapy in HCV patients could increase the risk of developing hypothyroidism [16]. However, it is not clear whether the addition of RBV affects the emergence of other TDs. Most studies have focused on the effects of combination therapy with standard IFN-α and RBV on the thyroid gland and demonstrated that the risk for developing TD during IFN-α therapy is closely correlated with mixed HCV genotype infection and lower HCV RNA levels, female gender, and pretreatment positivity for TAs (particularly TPOAb) [9]. Corresponding data on PegIFNα-2a/RBV induced TD in genotype 1 HCV-infected patients in China are rare, and the related factors have not yet been fully elucidated. In the present study, we investigated the relationship among TPOAb, pretreatment serum CXCL10 levels and the occurrence of PegIFNα-2a/RBV induced TD in patients with genotype 1 HCV infection in China.
Ethics statement
This study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the ethics committee of Peking University People's Hospital.
Written informed consent was obtained from all participant subjects. Biological and behavioral information was linked anonymously to protect the participants' privacy. This procedure was approved by the ethics committee. Patients Two hundred and sixty CHC patients who visited the Department of Infectious Diseases, Peking University First Hospital from September 2009 to June 2011 were included in this study. These patients came from five different regions of China (Beijing, Hebei province, Henan province, Heilongjiang province and Shanxi province), and the criteria for CHC diagnosis followed the Guideline of Prevention and Treatment of Hepatitis C [17] . All patients had compensated liver disease without cirrhosis, but never received hepatitis C treatment. Patients with hepatitis B virus (HBV) infection, or human immunodeficiency virus (HIV) infection and those who were pregnant or using amiodarone or lithium were excluded. HCV patients with other autoimmune disorders or treated with immuno-modulant drugs were also excluded. Further screening excluded 25 patients with a history of thyroid gland dysfunction, 80 patients who previously received IFN-α treatment and 26 patients who were not infected with genotype 1 HCV. A total of 139 HCV genotype 1 treatment-naïve patients were enrolled in the final study. All participants included had euthyroid status and never received thyroid hormone treatment. All enrolled CHC genotype-1 patients received a weekly 180 μg subcutaneous dose of PegIFNα-2a and a daily 600-1000 mg (according to body weight) dose of RBV for 48 wk. Laboratory assessment All patients fasted for 12 h prior to blood tests. Alanine aminotransferase (ALT), aspartate aminotransferase (AST), total and direct bilirubin (TBIL and DBIL), and albumin (ALB) were determined using an automatic biochemical analyzer [18] . HCV antibodies were analyzed using microsomal chemiluminescence (Abbott Diagnostics Division) [19] , and serum HCV RNA was measured by real-time PCR assay (COBAS Taqman HCV Test; Roche Molecular Systems, Pleasanton, CA) at 0, 4, 12, 24 and 48 wk after the initiation of therapy and 24 wk after the end of therapy. Clinical hypothyroidism was defined as a serum TSH level greater than 5.5 μIU/mL and a FT4 level less than 11.48 pmol/L. Clinical hyperthyroidism was diagnosed when TSH was less than 0.35 μIU/mL and FT4 was greater than 22.7 pmol/L and/or FT3 was greater than 6.5 pmol/L. Subclinical hypothyroidism or hyperthyroidism was diagnosed when serum TSH levels were greater than 5.5 μIU/mL or less than 0.35 μIU/mL, respectively, with normal FT3 and FT4 levels. TAs were considered positive when TPOAb ≥ 35 IU/ mL or TGAb ≥ 40 IU/mL [15] . Serum CXCL10 measurements Serum CXCL10 levels were measured prior to treatment using the Quantikine human CXCL10 immunoassay (RD Systems, Minneapolis, MN, United States). All blood samples were stored at -80 ℃ until use in assays. These samples were diluted 1:2 with Calibrator Diluent RD6Q solution and analyzed in duplicate. The linear dynamic range for CXCL10 measurement in this assay was 7.8 to 500 pg/mL. Statistical analysis Categorical variables were compared between the groups using the χ 2 test or the Fisher's exact test. Continuous variables were assessed using Student's t-test or the Mann-Whitney U test. Differences with a two-tailed P-value < 0.05 were considered statistically significant. Statistical analyses were conducted using SPSS version 16.0 (SPSS Inc, Chicago, IL, United States). 
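As a compact restatement of the diagnostic thresholds listed above, the sketch below encodes them as a small helper. The function names are invented for illustration, the lower limit of normal FT3 (not quoted in the text) is ignored, and the code is a simplified mapping rather than the study's analysis pipeline.

```python
# Reference limits taken from the thresholds quoted in the text.
FT4_LOW, FT4_HIGH = 11.48, 22.7   # pmol/L
FT3_HIGH = 6.5                    # pmol/L
TSH_LOW, TSH_HIGH = 0.35, 5.5     # uIU/mL

def classify_thyroid(tsh, ft4, ft3):
    """Map one set of TSH/FT4/FT3 values to the categories defined in the text."""
    ft_normal = FT4_LOW <= ft4 <= FT4_HIGH and ft3 <= FT3_HIGH
    if tsh > TSH_HIGH and ft4 < FT4_LOW:
        return "clinical hypothyroidism"
    if tsh < TSH_LOW and (ft4 > FT4_HIGH or ft3 > FT3_HIGH):
        return "clinical hyperthyroidism"
    if tsh > TSH_HIGH and ft_normal:
        return "subclinical hypothyroidism"
    if tsh < TSH_LOW and ft_normal:
        return "subclinical hyperthyroidism"
    return "euthyroid"

def antibodies_positive(tpoab, tgab):
    """TAs positivity rule quoted in the text (IU/mL)."""
    return tpoab >= 35 or tgab >= 40

print(classify_thyroid(tsh=7.2, ft4=15.0, ft3=4.1))   # subclinical hypothyroidism
print(antibodies_positive(tpoab=40, tgab=10))          # True
```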
General information about the patients
The demographic characteristics of the 139 CHC patients enrolled in the study are presented in the baseline characteristics table.

Prevalence of TD
The overall prevalence of thyroid abnormities was 18.0% (5.0% in men and 13.0% in women) during the therapy. Following 48 wk of exposure to PegIFNα-2a/RBV, 25 out of 139 patients developed TD, including 7 (6 females and 1 male) with subclinical hyperthyroidism, 16 (10 females and 6 males) with subclinical hypothyroidism and 2 female patients with hypothyroidism. The AST levels and the percentage of TPOAb positivity in TD patients were significantly increased compared with NTD patients (AST levels: P = 0.018; TPOAb positivity: 24.2% vs 12.3%, P = 0.047). However, no significant differences in ALT, TBIL, DBIL, ALB, HCV RNA levels, or the percentage of TGAb-positive patients were noted between the TD and NTD groups (P > 0.05). The percentages of patients positive for TPOAb and/or TGAb were 17.3% (24/139) at baseline and 22.3% (31/139) at the end of treatment (Table 4). Nine of twenty-four patients with TPOAb/TGAb at baseline developed TD. By contrast, three (one male and two females) of one hundred and fifteen patients without TPOAb/TGAb at baseline developed TD at the end of the treatment (37.5% vs 2.6%, P = 0.000).

DISCUSSION
We present novel data regarding the influence of PegIFNα-2a combined with RBV on thyroid function in Chinese adult genotype 1 HCV-infected patients over a 48-wk treatment period. The results demonstrate that the prevalence of thyroid abnormities was 18.0%, and lower pretreatment serum CXCL10 levels were associated with PegIFNα-2a/RBV-induced TD. The prevalence of TD was increased in female patients and those who were TPOAb-positive at baseline. However, most (84%) of the TD cases were reversible. To our knowledge, this is the first study to investigate the association of CXCL10 levels with PegIFNα-2a/RBV-induced TD in genotype 1 HCV-infected patients in China. In our study, the PegIFNα-2a/RBV SVR rate was 59.0% (82/139), independent of thyroid function. After 48 wk of PegIFNα-2a/RBV treatment, 25 out of 139 patients developed TD, including 16 patients with subclinical hypothyroidism, 7 with subclinical hyperthyroidism and 2 with hypothyroidism. Although a previous study reported that hypothyroidism was the most common type of TD induced by IFN [20,21], subclinical hypothyroidism was most prevalent in our study. This discrepancy may be explained by differences in patient ethnicities, genetic backgrounds and the type of IFN used. IFN-associated thyroid disease was first reported in 1985, when three cases of hypothyroidism were observed in breast cancer patients who received IFN-α treatment [22]. Studies report an incidence of TD during IFN-α plus RBV combination therapy of 4.7% to 27.8% [23], which may result from immune activation mediated by IFN. Jami et al [24] demonstrated that patients who used pegylated IFN had a higher risk of TD than those using conventional IFN (14% vs 7%, P = 0.038). However, in a meta-analysis, Tran et al [25] found that pegylated IFN in combination with RBV did not cause more thyroid diseases in HCV-infected patients than classical IFN plus RBV. This variation may be explained by the differences in the race of the patients. In our study, 18.0% of patients developed TD, higher than the report in which approximately 14% of patients developed TD during PegIFN/RBV therapy [24]. The difference between these two studies may result from differences in the race of the included patient populations and the virus genotypes.
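The key baseline-antibody comparison quoted above (9 of 24 TAs-positive vs 3 of 115 TAs-negative patients developing TD; 37.5% vs 2.6%) can be re-checked with the Fisher exact test named in the statistical-analysis section. The snippet below is an illustrative re-computation by us, not code from the study; the 2 × 2 table is built only from the counts stated in the text.

from scipy.stats import fisher_exact

# Rows: TAs-positive / TAs-negative at baseline; columns: developed TD / did not.
table = [[9, 24 - 9],
         [3, 115 - 3]]
odds_ratio, p_value = fisher_exact(table)
print(f"TD rate: {9/24:.1%} vs {3/115:.1%}; odds ratio = {odds_ratio:.1f}; P = {p_value:.2g}")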
relationships among TPOAb, TD and CXCL10 RBV can modulate the Th1 and Th2 subset balance by activating type 1 cytokines in the HCV-specific immune response. Furthermore, RBV could also enhance the non-virus-induced immune response, suggesting that RBV, as a type 1-inducing agent, can trigger autoimmune phenomena in predisposed patients [26] . In previous studies, the incidence of TD induced by IFN monotherapy in CHC patients was 4% to 18%, with a mean incidence of approximately 6% in a meta-analysis study [27] . Earlier studies reveal that the mean incidence of TD in patients treated with combination therapy is increased compared with those treated with IFN alone [16] . In the present study, the enrolled genotype-1 patients received PegIFNα-2a/RBV treatment for 48 wk, and the prevalence of thyroid abnormities was 18.0%. Some studies have suggested that higher doses of IFN-α and longer durations of therapy are risk factors for the development of IFN induced TD [14] . It is therefore possible that the longer time period of 48 wk of therapy in our study increased the likelihood that patients developed TD. Fifteen patients developed TD at 24 wk of therapy, including 9 with subclinical hypothyroidism, 5 with subclinical hyperthyroidism and 1 with hypothyroidism. An additional 2 patients developed subclinical hyperthyroidism at 36 wk of therapy. At 48 wk of therapy, 3 patients with subclinical hypothyroidism and 1 patient with overt hypothyroidism were noted. L-thyroxine treatment was initiated in patients with overt hypothyroidism. It is worth noting that, by the end of the therapy, TD did not further progress among these patients. Whether the long-term evolution of TD is induced by IFN-α therapy remains controversial. Some studies indicate that TD is reversible in all patients, whereas others report that TD is only reversible in a proportion of the patients [28,29] . In our study, at week 24 posttreatment, normal thyroid function was restored in 21 (84.0%) out of 25 patients. Such a discrepancy may result from the short time period of the follow-up study and a subsequently incomplete evaluation of TD status. There also may have been additional factors, such as differences in study designs, the time period of the follow-up, the population races, and individual variations. Our data revealed that 17.3% of patients were TAs-positive, whereas previous studies reported TAs incidence rates in HCV-infected patients ranging from 10% to 45%; this discrepancy may be related to differences in the population race, genetic variations, geographical distribution, and environmental factors [16] . Data from the pooled studies indicate that the risk of TD in patients who were positive for TAs at baseline was increased compared with those without TAs at baseline [14] . Female gender and TAs positivity were shown to be the predictive factors of TD development during IFN-α/RBV therapy [24,30] . In our study, female patients had a higher risk for the development of TD than male patients. The baseline TPOAb-positivity have been suggested to be a risk factor for TD development secondarily to PegIFNα-2a/RBV treatment. We also demonstrated that the percentage of patients with elevated autoantibody levels developing TD was significantly higher than that of patients with normal autoantibody levels before treatment. A previous large-scale study in patients receiving combination therapy demonstrated that TGAb was present in 91.7% of patients, whereas TPOAb was present in 83.3% of those with overt hypothyroidism [31] . 
Thus, in combination therapy, TAs play an important role in predicting the emergence of TD. Analyzing TAs levels before combination therapy may therefore identify patients at risk for developing PegIFNα-2a/RBV-associated TD. Many studies have noted the Th1 immune response and changes in CXCL10 chemokine level during HCV infection. It was recently reported that HCV-infected patients who developed IFN-induced dysfunction exhibited Th1 polarization in their innate immune responses. The Th1 immune response is characterized by increased IFN-γ and TNF-α production by Th1 lymphocytes. These chemokines subsequently stimulate CXCL10 secretion from the hepatocytes in chronic HCV infection, thus perpetuating the immune cascade [32]. Elevated serum CXCL10 levels are not only associated with the development of autoimmunity, but also lead to thyroid follicular destruction and hypothyroidism. Antonelli et al [33] demonstrated that the development of TD during IFN-α therapy correlated with significantly reduced CXCL10 serum levels, both before and during the treatment. A prospective study found that CXCL10 increased in HCV-infected patients, with no associated TD development, even after matching for sex and age [34]. We demonstrated that pretreatment serum CXCL10 levels were significantly increased in patients with euthyroid status compared with patients with TD. Although pretreatment serum CXCL10 levels were higher in TPOAb-positive than in TPOAb-negative patients, no significant difference was detected. However, the prevalence of TD was higher in patients who were TPOAb-positive at baseline than in patients who were not TPOAb-positive at baseline.

Table 4 Numbers of patients treated with combination therapy positive for thyroid autoantibodies at enrollment and at the end of treatment

Evidence also indicates that circulating CXCL10 levels increase in HCV-infected patients with autoimmune thyroiditis [11], potentially because CXCL10 recruits T-helper (Th) 1 lymphocytes. Indeed, it is reasonable to hypothesize that the changes in serum CXCL10 may be more evident in patients developing overt TD, who show a microenvironment much more enriched in Th1 molecules [33]. In our study, although a high standard deviation was observed for both categories of patients, there was substantial variation in the CXCL10 levels of HCV patients with or without TD. At least in the studied population, the values of CXCL10 were significantly lower in patients who developed TD. This may be because the number of patients with overt TD was too small (only two) in our study population. Therefore, our results should be confirmed by studies with a much larger sample size. We studied the occurrence of TD in genotype 1 HCV-infected patients, without examining other genotypes. Our findings must be confirmed by studies using a larger sample size with a longer follow-up period. In conclusion, low pretreatment serum CXCL10 levels were associated with PegIFNα-2a/RBV-induced TD in genotype 1 HCV-infected patients in China. The prevalence of TD was increased in female patients and patients who were TPOAb-positive at baseline. The appearance of TPOAb before treatment is predictive of subsequent TD for CHC patients receiving PegIFNα-2a/RBV combination therapy. Screening for TPOAb and CXCL10 before combination therapy may identify high-risk patients who are more likely to develop PegIFNα-2a/RBV-associated TD.
Further studies are needed to elucidate the characteristics and mechanisms involved in PegIFNα-2a/RBV-induced TD in HCV-infected patients. Background Currently, the standard treatment for chronic hepatitis C (CHC) in China is combination peginterferon and ribavirin (RBV) therapy, and the sustained virological response rates are 54% to 80%. Despite its success, interferon (IFN)-α has a well-documented side effect profile, including thyroid diseases. The emergence of thyroid dysfunction (TD) may result from IFN-based therapy. In some cases, IFN-induced TD may cause the discontinuation of IFN therapy. Most studies have focused on the effects of combination therapy with standard IFN-α and ribavirin (RBV) on the thyroid gland and demonstrated that the risk for developing TD during IFN-α therapy is closely correlated with female gender and pretreatment TAs positivity (particularly TPOAb). Evidence indicates that circulating CXCL10 levels are increased in HCV-infected patients with autoimmune thyroiditis. The relationship among the pretreatment serum CXCL10 levels, TPOAb levels and the occurrence of peginterferon alfa-2a (PegIFNα-2a)/RBV-induced TD in patients with genotype 1 HCV infection in China is unclear. research frontiers CXCL10 recruits Th1 lymphocytes, which secrete IFN-γ and tumor necrosis factor, leading to further CXCL10 secretion and potentially the development of the autoimmunity. Data from pooled studies revealed that the risk of developing TD in CHC patients who were TAs-positive (TPOAb and TGAb) at baseline was 46.1%. By contrast, this risk was only 5.4% in CHC patients who were TAsnegative at baseline. The preliminary results indicate that the TPOAb IgG2 subclass was a risk factor for TD in untreated HCV patients, and may play an important role in TD development in CHC patients. The appearance of TPOAb before treatment is predictive of subsequent thyroid dysfunction for CHC patients receiving PegIFNα-2a/RBV combination therapy. Innovations and breakthroughs Lower pretreatment serum CXCL10 levels are associated with PegIFNα-2a/ RBV-induced TD in genotype 1 HCV-infected patients in China. The frequency of TD is increased in female patients and patients who are TPOAb-positive at baseline. However, most (84%) of the TD cases were reversible. This is the first study to investigate the association of CXCL10 levels with PegIFNα-2a/RBVinduced TD in genotype 1 HCV-infected patients in China. Applications The study results indicate that screening for TPOAb and CXCL10 before combination therapy may identify the patients who are at high risk for developing PegIFNα-2a/RBV-associated thyroid dysfunction. Terminology Clinical hypothyroidism was defined by serum TSH levels greater than 5.5 μIU/mL and FT4 less than 11.48 pmol/L; whereas clinical hyperthyroidism was diagnosed when TSH levels were less than 0.35 μIU/mL and FT4 was greater than 22.7 pmol/L and/or FT3 was greater than 6.5 pmol/L. Subclinical hypothyroidism or hyperthyroidism were defined by serum TSH levels higher than 5.5 μIU/mL or lower than 0.35 μIU/mL, respectively, with normal levels of FT3 and FT4. The patients was considered to be positive for TAs when TPOAb was greater than or equal to 35 IU/mL or TGAb was greater than or equal to 40 IU/mL. 
Peer-review
Well-written and with valuable data, this manuscript reinforces the recommendation that HCV-infected patients should be screened for the presence of thyroid dysfunction markers before undergoing IFN-α/ribavirin treatment, because such treatment may increase the prevalence of TD. The value of TPOAb positivity as a marker for treatment-induced TD in HCV-infected patients is also suggested by several studies. There is variation in CXCL10 levels in HCV patients with or without TD, as demonstrated by the high standard deviation observed for both categories of patients.
Konishi Form Factor at Three Loop in ${\cal N}=4$ SYM

We present the first results on the third order corrections to the on-shell form factor (FF) of the Konishi operator in ${\cal N}=4$ supersymmetric Yang-Mills theory using the Feynman diagrammatic approach in the modified dimensional reduction ($\overline {DR}$) scheme. We show that it satisfies the KG equation in the $\overline {DR}$ scheme, while the result obtained in the four dimensional helicity (FDH) scheme needs to be suitably modified not only to satisfy the KG equation but also to get the correct ultraviolet (UV) anomalous dimensions. We find that the cusp, soft and collinear anomalous dimensions obtained to third order are the same as those of the FF of the half-BPS operator, confirming the universality of the infrared (IR) structures of on-shell form factors. In addition, the highest transcendental terms of the FF of the Konishi operator are identical to those of the half-BPS operator, indicating the probable existence of a deeper structure of the on-shell FF. We also confirm the UV anomalous dimensions of the Konishi operator up to third order, providing a consistency check on both the UV and the universal IR structures in ${\cal N}=4$.

PACS numbers: 12.38Bx

The ability to accomplish the challenging job of calculating quantities is of fundamental importance in any potential mathematical theory. In quantum field theory (QFT), this manifests itself in the quest for computing the multi-loop and multi-leg scattering amplitudes under the glorious framework of age-old perturbation theory. The fundamental quantities to be calculated in any gauge theory are the scattering amplitudes or the correlation functions. Recently, there has been a surge of interest in studying form factors (FFs) as they connect fully on-shell amplitudes and correlation functions. The FFs are a set of quantities which are constructed out of the scattering amplitudes involving on-shell states consisting of elementary fields and an off-shell state described through a composite operator. These are operator matrix elements of the form $\langle p_1^{\sigma_1},\cdots,p_l^{\sigma_l}|O|0\rangle$, where $O$ represents a gauge invariant composite operator which generates a multiparticle on-shell state $|p_1^{\sigma_1},\cdots,p_l^{\sigma_l}\rangle$ upon operating on the vacuum of the theory. The $p_i$ are the momenta and the $\sigma_i$ encapsulate all the other quantum numbers of the particles.
More precisely, FFs are the amplitudes of the processes where classical current or field, coupled through gauge invariant composite operator O, produces some quantum state. Studying these quantities not only help to understand the underlying ultraviolet and infrared structures of the theory, but also enable us to calculate the anomalous dimensions of the associated composite operator. The Sudakov FFs (l = 2) in N = 4 maximally supersymmetric Yang-Mills (SYM) theory [1,2] were initially considered by van Neerven in [3], almost three decades back, where a half-BPS operator belonging to the stressenergy supermultiplet, that contains the conserved currents of N = 4 SYM, was investigated to 2-loop order: Very recently, this was extended to 3-loop in [4]. We will represent scalar and pseudo-scalar fields by φ a m and χ a m , respectively. The symbol a ∈ [1, N 2 − 1] denotes the SU(N) adjoint color index, whereas m, n stand for the generation indices which run from [1, n g ]. In d = 4 dimensions, we have n g = 3. The sum over repeated index will be assumed throughout the letter unless otherwise stated. One of the most salient features of this operator is that, it is protected by the supersymmetry (SUSY) i.e. the FFs exhibit no ultraviolet (UV) divergences but infrared (IR) ones to all orders in perturbation theory. In this article, our goal is to investigate the Sudakov FFs of another very sacred operator in the context of N = 4 SYM, called Konishi operator, which is not protected by the SUSY and consequently, exhibits UV divergences beyond leading order: The existence of UV divergences is captured through the presence of non-zero anomalous dimensions. This operator is one of the members of the Konishi supermultiplet and all the members of the multiplet give rise to same anomalous dimensions. The one and two loop Sudakov FFs of Konishi operator were computed in [5] employing the on-shell unitarity method. In addition, the IR poles at 3-loop were also predicted in the same article using the universal behaviour of those, though the finite part was not computed. In this letter, we calculate the full 3-loop Sudakov FFs using the age-old Feynman diagrammatic approach. In the same spirit of the FFs in quantum chromodynamics (QCD), we examine the results in the context of KG equation [6][7][8][9]. Quite remarkably, it has been found that the logarithms of the FFs satisfy the universal decomposition in terms of the cusp, collinear, soft and UV anomalous dimensions, exactly similar to those of QCD [10,11]! Except UV, which is a property of the associated operator, all the remaining universal anomalous dimensions match exactly with the leading transcendental terms of the corresponding ones in QCD upon putting C F = n f T f = C A . The quantities C F and C A are the quadratic Casimirs of the SU(N) gauge group in fundamental and adjoint representations, respectively. n f is the number of active quark flavors and T f = 1/2. FRAMEWORK OF THE CALCULATION The interacting Lagrangian encapsulating the interaction between off-shell state (J) described by O BPS or O K and the fields in N = 4 SYM are given by We define the form factors at O(a n ) as where, n = 0, 1, 2, · · · and a is the 't Hooft coupling [12]: that depends on the Yang-Mills coupling constant g YM , the loop-counting parameter and C A . The quantity |M ρ,(n) f is the transition matrix element of O(a n ) for the production of a pair of on-shell particles ff from the off-shell state represented through ρ. 
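The displayed equations referred to in this passage (the definition of the O(a^n) form-factor components, the coupling a, and the expansion of the full form factor in these components) appear to have been lost in extraction. The block below is a hedged reconstruction in a standard convention for Sudakov form factors; the precise normalization and the ε-dependent factors in the coupling are our assumptions and may differ from the authors' original equations.

% Hedged reconstruction of the missing displayed definitions; the exact
% normalization is an assumption on our part, not the authors' equations.
\begin{align}
  \hat{\cal F}^{\rho,(n)}_{f} &=
    \frac{\langle {\cal M}^{\rho,(0)}_{f}\,|\,{\cal M}^{\rho,(n)}_{f}\rangle}
         {\langle {\cal M}^{\rho,(0)}_{f}\,|\,{\cal M}^{\rho,(0)}_{f}\rangle}\,,
    \qquad n = 0,1,2,\ldots, \\[4pt]
  \hat{\cal F}^{\rho}_{f}\big(a,Q^{2},\mu^{2}\big) &=
    \sum_{n=0}^{\infty} a^{n}\,\hat{\cal F}^{\rho,(n)}_{f}\,,
    \qquad
    a \;\propto\; \frac{g_{\rm YM}^{2}\,C_{A}}{16\pi^{2}}\,\mu^{\epsilon}
    \quad (\text{dimensionless in } d = 4+\epsilon).
\end{align}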
For the case under consideration, we take f = φ a m =f , ρ = K and BPS for J K and J mn BPS , respectively. The full form factor in terms of the components (4) reads as The transition matrix element also follows same expansion. The quantity Q 2 = −2p 1 .p 2 and µ is introduced to keep the coupling constant a dimensionless in d = 4 + ǫ dimensions. REGULARIZATION PRESCRIPTIONS The calculation of the FFs in N = 4 SYM theory involves a subtlety originating from the dependence of the composite operators on space-time dimensions d. Unlike the half-BPS operator O BPS , the Konishi operator O K involves a sum over generation of the scalar and pseudoscalar fields and consequently, it does depend on d. The problem arises while making the choice of regularization scheme [5], which is necessary in order to regulate the theory for identifying the nature of divergences present in the FFs. Though the FFs of the protected operators are free from UV divergences in 4-dimensions, these do involve IR divergences arising from the soft and collinear configurations of the loop momenta. For performing the regularization, there exists several schemes, the four dimensional helicity (FDH) [13,14] formalism is the most popular one where everything is treated in 4-dimensions, except the loop integrals that are evaluated in d-dimensions. In spite of its spectacular applicability, this prescription may fail to produce the correct result for the operators involving space-time dimensions [5], such as Konishi! However, this can be rectified and the rectification scenarios differ from one operator to another. According to the prescription prescribed in the article [5], in order to obtain the correct result for the Konishi operator, one requires to multiply a factor of ∆ BPS K which is ∆ BPS K = n g,ǫ /2n g with the difference between the FFs of the Konishi and those of BPS i.e. where, ). The second subscript of n g,ǫ represents the dependence of the number of generations of the scalar and pseudo-scalar fields on the spacetime dimensions: n g,ǫ = (2n g − ǫ). The prescription is validated through the production of the correct anomalous dimensions up to 2-loop. In this article, for the first time, this formalism is applied to the case of 3-loop FFs and is observed to produce the correct anomalous dimensions for the Konishi. On the other hand, there exists another very elegant formalism, called modified dimensional reduction (DR) [15,16], which is very much similar to the 't Hooft and Veltman prescription of the dimensional regularization and quite remarkably, it is universally applicable to all kinds of operators including the ones dependent on the space-time dimensions. In this prescription, in addition to treating everything in d = 4 + ǫ dimensions, the number of generations of the scalar and pseudo-scalar fields is considered as n g,ǫ /2 instead of n g in order to preserve the N = 4 SUSY throughout. The dependence on ǫ preserves SUSY in a sense that the total number of gauge, scalar (n g ) and pseudo-scalar (n g ) degrees of freedom continues to remain 8. Within this framework, we have calculated the Konishi FFs up to 3-loop level and the results come out to be exactly same as the ones obtained in Eq. (7). This, in turn, provides a direct check on the earlier prescription. In the next section, we will discuss the methodology of computing the FFs. CALCULATION OF THE FORM FACTORS The calculation of the FFs follows closely the steps used in the derivation of the 3-loop spin-2 and pseudo-scalar FFs in QCD [17,18]. 
In contrast to the most popular method of on-shell unitarity for computing the scattering amplitudes in N = 4 SYM, we use the conventional Feynman diagrammatic approach, which carries its own advantages in light of following the regularization scheme, to accomplish the job. The relevant Feynman diagrams are generated using QGRAF [19]. Indeed, very special care is taken to incorporate the Majorana fermions present in the N = 4 SYM appropriately. For Konishi as well as half-BPS operator, 1631 number of Feynman diagrams appear at 3-loop order which include the scalar, pseudo-scalar, gauge boson as well as Majorana fermions in the loops. The ghost loops are also taken into account ensuring the inclusion of only the physical degrees of freedom of the gauge bosons. The raw output of the QGRAF is converted to a suitable format for further calculation. Employing a set of in-house routines based on Python and the symbolic manipulating program FORM [20], the simplification of the matrix elements involving the Lorentz, color, Dirac and generation indices is performed. In the FDH regularization scheme, except the loop integrals all the remaining algebra is performed in d = 4, whereas in DR, everything is executed in d = 4 + ǫ dimensions. While calculating within the framework of DR, the factor of 1/3 in the second part of O BPS , Eq. (1), should be replaced by 2/n g,ǫ to maintain its traceless property in d-dimensions. The expressions involve thousands of apparently dif-ferent 3-loop Feynman scalar integrals. However, they are expressible in terms of a much smaller set, called master integrals (MIs), by employing the integration-byparts (IBP) [21,22] and Lorentz invariance (LI) [23] identities. Though, the LI are not linearly independent of the IBP [24], their inclusion however accelerates the procedure of obtaining the solutions. All the scalar integrals are reduced to the set of MIs using a Mathematica based package LiteRed [25,26]. In the literature, there exists similar packages to perform the reduction: AIR [27], FIRE [28], Reduze2 [29,30]. As a result, all the thousands of scalar integrals can be expressed in terms of 22 topologically different MIs which were computed analytically as Laurent series in ǫ in the articles [31][32][33][34][35][36][37] and are collected in the appendix of [38]. Using those, we obtain the final expressions for the 3-loop FFs of the O BPS and O K . RESULTS OF THE FORM FACTORS Employing the Feynman diagrammatic approach described in the previous section, we have first confirmed the form factor results for the O BP S up to 3-loop level presented in [3,4] and for O K up to 2-loop [5]. In the present letter, we present only the ǫ expanded results for the F K,(i) φ , i = 1, 2, 3 (see Eq. (6)). The exact results in terms of d and MIs are too long to present here and can be obtained from the authors. In order to demonstrate the subtleties involved in the choice of regularization scheme, we have expressed them in terms of δ R which is unity in DR scheme and zero in FDH scheme. where ζ 2 = π 2 /6, ζ 3 ≈ 1.2020569, ζ 5 ≈ 1.0369277, ζ 7 ≈ 1.0083492. The presence of the non-zero coefficients of δ R signifies the shortcoming of the FDH scheme in case of Konishi operator. We observe that our results for δF K,(i) φ , i = 1, 2, 3 expressed in terms of d and MIs contain an overall factor (6 − δ R ǫ)/6 explaining the necessity of correcting the results computed in FDH scheme by this factor advocated in [5]. OPERATOR RENORMALIZATION Though the N = 4 SYM is UV finite i.e. 
neither coupling constant nor wave function renormalization is required, nevertheless the FFs of the composite unprotected operators, like Konishi, do involve divergences of the UV source which are captured by the presence of nonzero UV anomalous dimensions, γ ρ . As a consequence, to get rid of the UV divergences, the FFs are required to undergo UV renormalization which is performed through the multiplication of an overall operator renormalization, Z ρ (a, µ, ǫ): Sinceâ s = a s (µ 0 /µ) ǫ , the solution to the above equation takes the simple form: The UV finite Konishi FFs is obtained as Since, this is a property of the associated composite operator, the γ ρ and so Z ρ are independent of the type as well as number of the external on-shell states. In the next section, we will discuss the methodology to obtain the γ ρ for the Konishi type of operators in addition to discussing the IR singularities of the FFs. UNIVERSALITY OF THE POLE STRUCTURES The FFs in N = 4 SYM contain divergences arising from the IR region which show up as poles in ǫ. The associated pole structures can be revealed and studied in an elegant way through the KG-equation [6][7][8][9] which is obeyed by the FFs as a consequence of factorization, gauge and renormalization group invariances: The Q 2 independent K ρ f (a, ǫ) contains all the poles in ǫ, whereas G ρ f a, Q 2 /µ 2 , ǫ involves only the finite terms in ǫ → 0. Inspired from QCD [12,39,40], we propose the general solution to be with where, A = ∞ j=1 a j A j are the cusp anomalous dimensions in N = 4 SYM. The absence of the superscript ρ and subscript f signifies the independence of these quantities on the nature of composite operators as well as external particles. These are determined by looking at the highest poles of the ln F ρ f which are found to be up to 3-loops which are consistent with the results presented in [41,42]. These are basically the highest transcendental parts of those of QCD [11,[43][44][45]. The other quantities in Eq. (13), G ρ f,j are postulated, like QCD [10,11], to satisfy where, B = up to 3-loop. For the Konishi operator, the results up to 2-loop are in agreement with the existing ones [46][47][48] and the 3-loop result also matches with previous computations [49,50] . By subtracting out the γ j , we can only calculate the combination of (2B j + f j ). However, by looking at the similarities between A j of QCD and N = 4, we propose which are essentially the highest transcendental parts of those of QCD [10,11,44]. The other process dependent constants, that are relevant up to 3-loop, in Eq.(15) are obtained as In a clear contrast to that of QCD, due to absence of the non-zero β-functions in N = 4 SYM, all the higher poles vanish in Eq. (13). We observe that the leading transcendental terms in the operator dependent parts of the FFs of O K and O BPS , namely g ρ,k φ,j , coincide. This is indeed the case with QCD form factors when the color factors are chosen suitably. for d-independent operators are insensitive to the regularization schemes, while for the d-dependent operators, results in FDH scheme need to be corrected by suitable d dependent terms in order to preserve the SUSY. It is also demonstrated that the FFs of Konishi operator computed only in DR satisfies KG equation and also can be described in terms of universal cusp, collinear and soft anomalous dimensions. 
This implies that infrared factorization of FFs in N = 4 SYM theory can be established only if the supersymmetric preserving regularisation is used when computing higher order effects. Up to third order, we find that the anomalous dimensions resulting from IR region are related to those of QCD when the color factors are adjusted suitably. In addition, we confirm the UV anomalous dimensions of the Konishi operator up to third order, whose extraction depends on the universal IR structure of the FFs. This provides a consistency check of both the UV and IR structure of FFs in N = 4. Agreements of our 3-loop result for the FFs of O BP S and 2-loop result for the FFs of O K computed using Feynman diagrammatic techniques with those obtained using on-shell methods in [3,4] and [5], respectively, establish the power and reliability of various state-of-the-arts approaches to deal with higher order corrections in QFT. Finally, we use KG equation to predict four loop results for both BPS and Konishi operators up to ǫ −1 .
Site-bond percolation solution to preventing the propagation of \textit{Phytophthora} zoospores on plantations We propose a strategy based on the site-bond percolation to minimize the propagation of \textit{Phytophthora} zoospores on plantations, consisting in introducing physical barriers between neighboring plants. Two clustering processes are distinguished: i) one of cells with the presence of the pathogen, detected on soil analysis; and ii) that of diseased plants, revealed from a visual inspection of the plantation. The former is well described by the standard site-bond percolation. In the latter, the percolation threshold is fitted by a Tsallis distribution when no barriers are introduced. We provide, for both cases, the formulae for the minimal barrier density to prevent the emergence of the spanning cluster. Though this work is focused on a specific pathogen, the model presented here can also be applied to prevent the spreading of other pathogens that disseminate, by other means, from one plant to the neighboring ones. Finally, the application of this strategy to three types of commercialy important Mexican chili plants is also shown. Introduction The genus Phytophthora (from Greek, meaning phyto, "plant," and phthora, "destroyer" [1,2,3]) is one of the most aggressive phytopathogens that attack the roots of plants and trees in every corner of the world. The diseases caused by exposition to Phytophthora generate tremendous economical losses in agronomy and forestry. For example, P. capsici cause considerable damage in plantations of chili, cucumber, zucchini, etc. [4,5,6]. The same occurs with tomato and potato plantations, which are affected by P. infestants [7,8,9]. P. cinnamomi harms avocado plantations [10,11,12] and, together with P. cambivora, produce the ink disease which is widely distributed along Europe [13,14,15]. Phytophthora has caused significant devastation on Galician chestnut and the Australian eucalypt, putting them close to extinction [16,17,18]. From a biological perspective, Phytophthora shares morphological characteristics with true fungi (Eumycota) such as mycelial growth or the dispersion of spores of mitotic or asexual origin. Its form of locomotion, by means of flagella [19], is a distinctive feature that enables them to have a great impact on the plant kingdom as phytopathogens. They can disperse through soil moisture or water films including those on the surface of the plants. These motile zoospores, emerging from mature sporangia in quantities of 20 to 40, can swim chemotactically towards the plants [19,20,21]. When they reach the surface of the roots they lose their flagella, encyst in the host and form a germination tube through which they penetrate the surface of the plant [22,23]. Moreover, many species of Phytophthora can persist as saprophytes if the environmental conditions are not appropriate, but become parasitic in the presence of susceptible hosts [21]. Due to the physiology of the oomycetes most of the fungicides have no effect on them [1, 24, 25, ?]. Therefore, research on non-chemical strategies that minimize or eliminate the propagation of the pathogen is necessary. It has been noticed that for some type of plants not all individuals manifest the disease after the exposition to a specific pathogen. We take advantage of this fact to define the pathogen susceptibility (χ) of a plant type as the fraction of individuals that get the disease. 
It can be interpreted as the probability that a sample of the plant gets sick after being exposed to the pathogen, and can be measured in a laboratory or a greenhouse under controlled conditions, or by direct observation in the plantation. On the other hand, one of the models widely used to describe physical processes is the site-bond percolation, that has been applied to study the spread of diseases [26,27,28,29]. It is a generalization of the site and bond percolation that consists in determining both site and bond occupation probabilities needed to the emergence of a spanning cluster of sites connected by bonds. In this context, two nearest-neighboring sites do not belong to the same cluster if there is not a bond connecting them. In this work, occupied sites in the percolation system represent susceptible plants through which the propagation process can occur, and bonds represent the direction of propagation of the pathogen. It is worth mentioning that zoospores move directly to neighboring plants. Placing physical barriers between them (that is, perpendicularly to the direction of propagation) can help to decrease the opportunity for root to root pathogen transmission. For instance, the Australian government recommends using physical root barriers such as impermeable membranes made of high-density polyethylene [30,31,32,33], which have been used in agriculture and horticulture. Trenches filled with compost (a mixture of manure and crop residues) in addition with biological control agents (for example Trichoderma spp. or Bacillus spp.) could be used as a good barrier against soil-borne pathogens like oomycetes and fungi [34,35,36]. With the use of barriers it could be possible to fragment the spanning cluster of susceptible plants, preventing the propagation of the pathogen. Thus, if the pathogen susceptibility of the plant is known, one can try to determine the minimal density of barriers (p w ) that stops the propagation of the pathogen. However, this density does not necessarily corresponds to the bond percolation threshold. Although this paper is motivated by the important problem caused by the propagation of Phytophthora, which is still unsolved nowadays, the strategy presented here can be adapted to mitigate the spread of other diseases. There exist other phytopathogens relevant to agronomy that disseminate over neighboring plants by, for example, walking [37], rain splashing [38,39,40], swimming [41], etc.; such as the red spider mites, leaf rust, Pythium (with similar propagation mechanisms as Phytophthora) among others. In practice one only needs to find a suitable physical barrier that efficiently avoids nearest-neighbor propagation of the specific phytopathogen. In Sec. 2, we introduce the site-bond percolation model for the pathogen-plant interaction and the role of the barriers. Section 3 describes the simulation method used in this work and provides the simulation rules for the clustering process. It also shows an example of the simulation process and describes the data analysis method. In Sec. 4, we report the critical curves as a function of the initial percentage of inoculated soil for the barrier-free case. These curves indicate the maximum value of the pathogen susceptibility that guarantees a spanning cluster of diseased plants is not formed even if the soil is completely infested with the pathogen. 
Additionally, we provide the empirical formulae to determine the density of barriers that prevents the emergence of the spanning cluster when the susceptibility exceeds the aforementioned critical value. In Sec. 5, we show the application of this method to three varieties of Mexican chili plants with high comercial value. Finally, Section 6 presents the conclusions of this work. Model The plantation is modeled as a simple two-dimensional lattice (square, triangular, and honeycomb) wherein each site represents a plant. The lattice spacing is chosen as the maximum displacement length that the pathogen can travel before entering a state of dormancy or before dying due to starvation. This condition ensures the pathogen can only move to the nearest neighbor cells as depicted in Fig. 1. We assume a site with an active pathogen will propagate the disease to all nearest-neighbor sites. Here, the pathogen susceptibility plays an important role since resistant plants can act as a natural barrier for susceptible plants by locally containing the propagation process, i. e. a resistant plant does not disseminate the disease. In our model resistant plants are uniformly distributed on the system since it is not possible to determine in advance which seeds will grow into resistant or susceptible plants. In this way the pathogen susceptibility plays the role of the occupation probability in the traditional treatment of percolation theory. Another essential variable that needs to be considered is the initial fraction of inoculated cells at the beginning of the propagation process which is denoted by I. In our model these cells are distributed uniformly over the lattice. This parameter is relevant to amalgamate adjacent-disjoint clusters promoting a favorable environment for the formation of a spanning cluster of diseased plants or of cells with the presence of the pathogen [42]. Additionally, we put barriers that are randomly distributed in the lattice. These are placed perpendicularly to the direction of propagation of the pathogen (see Fig. 1), and its primary function is to prevent the pathogen from reaching neighbor sites. Note that all possible barriers that can be placed form the dual lattice to that formed by all possible directions of propagation of the pathogen. Then the question we want to answer is: what is the minimal barrier density, in terms of χ and I, that guarantees a spanning cluster will not appear? We distinguish two different clustering processes: i) the formation of clusters of cells with the presence of the pathogen, and ii) the formation of clusters of diseased plants. Although both processes are consequence of the propagation of the pathogen they depend in different ways on the intrinsic properties of the plants. In practice one would observe the first process if a pathogen soil test is performed while a visual inspection of the damage on the plantation would reveal the second process. In the following we refer to them as soil and plant cases respectively, and the corresponding variables will be labeled with a superscript. In the soil case, for a lattice with N sites, the mean number of available plants N av to the propagation process is N av = N χ. Since the susceptibility of the plant and the inoculation state of the cell are independent variables, it is necessary to take into account the mean number of inoculated cells N in with a resistant plant. This condition adds N in = N (1 − χ)I extra available cells. 
Thus, the total mean number of cells where the propagation process can occur is N tot = N av + N in . Therefore, the propagation takes place in a percolating system with an effective occupation probability p soil eff = I + (1 − I)χ. In this case, the spanning cluster emerges if p soil eff ≥ p cs , where p cs is the critical probability in the purely site percolation. Thus the desired percolation threshold is p soil eff = p cs . The introduction of barriers in the soil case makes the system suitable to be modeled with the site-bond percolation. The critical curves as a function of the occupation probabilities of sites (p s ) and bonds (p b ) has been empirically fitted using [43] and p cb is the critical probability in the purely bond percolation. Moreover, since barriers are located in the dual lattice, the density of barriers and the bond occupation probability are related by p b + p soil w = 1, that is, the joint-set of barriers and bonds it is exactly N b . So we finally find that the critical curves for the soil case can be written as On the other hand, for the plant case, inoculated cells with a resistant plant do not belong to the cluster of diseased plants. However, these cells play an essential role since adjacent-disjoint clusters can be amalgamated through them. This fact modifies the nearest neighbor meaning since it is then possible to link two susceptible plants separated by a distance greater than the lattice spacing (see Fig. 1), then the possibility to amalgamate adjacent-disjoint clusters is increased [42]. The main difference between the soil and plant cases is just this amalgamating role played by inoculated cells with a resistant plant at the beginning of the propagation process. In the soil case, these cells are considered as occupied sites, while in the plant case, they do not belong to any cluster; however, they can transmit the disease over neighboring susceptible plants. Schematically, this latter situation looks like a healthy plant with sick neighbors. Simulation method We implemented a modified version of the Newman-Ziff algorithm reported in Refs. [44,45] to determine the percolation threshold. Since the susceptibility condition of each plant and the cells' inoculation state are independent of each other they are stored in separate matrices in the simulation. These matrices, that we call X and I, respectively, are initially null. They are then filled according to the predefined values of χ and I. For the case with no barriers, however, only the knowledge of the inoculated cells is required to determine the percolation thresholds. For simplicity we describe the implementation of the algorithm for a square lattice. However, this algorithm can also be used for other lattices simply changing the implementation of the nearest neighbor definition. Each cell of the L × L matrices X and I is labeled with a progressive number M = iL + j, for the cell at row i and column j. The set of cells' labels is then N = {0, 1, 2, . . . , L 2 − 1}. On the other side, the possible propagation directions for all cells form a network with 2L(L − 1) bonds since the system is considered as free of periodic boundary conditions. As we did with the cells, each bond is labeled with progressive numbers that form the set An initial number of inoculated cells n I is drawn from the binomial distribution B(L 2 , I) and then n I labels are randomly taken from the set N . The corresponding cells are the sites from which the infection process will propagate. 
These cells are marked by changing their state from 0 to 1. The initial distribution of susceptible plants, that is plants that will get the disease if they are exposed to the pathogen, is obtained in a similar way. Note that only the initial conditions are set so far and the propagation process has not been started so that no cells are linked yet. To add bonds between cells the N b labels are randomly permuted and then the corresponding bonds are added one at a time until a spanning cluster is formed. It should be recalled that bonds determine the direction of propagation in this model. To decide which bonds will connect the sites we impose rules based on the way the pathogen transports itself from site to site. Since the zoospores are capable of detecting the presence of neighboring plants, they will swim towards them as soon as they emerge from the sporangia. If a zoospore reaches a resistant plant it will either enter a latency state or die from inanition so that it won't be able to further propagate the disease. If, on the other hand, the zoospore arrives at a susceptible plant, it will attack the plant and produce new sporangia. They, in turn, will produce new zoospores that will eventually swim towards neighboring plants. Thus the rules can be stated as follows. A bond will connect two nearest-neighbor sites if: 1. Soil case: (a) Any of the sites was inoculated during the initial configuration. This way bonds are added one by one, and sites are connected according to the rules above, until a cluster that connects one side of the lattice to the opposite one, the so-called spanning cluster, appears. The union-find algorithm is used to connect sites. Since not every site pair can interact not every bond can connect adjacent sites. In order to identify the spanning cluster, before starting the simulation process, susceptible plants in the last and first rows are united with auxiliary labels -1 and -2, respectively. Then, the simulation process is stopped when the labels {-1,-2} change to the same value. The essential difference between the two cases is the role played by the inoculated cells with a resistant plant. In the soil case they become occupied sites while in the plant case they may merge disjoint clusters. To visualize the difference between both cases consider an L = 10 system with χ = 0.5 and I = 0.4. Figure 2 shows one possible initial configuration of susceptible plants and inoculated cells before the propagation process starts. In a system of size L = 10 there are 180 bonds. A possible random permutation of their labels is listed below: {118 The bonds are added in this order until a spanning cluster appears. The entries of one of the cells a given bond can connect are given by i = ⌊h/(2L − 1)⌋ and j = h mod (2L − 1), where h is the bond's label and ⌊x⌋ denotes the integer part of x. Note that the orientation of the bond is identified as horizontal if j < L − 2 or vertical otherwise. In addition, the value of j should be corrected for vertical bonds by subtracting L − 1. Then, the cells with entries i, j and i, j + 1 are taken if the bond is horizontal; while the cells at i, j and i + 1, j are taken if the bond is vertical. Finally, if the pair taken fulfills the rules given previously they are connected using the union-find algorithm. Figure 3 shows the networks formed by connected bonds in both cases. While in the soil case 121 bonds were added before the spanning cluster appeared, in the plant case were needed 160 bonds. 
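The bond bookkeeping and cluster merging just described can be condensed into a short sketch. The Python code below is ours, not the authors' implementation: it covers only the square-lattice soil case, takes "both endpoints available (susceptible or inoculated)" as the connection condition implied by the effective-probability argument of the Model section, reads bonds with j ≤ L − 2 as horizontal (our reading of the orientation rule), and omits the spanning test as well as the plant-case and barrier rules.

import numpy as np

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(parent, x, y):
    rx, ry = find(parent, x), find(parent, y)
    if rx != ry:
        parent[ry] = rx

def soil_case_run(L, chi, I, rng):
    n_sites = L * L
    # Initial configuration: uniformly distributed susceptible plants and inoculated cells.
    susceptible = np.zeros(n_sites, dtype=bool)
    inoculated = np.zeros(n_sites, dtype=bool)
    susceptible[rng.choice(n_sites, rng.binomial(n_sites, chi), replace=False)] = True
    inoculated[rng.choice(n_sites, rng.binomial(n_sites, I), replace=False)] = True
    # Soil case: a cell is available if its plant is susceptible or the cell is inoculated,
    # which realizes the effective occupation probability p_eff = I + (1 - I) * chi.
    available = susceptible | inoculated
    parent = np.arange(n_sites)
    # Add the 2L(L-1) bonds in random order; label h maps to the cell with entries (i, j).
    for h in rng.permutation(2 * L * (L - 1)):
        i, j = divmod(h, 2 * L - 1)
        if j <= L - 2:                      # horizontal bond: (i, j) - (i, j + 1)
            a, b = i * L + j, i * L + j + 1
        else:                               # vertical bond: (i, j) - (i + 1, j)
            j -= L - 1
            a, b = i * L + j, (i + 1) * L + j
        if available[a] and available[b]:
            union(parent, a, b)
        # A spanning-cluster test (first row linked to last row) would stop the loop here.
    return parent

# Example: rng = np.random.default_rng(0); labels = soil_case_run(10, 0.5, 0.4, rng)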
Note that, although each network has its own topology, in the plant case the fundamental role for the formation of a spanning cluster is played by the modification of the nearest neighbor definition (yellow lines in Fig. 3b) introduced by the interactions between susceptible plants and inoculated cells with a resistant plant on it (dashed lines in Fig. 3b). This clearly shows the consequence of this type of interactions, namely their capacity to merge disjoint clusters of susceptible plants. Data analysis Using this method, we determined the probability P n that a spanning cluster appears after adding n bonds (or sites) [46] as an average over 10 4 runs for each pair (χ, I). Starting in χ = 1 and I = 1 we decreased their values independently in steps of ∆χ = ∆I = 0.05. Then the percolation probability is computed as P (p) = n B(N, n, p)P n where B (N, n, p) is the binomial distribution [44,45], N is the total number of sites or bonds in the lattice and p is the occupation probability of sites or bonds correspondingly. Lastly, the percolation threshold is determined by solving the equation P (p c ) = 0.5 [47]. To this end, the percolation probability is computed from n c /L 2 − 0.15 to n c /L 2 + 0.15 in steps of ∆p = 0.01. Then, P (p) = 0.5[1 + tanh((p − p c )/∆ L )] is fitted to the estimated data. Here p c is the estimation of the percolation threshold and ∆ L is the width of the sigmoid transition [47]. To take finite size effects into account we also performed simulations using the system size L = 32, 64, 128, and 256. Thus the percolation threshold in the thermodynamic limit is estimated by the extrapolation of the scaling relation p c − p c (L) ∝ L −1/ν , where ν is the exponent corresponding to the correlation length [48]. It is well known that the transition width ∆ L scales as a function of the system size L as ∆ L ∝ L −1/ν [49]. From the fit of the percolation probability data, we found that ν = 4/3, which is in good agreement with the results reported in the literature for the percolation theory in 2D. Finally, the critical density of barriers is calculated as p w = 1 − p * cb , where p * cb is the bond percolation threshold as a function of χ and I. Results Simulation results for the critical curves of both soil and plant cases with no barriers are shown in Fig. 4 a). Notably, our results for χ soil c are very well described by the parametrization p soil eff = I + (1 − I)χ = p cs . Notice that the critical curves for χ plant c deviate from those for χ soil c for I > 0.15. This is due to non-susceptible plants lying in inoculated cells which do not belong to the clusters and can serve as a bridge between their adjacent sites. We found that χ plant c can be well fitted by the Tsallis distribution p cs /(1 + aI/n) n , with a =0.91±0.03 and 1.40±0.06 and n =2.0±0.4 and 1.1±0.1 for the square and triangular lattices, respectively. For the honeycomb lattice n takes a large value so we used p cs exp(−aI) with a =0.63±0.01. This behavior can be understood as the collective contribution of the interaction between susceptible plants and infected cells with a resistant plant. Note that the probability of observing this pair become higher as χ decreases and I increases, and thus, the percolating system looks like a lattice formed by regular sites and sites involving complex nearest neighbors. 
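The data-analysis pipeline described above, convolving the measured spanning probability P_n with a binomial distribution and fitting a tanh sigmoid to locate p_c, can be summarized as follows. This is an illustrative sketch with our own names and grids, not the authors' analysis code; P_n is assumed to be an array of length N + 1 giving the spanning probability after adding n bonds (or sites), averaged over many runs.

import numpy as np
from scipy.stats import binom
from scipy.optimize import curve_fit

def percolation_probability(P_n, N, p):
    # P(p) = sum_n B(N, n, p) * P_n : convolution with the binomial distribution.
    n = np.arange(N + 1)
    return np.sum(binom.pmf(n, N, p) * P_n)

def estimate_threshold(P_n, N, p_grid):
    P = np.array([percolation_probability(P_n, N, p) for p in p_grid])
    sigmoid = lambda p, p_c, width: 0.5 * (1.0 + np.tanh((p - p_c) / width))
    (p_c, width), _ = curve_fit(sigmoid, p_grid, P, p0=[float(np.median(p_grid)), 0.05])
    return p_c, width

# p_grid would span roughly n_c / L**2 +/- 0.15 in steps of 0.01, as described above.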
The main result of this analysis is the existence of a minimal susceptibility that guarantees the non-emergence of a spanning cluster of diseased plants even if all cells are inoculated, that is, the value of χ plant c for I = 1. However, if χ > χ soil c or χ > χ plant c it is necessary to use the barrier strategy to reduce the connectedness of the lattice. In Fig. 4 b), we show the simulation results for the soil case. Notice that they are well described by Eq. (1), which corresponds to the description of the typical critical curves in the site-bond percolation with an occupation probability p soil eff . This is because in this case the infected cells are taken into account in the cluster formation process even if the plant does not become sick. On the other hand, for the plant case, we found that the relation between χ, χ plant c and p plant w given by Eq. (2) matches very well the simulation data for the square, triangular and honeycomb lattices, as shown in Fig. 6 for different values of I. Table 1 shows the values of the parameters α and β (for different values of I) given by the fit to simulation data for the square, triangular and honeycomb lattices. Moreover, in the case χ = 1, p plant w = 1 − p cb as expected since, under this condition, the system corresponds to the traditional bond percolation model.

Application to chili plantations
Application of Eq. (2) requires the knowledge of the plant's pathogen susceptibility. This quantity has been measured experimentally as described in Ref. [42]. In general terms, their method consists in sowing plants in previously sterilized soil and inoculating a fraction of the substrate with oomycetes. The pathogen is then allowed to propagate through the plantation and the presence of the pathogen is assessed for each plant. The ratio of the number of live infected plants to the total number of infected plants gives the surviving rate P. The pathogen susceptibility of the plant is then calculated as χ = 1 − P. The reported values of the pathogen susceptibility for the Arbol, Poblano and Serrano varieties of chili (which are of high commercial value in Mexico) are 1.00, 0.89 and 0.60, respectively. Putting these values into Eq. (2) we obtained the curves for p plant w as a function of I shown in Fig. 7 for a square lattice. Note that as the value of χ approaches 1, like for the Arbol and Poblano chilis, the barrier density approaches the bond percolation threshold (p cb = 0.5) since in these particular cases the percolating system is very similar to the bond percolation model. On the other side, as χ approaches the site percolation threshold, like for the Serrano chili, the range of possible values for p plant w becomes larger; however, p plant w (I = 1) ≈ 0.41 is less than 0.5. In practice this means that 18% fewer barriers are needed to prevent the disease propagation. Also, as χ becomes less and less than p cs , the value of p plant w decreases until it vanishes. This point, when p plant w (I = 1) = 0, corresponds to the intersection of the critical χ plant c curve with the vertical line I = 1 (see Fig. 4). This is just the greatest value of a plant's susceptibility that makes the barrier strategy unnecessary.
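Since the explicit form of Eq. (2) is not reproduced in the text above, the quick numerical illustration below is restricted to quantities that are given explicitly: the susceptibility χ = 1 − P of the three chili varieties and the soil-case effective occupation probability p_eff = I + (1 − I)χ, compared against the standard square-lattice site threshold p_cs ≈ 0.5927. All code and names are ours.

P_CS_SQUARE = 0.592746  # standard site-percolation threshold for the square lattice

chi = {"Arbol": 1.00, "Poblano": 0.89, "Serrano": 0.60}

def p_eff_soil(chi_value, I):
    # Soil-case effective occupation probability from the Model section.
    return I + (1.0 - I) * chi_value

for name, x in chi.items():
    for I in (0.1, 0.5, 1.0):
        p = p_eff_soil(x, I)
        flag = "percolates without barriers" if p >= P_CS_SQUARE else "sub-critical"
        print(f"{name:8s} chi={x:.2f} I={I:.1f} -> p_eff={p:.3f} ({flag})")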
Two different clustering processes were analyzed: i) clusters of cells with the presence of the pathogen, and ii) clusters of diseased plants. The former is related to a soil test and the latter to a direct visual inspection of the damage on the plantation. It was found that both processes are indistinguishable, and therefore described by the same critical curve, for I < 0.15. By contrast, for I > 0.15 this behavior does not hold and different approaches are necessary for each process. Differences in the critical density of barriers between the soil and plant cases are a consequence of the hybridization process of the lattice, which leads to a larger deviation as I increases and χ decreases (see Fig. 6). The soil case is described by the site-bond percolation model with an effective occupation probability given by p_eff^soil = I + (1 − I)χ. The critical curves are then the usual ones (see Eq. (1)), because the clustering process of the infected cells does not distinguish the sickness state of the plant. In the plant case, the critical curves predict the existence of a minimal susceptibility χ_c^plant that guarantees that a spanning cluster of infected plants will not appear whenever χ < χ_c^plant, even when p_w = 0 and I = 1. The values of this minimal susceptibility for the square, triangular and honeycomb lattices were found to be 0.28883±0.00007, 0.2141±0.0003 and 0.364±0.003, respectively. In particular, for the square lattice this value is in agreement with the critical probability of lattices with more complex neighborhoods [50,51]. Based on the obtained results, we would advise farmers and agronomists either to sow types of plants having a pathogen susceptibility lower than χ_c^plant, or to apply the barrier strategy with a barrier density given by Eq. (2). A very important advantage of this strategy is that it does not require removing plants, thereby avoiding deforestation. This strategy could be verified under controlled conditions, for example in greenhouses, tree nurseries, and hydroponics, where Phytophthora and other phytopathogens cause great devastation. On the other hand, its application to a real-life situation requires taking into account other ecological and environmental variables, such as plant-plant or (beneficial) microorganism-plant interactions, the irrigation system, the spatial distribution of plants, the care provided by the farmer, or the possibility of having more than one type of pathogen in the same parcel of soil. Finally, Eq. (2) for I = 0 could be used as an alternative parametrization of the critical curves of the site-bond percolation model, even for lattices defined in dimensions higher than two.

[Figure 7 caption fragment: curves obtained with the parameters of Table 1 for the Arbol (purple), Poblano (black) and Serrano (red) chili plants on a square lattice.]

J. E. R. acknowledges financial support from CONACyT (postdoctoral fellowship Grant no. 289198). C.P. was supported by the grant Maria de Maeztu Unit of Excellence MDM-20-0692 and FPA Project No. 2017-83814-P of Ministerio de Ciencia, Innovación y Universidades (Spain), FEDER and Xunta de Galicia.
The influence of black holes on the binary population of the globular cluster Palomar 5

The discovery of stellar-mass black holes (BHs) in globular clusters (GCs) raises the possibility of long-term retention of BHs within GCs. These BHs influence various astrophysical processes, including merger-driven gravitational waves and the formation of X-ray binaries. They also impact cluster dynamics by heating and creating low-density cores. Previous N-body models suggested that Palomar 5, a low-density GC with long tidal tails, may contain more than 100 BHs. To test this scenario, we conduct N-body simulations of Palomar 5 with primordial binaries to explore the influence of BHs on binary populations and the stellar mass function. Our results show that primordial binaries have minimal effect on the long-term evolution. In dense clusters with BHs, the fraction of wide binaries with periods > $10^5$ days decreases, and the disruption rate is independent of the initial period distribution. Multi-epoch spectroscopic observations of line-of-sight velocity changes can detect most bright binaries with periods below $10^4$ days, significantly improving velocity dispersion measurements. The four BH-MS binaries found in the model with BHs suggest that such systems could be detected with the same observational method. Including primordial binaries leads to a flatter inferred mass function because of spatially unresolved binaries, resulting in a better match to the observations than models without binaries, particularly in Palomar 5's inner region. Future observations should focus on the cluster velocity dispersion and on binaries with periods of $10^4-10^5$ days in Palomar 5's inner and tail regions to constrain the existence of BHs.

Palomar 5 (Pal 5) is among the Galactic GCs renowned for its long tidal streams and unusually low central density (e.g. Rockosi et al. 2002; Odenkirchen et al. 2001, 2002, 2003; Koch et al. 2004; Odenkirchen et al. 2009; Carlberg et al. 2012; Kuzma et al. 2015; Ishigaki et al. 2016; Price-Whelan et al. 2019; Bonaca et al. 2020; Starkman et al. 2020), which suggests the possible presence of a substantial number of BHs in the cluster (Gieles et al. 2021, hereafter G21). Understanding the properties of the BH population in Pal 5 is also crucial for explaining the pronounced nature of its stream. G21 employed self-consistent N-body models that resolve individual stars to propose the existence of a large population of BHs in the cluster core (20% of the total mass), enhancing tidal disruption. However, the BH hypothesis needs further confirmation, because the observed density profiles of the cluster and the stream could also be reproduced by an N-body model of a BH-free cluster with a low initial density.
The binary population of Pal 5 plays a crucial role in resolving this degeneracy.According to the Heggie (1975)-Hills (1975) law, close encounters with binaries can result in two opposing evolutionary trends: wide/soft binaries become less bound and decay with a few close encounters, while tight/hard binaries become tighter due to the increased kinetic energy of the intruder and the centre-of-mass of the binary.The boundary between these two types depends on the local kinetic energy of particles where the binary resides.G21 argue that the kinetic energy of BHs is higher than that of stars in a cluster without BHs with similar half-light radius.It is therefore expected that fewer soft binaries could survive in the case the cluster contains BHs, which is a prediction that can be tested with observations.Furthermore, due to the large distance of Pal 5, most binaries cannot be resolved spatially by current state-of-art observational instruments.Because unresolved binaries might influence the determination of velocity dispersion and present-day mass functions, it is worthwhile to investigate how primordial binaries and BHs collectively affect the line-of-sight velocity measurement and mass function and whether it can be used to indirectly constrain the existence of BHs. In this study, we perform -body simulations of several Pal 5-like clusters with and without BHs and incorporating a large number of binaries, to examine the impact of BHs on binary disruption and the long-term evolution of Pal 5 and its tidal tails.Section 2 describes the -body simulation method, data analysis tools, and the observational data of Pal 5 utilized in this study.Section 3 presents the results of our -body models, comparing the structural evolution, surface number density, binary properties, and present-day mass function with models from G21 and observational data.Section 4 discusses the limitations of our models and outlines prospects for future observations.Finally, Section 5 concludes this work. N-body code We conducted simulations of Pal 5-like clusters using the highperformance -body code petar (Wang et al. 2020b).To achieve high parallel performance, the framework for developing parallel particle simulation codes (fdps) is implemented in petar (Iwasawa et al. 2016(Iwasawa et al. , 2020)).The code incorporates the particle-tree and particleparticle method (P 3 T) (Oshino et al. 2011), which enables the separate integration of long-range and short-range interactions between particles.For accurate integration of the weak long-range interactions, the code uses a Barnes & Hut (1986) particle-tree method with a 2nd-order Leap-frog integrator, which has a computational cost of ( log()).To accurately follow orbital motions of binaries, hyperbolic encounters, and the evolution of hierarchical few-body systems, the 4th-order Hermite method along with the slowdownalgorithmic regularization (SDAR) method is used (Wang et al. 2020a).One of the major advantages of the petar code is its capability to include a large fraction of binaries, up to 100%, in the simulation of stellar systems without significant performance loss.This feature enables us to carry out the models presented in this work. 
In our simulations, we included binaries with a wide period distribution (see Section 2.5), requiring the use of Leap-frog, Hermite, and SDAR integrators for integrating binary orbits.While Leap-frog and SDAR are symplectic methods that conserve energy and angular momentum, the Hermite integrator does not.We employ sufficiently small time steps for the Hermite integrator to ensure that the artificial drift of semi-major axes and eccentricities remains insignificant throughout the entire evolutionary time of all our models.The key parameters for switching the integrator and controlling the accuracy of one simulation in this work are provided below: • Changeover inner radius: 0.0027 pc • Changeover outer radius: 0.027 pc • SDAR separation criterion: 0.000216 pc • Tree time step: 0.0009765625 Myr • Hermite time step coefficient : 0.1 See Wang et al. (2020b) for the details on the definition of these parameters. The population synthesis code for single and binary stellar evolution, sse and bse, are implemented in petar (Hurley et al. 2000(Hurley et al. , 2002)).Furthermore, the code utilizes an updated version from Banerjee et al. ( 2020) that incorporates semi-empirical stellar wind prescriptions from Belczynski et al. (2010); Vink et al. (2011), a "rapid" supernova model for remnant formation and material fallback from Fryer et al. (2012), and the pulsation pair-instability supernova (PPSN) model from Belczynski et al. (2016).By including or excluding fallback we control the retention of BHs in our simulations. Mock photometry To convert snapshots from the -body models to photometric data for different filters used in observations, we use the code galevnb (Pang et al. 2016), which selects corresponding spectral templates from the library of Lejeune et al. (1997Lejeune et al. ( , 1998) ) according to the fundamental stellar properties, such as stellar mass, temperature, luminosity and metallicity from -body simulations.By convolving the spectra with the filter response curve from a given filter, we obtain the observational magnitudes of specific filters of main-stream telescopes, such as Hubble Space Telescope (HST) and the future Chinese Survey Space Telescope (CSST) for individual stars in the body models.In this way, we produce mock observations for -body models, which allows a direction comparison with observational data.This is useful to compare the density or surface brightness profiles, unresolved binaries and stellar mass functions between observations and the models.In this study, the line-of-sight velocity of unresolved binaries is calculated using the Johnson I-band filter (as described in Section 3.2.4).For creating the color-magnitude diagram, we employ the HST F555W and F814W filters, along with the CSST g and i fil-ters.To convert luminosity to mass for unresolved binaries, we utilize the HST F555W filter.Further details can be found in Section 3.4. Observational data To validate our -body model and ensure its accuracy in reproducing the surface number density Σ() and mass function of Pal 5, we compare it with observational data.We utilize the data from Ibata et al. (2017) for the surface number density and the masses of stars obtained from two HST observations with Program IDs 6788 (PI: Smith;Grillmair & Smith 2001) and 14535 (PI: Kuepper) as reported in Baumgardt et al. (2023). 
The observed surface number density Σ() encompasses stars with g-band magnitudes ranging from 19 to 23, with photometry obtained from the Canada-France-Hawaii Telescope.The corresponding mass range of these stars is 0.625 to 0.815 ⊙ , determined using the magnitude-mass conversion provided by G21. Regarding the masses of stars derived from the HST data, Baumgardt et al. (2023) employed Dartmouth isochrones to fit the CMDs of the clusters and employed them to convert magnitudes into masses.Further details can be found in their work. Star cluster models To reproduce Pal 5's observed surface density and present-day position in the Galaxy, we generate the initial conditions of -body models by referring to the wBH-1 and noBH-1 models in G21, which have the closest property to the observational data assuming Pal 5 contains a cluster of BHs and no BH, respectively. For the wBH-1 model, natal kick velocities of BHs after supernovae are affected by the material fallback from Fryer et al. (2012).A large fraction of BHs are retained in the clusters and finally sink to the centre via dynamical friction.The existence of a BH subsystem can significantly affect the structure and evolution of star clusters.As a result, the cluster has a loose core of luminous stars.The wBH-1 model has an initial half-mass radius, h,0 = 5.85 pc, and an initial number of stars, 0 = 2.1 × 10 5 . In contrast, the noBH-1 model assumes BHs have the same high kick velocities as neutron stars and almost none are retained after supernova explosions.Without BHs, the core collapse of luminous stars result in a dense core.In order to reproduce the observed surface brightness profile, G21 find that the cluster must therefore have had a much lower density initially.Thus, for the noBH-1 model, h,0 = 14 pc and 0 = 3.5 × 10 5 . We conducted five -body models with varying setups of primordial binaries and the presence of BHs.The initial conditions for these five models are summarized in Table 1.We assigned labels to the models to indicate the existence of primordial binaries and BHs. For BH treatment, models with the label "BH" refer to the wBH-1 model from G21, where the mass fallback scaling for kick velocities is applied so that a part of the BHs has low kick velocities and stays in the clusters.They also have the same 0 and h,0 as those of the noBH-1 model. Models with the label "noBH" refer to the noBH-1 model from G21.In these models, all BHs have high kick velocities similar to the neutron stars after asymmetric supernovae.The velocity distribution follows a (1D) Maxwellian distribution with a dispersion of 265 km/s.As a result, we found no BHs are retained in our noBH models. The prefix "noBin" and "Bin" represent without and with primordial binaries, respectively.For "Bin" models, all stars are in binaries initially.For massive binaries with the component mass > 5 M ⊙ , except the Bin-noBH-F model, all other "Bin" models have the period and mass ratio distributions follow the observational constraints of OB binaries from Sana et al. (2012). For low-mass binaries, except the Bin-BH-Alt model, all other "Bin" models assume the properties of primordial binaries following the model from Kroupa (1995a,b) and Belloni et al. 
(2017) (naming as Kroupa binary model).The orbital parameters of this model are derived from the inverse dynamical population synthesis of binaries in the Galactic field.This model assumes an universal property of primordial binaries and all stars forming in star clusters.In addition, a correction of the period and eccentricity distributions from Belloni et al. (2017) is included to better fit the observational data of GCs. For the Bin-BH-Alt model, we assume a different setup of lowmass primordial binaries (referred to as FlatLog model) as a comparison with the Kroupa binary model.The semi-major axes follow a flat distribution in the logarithmic scale where the minimum and maximum value are 3 solar radius and 2 pc, respectively.The eccentricity and mass ratio distributions are the same as those of the Kroupa binary model. The period and eccentricity distributions are shown in Figure 2.For both binary models, the initial distribution of periods covers a wide region with 9 orders of magnitudes.The initial eccentricities exhibit a sharp peak at = 0 and a broader peak at = 0.8, respectively.All binaries with peri-centre separation less than the sum of the stellar radii of the two components are excluded.Thus, an empty region is visible in the period-eccentricity distribution of Figure 2. In addition, the eccentricity distributions of the Kroupa and FlatLog are different after adjustment. These binary setups cover a wide range of binary orbital periods, where a large fraction of binaries are unstable in the cluster environment.After a short time (about one crossing time), the binary fraction significantly reduces.Referring to Pal 5, the binary fraction of our setup may be overestimated.The benefit is that we can investigate how long-term dynamical evolution of the clusters with and without BHs affect both the tight and wide binaries. The Bin-noBH-F model has the same 0 and h,0 as those in the noBH-1 model.However, after finishing the simulation, we found that the Bin-noBH-F model cannot reproduce the final structure of the noBH-1 model at 11.5 Gyr and it has sufferred complete tidal disruption before 10 Gyr.The suffix "F" in the name of the model indicates that this is a failed model.Thus, we conducted another model "Bin-noBH" by reducing h,0 to 13.2 pc.This small modification results in a cluster similar to Pal 5 after 11.5 Gyr. In addition, we excluded massive binaries in the Bin-noBH-F model to prevent non-supernovae BH formation in a binary, but we observed that such events did not occur.Therefore, in the Bin-noBH model, we added the Sana distribution to massive binaries to ensure consistency with the Bin-BH models. 
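To make the FlatLog prescription described above concrete, here is a hedged Python sketch of how one could draw semi-major axes flat in log between 3 R_⊙ and 2 pc and reject pairs whose peri-centre separation is smaller than the sum of the stellar radii. The eccentricity draw below uses a thermal distribution purely as a stand-in; the actual models use the adjusted Kroupa/Belloni eccentricity distribution shown in Figure 2.

```python
import numpy as np

RSUN_PC = 2.2546e-8          # one solar radius in parsecs (approximate)

def sample_flatlog_binary(rng, r1_rsun, r2_rsun,
                          a_min_pc=3.0 * RSUN_PC, a_max_pc=2.0):
    """Draw (a, e) for one low-mass binary: semi-major axis flat in log10
    between 3 Rsun and 2 pc; pairs whose peri-centre is smaller than the
    sum of the stellar radii are rejected and redrawn."""
    while True:
        log_a = rng.uniform(np.log10(a_min_pc), np.log10(a_max_pc))
        a = 10.0 ** log_a
        e = np.sqrt(rng.uniform())   # thermal eccentricity (stand-in assumption)
        if a * (1.0 - e) >= (r1_rsun + r2_rsun) * RSUN_PC:
            return a, e

rng = np.random.default_rng(42)
pairs = [sample_flatlog_binary(rng, 1.0, 0.5) for _ in range(5)]
```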
The common setup for all models is also summarized in Table 1.All models were evolved for a duration of 12.0 Gyr.At 11.5 Gyr, the clusters are located at the same Galactic position as Pal 5.However, since the model did not precisely reproduce the surface number density of Pal5, we continue to evolve the cluster further to determine the age (referred to as mat ) when the model matches the observation more closely, as detailed in Section 3.1.5.We assumed a spherically symmetric Plummer profile (Plummer 1911) with no primordial mass segregation.The initial mass function (IMF) of stars followed the two-component power-law shape described by Kroupa (2001).We adopted the same mass range of 0.1 − 100 ⊙ as used in G21, and the power-law indices () and mass ranges are described as: In this study, we adopted a cluster metallicity of = 0.0006, which is consistent with the value reported in Smith et al. (2002) of [Fe/H] ≈ −1.4 dex for Pal 5.The initial star cluster models were generated using the updated version (Wang et al. 2019) of the mcluster code (Küpper et al. 2011).This update includes the implementation of the Kroupa binary model generator, as shown in Figure 2. Structural evolution First, we present the evolution of the cluster structure and compare our results to the models from G21 and the observational data.Generally, although the existence of binaries does not significantly affect the structural evolution, the small difference can be amplified by the Galactic tidal field and result in early dissolution of the Bin-noBH-F model.In addition, the existence of primordial binaries reduces the BH populations and results in shorter relaxation times in the early evolution.The stochastic formation of BBHs also affects the expansion of the cluster and eventually influences the disruption of the cluster.The surface number density of -body models roughly agree with observations with a larger central density. 
Half-mass relaxation time The two-body relaxation time is an important timescale of stellar dynamics, which reflects the speed of changes in the density and mass segregation of a cluster and its tidal dissolution.The onecomponent half-mass relaxation time ( rh1 ) defined in Spitzer (1987) has the form as where is number of stars, h is the half-mass radius, is the average mass of stars, is the gravitational constant, and ln Λ is the Comlumb logarithm.When BHs exist, the binary heating is dominated by BBHs, rh1 leads to an underestimation of the relaxation timescale of the system.Wang (2020) found that a proper two-component relaxation time ( rh ) can be obtained by dividing a correction factor , defined as and where the suffixes 1 and 2 represent the quantities for non-BH and BH components, respectively.Figure 3 illustrates the evolution of rh and .The three BH models exhibit significantly shorter rh compared to the noBH models.During the first 100 Myr, the noBin-BH model displays a longer rh compared to the Bin-BH and Bin-BH-Alt models because the Bin models treat binaries as single objects when calculating rh .Consequently, the Bin-BH and Bin-BH-Alt models experience relatively faster expansion of h and faster mass segregation of BHs (see Section 3.1.2).Subsequently, the trend reverses, and the rh of the noBin-BH model becomes shorter than that of the Bin-BH and Bin-BH-Alt models due to the difference in the number of BHs (see Section 3.1.3).As a result, the h of the noBin-BH model expands faster than that of the other two models.After 8 Gyr, the rh of all three BH models starts to decrease due to mass loss via tidal evaporation. The values of for the BH models exceed 5, indicating that BHs significantly impact the relaxation process of the clusters.Further discussion of h is provided in Section 3.1.2. In contrast, the two noBH models exhibit much longer rh .There is a rapid increase in rh during the first 100 Myr, primarily due to the strong stellar winds from massive stars and the escape of BHs.Consequently, although the morphology appears similar at 11.5 Gyr for models with and without BHs, the relaxation processes differ significantly.These differences can lead to variations in the properties of binaries.In Section 3.2, we analyze the impact of these differences and discuss their implications for binary systems.It is important to note that assuming = 1 for the noBH models is not accurate, as there is still an order of magnitude difference between the minimum and maximum masses of stars. Half-mass radius Figure 4 illustrates the evolution of h for all models, including the ones from G21 for comparison.We observe that the presence of primordial binaries has a weak impact on the evolution of h , consistent with the theoretical findings of Wang et al. 
(2022).When BHs exist, the long-term structural evolution of star clusters is primarily controlled by binary heating driven by the dynamical interactions between BBHs and the surrounding objects at the cluster center.The majority of primordial binaries have much smaller masses compared to BBHs, and therefore have a negligible impact on the binary heating until most BHs have escaped from the cluster.A small subset of massive primordial binaries can eventually evolve into BBHs.However, even in the absence of these massive binaries, a star cluster can generate BBHs through chaotic three-body interactions when the central density of the cluster reaches a threshold after the core collapse of BHs (see Section 3.1.4).Consequently, we only observe minor differences of h between the Bin-BH, Bin-BH-Alt, and wBH-1 models during the first 10 Gyr of evolution.This can be explained by the differences in relaxation times ( rh ) discussed in Section 3.1.1.The galactic potential also affects h , but since all models share the same orbit, the influence is similar.However, after 10 Gyr, the Bin-BH-Alt model exhibits a similar h to that of the wBH-1 model, but its h shows significant variations, indicating an energy imbalance and the onset of a disruptive tidal phase.In contrast, both the Bin-BH and wBH-1 models remain stable until 12 Gyr.This differing behavior is attributed to stochastic BBH heating, as explained in Section 3.1.4. The BH models with binaries (Bin-BH) and without binaries (noBin-BH) exhibit different timescales for the mass segregation of black holes, as indicated by the initial rapid contraction of h,BH .In the Bin-BH model, h,BH undergoes faster contraction during the early stages of evolution compared to the noBin-BH model.This disparity can be attributed to the difference in rh , as the timescale for mass segregation is proportional to rh . When comparing the noBH models with binaries (Bin-noBH-F) and the model from G21 without binaries (noBH-1), significant differences in the evolution of h emerge after 8 Gyr.The Bin-noBH-F model experiences tidal disruption at around 9 Gyr, whereas the noBH-1 model survives until 11.5 Gyr.G21 noted that the final properties of the noBH models are more sensitive to changes in the initial conditions, and in fact argued that this 'fine tuning' problem disfavours the noBH scenario.An offset of h,0 needs to be introduced in the Bin-noBH model to achieve consistent h at 11.5 Gyr. Two factors may explain the need for this offset.Firstly, in the absence of BHs, binary heating is primarily generated by low-mass binaries.Consequently, the influence of primordial binaries is more pronounced compared to models with BHs.Secondly, due to the larger h,0 , the cluster becomes more sensitive to the galactic tide.The presence of primordial binaries affects the relaxation time of the system, as the dynamical effect of tight binaries is equivalent to that of single objects, resulting in a shorter relaxation time for the sys-tem.Consequently, the system dissolves faster, necessitating a denser initial cluster to allow the cluster's survival, as seen in the noBH-1 model.Additionally, the differences caused by the stochastic scatter of h resulting from the random seeds used to generate the initial conditions may also be amplified by the galactic tide, contributing to the divergent evolution. 
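For reference, the one-component half-mass relaxation time discussed in Section 3.1.1 can be evaluated directly from N, r_h and the mean stellar mass; the sketch below uses the standard Spitzer (1987) expression, t_rh1 = 0.138 N^(1/2) r_h^(3/2) / (⟨m⟩^(1/2) G^(1/2) ln Λ), with an assumed Coulomb-logarithm factor. The mean mass and γ in the example are illustrative assumptions, not values taken from the models.

```python
import numpy as np

G_PC_MSUN_MYR = 4.498e-3   # gravitational constant in pc^3 Msun^-1 Myr^-2

def t_rh1(N, r_h_pc, mean_mass_msun, gamma=0.11):
    """One-component half-mass relaxation time (Spitzer 1987), in Myr.
    gamma is the factor inside the Coulomb logarithm (0.11 is a common
    choice for equal-mass systems; treated here as an assumption)."""
    ln_lambda = np.log(gamma * N)
    return (0.138 * np.sqrt(N * r_h_pc**3 / (G_PC_MSUN_MYR * mean_mass_msun))
            / ln_lambda)

# Example with wBH-1-like initial values quoted above
# (N0 = 2.1e5, r_h,0 = 5.85 pc) and an assumed mean mass of 0.6 Msun:
print(t_rh1(2.1e5, 5.85, 0.6))   # of order a few thousand Myr
```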
Mass loss The upper panels of Figure 5 show the evolution of the total mass ( ()) of our models.Data of the wBH-1 and the noBH-1 from G21 are also shown as references.The mass loss has two channels: wind mass loss driven by stellar evolution and escapers via stellar dynamics of star clusters.To have a consistent definition of , all models use the same criterion to select escapers.First, we calculate the bound energy of stars and centre-of-the-mass of binaries without external potential and then select escapers with energy >0. Here we compare the three cases: For models with no primordial binary and with BHs, () of our noBin-BH model agrees with the wBH-1 model from G21.The final mass of the noBin-BH model at 11.5 Gyr is slightly larger than that of the wBH-1 model. For models with primordial binaries and with BHs, compared to the wBH-1 model, the Bin-BH and the Bin-BH-Alt models lose mass faster during the first few hundred Myr, but mass loss of the Bin-BH model becomes slower near the end of the simulation.Finally, the Bin-BH and the wBH-1 models agree with each other, while the Bin-BH-Alt model dissolves after about 11 Gyr. For models with no BHs, the Bin-noBH-F model with primordial binaries loses mass faster than the noBH-1 model with no binaries.The Bin-noBH model, with a smaller h,0 , experiences a relatively slower mass loss, and its () remains slightly above that of the noBH-1 model at 11.5 Gyr.In general, the evolution of () and h are similar for all three cases. Black holes BHs significantly affect the long-term dynamical evolution.We investigate the mass fraction of BHs ( BH ) and the bound mass of BHs ( BH ) in Figure 5.The evolution of BH in the noBin-BH and the wBH-1 models agree with each other in the first 8 Gyr.Then, BH increases more slowly in the noBin-BH model and is half that in the wBH-1 model at 11.5 Gyr. BH of the noBin-BH model is slightly smaller than that of the wBH-1 model initially and such a difference is inherited in the long-term evolution.Finally, as a large fraction of stars escape, such initial differences lead to a large difference of BH at the end. For the Bin-BH and the Bin-BH-Alt models, BH are significantly smaller than that of the noBin-BH model during the early evolution.This difference is due to the stellar evolution of massive binaries.Based on the orbital parameters of binaries from Sana et al. (2012), the progenitors of BHs (massive stars) are all in binaries.A fraction of the tight binaries suffers mass transfer and mergers.The BHs formed from these binaries can have different distribution of masses.The maximum BH of the Bin-BH model is about 250 ⊙ less than that of the noBin-BH model.Then, after the mass segregation of BHs (a few hundreds Myr), binary heating of BBHs start to kick out BHs from the cluster, and result in larger difference of BH during the long-term evolution.Although the Bin-BH (Bin-BH-Alt) and the noBin-BH models show a large difference of BH , their evolution of and h is similar before 10 Gyr.This was also observed in Wang et al. 
(2022).The evolution of the semi-major axes () of BBHs reflects both binary heating and mergers driven by gravitational wave (GW) radiation.Figure 6 provides a comparison of this evolution for the three BH models.Despite the absence of primordial binaries in the noBin-BH model, we can still observe the formation of BBHs and their orbital contraction.The frequency of BBH formation and the overall trend of are similar for all three models, except that the two models with primordial binaries exhibit a higher number of BBHs formed from these binaries during the first 1000 Gyr.Some of these BBHs with < 1 AU undergo orbital shrinking due to GW radiation, ultimately merging to form more massive BHs.These newly formed BHs lead to the creation of massive BBHs with masses exceeding 100 ⊙ .The presence of these massive BBHs can have a substantial impact on the evolution of the star cluster, influencing its dynamical and structural properties. In particular, for the Bin-BH-Alt model, the formation of a massive BBH around 8 Gyr coincides with a faster expansion of h compared to the Bin-BH model, ultimately leading to an earlier disruption of the Bin-BH-Alt model.Hence, the divergent evolution of the Bin-BH and Bin-BH-Alt models after 8 Gyr is attributed to the stochastic formation of BBHs. It is important to note that our models do not account for the high-velocity kicks experienced by newly formed black holes due to asymmetric GW radiation following mergers.Therefore, the formation of such massive BBHs might not be as common as our models suggest.Consequently, the stochastic effect of massive BBH heating could be overestimated in our cases. Surface number density profiles The determination of h and relies on the selection criteria for identifying cluster members.When comparing the -body models with observational data from Pal 5, it is challenging to use the exact same selection criterion for both.A more appropriate approach is to compare the surface number density (Σ()), where represents the angular distance from the cluster center in the International Celestial Reference System (ICRS). Figure 7 illustrates the Σ() profiles for our -body models and the observational data of Pal 5 obtained from Ibata et al. (2017).To ensure consistency with the observations, only main-sequence stars with masses ranging from 0.625 ⊙ to 0.815 ⊙ are considered in the -body data (see G21 for details). No stars are removed during the simulation, allowing for the tracking of the tidal tail evolution.The centre-of-mass position of the star clusters in the Galaxy at exactly 11.5 Gyr does not perfectly align with that of Pal 5.This is due to the long-term evolution of star cluster, where the center of the cluster drifts as a result of asymmetric mass loss due to stellar winds, supernovae, and the escape of stars.Therefore, we select snapshots from the simulations that have the closest centre-of-mass distance to that of Pal 5 whenever a comparison is required in the subsequent analysis.We then correct the positions and velocities of the stars by applying the offset between the centre-of-mass of the -body models and the observational data.The results of this correction are presented in the upper panel of Figure 7. Due to the complete disruption of the Bin-noBH-F model, it is not possible to determine the centre-of-mass position for this particular model.Therefore, it is excluded from some analysis and comparisons. 
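A minimal sketch of how Σ(R) can be built from a snapshot, assuming the star positions have already been projected onto the sky, corrected for the centre-of-mass offset, and cut to the 0.625–0.815 M_⊙ main-sequence range used in the comparison with Ibata et al. (2017). The binning choices are illustrative.

```python
import numpy as np

def surface_number_density(R_arcmin, R_edges_arcmin):
    """Surface number density profile Sigma(R) from projected angular
    distances R of the selected stars.  Returns bin centres and counts
    per square arcminute."""
    counts, edges = np.histogram(R_arcmin, bins=R_edges_arcmin)
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # annulus areas
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / area

# Example: logarithmic radial bins from 0.1 to 100 arcmin (illustrative).
R_edges = np.logspace(-1, 2, 25)
```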
The vertical lines in Figure 7, representing the half surface number radii ( hn ), indicate that all models except the Bin-BH-Alt model are more centrally concentrated than the observed Pal 5.In Figure 5, it is shown that these models retain more mass at 11.5 Gyr compared to the models presented in G21. The Bin-noBH and Bin-BH models exhibit similar Σ() profiles, but this similarity is coincidental since they had different initial density profiles and evolved in opposite ways, as demonstrated in Figure 4. Given the time-consuming nature of the simulations, it is challenging to precisely reproduce the models of G21 and the observational data.To enhance the comparison with the observational data, we selected snapshots at different ages that match the observed Σ() profile.These results are displayed in the bottom panel of Figure 7.Although the tidal streams differ substantially, we can still compare the internal properties of binaries and mass functions using these snapshots. Binding energy of binaries While the BH and noBH models may exhibit a similar Σ() profile, as demonstrated in Figure 7, their relaxation processes differ.This discrepancy can lead to different properties of binaries at 11.5 Gyr. In star clusters, perturbations from incoming objects can significantly alter the orbits of binaries.According to the Heggie (1975)-Hills (1975) law, wide or soft binaries are prone to disruption after experiencing a few close encounters with intruding objects.Conversely, tight or hard binaries tend to become even tighter after these encounters. The hard-soft boundary of binding energy ( hs ) at the distance to the cluster center () is determined by the local velocity dispersion: where 0.5⟨ 2 ⟩ is the average kinetic energy of stars and binaries at , and is the velocity.The hard-soft boundary of binaries evolves as the structure of the cluster changes over time.Initially, during the first 100 Myr of star cluster evolution, there is a rapid reduction in the hard-soft boundary.This is due to the expansion of h caused by the strong stellar wind mass loss from massive stars, as shown in Figure 4.After 100 Myr, the evolution of h slows down, and the hard-soft boundary, hs , evolves more gradually.The Bin-BH and Bin-noBH models have different initial hs () curves as shown in Figure 4, but their final hs () curves at 11.5 Gyr converge to a similar shape.This indicates that the distribution of binary binding energy at 11.5 Gyr may reflect the different evolutionary histories of hs . To further analyze the distribution of binary binding energy, Figure 8 presents a comparison of the contour plot of b versus at approximately 11.5 Gyr for the Bin-BH and Bin-noBH models.Across a wide range of values, spanning from the center of the cluster to the distant tidal tail, two distinct peaks can be observed.The first peak, located around 10-30 pc, represents the population of binaries inside the cluster.The second peak, with > 3000 pc, corresponds to binaries that have escaped from the cluster and are distributed along the tidal tail. 
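The hard–soft classification used in this subsection can be sketched as follows: E_hs(r) is taken as the local mean kinetic energy 0.5⟨mv²⟩ estimated in radial bins containing equal numbers of objects (binaries treated as unresolved centre-of-mass particles), and a binary counts as hard when its binding energy exceeds the interpolated local value. This is a schematic reconstruction of the procedure described above, not the authors' analysis code.

```python
import numpy as np

def hard_soft_boundary(r, masses, velocities, n_bins=10):
    """Estimate E_hs(r) as the mean kinetic energy 0.5*<m v^2> of objects
    (singles and binary centres of mass) in radial bins that each contain
    an equal number of objects."""
    order = np.argsort(r)
    r_sorted = r[order]
    ke = 0.5 * masses[order] * np.sum(velocities[order] ** 2, axis=1)
    bins = np.array_split(np.arange(len(r)), n_bins)
    r_mid = np.array([r_sorted[idx].mean() for idx in bins])
    e_hs = np.array([ke[idx].mean() for idx in bins])
    return r_mid, e_hs

def is_hard(E_b, r_binary, r_mid, e_hs):
    """A binary is 'hard' if its binding energy exceeds the local E_hs."""
    return E_b > np.interp(r_binary, r_mid, e_hs)
```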
We focus on the discussion of binaries within the cluster and examine the hard-soft boundaries, hs (), at three different ages: 0 Myr, 100 Myr, and 11.5 Gyr.These boundaries are plotted as reference curves.To calculate hs (), we divide the cluster into 10 radial bins, ensuring an equal number of objects per bin.Binaries are treated as unresolved objects in this analysis.The maximum value of is set to be at 90% of the Lagrangian radius, providing a radial range that reflects the cluster's size at the three ages. The results show that hs () does not exhibit strong variations along .The two models, Bin-BH and Bin-noBH, have similar hs () curves, except for an offset in the radial region at 0 Myr and 100 Myr.The peak of b falls between the hs () curves at 100 Myr and 11.5 Gyr.This suggests that during the first 100 Myr, not all soft binaries with b < hs are immediately disrupted, and many of them can survive and become hard binaries by 11.5 Gyr. Therefore, the final distribution of b does not clearly reflect the initial conditions of the two models, as anticipated by G21.However, the Bin-noBH model has a relatively larger number of binaries compared to the Bin-BH model.This difference suggests that the overall rate of binary disruption depends on the evolutionary history of the cluster density. Period distribution To analyze the binary disruption rate in relation to cluster dynamics, we examine the period distributions normalized by the bound mass of the cluster ( hb ) for three models: Bin-BH, Bin-noBH, and Bin-BH-Alt, as depicted in Figure 9.The period distributions at the initial phase (0 Gyr) and the median age (5 Gyr) are compared. In the Bin-BH and Bin-noBH models, the initial period distributions are the same, but they exhibit different density profiles.At 5 Gyr, the Bin-noBH model retains more wide binaries compared to the Bin-BH model.The hard-soft boundaries of periods, estimated for stars within h , do not exhibit significant differences between the two models.However, the peak of the period distribution in the Bin-BH model is closer to the hard-soft boundary at zero age, whereas in the Bin-noBH model, it aligns with the boundary at 5 Gyr.This disparity suggests that the disruption rate of binaries is not solely determined by the hard-soft boundary.During long-term evolution, the Bin-BH model, which is denser and contains BH subsystems, experiences a higher rate of disruption for wide binaries, resulting in the peak of the period distribution being closer to the boundary.In contrast, the Bin-noBH model preserves more wide binaries, and the peak of the period distribution reflects the boundary at 5 Gyr for the cluster. Comparing the Bin-BH and Bin-BH-Alt models, they share a similar density evolution but differ in the assumptions of their primordial binaries.The ratio of hb at 5 Gyr to the initial phase, hb (5 Gyr)/ hb (0), exhibits an identical trend for both models.This finding implies that the binary disruption is not highly sensitive to the assumption of the initial period distribution.Consequently, it is possible to infer the initial binary properties through inverse derivation if the evolution history of the cluster density is known (see Kroupa 1995a;Marks et al. 
2011;Marks & Kroupa 2012).Moreover, by utilizing the derived ratio, we can extrapolate the evolution of the period distribution of binaries for any arbitrary assumption regarding the primordial binary populations.This provides a valuable tool for understanding the long-term dynamical evolution of binary systems within star clusters and can aid in studying the impact of different initial binary properties on the binary disruption rate and cluster dynamics. Radial distribution Figure 10 compares the radial distribution of the binary fraction ( bin ) for the Bin-BH and Bin-noBH models at 11.5 Gyr. In the upper panel, the real bin is plotted as a function of the 3D radial distance from the cluster center.Both models exhibit a similar trend, with a systematic offset of bin along .The central region of the cluster shows a higher bin compared to the outer halo.At the distant tail of the cluster, bin experiences a significant increase.This can be attributed to binaries that escaped from the cluster during the early stages of evolution, as they suffer fewer dynamical perturbations and have a higher chance of survival. The lower panel of Figure 10 presents the predicted observed binary fraction as a function of projected distance.To identify binaries from the color-magnitude diagram, we assume that unresolved binaries with B-band magnitudes between 20.5 and 23 mag and a mass ratio above 0.6 can be detected.The B-band magnitudes for stars are generated by using galevnb.Notably, bin (obs) for both models is nearly identical within a projected distance up to 30 arcmin, unlike the real bin for all binaries.The observed binary fraction bin (obs) falls in the range of 0.2 to 0.3. Half-year evolution of line-of-sight velocities With high-resolution multi-epoch spectroscopic observations, it is possible to identify binaries by comparing the line-of-sight velocity changes (|Δ LOS |) over a span of approximately six months. The line-of-sight velocity LOS of an unresolved binary is the combination of two LOS of two components and is dominated by the brighter component.Thus, the |Δ LOS | values exhibit considerable variation during the multiple epochs of observation.These variations are determined by the periods, eccentricities, inclinations, and orbital phases of the binaries.Notably, larger variations are observed for short-period binaries, which could potentially aid in distinguishing these binaries from other effects that cause changes in velocity.The baseline of approximately half a year is sensitive to a maximum period of ∼ 10 4 days. 
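To see why a half-year baseline is mainly sensitive to periods below ~10^4 days, one can compare twice the radial-velocity semi-amplitude of the brighter component with the 0.3 km/s threshold adopted below; for periods much longer than the baseline only a small fraction of the orbit is sampled, so the achievable |Δv_LOS| falls well below 2K_1. The masses in the example are illustrative assumptions.

```python
import numpy as np

G_MSUN = 1.3271244e11   # G * Msun in km^3 s^-2 (so masses below are in Msun)
DAY_S = 86400.0

def rv_semi_amplitude(P_days, m1, m2, e=0.0, sin_i=1.0):
    """Line-of-sight semi-amplitude K1 (km/s) of the brighter component m1
    induced by a companion m2, for period P and eccentricity e."""
    P = P_days * DAY_S
    return ((2.0 * np.pi * G_MSUN / P) ** (1.0 / 3.0)
            * m2 * sin_i / (m1 + m2) ** (2.0 / 3.0)
            / np.sqrt(1.0 - e ** 2))

# Example: a 0.8 Msun giant with a 0.5 Msun companion.  The maximum
# |Delta v_LOS| between two epochs is at most 2*K1, to be compared with
# the 0.3 km/s detection threshold.
for P in [10.0, 1e2, 1e3, 1e4]:
    print(P, 2.0 * rv_semi_amplitude(P, 0.8, 0.5))
```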
We estimate LOS of binaries by taking the I-band flux-weighted average of the LOS of the two components.In Figure 11, we present the |Δ LOS | versus period plot for observable unresolved binaries with |Δ LOS | > 0.3 km/s and < 10 arcmin after multiple epochs, respectively.We specifically select binaries with at least one bright (post-main-sequence) star component, and some binaries include white dwarfs.These bright stars have a luminosity in the HST 555 filter brighter than 20 mag.The three models (Bin-noBH, Bin-BH, and Bin-BH-Alt) exhibit observable binaries across a wide range of period distributions, spanning from 1 to 10 4 days.The snapshots at mat (see the bottom panel of Figure 7) are chosen as the first epoch of observation.The choices of time intervals between epochs were chosen to be roughly equal space in half a year time interval, and the exact values are defined by the time step algorithm of the petar code.The number of detectable binaries is similar for all three models, with the Bin-noBH model exhibiting slightly more binaries with periods above 3000 days.This trend aligns with the period distributions shown in Figure 9, although some stochastic scatter may be present. To assess the completeness of detectable binaries via multi-epoch observations of |Δ LOS |, we compare the number counts of detectable binaries and all bright binaries as a function of periods, as shown in Figure 12.For all models, periods up to 10 4 days are detectable and all binaries with periods below 10 3 days can be detected with multiple epochs.From Figure 11, one binary in the Bin-BH model with a period between 10 3 − 10 4 days has only one epoch that shows |Δ LOS | > 0.3 km/s.A few binaries above 10 3 days in the Bin-noBH models have epochs where |Δ LOS | < 0.3 km/s, indicating that they might be missed if the observational epochs are limited to two. The observed LOS of unresolved binaries does not represent the LOS of the center-of-mass of the binaries, which complicates the determination of the physically useful line-of-sight velocity dispersion ( LOS ).A complete sample of detectable bright binaries with periods below 10 4 days can mitigate this effect and significantly improve the determination of ( LOS ).When binaries are detectable from multiepoch observations, we can exclude them from the computation of dimensional velocity dispersion 1D within h , assuming a virial equilibrium state of the cluster: This normalization allows us to account for any differences in the overall dynamical state of the clusters and facilitates a more meaningful comparison of the LOS . The presence of BHs affects the LOS in the cluster center.To illustrate the difference between models with and without BHs, we calculate the LOS of single stars within a projected distance of < 3 arcmin ( LOS,S,hn ), which corresponds to the hn (17 pc).All three models exhibit similar values of LOS,S,hn .Additionally, the LOS values of single stars within a projected distance of < 10 arcmin (58pc), which includes stars outside the effective radius of the cluster, are similar to LOS,S,hn , except the Bin-noBH model, which has a lower value. Since the normalization factor 1D is different for the three models, and the observation cannot directly obtain and h , the difference in the observed estimates of LOS for the three models may be larger than what we found in our simulations.This should be taken into consideration when interpreting the results and comparing them with observations. 
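The flux-weighted line-of-sight velocity of an unresolved binary and the dispersion estimate with detected binaries removed can be sketched as below; the weights are the I-band fluxes of the two components, as described above. This is a schematic illustration, not the analysis code used for the paper.

```python
import numpy as np

def unresolved_vlos(v_los_1, v_los_2, flux_1, flux_2):
    """Flux-weighted line-of-sight velocity of an unresolved binary
    (the I-band fluxes of the two components are the weights)."""
    return (flux_1 * v_los_1 + flux_2 * v_los_2) / (flux_1 + flux_2)

def sigma_los(v_los, detected_binary_mask=None):
    """Line-of-sight velocity dispersion; optionally exclude sources
    flagged as binaries by multi-epoch |Delta v_LOS| detections."""
    v = np.asarray(v_los, dtype=float)
    if detected_binary_mask is not None:
        v = v[~np.asarray(detected_binary_mask)]
    return np.std(v, ddof=1)
```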
The sample that includes all bright singles and binaries exhibits much larger dispersion values ( LOS,SB ) than the values ( LOS,S ) of the sample containing only singles.By excluding detectable binaries, the values ( LOS,SCB ) are significantly lower than LOS,SB , roughly 1.5-2 times of LOS,S .This procedure helps to obtain more accurate estimates of LOS . Binaries with BHs The Bin-BH model at 11.5 Gyr exhibits several binaries which contain one or two BHs (BwBHs), as depicted in Figure17.It is important to investigate whether these BwBHs can be detected, serving as evidence for the existence of BHs.Table 3 provides a summary of the parameters for these binaries, which include three types: BBHs, BH with MS (BH-MS), and BH with WD (BH-WD).Other types of BH-star binaries are not detected. The presence of BBHs has also been illustrated in Figure 6, with the possibility of some being detected by GW detectors.Three BBHs are inside the clusters and the other three distribute in the tidal stream. An interacting BwBH that contains an accreting BH primary and a non-BH secondary star is particularly interesting as a potential Xray or radio source that could be detected, providing evidence for the presence of BHs in Pal 5. Unfortunately, there is no BwBH that contains a bright post-main sequence star at 11.5 Gyr, only a few BH-MS and BH-WD exist. We calculate the Roche lobe radius using Equation 53from Eggleton (1983); Hurley et al. (2002), with the semi-major axis replaced by the peri-center distance : where = 2 / 1 .The original formula assumes a circular orbit, which misses the eccentric binaries where the accretion may occur at the peri-center separation.To account for this, we use the pericenter distance instead.When the stellar radius of the secondary star ( 2 ) is greater than or equal to the Roche lobe radius ( RL2 ), the secondary star fills its Roche lobe, and the accretion process might result in observable radiation. The 2 / RL2 values of BH-MS binaries in our models are below 10 −3 , indicating that no accretion occurs in these cases.The BH-WD binaries have the potential to become ultraluminous X-ray sources (ULXs).Detailed studies of the dynamical formation scenarios for these ULXs in globular cluster environments have been conducted by Ivanova et al. (2010).One BH-WD binary in our simulations has a period of 2.5 days and a peri-center distance () of 2 ⊙ , located ∼ 4.5 pc away from the cluster center.The ratio 2 / RL2 is approximately ∼ 0.04, which does not yet reach the criterion for accretion. In our investigation of the BH-MS binaries, we have discovered that their formation occurs through a similar dynamical channel.The MS star originates from a primordial binary of two MS stars (MS-MS).The BH originates from a primordial binary of two massive stars, which forms a BBH.The formation process of the BH-MS binaries in the Bin-BH model involves several steps: (i) The BBH undergoes several interactions with other BHs in the cluster. (ii) After one of the BHs escapes from the cluster following a strong interaction with an intruder, it becomes a single BH. (iii) This single BH eventually encounters the MS-MS binary and participates in a binary exchange event. (iv) As a result of the binary exchange, the BH joins the MS-MS binary, forming the BH-MS binary. The described process is visually illustrated in Figure 14.The dynamical formation of BH-MS binaries in star clusters have been discussed in several works (Kremer et al. 2018;Di Carlo et al. 2023;Rastello et al. 2023;Tanikawa et al. 
2023). Although no observable events from interacting BwBH occur at 11.5 Gyr, we can estimate the frequency of such events by collecting the interacting BwBHs recorded in the evolution of star clusters.The criterion to select interacting BwBHs are 2 / RL2 ≥ 1. Events that occurred in the first 100 Myr are excluded, as they mostly involve primordial binaries that are not significantly affected by stellar dynamics.The results are summarized in Table 4. The Bin-BH and Bin-BH-Alt models have a dozen of such interacting BwBHs, including both primordial and dynamically formed BwBHs.The dynamically formed BwBHs contribute to approximately half of the interacting BwBHs.The secondary stars involved in these BwBHs include several types, with one being BH-NS, which can trigger a GW merger. The Bin-noBH model also includes 5 events, all of which consist of primordial binaries.Among these events, four are BH-MS binaries, and one is a BH-NS binary.Despite the high supernovae kick velocities in the Bin-noBH model, these binaries were strongly bound before the supernovae, and the random natal kick did not disrupt the binaries.Instead, the binaries escaped from the cluster after the kick. In general, the formation rate of an interacting BwBH is estimated to be about one per 2 Gyr.Therefore, the possibility of detecting an interacting BwBH in the present-day Pal5 is practically zero. The noBin-BH and Bin-noBH-F models do not exhibit any interacting BwBH events, and thus, they are not included in the table.One common feature of these two models is the absence of massive primordial binaries, which is different from all other models that have OB binary properties from Sana et al. (2012).As a result, the possibility of dynamical formation of BwBHs is also low in these models.One important channel for the formation of interacting BwBHs is through the dynamical exchange of binary components after a close encounter between a BH and a binary.The lack of primordial binaries in these models suppresses this formation channel. Multi-epoch observations of |Δ LOS | can also be used to detect non-interacting BwBHs.For instance, utilizing multi-epoch MUSE spectroscopy, Giesers et al. (2018Giesers et al. ( , 2019) ) discovered three BwBHs in NGC3201.The stellar companions in these BwBHs have mass values of 0.6 − 0.8 ⊙ .The four BH-MS binaries in the Bin-BH model at 11.5 Gyr have comparable companion masses.Therefore, it is possible to detect BHs in Pal 5 via multi-epoch observations of |Δ LOS |.However, due to the long periods of these binaries, a long-term observation plan (several years) is needed to accurately constrain the masses of the BHs.Despite the fact that these binaries are not LOS variable over a short baseline of a few months, they may still be found: they should appear as member stars according to their position in the CMD, parallax and propor motion, but they have a large LOS offset.A solar-type star orbiting a 15 M ⊙ BH with a 10 4 d period has an orbital velocity of ∼ 25 km/s.This predicted signal is worth looking for. Color-magnitude diagram By utilizing the galevnb code, we can convert our simulation data into mock photometry.As an example, we present the colormagnitude diagram (CMD) of the Bin-BH model at 11.5 Gyr, using HST 555 and 814 filters, and CSST and − filters (Fig- ure15). 
In the CSST filters, we observe binary stars distributed between the MS and the WD sequence. These binaries consist of a WD and a low-mass main-sequence star (LMS). Similar features in the CMD have been seen in N-body simulations by Pang et al. (2022) (see figure 5 in Pang et al. 2022). In these binary systems, the luminosity is mainly dominated by the WD, as both components have very similar masses. They are considered candidates for cataclysmic variable (CV) stars. The CSST magnitudes of the WD and CV candidates are below 26 mag, while the corresponding HST F555W magnitudes are above 26 mag. Therefore, CSST has the advantage of potentially detecting many WD and CV candidates in Pal 5. We also highlight the BH-MS binaries shown in Table 3. Among them, three have an HST F555W magnitude below 21 mag and a CSST magnitude below 16 mag. If the multi-epoch spectroscopy observations can reach this magnitude limit, it is possible to detect these binaries via the observation of |Δv_LOS|.

Mass functions

The present-day mass function of a star cluster is influenced by various factors, including the IMF, mass segregation, and tidal evaporation. To investigate the impact of primordial binaries and black holes (BHs) on the mass function, we compare the mass functions of our N-body models with the observed ones. In order to make a meaningful comparison with the observed data, we select snapshots from our models that closely match the observed surface number density profile Σ(R), as shown in the lower panel of Figure 7. It is important to consider the resolution limitations when comparing with observations. The widest binary in our models has a semi-major axis of approximately 1.8 × 10^4 AU. Given the distance to Pal 5, a spatial resolution of less than 1″ is required to resolve this binary. The best resolution achievable by HST is around 0.05″, which means that only a small fraction of wide binaries with periods above 1.4 × 10^7 days can potentially be resolved. Therefore, we assume that most binaries remain unresolved in observations and calculate their magnitudes by summing the fluxes of their two components. Figure 15 shows the color-magnitude diagram (CMD) of unresolved binaries, which appear redder and brighter compared to the single stars. To investigate this effect, we compare the (actual) total masses (m_tot) of binaries with the masses converted from their F555W-band magnitudes (m_obs). For main-sequence binaries, we calculate the absolute F555W-band flux and then determine the mass of a single star that has the closest flux value, which serves as the converted mass m_obs. The comparison between m_tot and m_obs is depicted in Figure 16.

[Table 3 caption: the parameters of BwBHs for the Bin-BH model at 11.5 Gyr. m_1 and m_2 denote the masses of the primary and secondary components, respectively; r_p represents the peri-center distance; R_2/R_RL2 indicates the secondary stellar radius relative to the Roche-lobe overflow radius; and r represents the distance of the binary from the cluster center.]

The difference between m_tot and m_obs is highly sensitive to the mass ratio q and to the luminosity ratio as well. Here, the mass ratio is defined as the minimum mass divided by the maximum mass of the two components in a binary. A higher q leads to a larger difference between the m_tot and m_obs values. Consequently, the m_obs of equal-mass unresolved binaries can be significantly lower than their true m_tot.
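The magnitude of an unresolved pair and the resulting bias on the photometrically inferred mass can be illustrated with the following sketch. The m_obs/m_tot ratio uses the rough L ∝ M^4 scaling quoted in the following paragraph; it gives a ratio of about 0.89 at q = 0.12 and 0.59 at q = 1, so it should be read as a back-of-envelope version of the relation used in the paper rather than an exact reproduction of the quoted numbers.

```python
import numpy as np

def combined_magnitude(mag_1, mag_2):
    """Apparent magnitude of an unresolved pair: the fluxes add."""
    return -2.5 * np.log10(10.0 ** (-0.4 * mag_1) + 10.0 ** (-0.4 * mag_2))

def mobs_over_mtot(q, alpha=4.0):
    """Ratio of the photometrically inferred mass to the true total mass of
    an unresolved main-sequence pair, under the rough assumption L ~ M^alpha
    (alpha ~ 4 for 0.3-0.8 Msun).  m_obs is the mass of the single star whose
    luminosity equals the combined luminosity."""
    # L_tot ~ 1 + q^alpha  ->  m_obs ~ (1 + q^alpha)^(1/alpha); m_tot ~ 1 + q
    return (1.0 + q ** alpha) ** (1.0 / alpha) / (1.0 + q)

for q in [0.12, 0.5, 1.0]:
    print(q, mobs_over_mtot(q))
```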
Furthermore, for binaries with the lowest q values, there is a systematic offset between m_tot and m_obs. As a result, if unresolved main-sequence binaries cannot be distinguished from single stars, the total masses of all these binaries would be underestimated. The offset between m_tot and m_obs is determined by the minimum q. There is a nonlinear relation between stellar luminosity (L) and mass (M). For MS stars in the mass range of 0.3-0.8 M_⊙, L ∝ M^4, and thus we can roughly estimate the relation between the total binary mass (m_tot) and the binary mass used in the mass-function estimation (m_obs). In our model, the minimum q is about 0.12, which corresponds to a maximum m_obs/m_tot ≈ 0.93.

[Figure caption fragment: the left panel corresponds to the HST F555W-F814W vs. F555W filters, while the right two panels correspond to the CSST g-i vs. g and u-y vs. u filters, respectively.]

To compute the mass functions, we collect stars within the same observational fields used by the HST observations: the Smith field (Grillmair & Smith 2001) and the Kuepper field (unpublished; reported in Baumgardt et al. 2023), as shown in Figure 17. The center position of the star cluster model is defined as the centre-of-mass of the cluster members, allowing us to obtain mass functions as a function of radial distance. The intersections between the two observational fields and the three radial bins (referred to as "Field" regions) are used for selecting samples of stars. It is important to note that, due to the limited observational coverage and stochastic scatter, the comparison between the observed and modeled mass functions may be affected. To improve statistical robustness, we also select stars for measuring the mass functions using only the three radial bins of the N-body models (referred to as "Ring" regions). By comparing the mass functions obtained from the N-body models and from the observed data, we can investigate the effects of primordial binaries and black holes on the mass function of Pal 5. We conducted an analysis to assess the impact of unresolved binaries on the determination of the mass function in the Kuepper field, using the Bin-BH model. The results are depicted in Figure 18. We considered two scenarios for the treatment of binaries in the mass function:

• RB (Resolved Binaries): All binaries are resolved, meaning that the individual masses of the binary components are counted in the mass function.

• URB (Unresolved Binaries): m_obs is used for the mass estimation. This scenario represents a real observation in which binaries are unresolved.

The mass functions obtained from the RB and URB scenarios display steeper slopes compared to the observational mass function. In Figure 19, we present a comparison between the mass functions obtained from the N-body models using the URB method and the observational data. The upper panel of Figure 19 shows the number counts. The N-body models exhibit a comparable number of stars within the three Field regions when compared to the observed data. The Bin-noBH model shows a slightly higher number of stars, indicating that a longer evolution time of more than 12 Gyr might be necessary for a better match. However, this slight discrepancy does not impact our comparison with the observed normalized counts.
The middle and lower panels of Figure 19 display the normalized cumulative mass distributions for the Field regions and for the Ring regions, respectively. In the inner radial bin, no significant difference is observed when comparing the Field-region and Ring-region distributions. However, for the middle and outer radial bins, a noticeable stochastic scatter is present in the Field-region distributions. This scatter is particularly evident for the Bin-BH model in the outer radial bin. These findings suggest that the observational data may also exhibit similar scatter, and it is important to consider this when comparing the N-body models with the observational data.

The standard way to characterize a mass function is by using a power-law form, dN/dm = A m^(-α), where A is a normalisation constant and α is the power-law index used for fitting. We employ the fitting method outlined in Khalaj & Baumgardt (2013) to determine α and its statistical error accurately. The fitting equation involves the total number of stars, the individual stellar masses, the minimum stellar mass, and the ratio of the maximum to the minimum stellar masses, and it has to be solved iteratively; the corresponding statistical error follows from the same formalism.

The power-law indices α of the mass functions obtained from the fits are summarized in Table 5. In the inner radial bin, the α values for the three Bin models are in rough agreement with the observational data, while the noBin-BH model shows a significantly higher α. This result remains consistent when comparing the mass functions within the Field and the Ring regions.

In the middle and outer radial bins, all of the N-body models exhibit higher α values compared to the observational data. This discrepancy is more pronounced when considering the normalized cumulative distribution in the Ring regions. These differences suggest that the N-body models exhibit more pronounced mass segregation than what is indicated by the observational data, although we need to take into account the potential stochastic scatter inherent in the observational data. The presence of BHs does not appear to have a clear impact on the mass functions. The models incorporating primordial binaries exhibit better agreement with the observed data, particularly in the inner radial bin.

Uncertainty of initial condition

Due to the computational expense, we are unable to explore the entire parameter space of the initial conditions of Pal 5, so several aspects are not addressed in this study. These include the assumptions regarding the properties of primordial binaries, the evolution of the Galaxy, the uncertainty associated with stellar evolution, the gravitational-wave kicks following mergers of binary black holes (BBHs), and the realistic formation environment of the cluster. In our study, we have adopted two extreme assumptions for the primordial binaries (Kroupa and FlatLog) with a 100% initial binary fraction. However, these assumptions may not accurately reflect the true properties of primordial binaries in Pal 5.
Nonetheless, Fig, 9 suggests that the initial period distribution has no significant impact on the survival fraction of binaries as a function of period, as long as the cluster possesses a similar initial density profile and orbit in the Galaxy.Furthermore, the evolution of the binary fraction ( hb ) can be utilized to derive the period evolution for different assumptions regarding the initial binary populations.By using a 100% initial binary fraction, we also explore the maximum potential dynamical impact of primordial binaries.The wide range of periods considered allows us to investigate the behavior of hard and soft binaries with and without black holes (BHs). Our model assumes a static Galactic environment, which is consistent with the setup employed in G21 to facilitate proper comparison.Incorporating a realistic time-dependent Galactic potential, which may be important to understand the density profile of the stream (Pearson, Price-Whelan & Johnston 2017), is challenging due to the limited observational constraints on Galactic evolution.It is plausible that Pal 5 was formed in a significantly different Galactic environment, potentially leading to variations in mass loss and density evolution compared to our models.However, we believe that the overall trend driven by the presence of BHs should be similar.Thus, our results offer a general perspective on how the existence of BHs impacts the binary populations. The retention of BHs in clusters after supernovae remains an open question based on stellar evolution models.Our models do not consider gravitational wave kicks following BBH mergers, which could lead to an overprediction of massive BBHs with masses exceeding 100 ⊙ .Although such BBHs can influence the timescale of cluster disruption as shown in Figure 4 and 6, their impact on the period distribution of binaries is limited since the hard-soft boundary is not determined by a single specific BBH. The initial conditions of the clusters assume spherically symmetric Plummer models, similar to previous N-body simulations of GCs.However, the initial complexity of GC formation, including irregular cluster structures prior to achieving virial equilibrium and the presence of gas, may affect the binary populations during the gasembedded phase. Observation of binaries In Section 3.2.4,we conducted an analysis to assess the feasibility of detecting binaries by measuring the radial velocity difference (|Δ LOS |) through multiple half-year observations.The maximum time-interval reaches half a year.The results indicate that approximately 40 binaries could be identified, covering a period distribution ranging from a few to 10 4 days.The model without BHs tends to exhibit a higher fraction of long-period binaries.While this observation cannot directly constrain the existence of BHs, it can provide insights into the presence of wide (long-period) binaries.Such information may be valuable in constraining the initial period distribution by utilizing the hb values depicted in Figure 9. 
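As a rough guide to which binaries such a radial-velocity campaign can reach, the sketch below evaluates the line-of-sight semi-amplitude of the brighter component for an assumed circular, edge-on orbit; over a baseline longer than the period, |Δv_LOS| can approach twice this value. The masses, periods and orbital parameters are placeholders, not values taken from the models.

```python
import numpy as np

G = 6.674e-11                   # m^3 kg^-1 s^-2
MSUN, DAY = 1.989e30, 86400.0   # kg, s

def rv_semi_amplitude(m1, m2, period_days, inc_deg=90.0, ecc=0.0):
    """Radial-velocity semi-amplitude of star 1 (km/s), orbited by companion of mass m2 (Msun)."""
    p = period_days * DAY
    k1 = ((2.0 * np.pi * G / p) ** (1.0 / 3.0)
          * m2 * MSUN * np.sin(np.radians(inc_deg))
          / (((m1 + m2) * MSUN) ** (2.0 / 3.0) * np.sqrt(1.0 - ecc ** 2)))
    return k1 / 1.0e3

for p_days in (10.0, 1.0e3, 1.0e4):
    print(f"P = {p_days:>7.0f} d  ->  K1 = {rv_semi_amplitude(0.8, 0.3, p_days):.2f} km/s")
```

With a detection threshold of |Δv_LOS| > 0.3 km/s, short- and intermediate-period binaries around bright stars are easily flagged, while the amplitude decreases towards long periods and low inclinations.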
To obtain a stronger constraint on the existence of BHs, it is crucial to obtain additional observations of binaries in the period range around 10 5 days, which has proven to be challenging thus far.Furthermore, it is necessary to observe binaries in different regions of Pal 5, including the inner region and the distant tail.Given the uncertainties associated with the properties of primordial binaries, assuming an initial period distribution becomes essential for constraining the density evolution based on the observed period distribution of present-day binaries.Notably, wide binaries disrupted within the dense cluster can survive along the low-density tidal tail.Therefore, the difference in the fraction of wide binaries inside the cluster and in the distant tail can help constrain both the initial period distribution of binaries and the density evolution of clusters, ultimately shedding light on the existence of BHs. Another approach to constrain the BH population is by detecting BH-star binaries.We find four BH-MS binaries with relatively high MS masses, as shown in Table 3 and 4 and also illustrated in Figure 8. Figure 15 suggests that the CSST has the potential to detect CVs, thereby providing additional constraints on binaries with WDs. Multi-epoch spectroscopic observations for |Δ LOS | offer another possibility to detect non-interacting BH-star binaries.By utilizing this data, we can obtain better constraints on LOS , providing an indirect constraint on the dynamical impact from BHs in the cluster center. CONCLUSIONS In this study, we performed -body simulations of the Galactic halo globular cluster Pal 5 with and without the inclusion of BHs, while considering a significant fraction of primordial binaries.Our main objectives were to investigate the influence of binaries and BHs on the cluster's dynamical evolution and to understand how the presence of BHs affects the binary populations within Pal 5. Additionally, we aimed to determine whether the observations of binary populations could provide indirect evidence for the existence of BHs in Pal 5. Our findings indicate that the presence of primordial binaries has a noticeable but not drastic effect on the cluster's dynamical evolution, consistent with previous work Wang et al. (2022).In models with BHs, the existence of primordial binaries alters the half-mass relaxation time ( rh ) and reduces the number of BBHs that contribute to binary heating.However, the influence on mass loss and radial evolution is more complex.Models with primordial binaries (Bin-BH and Bin-BH-Alt) exhibit shorter initial rh compared to models without primordial binaries (noBin-BH model).After 1 Gyr, the situation reverses due to larger half-mass radius ( h ) and lower total BH mass ( BH ) in the Bin models.This trend changes again after 8 Gyr when a massive BBH forms in Bin-BH-Alt, accelerating the cluster's dissolution (see Figure 6).Thus, the tidal dissolution time does not exhibit a simple dependence on the presence of primordial binaries. In models without BHs and a low initial density (Bin-noBH and Bin-noBH-F), the evolution is more sensitive to the presence of primordial binaries compared to the BH models.Achieving a similar cluster at 11.5 Gyr requires a higher initial density in these cases. 
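A quantity that recurs throughout this discussion is the hard-soft boundary. A back-of-the-envelope estimate compares the binary binding energy with the typical kinetic energy of cluster stars; the snippet below uses one common convention (definitions differ by factors of order unity), and the stellar masses and velocity dispersion are placeholders rather than model values.

```python
import numpy as np

G_PC = 4.301e-3        # gravitational constant in pc Msun^-1 (km/s)^2
PC_IN_AU = 206265.0

def hard_soft_boundary(m1, m2, mean_mass, sigma_kms):
    """Semi-major axis (AU) and period (days) at the hard-soft boundary.

    'Hard' is taken here as G*m1*m2/(2a) > 0.5*mean_mass*sigma^2, i.e.
    binding energy above the mean kinetic energy of a field star.
    """
    a_pc = G_PC * m1 * m2 / (mean_mass * sigma_kms ** 2)
    a_au = a_pc * PC_IN_AU
    period_days = np.sqrt(a_au ** 3 / (m1 + m2)) * 365.25   # Kepler's third law
    return a_au, period_days

print(hard_soft_boundary(0.8, 0.3, 0.5, 1.0))   # placeholder velocity dispersion of 1 km/s
```

As the cluster loses mass and its velocity dispersion drops, the boundary moves to wider separations and longer periods, which is why the surviving wide-binary population encodes the density evolution discussed above.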
Conversely, the assumption of BH existence significantly affects the population of wide binaries.Over long-term evolution, hard binaries are less affected by dynamical disruption.The fraction of hard binaries remains independent of the initial period distribution (Figure 9).The remaining fraction of wide binaries depends on the evolution of the hard-soft boundary.The period distribution of models with BHs peaks at a shorter period compared to models without BHs, consistent with the hard-soft boundary.However, we find that not all wide binaries outside the hard-soft boundary are immediately disrupted.Many wide binaries outside this boundary can persist in the cluster for a long time.This suggests that the observation of wide binaries may not readily constrain the actual hard-soft boundary and be used to determine the cluster's density evolution history. We have found that multi-epoch spectroscopic observations can detect most binaries with bright stars and periods below 10 4 days.By excluding these binaries, the measurement of LOS of bright stars can be significantly improved, providing better indirect constraints on the BH population through dynamical analysis. Additionally, we have identified 4 BH-MS binaries in the Bin-BH model at 11.5 Gyr, which could potentially be detected using the same method, offering an additional possibility to provide evidence for the existence of BHs. We also investigated how binaries and BHs influence the presentday mass function of Pal 5. Our results suggest that models with primordial binaries have mass function more consistent with the observational data, while the impact of BHs on the mass function is weak.All -body models exhibit mass segregation features that are not observed in the outer region of Pal 5.However, it is important to consider the potential impact of stochastic scatter, which may in-fluence the conclusions drawn from the comparison.This indicates the need for alternative initial mass functions or additional observations of mass functions, with improved statistical precision, to better understand the underlying reasons for this discrepancy. Figure 1 . Figure 1.The orbit of the Pal 5 in the Galactocentric frame.The upper and lower panels show the projected trajectory in the G − G and G − G planes, respectively. G is the projected radial coordinate in the G − G plane.The symbols '+' and 'x' represent the zero-age and present-day positions, respectively. Figure 2 . Figure 2. Initial periods () v.s.eccentricities () of primordial binaries for the Kroupa binary model and the FlatLog model.The central plot of each panel shows - of individual binaries.The upper and the right histograms show the normalized distribution of and , respectively.The distribution of massive binaries is shown by blue lines. 5 10Figure 3 . Figure 3.The evolution of two-component half-mass relaxation time for all models ( rh ; upper two panels) and factors (lower panel) for BH models. Figure 4 . Figure 4.The evolution of half-mass radius of all objects ( h ; dashed curves) and the half-mass radius of BHs ( h,BH ; solid curves).The wBH-1 and noBH-1 models from Gieles et al. (2021) are shown as references. Figure 5 . Figure 5.The evolution of the bound mass (), the BH mass fraction ( BH ) and the bound mass of BHs ( BH ) for all models.The data of the wBH-1 and noBH-1 models are shown for comparison. Figure 6 . 
Figure 6.The evolution of the semi-major axes of BBHs within the core radius ( c ) of the three BH models.The colors of the lines indicate the masses of the BBHs.We can observe a reduction of the semi-major axes of individual BBHs, indicating their dynamical hardening over time ( > 10 AU) and inspiral by GW radiation ( < 1 AU). Figure 7 . Figure 7.The surface number density (Σ ()) profiles are presented for the -body models along with observational data from Ibata et al. (2017).The upper panel displays snapshots of the -body models at the present-day Galactic position and at apporximately 11.5 Gyr.The lower panel shows -body snapshots that match the observed Σ () profile.The ages of the corresponding snapshots ( mat ) are indicated in the legend.Vertical lines are used to indicate the the 'effective radius' -the radius containing half the number of stars in projection -( hn ) of the clusters. Figure 8 . Figure 8.The contour of - b at 11.5 Gyr for the Bin-BH model (upper panel) and the Bin-noBH model (lower panel).Binaries with one or two compact objects are excluded in the contour.Instead, BH-MS and BH-WD binaries are marked as blue and lightblue stars, respectively.Three curves show the hard-soft boundaries hs ( ) at zero age, 100 Myr and 11.5 Gyr, respectively.The white region outside the color region indicates no binary. Figure 9 . Figure9.The period distribution of binaries within h at two different stages: the initial phase (represented by steps) and at 5 Gyr (shown as filled histograms).The upper panel displays the number of binaries within h normalized by the bound mass of the cluster ( hb ) .The lower panel shows the ratio between hb at 5 Gyr and the initial hb .The vertical dashed and solid lines represent the hard-soft boundary of period within h at 0 and 5 Gyr, respectively. Figure 10 . Figure10.Upper panel: binary fractions of all objects along the 3D radial direction for the Bin-BH and Bin-noBH models; Lower panel: prediction for the observed binary fractions with an I-band magnitude range of 20.5 and 23 mag (corresponding to main sequence stars) and mass ratio > 0.6. Figure 11 . Figure 11.The line-of-sight velocity difference of binaries (|Δ LOS |) as a function of period for multi epochs of observation.The initial snapshots of the three models are chosen at = mat .Each binary type, classified according to the sse (Single Stellar Evolution) code, is represented by a different color.The stellar types include: MS (Main Sequence), HG (Hertzsprung Gap), GB (First Giant Branch), CHeB (Core Helium Burning), AGB (Asymptotic Giant Branch), and WD (White Dwarf). Figure 12 . Figure12.The number counts of bright binaries with post-main-sequence component for three models at .The legend "tot" include all binaries and the "obs" include only detectable binaries with |Δ LOS | > 0.3 km/s. Figure 13 . Figure 13.The line-of-sight velocities of individual bright stars and binaries are plotted, and detectable binaries with |Δ LOS | > 0.3 km/s are indicated as green dots. Figure 14 . Figure 14.Illustration of the BH-MS formation process.The black and grey circles represent BHs, and the blue circles represent MS stars. Figure 15 . 
Figure 15.The color-magnitude diagram of the Bin-BH model at 11.5 Gyr.Red points are single stars.Other points are unresolved binaries where colors represent mass ratio ().The black crosses are BH-MS binaries shown in Table3.The left panel corresponds to the HST F555W-F814W and F555W filters, while the right two panels correspond to the CSST g-i and g, and u-y and u filters, respectively. Figure 16 . Figure 16.The total masses ( tot ) v.s. the F555W-band flux-converted masses ( obs ) for main-sequence binaries of the Bin-BH model at 12 Gyr.The grey line shows the case of tot = obs .Colors represent mass ratio (). Figure 17 .Figure 18 . Figure 17.The 2-dimensional density map of the noBin-BH model at 11.8Gyr.The color contours with solid lines represent the Smith and Kuepper fields, which have available HST data.The boundaries of the three ring radial bins are indicated by dashed grey circles.Two approaches are employed for selecting samples to measure the mass functions: 1) using the intersection between the Smith/Kuepper fields and the ring regions (referred to as "Field" regions); and 2) using only the ring regions themselves (referred to as "Ring" regions) to enhance statistical accuracy. Figure 19 . Figure 19.The mass functions of four -body models in three radial bins, with the observational data shown as a reference.The upper panel displays the number counts (), the middle panel shows the normalized cumulative distribution f () for the Field regions, and the lower panel shows the normalized cumulative distribution a () for the Ring regions. Table 2 . The table displays the line-of-sight velocity dispersion ( LOS ) estimated from bright stars and binaries.The last column, 1D , represents the estimation of LOS based on Equation 6, which serves as the unit for the other four columns.In particular, the column LOS,S,hn presents the LOS value derived from single stars within < 3 arcmin (17 pc, approximately the hn ).The remaining three columns depict LOS within < 10 arcmin (58 pc), where LOS,S , LOS,SB , and LOS,SCB represent the LOS values from only single stars, both single stars and binaries, and both single stars and undetectable binaries with |Δ LOS | ≤ 0.3 km/s, respectively.LOS .In our -body model, we simulate the impact of excluding binaries with |Δ LOS | > 0.3 km/s on the determination of LOS .Figure13displays the individual line-of-sight velocities of bright stars ( LOS ), undetectable bright binaries with |Δ LOS | ≤ 0.3 km/s ( LOS,SB ), and detectable binaries with |Δ LOS | > 0.3 km/s, aligned with the projected distance.Most binaries with LOS > 1 km/s are detectable, and thus, we can remove them for the calculation of LOS .Table2demonstrates how removing detectable binaries improves the determination of LOS .To have a consistent comparison among the three models, we scale the value of LOS by the estimated 1- Table 4 . 1 [ ⊙ ] 2 [ ⊙ ]The accretion events of BwBHs after 100 Myr.The "Primordial" column indicates whether the binary is primordial (formed during the initial star cluster formation) or dynamically formed (formed through interactions within the star cluster after its formation).The "Type" column indicates the combination of binary companions.The secondary stellar types involved in the accretion events include: MS, HG , GB, CHeB, AGB, HeHG (Hertzsprung Gap Naked Helium star), WD and NS (Neutron star). Table 5 . 
Fitting results for the power-law indices (α) of the mass functions in different radial bins. The column labeled "region" distinguishes between the Smith and Kuepper fields (referred to as "Field") and the ring regions (referred to as "Ring").
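As a companion to the power-law indices reported in Table 5, a generic maximum-likelihood fit of a truncated power law can be sketched as follows. This is an illustration only; it is not necessarily the exact estimator of Khalaj & Baumgardt (2013), and the uncertainty is taken from the curvature of the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_alpha(masses):
    """ML fit of dN/dm ~ m^-alpha on [min(masses), max(masses)] (generic sketch)."""
    m = np.asarray(masses, dtype=float)
    m_min, m_max = m.min(), m.max()

    def neg_loglike(alpha):
        # normalisation of the truncated power law (alpha = 1 would need the log form)
        norm = (m_max ** (1.0 - alpha) - m_min ** (1.0 - alpha)) / (1.0 - alpha)
        return alpha * np.log(m).sum() + m.size * np.log(norm)

    alpha = minimize_scalar(neg_loglike, bounds=(-3.0, 6.0), method="bounded").x
    h = 1.0e-4                                   # error from the likelihood curvature
    curv = (neg_loglike(alpha + h) - 2.0 * neg_loglike(alpha)
            + neg_loglike(alpha - h)) / h ** 2
    return alpha, 1.0 / np.sqrt(curv)
```

Applied to Field or Ring samples, this returns the index and a one-sigma uncertainty; stochastic scatter in sparse outer bins translates directly into large error bars on α.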
2023-09-08T06:42:52.774Z
2023-09-07T00:00:00.000
{ "year": 2023, "sha1": "d6f86432ec761812230b5712e86783abd1b55765", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stad3657/53863189/stad3657.pdf", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "d6f86432ec761812230b5712e86783abd1b55765", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
25812209
pes2o/s2orc
v3-fos-license
Functional Renormalization Group and the Field Theory of Disordered Elastic Systems We study elastic systems such as interfaces or lattices, pinned by quenched disorder. To escape triviality as a result of ``dimensional reduction'', we use the functional renormalization group. Difficulties arise in the calculation of the renormalization group functions beyond 1-loop order. Even worse, observables such as the 2-point correlation function exhibit the same problem already at 1-loop order. These difficulties are due to the non-analyticity of the renormalized disorder correlator at zero temperature, which is inherent to the physics beyond the Larkin length, characterized by many metastable states. As a result, 2-loop diagrams, which involve derivatives of the disorder correlator at the non-analytic point, are naively"ambiguous''. We examine several routes out of this dilemma, which lead to a unique renormalizable field-theory at 2-loop order. It is also the only theory consistent with the potentiality of the problem. The beta-function differs from previous work and the one at depinning by novel"anomalous terms''. For interfaces and random bond disorder we find a roughness exponent zeta = 0.20829804 epsilon + 0.006858 epsilon^2, epsilon = 4-d. For random field disorder we find zeta = epsilon/3 and compute universal amplitudes to order epsilon^2. For periodic systems we evaluate the universal amplitude of the 2-point function. We also clarify the dependence of universal amplitudes on the boundary conditions at large scale. All predictions are in good agreement with numerical and exact results, and an improvement over one loop. Finally we calculate higher correlation functions, which turn out to be equivalent to those at depinning to leading order in epsilon. I. INTRODUCTION Elastic objects pinned by quenched disorder are central to the physics of disordered systems. In the last decades a considerable amount of research has been devoted to them. From the theory side they are among the simplest, but still quite non-trivial, models of glasses with complex energy landscape and many metastable states. They are related to a remarkably broad set of problems, from subsequences of random permutations in mathematics [1,2,3], random matrices [4,5] to growth models [6,7,8,9,10,11,12,13,14] and Burgers turbulence in physics [15,16], as well as directed polymers [6,17] and optimization problems such as sequence alignment in biology [18,19,20]. Foremost, they are very useful models for numerous experimental systems, each with its specific features in a variety of situations. Interfaces in magnets [21,22] experience either short-range disorder (random bond RB), or long range (random field RF). Charge density waves (CDW) [23] or the Bragg glass in superconductors [24,25,26,27,28] are periodic objects pinned by disorder. The contact line of liquid helium meniscus on a rough substrate is governed by long range elasticity [29,30,31]. All these systems can be parameterized by a N -component height or displacement field u x , where x denotes the d-dimensional internal coordinate of the elastic object (we will use u q to denote Fourier components). An interface in the 3D random field Ising model has d = 2, N = 1, a vortex lattice d = 3, N = 2, a contact-line d = 1 and N = 1. The so-called directed polymer (d = 1) has been much studied [32] as it maps onto the Kardar-Parisi-Zhang growth model [6] for any N . 
The equilibrium problem is defined by the partition function Z = ∫ D[u] exp(−H[u]/T) associated to the Hamiltonian

H[u] = ∫ d^d x [ (c/2) (∇_x u_x)² + V(u_x, x) ] ,   (1.1)

which is the sum of an elastic energy, which tends to suppress fluctuations away from the perfectly ordered state u = 0, and a random potential, which enhances them. The resulting roughness exponent ζ, defined through the growth of the displacement correlations ⟨(u_x − u_0)²⟩ ∼ |x|^{2ζ} (averaged over thermal fluctuations and disorder), is measured in experiments for systems at equilibrium (ζ_eq) or driven by a force f. Here and below ⟨· · ·⟩ denotes thermal averages and an overbar denotes disorder averages. In some cases, long-range elasticity appears, e.g. for the contact line, by integrating out the bulk degrees of freedom [31], corresponding to q² → |q| in the elastic energy. As will become clear later, the random potential can without loss of generality be chosen Gaussian, with second cumulant (disorder averaged) V(u, x) V(u′, x′) = R(u − u′) δ^d(x − x′), where the correlator R(u) can take various forms: periodic systems are described by a periodic function R(u), random bond disorder by a short-range function, and random field disorder of variance σ by R(u) ∼ −σ|u| at large u. Although this paper is devoted to equilibrium statics, some comparison with dynamics will be made, and it is thus useful to indicate the equation of motion

η ∂_t u_{xt} = c ∇²_x u_{xt} + F(x, u_{xt}) + f ,   (1.4)

with friction η. The pinning force is F(u, x) = −∂_u V(u, x), with correlator Δ(u) = −R′′(u) in the bare model.

Despite some significant progress, the model (1.1) has mostly resisted analytical treatment, and one often has to rely on numerics. Apart from the case of the directed polymer in 1+1 dimensions (d = 1, N = 1), where a set of exact and rigorous results was obtained [2,5,33,34,35], analytical methods are scarce. Two main analytical methods exist at present, both interesting but also with severe limitations. The first one is the replica Gaussian Variational Method (GVM) [36]. It is a mean-field method, which can be justified for N = ∞ and relies on spontaneous replica symmetry breaking (RSB) [37,38]. Although useful as an approximation, its validity at finite N remains unclear. Indeed, it seems now generally accepted that RSB does not occur for low d and N. The remaining so-called weak RSB in excitations [39,40,41] may not be different from a more conventional droplet picture. Another exactly solvable mean-field limit is the directed polymer on the Cayley tree, which also mimics N → ∞, and there too it is not fully clear how to meaningfully expand around that limit [42,43,44]. The second main analytical method is the Functional Renormalization Group (FRG), which attempts a dimensional expansion around d = 4 [26,28,45,46,47]. The hope there is to include fluctuations, neglected in the mean-field approaches. However, until now this method has only been developed to one loop, for good reasons, as we discuss below. Its consistency has never been checked or tested in any calculation beyond one loop (i.e. lowest order in ǫ = 4 − d). Thus, contrary to pure interacting elastic systems (such as e.g. polymers), there is at present no quantitative method, such as a renormalizable field theory, which would allow one to compute accurately all universal observables in these systems. The central reason for these difficulties is the existence of many metastable states (i.e. local extrema) in these systems. Although qualitative arguments show that they arise beyond the Larkin length [48], these are hard to capture by conventional field theory methods.
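For orientation, the Larkin length invoked above can be recovered from a simple scaling estimate: treat the pinning force as a random force of strength Δ(0) = −R′′(0) and ask at which scale the accumulated displacement reaches the scale r_f over which the random potential is correlated (r_f and the prefactors are schematic; this is the standard argument of [48], not a result specific to the present calculation):

  ⟨u²⟩_L ∼ (Δ(0)/c²) L^{4−d}   ⟹   L_c ∼ ( c² r_f² / Δ(0) )^{1/(4−d)} .

Below L_c a single energy minimum dominates and plain perturbation theory is adequate; beyond L_c many metastable states compete, which is precisely what conventional field-theoretic methods struggle to capture.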
The best illustration of that is the so called dimensional reduction (DR) phenomenon, which renders naive perturbation theory useless [21,49,50,51,52,53] in pinned elastic systems as well as in a wider class of disordered models (e.g. random field spin models). Indeed it is shown that to any order in the disorder at zero temperature T = 0, any physical observable is found to be identical to its (trivial) average in a Gaussian random force (Larkin) model, e.g. ζ = (4 − d)/2 for RB disorder. Thus perturbation theory appears (naively) unable to help in situations where there are many metastable states. The two above mentioned methods (GVM and FRG) are presently the only known ways to escape dimensional reduction and to obtain non-trivial values for ζ (in two different limits but consistent when they can be compared [26,28,47]). The mean field method accounts for metastable states by RSB. This however may go further than needed since it implies a large number of pure states (i.e. low (free) energy states differing by O(T ) in (free) energy). The other method, the FRG, captures metastability through a non-analytic action with a cusp singularity. Both the RSB and the cusp arise dynamically, i.e. spontaneously, in the limits studied. The 1-loop FRG has had some success in describing pinned systems. It was noted by Fisher [46] within a Wilson scheme analysis of the interface problem in d = 4 − ǫ that the coarse grained disorder correlator becomes non-analytic beyond the Larkin scale L c , yielding large scale results distinct from naive perturbation theory. Within this approach an infinite set of operators becomes relevant in d < 4, parameterized by the second cumulant R(u) of the random potential. Explicit solution of the 1-loop FRG for R(u) gives several non-trivial attractive fixed points (FP) to O(ǫ) proposed in [46] to describe RB, RF disorder and in [26,28], periodic systems such as CDW or vortex lattices. All these fixed points exhibit a "cusp" singularity as R * ′′ (u) − R * ′′ (0) ∼ |u| at small |u|. The cusp was interpreted in terms of shocks in the renormalized force [54], familiar from the study of Burgers turbulence (for d = 1, N = 1). The dynamical FRG was also developed to one loop [55,56,57] to describe the depinning transition. The mere existence of a non-zero critical threshold force f c ∼ |∆ ′ (0 + )| > 0 is a direct consequence of the cusp (it vanishes for an analytic force correlator ∆(u)). Extension to non-zero temperature T > 0 suggested that the cusp is rounded within a thermal boundary layer u ∼ T L −θ . This was interpreted to describe thermal activation and leads to a reasonable derivation of the celebrated creep law for activated motion [58,59]. In standard critical phenomena a successful 1-loop calculation usually quickly opens the way for higher loop computations, allowing for accurate calculation of universal observables and comparison with simulations and experiments, and eventually a proof of renormalizability. In the present context however, no such work has appeared in the last fifteen years since the initial proposal of [46], a striking sign of the high difficulties which remain. Only recently a 2-loop calculation was performed [60,61] but since this study is confined to an analytic R(u) it only applies below the Larkin length and does not consistently address the true large scale critical behavior. In fact doubts were even raised [47] about the validity of the ǫ-expansion beyond order ǫ. 
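For completeness, the creep law referred to above can be stated schematically (the barrier scale U_c and the threshold force f_c are non-universal, and ζ_eq is the equilibrium roughness exponent; this is the scaling form derived in [58,59] rather than a new result):

  v ∼ exp[ −(U_c/T) (f_c/f)^μ ] ,   μ = (d − 2 + 2ζ_eq)/(2 − ζ_eq) ,

i.e. the velocity at small drive f is controlled by thermal activation over barriers which diverge as f → 0. Whether such one-loop predictions are robust is precisely what a calculation beyond one loop has to decide.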
It is thus crucial to construct a renormalizable field theory, which describes statics and depinning of disordered elastic systems, and which allows for a systematic expansion in ǫ = 4 − d. As long as this is not achieved, the physical meaning and validity of the 1-loop approximation does not stand on solid ground and thus, legitimately, may itself be called into question. Indeed, despite its successes, the 1-loop approach has obvious weaknesses. One example is that the FRG flow equation for the equilibrium statics and for depinning are identical, while it is clear that these are two vastly different physical phenomena, depinning being irreversible. Also, the detailed mechanism by which the system escapes dimensional reduction in both cases is not really elucidated. Finally, there exists no convincing scheme to compute correlations, and in fact no calculation of higher than 2-point correlations has been performed. Another motivation to investigate the FRG is that it should apply to other disordered systems, such as random field spin models, where dimensional reduction also occurs and progress has been slow [45,62,63,64,65]. Insight into model (1.1) will thus certainly lead to progresses in a broader class of disordered systems. In this paper we construct a renormalizable field theory for the statics of disordered elastic systems beyond one loop. The main difficulty is the non-analytic nature of the theory (i.e. of the fixed point effective action) at T = 0. This makes it a priori quite different from conventional field theories for pure systems. We find that the 2-loop diagrams are naively "ambiguous", i.e. it is not obvious how to assign a value to them. We want to emphasize that this difficulty already exists at one loop, e.g. even the simplest one loop correction to the two point function is naively "ambiguous". Thus it is not a mere curiosity but a fundamental problem with the theory, "swept under the rug" in all previous studies, but which becomes unavoidable to confront at 2-loop order. It originates from the metastability inherent in the problem. For the related theory of the depinning transition, we have shown in companion papers [66,67] how to surmount this problem and we constructed a 2-loop renormalizable field theory from first principles. There, all ambiguities are naturally lifted using the known exact property that the manifold only moves forward in the slowly moving steady state. Unfortunately in the statics there is no such helpful property and the ambiguity problem is even more arduous. Here we examine the possible ways of curing these difficulties. We find that the natural physical requirements, i.e. that the theory should be (i) renormalizable (i.e. that a universal continuum limit exists independent of short-scale details), (ii) that the renormalized force should remain potential, and (iii) that no stronger singularity than the cusp in R ′′ (u) should appear to two loop (i.e. no "supercusp"), are rather restrictive and constrain possible choices. We then propose a theory which satisfies all these physical requirements and is consistent to two loops. The resulting β-function differs from the one derived in previous studies [60,61] by novel static "anomalous terms". These are different from the dynamical "anomalous terms" obtained in [66,67,68] showing that indeed depinning and statics differ at two loop, fulfilling another physical requirement. We then study the fixed points describing several universality classes, i.e. 
the interface with RB and RF disorder, the random periodic problem, and the case of LR elasticity. We obtain the O(ǫ 2 ) corrections to several universal quantities. The prediction for the roughness exponent ζ for random bond disorder has the correct sign and order of magnitude to notably improve the precision as compared to numerics in d = 3, 2 and to match the exact result ζ = 2/3 in d = 1. For random field disorder we find ζ = ǫ/3 which, for equilibrium is likely to hold to all orders. By contrast, non-trivial corrections of order O(ǫ 2 ) were found for depinning [66,67]. The amplitude, which in that case is a universal function of the random field strength is computed and it is found that the 2loop result also improves the agreement as compared to the exact result known [69] for d = 0. For the periodic CDW case we compare with the numerical simulations in d = 3 and obtain reasonable agreement. Some of the results of this paper were briefly described in a short version [66] and agree with a companion study using exact RG [70,71]). Since the physical results also seem to favor this theory we then look for better methods to justify the various assumptions. We found several methods which allow to lift ambigu-ities and all yield consistent answers. A detailed discussion of these methods is given. In particular we find that correlation functions can be unambiguously defined in the limit of a small background field which splits apart quasi-degenerate states when they occur. This is very similar to what was found in a related study where we obtained the exact solution of the FRG in the large N limit [72]. Finally, the methods introduced here will be used and developed further to obtain a renormalizable theory to three loops, and compute its β-function in [73]. Let us mention that a first principles method which avoids ambiguities is to study the system at T > 0. However, this turns out to be highly involved. It is attempted via exact RG in [70] and studied more recently in [74,75] where a field theory of thermal droplet excitation was constructed. A short account of our work has appeared in [66], and a short pedagogical introduction is given in [76]. The outline of this paper is as follows. In Section II we explain in a detailed and pedagogical way the perturbation theory and the power counting. In Section III we compute the 1-loop (Section III A) and 2-loop (Section III B) corrections to the disorder. The calculation of the repeated 1-loop counter-term is given in Section III C. In Section III D we identify the values for ambiguous graphs. This yields a renormalizable theory with a finite β-function, which is potential and free of a supercusp. The more systematic discussion of these ambiguities is postponed to Section V. We derive the βfunction and in Section IV present physical results, exponents and universal amplitudes to O(ǫ 2 ). Some of these quantities are new, and have not yet been tested numerically. In Section V we enumerate all the methods which aim at lifting ambiguities and explain in details several of them which gave consistent results. In Section VI we detail the proper definition and calculation of correlation functions. In Appendix A and B we present two methods which seem promising but do not work, in order to illustrate the difficulties of the problem. In Appendix F we present a summary of all one and 2-loop corrections including finite temperature. In Appendix D we give details of calculations for what we call the sloop elimination method. 
The reader interested in the results can skip Section II and Section III and go directly to Section IV. The reader interested in the detailed discussion of the problems arising in this field theory should read Section V. A. Replicated action and effective action We study the static equilibrium problem using replicas, i.e. consider the partition sum in presence of sources: from which all static observables can be obtained. The action S and replicated Hamiltonian corresponding to (1.1) are a runs from 1 to n and the limit of zero number of replicas n = 0 is implicit everywhere. We have added a small mass which confines the interface inside a quadratic well, and provides an infrared cutoff. We are interested in the large scale limit m → 0. We will denote For periodic systems the integration is over the first Brillouin zone. A short-scale UV cutoff is implied at q ∼ Λ, but for actual calculations we find it more convenient to use dimensional regularization. We also consider the effective action functional Γ[u] associated to S. It is, as we recall [77,78], the Legendre transform of the generating function of connected correlations If we had chosen non-Gaussian disorder additional terms with free sums over p replicas (called p-replica terms) corresponding to higher cumulants of disorder would be present in (2.2), together with a factor of 1/T p . These terms are generated in the perturbation expansion, i.e. they are present in Γ[u]. We do not include them in (2.2) because, as we will see below, these higher disorder cumulants are not relevant within (conventional) power counting, so for now we ignore them. The temperature T appears explicitly in the replicated action (2.2), although we will focus on the T = 0 limit. Because the disorder distribution is translation invariant, the disorder term in the above action is invariant under the so called statistical tilt symmetry [17,79] (STS), i.e. the shift u a x → u a x + g x . One implication of STS is that the 1-replica replica part of the action (i.e. the first line of 2.2) is uncorrected by disorder, i.e. it is the same in Γ[u] and S[u] [80]. Since the elastic coefficient is not renormalized, we have set it to unity. B. Diagrammatics, definitions We first study perturbation theory, its graphical representation and power counting. Everywhere in the paper we denote the exact 2-point correlation by C ab (x − y), i.e. in Fourier: while the free correlation function (from the elastic term) used for perturbation theory in the disorder is denoted by G ab (x − y) = δ ab G(x − y) and reads in Fourier: (2) is a three replica term which is represented graphically by a line: (2.8) Each propagator thus carries one factor of G(q) = T /(q 2 + m 2 ). Each disorder interaction vertex comes with a factor of 1/T 2 and gives one momentum conservation rule. Since each disorder vertex is a function, an arbitrary number of lines can come out of it. k lines coming out of a vertex result in k derivatives R (k) after Wick contractions Since each disorder vertex contains two replicas it is sometimes convenient to use "splitted vertices" rather than "unsplitted ones". Thus we call "vertex" an unsplitted vertex and we call a "point" the half of a vertex. (2.10) Each unsplitted diagram thus gives rise to several splitted diagrams, as illustrated in Fig. 1 One can define the number of connected components in a graph with splitted vertices. Since each propagator identifies two replicas, a p-replica term contains p connected components. 
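As a reminder of where the 1/T² disorder vertex used in this diagrammatics comes from: averaging the replicated Boltzmann weight over the Gaussian potential with second cumulant R gives, schematically (H_el denotes the elastic plus mass part of the Hamiltonian),

  exp(−Σ_a H[u^a]/T) (disorder averaged) = exp( −(1/T) Σ_a H_el[u^a] + (1/(2T²)) Σ_{ab} ∫_x R(u^a_x − u^b_x) ) ,

so each disorder vertex indeed carries a factor 1/T² and couples two replicas, while each propagator carries a factor T, which is the bookkeeping used in the power counting below.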
When the 2-points of a vertex are connected, this vertex is said to be "saturated". It gives a derivative evaluated at zero R (k) (0). Standard momentum loops are loops with respect to unsplitted vertices, while we call "sloops" the loops with respect to points (in splitted diagrams). This is illustrated in Fig. (2) The momentum 1-loop and 2-loop diagrams which correct the disorder at T = 0 are shown in Fig. 3 (unsplitted vertices). There are three types of 2-loop graphs A, B and C. Since they have two vertices (a factor R/T 2 each) and three propagators (a factor of T each) the graphs E and F lead to corrections to R proportional to temperature and will not be studied here (see however Appendix F). It is important to distinguish between fully saturated diagrams and functional diagrams. The FS diagrams are those needed for a full average, e.g. a correlation function. There all fields are contracted and one is only left with the space dependence. These are the standard diagrams in more conventional polynomial field theories such as φ 4 . Then all vertices are evaluated at u = 0, yielding products of derivatives R (k) (0). These are also the graphs which come in the standard expansion of Γ[u] in powers of u which generate the "proper" or "renormalized" vertices, i.e. the sum over all 1-particle irreducible graphs with some external legs, from which all correlations can be obtained. Note that in the fully saturated diagrams there can be no free point, all points in a vertex have to be connected to some propagator (and to some external replica) otherwise there is a free replica sum yielding a factor of n and a vanishing contribution in the limit of n = 0. However, since we have to deal with a function R(u) we will more often consider functional diagrams. A functional diagram still depends on the field u. It can depend on u at several points in space (multi-local term), as for example: (2.11) Such a graph with p connected components corresponds to a p replica functional term. Or it can represent the projection of such a term onto a local part, as arises in the standard operator product expansion (OPE): (2.12) Typically using functional diagrams we want to compute the effective action functional Γ[u], or its local part, i.e. its value for a spatially uniform mode u a x = u a , which includes the corrections to disorder. Specifying the two replicas on each connected component, one example of a 1-particle irreducible diagram producing corrections to disorder is (2.13) The complete analysis of these corrections will be made in Section (III). Finally, note that functional diagrams may con-tain saturated vertices, whose space and field dependence disappears (such as (c) in Fig. 2) and that the limit n → 0 does not produce constraints. An example is the calculation of Γ[u] since one can always attach additional external legs to any point by taking a derivative with respect to the field u. C. Dimensional reduction If we consider fully saturated diagrams and analytic R(u) we find trivial results. This is because at T = 0 the model exhibits the property of dimensional reduction [21,49,50,51,52,53] (DR) both in the statics and dynamics. Its "naive" perturbation theory, obtained by taking for the disorder correlator R(u) an analytic function of u has a triviality property. As is easy to show using the above diagrammatic rules (see a typical cancellation due to the "mounting" construction in Fig. 4, see also Appendix D in Ref. 
[70]) the perturbative expansion of any correlation function i u ai xi S (of any analytic observable) in the derivatives R (k) (0) yields to all orders the same result as that obtained from the Gaussian theory setting R(u) ≡ R ′′ (0)u 2 /2 (the so called Larkin random force model). The 2-point function thus reads to all orders: (2.14) (at T = 0 correlations are independent of the replica indices a i ). This dimensional reduction results in a roughness exponent ζ = (4 − d)/2 which is well known to be incorrect. One physical reason is that this T = 0 perturbation theory amounts to solving in perturbation the zero force equation This, whenever more than one solution exists (which certainly happens for small m) is clearly not identical to finding the lowest energy configuration [102]. Curing this problem within the field theory, is highly non-trivial. Coarse graining within the FRG up to a scale at which the renormalized disorder correlator R(u) becomes non-analytic (which includes some of the physics of multiple extrema) is one possible route, although understanding exactly how this cures the problem within the field theory is a difficult open problem. It is important to note that dimensional reduction is not the end of perturbation theory, since saturated diagrams remain non-trivial at finite temperature, so one way out is to study T > 0. This is not the route chosen here, instead we will attempt to work at T = 0 with a non-analytic action and focus on functional diagrams which remain non-trivial. D. Power counting Let us now consider power counting. Let us recall the conventional analysis within e.g. the Wilson scheme [46,47]. The elastic term is invariant under x → bx, u → b ζ u and T → b θ T , with θ = d − 2 + 2ζ. ζ is for now undetermined. Under this transformation the disorder function R is multiplied by b d−2θ = b 4−d+2ζ . It becomes relevant for d < 4, provided ζ < (4 − d)/2 which is physically expected (for instance in the random periodic case, ζ = 0 is the only possible choice, and for other cases ζ = O(ǫ)). The rescaled dimensionless temperature term scales as −m∂ mT = −θT (see below) and is formally irrelevant near four dimension. In the end ζ will be fixed by the disorder distribution at the fixed point. To be more precise, we want to determine in the field theoretic framework the necessary counter-terms to render the theory UV finite as d → 4. The study of superficial divergences usually involves examining the irreducible vertex functions (IVF): with E u external fields u (at momenta q i , i = 1, ..E u ). The perturbation expansion of a given IVF to any given order in the disorder is represented by a set of 1-particle irreducible (1PI) graphs (in unsplitted diagrammatics). Being the derivative of the effective action they are the important physical objects since all averages of products of fields u can be expressed as tree diagrams of the IVF. Finiteness of the IVF thus implies finiteness of all such averages. However since Γ[u] is non-analytic in some directions (e.g. for a uniform mode u a x = u a ), derivatives such as (2.16) may not exist at q = 0, and we have to be more general and consider functional diagrams. The (disorder part of the) effective action is the sum of k-replica terms, noted Γ k [u] is the sum over 1PI graphs with k connected components (using splitted vertices), and itself depends on T as where l is the number of sloops. 
Thus at T = 0 there are no sloops and Γ k [u] = Γ k,l=0 [u] is the sum over 1PI tree graphs with k connected components (trees in replica-space, not position-space). Let us compute the superficial degree of UV divergence δ of a functional graph entering the expansion of the local part of the effective action. We denote v the number of unsplitted disorder vertices, I the number of internal lines (propagators), L the number of loops and l the number of sloops. One has the relations (2.20) The total factors of T are T I−2v = T l−k . At T = 0 (l = 0) the superficial degree of UV divergence is thus Thus in d = 4 the only graphs with positive superficial degree of divergence are for k = 1 (quadratic ∼ Λ 2 ), and k = 2 (log divergence). k = 1 corresponds to a constant in the free energy. Because of STS all single replica terms are uncorrected and there is no wave-function renormalization in this model. Thus to renormalize the T = 0 theory we need a priori to look only at graphs with p = 2 connected components, which by definition are those correcting the second cumulant R(u), compute their divergent parts, and construct the proper counter-term to the function R(u). As mentioned above, higher cumulants are irrelevant by power counting, and are superficially UV-finite. The graphs which contribute to the 2replica part Γ 2 [u] have L loops with L = 1 + v + l. At zero temperature, l = 0, thus L = 1 + v. The loop expansion thus corresponds to the expansion in power of R(u) and, as we will see below, to an ǫ-expansion. More generally using the above relation one has, schematically where the number of internal lines gives the total number of derivatives acting on an argument u of the functions R. For instance, the 2-replica part at T = 0 is a sum over L-loop graphs of the type Each additional power of T yields an additional quadratic divergence, more generally a factor of T Λ d−2 . Thus to obtain a theory where observables are finite as Λ → ∞ one must start from a model where the initial temperature scales with the UV cutoff as This is similar to φ 4 -theory where it is known that a φ 6 term can be present and yields a finite UV limit (i.e. does not spoil renormalizability) only if it has the form g 6 φ 6 /Λ d−2 . Such a term, with precisely this cutoff dependence, is in fact usually present in the starting bare model, e.g. in lattice spin models. It then produces only a finite shift to g 4 without changing universal properties [103]. Here each factor ofT comes with a (b) (a) factor of Λ 2−d which compensates the UV divergence from the graph. Thus the finite-T theory may also be renormalizable. Computing the resulting shift in R(u) to order R 2 by resumming the diagrams E and F of Fig. 3 and all similar diagrams to any number of loops has not been attempted here (see however Appendix F). The "finite shift" here is, however, much less innocuous than in φ 4 -theory since it smoothes the cusp. The effects of a non-zero temperature are explored in [72,74,75,81]. One can use the freedom to rescale u by m −ζ . The dimensionless temperatureT = T m θ is then defined. The disorder term in Γ[u] is then is as in (2.2) with R(u) replaced by m ǫ−4ζR (um ζ ) in terms of a dimensionless rescaled functioñ R of a dimensionless rescaled argument. This will be further discussed below. III. RENORMALIZATION PROGRAM In this section we compute the effective action to 2-loop order at T = 0. We are only interested in the part which contains UV divergences as d → 4. 
We know from the analysis of the last section that we only need to consider the local k = 2 2-replica part, i.e. the corrections to R(u). These L = 1 and L = 2 loop corrections contain v = L + 1 vertices. Higher v yields higher number of replicas. A. 1-loop corrections to disorder To one loop at T = 0 there is only one unsplitted diagram v = 2, corresponding to two splitted diagrams (a) and (b) as indicated in figure 5. Both come with a combinatorial factor of 1/2! from Taylor-expanding the exponential function and 1/2 from the action. (a) has a combinatoric factor of 2 and (b) of 4. Together, they add up to the 1-loop correction to disorder Note that (b) has a saturated vertex, hence the factor R ′′ (0). This does not lead to ambiguities in the 1-loop β-function, since the FRG to one loop yields a discontinuity only in the third derivative and R ′′ (u) remains continuous. B. 2-loop corrections to disorder There are only three graphs correcting disorder at T = 0 with L = 2 loops and v = 3 vertices. They are denoted A, B and C and we will examine each of them. We begin our analysis with class A. Class A The possible diagrams with splitted vertices of type A are diagrams (a) to (f) given in Fig. 7. The resulting correction to R(u) is written as: where the combinatorial factors are: 1/3! from the Taylorexpansion of the exponential function, 2/2 3 from the explicit factors of 1/2 in the interaction, a factor of 3 to chose the vertex at the top of the hat, and a factor of 2 for the possible two choices in each of the vertices. Furthermore below some additional combinatorial factors are given: A factor of 2 for generic graphs and 1 if it has the mirror symmetry with respect to the vertical axis. Each diagram symbol denotes the diagram including the symmetry factor. The first two graphs are: (3.5) To obtain the sign one can choose an "orientation" in each vertex (u a − u b ), the final result does not depend on the choice. The minus sign in a comes because the two legs enter on opposite points in the top vertex. Define the 2-loop momentum integral (see Appendix A in Ref. [67]) Graphs a and b are non-ambiguous. They are the only contributions in an analytic theory. The other graphs are and vanish if R(u) is analytic (since then R ′′′ (0) = 0) but a priori should be considered when R(u) is non-analytic. We have indicated their "natural" sign and amplitude (e.g. symmetry factor setting λ i = 1) but have introduced factors λ i to recall that they are ambiguous: since R ′′′ (0 + ) = −R ′′′ (0 − ) one is confronted to a choice each time one saturates a vertex and there is no obvious way to choose the sign at this stage. We recall that we have defined saturated vertices as vertices evaluated at u = 0 while unsaturated vertices still contain u and do not lead to ambiguities. At this stage we will not discuss in detail how to give a definite values to these contributions to disorder. This will be done in Section V. We will just use the most reasonable assumptions, which will be reevaluated, and justified later. A natural step is to set since these graphs cannot correct R(u) as they are odd functions of u, which yields no contribution when inserted into the action ab R(u a − u b ). Class B We now turn to graphs of type B (bubble-diagrams), g to l represented in Fig. 8. We use the same convention as in to an extra factor of 2. The final result is Only k and l are ambiguous but it is also natural to set: which we do for now, and discuss later. Class C Diagrams m, n, p, q of class C are represented in Fig. 
9 m There it is natural to assume which we do for now and discuss it later. This leaves no correction to disorder from graphs C, as is the case for depinning [67]. This is fortunate, since the integral I t has a quadratic UV-divergence in d = 4, while I T is UV-finite. Physically, it is unlikely that these could enter physical observables as the tadpole divergence can usually be eliminated by proper field reordering (normal-ordering) or vacuum subtraction. To summarize, for the equilibrium statics at T = 0 in perturbation of R ≡ R(u), the contributions to the disorder to one and two loops, i.e. the corresponding terms in the effective action Γ[u,û] are We have allowed for a yet undetermined constant λ = λ e − 2λ f . We now show that requiring renormalizability allows to fix λ. C. Renormalization method to two loops and calculation of counter-terms Let us now recall the method, also used in our study of depinning [67], to renormalize a theory where the interaction is not a single coupling-constant, but a whole function, the disordercorrelator R(u). We denote by R 0 the bare disorder -this is the object in which perturbation theory is carried out -i.e. one considers the bare action (2.2) with R → R 0 . We denote here by R the renormalized dimensionless disorder i.e. the corresponding term in the effective action Γ[u] is m ǫ R (i.e. the local 2-replica part of Γ[u]). Symbolically, we can write (3.29) We define the dimensionless symmetric bilinear 1-loop and trilinear 2-loop functions (see (3.26) and (3.27)) such that They can be extended to non-equal argument using f (x, y) := and a similar expression for the trilinear function. Whenever possible we will use the shorthand notation δ (1) (R) = δ (1) (R, R) and δ (2) (R) = δ (2) (R, R, R). The expression of R obtained perturbatively in powers of R 0 at 2-loop order reads: (3.32) It contains terms of order 1/ǫ and 1/ǫ 2 . This is sufficient to calculate the RG-functions at this order. In principle, one has to keep the finite part of the 1-loop terms, but we will work in a scheme, where these terms are exactly 0, by normalizing all diagrams by the 1-loop diagram. Inverting (3.32) yields where δ (1,1) (R) is the 1-loop repeated counter-term: (3.34) The β-function is by definition the derivative of R at fixed R 0 . It reads Using the inversion formula (3.33), the β-function can be written in terms of the renormalized disorder R: In order to proceed, let us calculate the repeated 1-loop counter-term δ 1,1 (R). We start from the 1-loop counter-term (3.26), which has the bilinear form with the dimensionless integralĨ 1 := I 1 m=1 ; we will use the same convention forĨ A := I A m=1 . Thus δ 1,1 (R) reads In the course of the calculation the only possible ambiguity could come from but there is no ambiguity since the function R ′′′ (u) 2 is con- This is exactly the same calculation as is done to one loop when computing the non-trivial fixed point for the pinning force correla- Thus there is no doubt that the graph G with the 1-loop counter-term inserted in a 1-loop diagram is non-ambiguous. D. Final β-function, renormalizability and potentiality The 2-loop β-function (3.36) then becomes with the help of (3.38) − m∂ m R(u) = ǫR(u) The first result is that, apart from the last "anomalous" term, the 1/ǫ 2 -terms cancel in the corrections to disorder. In the terms coming from graphs A this works because, as we recall, Graphs B cancel completely since we have chosen as counter-term the full 1-loop graph. 
So, for an analytic theory, the β-function (3.40) would be finite. This however is incomplete, since the flow of such a β-function leads to a non-analytic R(u) above the Larkin scale. Thus we must consider the last, "anomalous" term in (3.40). It clearly appears that the only value of λ compatible with the cancellation of the 1/ǫ² poles is λ = 1, leading to a finite β-function. Thus the requirement that the theory be renormalizable (i.e. yield universal large-scale results independent of the short-scale details) fixes the value λ = 1. Note that the cancellation of the graphs B also works thanks to (3.17). It is interesting to compare with what happens at depinning. There the cancellation of the 1/ǫ²-terms in the anomalous part is more complicated but automatic. It requires a consistent evaluation of all anomalous non-analytic diagrams. In the depinning theory the cancellation was unusual: a non-trivial bubble diagram (called i₃ in [67]) was crucial in achieving the cancellation. In the statics the 2-loop bubble diagrams of type B appear to be simply the square of the 1-loop ones, which is the usual situation. This however is clearly a consequence of (3.17), so the previous experience with depinning indicates that care is required, and we will discuss some justification for (3.17) below. In the search for a fixed point it is convenient to write the β-function for the rescaled function R(u) defined in (3.42), which amounts to rescaling the fields u by m^ζ. Note that this is a simple field rescaling and different from standard wavefunction renormalization, since as mentioned above there is none in this theory due to STS. We have also included the 1-loop integral factor to simplify notations and further calculations (equivalently it can be absorbed in the normalization of momentum or space integrals). With this, the β-function takes the simple form (3.43). We have left a λ for future use, but its value in the theory we study here is set to 1. Also for convenience we have introduced the factor X, which is X = 1 + O(ǫ) in the ǫ-expansion studied here, but has a different value for LR elasticity, see below. In fact it is shown in Appendix E that lim ǫ→0 X is independent of the particular infrared cutoff procedure (here a massive scheme). Although the global rescaling factor of R, ǫĨ₁, has O(ǫ) corrections which depend on the infrared cutoff chosen, the FRG equation above does not depend on it. Note that the above equation remains true in fixed dimension, with the appropriate value for X, up to terms of order R⁴. We will see that the value λ = 1 in (3.43) has other highly desirable properties. First, this value is the only one which guarantees that the non-analyticity in R(u) does not become more severe at two loops than it is at one loop. Let us take one derivative of (3.43) and take u → 0⁺. One finds (3.46). Thus if λ ≠ 1 the cusp in R′′ and the resulting finite value of R′′′(0⁺) immediately create a cusp in R′. The singularity has become worse! We call this a supercusp. It must be avoided in the statics (see also the discussion in Section V). Interestingly, it does occur in the driven dynamics, where it is a physical signature of irreversibility. Indeed this property is intimately related to another highly desirable property of the statics: potentiality.
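For later reference, the rescaled flow equation referred to as (3.43) and its u → 0⁺ derivative referred to as (3.46) have the following structure; this is a reconstruction in the present notation (in particular the placement of the factors X and λ is an assumption), not a verbatim restatement:
\begin{align}
 -m\partial_m \tilde R(u) &= (\epsilon-4\zeta)\,\tilde R(u) + \zeta\, u\, \tilde R'(u)
  + \tfrac12 \tilde R''(u)^2 - \tilde R''(0)\,\tilde R''(u) \nonumber\\
 &\quad + X\Big[\tfrac12\big(\tilde R''(u)-\tilde R''(0)\big)\,\tilde R'''(u)^2
  - \tfrac{\lambda}{2}\,\tilde R'''(0^+)^2\,\tilde R''(u)\Big] , \\
 -m\partial_m \tilde R'(0^+) &= (\epsilon-3\zeta)\,\tilde R'(0^+)
  + \tfrac{X}{2}\,(1-\lambda)\,\tilde R'''(0^+)^3 .
\end{align}
The second equation makes explicit that a supercusp is fed at a rate proportional to (1 − λ) once the cusp amplitude R̃′′′(0⁺) is non-zero, so that λ = 1 is singled out; this ties in with the potentiality property discussed next.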
This property is more conveniently described by considering the flow equation for the (rescaled) correlator of the pinning forcẽ ∆(u) = −R ′′ (u), the second derivative of (3.43): Formally, this equation could have been obtained directly from a study of the dynamical field theory. Such an equation was indeed obtained at depinning but with a different value of λ: which shows that statics and dynamics differ not at one, but at two loops. Integrating the equation for ∆(u) once yields a non-zero fixed point value for ∆(u) unless λ = 1. Potentiality on the other hand requires that the force remains the derivative of a potential and that, for short-range disorder (e.g. RB for interface) one must have ∆(u) = 0. While violating potentiality is desirable at depinning where irreversibility is expected, this would be physically incorrect in the statics, and thus again points to the value λ = 1 as the physically correct one. Thus we will for now assume that this is the correct theory of the statics and explore its consequences in the next section. In section V we will provide better justifications, and explain our understanding of the tantalizing problem of ambiguous diagrammatics in the non-analytic theory of pinned disordered systems. Especially we will present methods, which satisfy all the above constraints of renormalizability, absence of a supercusp and potentiality up to 3-loop order [73]. IV. ANALYSIS OF FIXED POINTS AND PHYSICAL RESULTS The FRG-equation derived above describes several different physical situations, and admits a small number of fixed-point functionsR * (u) describing a few universality classes. The fixed point associated to a periodic disorder correlator describes single component periodic systems (such as charge density waves). The fixed point associated to a short-range (exponentially decaying) correlatorR(u) describes a class of systems with so called random bond disorder. There is also a family of fixed points associated to long range, i.e. algebraic, correlations. This includes, as one particular example, the random field disorder, which will be discussed separately. We now give the results for these fixed points, first for short-range elasticity, then for LR elasticity, and compare with available numerical and exact results. The most important quantity to compute is the roughness exponent ζ. Since we have shown that X in (3.43) is universal to dominant order this proves universality of ζ to the order in ǫ studied here (i.e. O(ǫ 2 )). For LR disorder and for periodic fixed points we can also compute the universal amplitudes for the correlation function of displacements, and discuss their dependence on large scale boundary conditions. Anticipating a bit, let us summarize the general result that we use in that case, which is derived in Section VI. The T = 0 disorder-averaged 2-point function for q → 0, q/m fixed, reads for any dimension d, in Fourier The amplitudec(d) is given by the relation (exact to all orders in the present scheme): It is found to be universal only for long range and periodic disorder. The scaling function, computed in Section VI for SR and LR elasticity, is always universal (independent of short scale details) and satisfies F d (0) = 1 and where b is computed in Section VI. 
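Collecting the properties used here and derived in Section VI, the scaling form in question can be summarized as
\begin{equation}
 C(q)\;=\;\overline{\langle u_q u_{-q}\rangle}\Big|_{T=0}
 \;=\;\tilde c(d)\, m^{-(d+2\zeta)}\, F_d(q/m),\qquad F_d(0)=1,\qquad
 F_d(z)\simeq \frac{B}{z^{\,d+2\zeta}}\quad(z\to\infty),
\end{equation}
with B = 1 + bǫ + O(ǫ²) and F₄(z) = 1/(1+z²)² in d = 4; the massless-limit amplitude then follows as c(d) = B c̃(d). This is merely a restatement assembled from the formulas quoted below, given here for convenience.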
This gives us all we need for a calculation to O(ǫ 2 ) of the universal amplitude, e.g for the propagator in the massless limit m ≪ q: The result for C(q = 0) in presence of a mass is also interesting since it gives the fluctuations of the center of mass coordinate for an interface physically confined in a quadratic well. Although that situation would be interesting to study numerically, most numerical results are for finite size systems of volume L d (and m → 0). We thus also define in that case: with lim z→∞ g d (z) = 1. For periodic boundary conditions q = 2πn/L, n ∈ Z d and n = 0. The prime indicates that the value of this amplitude depends on the large scale boundary conditions, i.e. it depends on whether e.g. a mass is used or periodic boundary conditions as an infrared cutoff. The ratio, computed in Section VI for short range elasticity, is unity only for periodic disorder, in which case the amplitude is independent of both large and small scale details. Before studying the different fixed points, let us mention an important property, valid under all conditions: IfR(u) is solution of (3.43), then is also a solution (for κ a constant independent of m). We can use this property to fixR(0) orR ′′ (0) in the case of non-periodic disorder. (For periodic disorder the solution is unique, since the period is fixed.) A. Non-periodic systems: Random bond disorder Let us now look for a solution of our 2-loop FRG equation which decays exponentially fast at infinity as expected for SR random-bond disorder. To this aim, we have to solve order by order in ǫ the fixed-point equation (3.43) numerically. Making the ansatzR (u) = ǫr 1 (u) + ǫ 2 r 2 (u) + . . . the partial differential equation to be solved at leading order is where we have used our freedom to normalizeR(0) := ǫ. (4.14) has a solution for any ζ 1 , but only for one specific value of ζ 1 does this solution decay exponentially fast to 0, without crossing the axis, see figure 10. The strategy is thus the following: One guesses ζ 1 , and then integrates (4.14) from 0 to infinity. In practice, however, there are numerical problems for small u. One strategy, which we have adopted here, and which works very well, is to use the value of ζ 1 , to generate a Taylor-expansion about 0. This Taylor-expansion is then evaluated at 0.5, where the numerical integration of (4.14) is started, both forwards to infinity (which in practice is chosen to be 25) and backwards to 0. This enables to control the accuracy of both the Taylor-expansion and the numerical integration. The result for the best value is given on figure 10. (Note that in [46] only the first four digits were given.) On this scale, Taylor-expansion and numerical integration are indistinguishable. The error-estimate on the last digit comes from moving the starting-point of the numerical integration (which was 0.5 above) up to 1, which allows for a crude estimate of the error. We also reproduce the At second order in ǫ, we have to solve where the last equation reflects our choice ofR(0) = ǫ. Note that to solve the 2-loop order equation, one has to feed in the solution at 1-loop order, both the Taylor-expansion about 0 and the numerically obtained solution for larger u. Again ζ 2 is determined from the condition that the solution decays at infinity. Following the same procedure as at 1-loop order, we find The function r 2 is plotted on figure 11. The Taylor One observes that ζ SR is necessarily bounded from above by ǫ/4 as no SR solution can cross this value (to any order) without exploding. 
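The shooting procedure just described can be sketched numerically. The following Python fragment implements only the leading-order (1-loop) equation; the form of the fixed-point equation, the branch choice for r'' and the classification of trajectories stated in the comments are reconstructions made for illustration (the actual computation in the text also includes the 2-loop feedback and a Taylor expansion near u = 0):

import numpy as np
from scipy.integrate import solve_ivp

def shoot(z, u_max=25.0):
    """Integrate the assumed 1-loop fixed-point ODE
         0 = (1-4z) r + z u r' + r''^2/2 - r''(0) r'',  r(0) = 1,
       for a trial zeta_1 = z, and report how the trajectory fails to decay."""
    a = -np.sqrt(2.0 * (1.0 - 4.0 * z))      # r''(0) from the equation at u = 0

    def rhs(u, y):
        r, rp = y
        disc = a * a - 2.0 * ((1.0 - 4.0 * z) * r + z * u * rp)
        rpp = a + np.sqrt(max(disc, 0.0))    # branch matching r''(0) at u = 0
        return [rp, rpp]

    crossed = lambda u, y: y[0]              # r(u) crosses zero
    crossed.terminal, crossed.direction = True, -1.0
    blown = lambda u, y: y[0] - 2.0          # r(u) grows instead of decaying
    blown.terminal, blown.direction = True, 1.0

    u0 = 1e-6                                # start slightly away from the cusp at u = 0
    sol = solve_ivp(rhs, (u0, u_max), [1.0 + 0.5 * a * u0**2, a * u0],
                    events=[crossed, blown], rtol=1e-9, atol=1e-12, max_step=0.05)
    if len(sol.t_events[0]):
        return "crosses zero at u = %.3f" % sol.t_events[0][0]
    if len(sol.t_events[1]):
        return "grows beyond r = 2 at u = %.3f" % sol.t_events[1][0]
    return "decays, r(u_max) = %.3e" % sol.y[0, -1]

# Scan trial values; the critical zeta_1 (about 0.208 according to the values
# quoted in the text) separates the two failure modes printed above.
for z in np.linspace(0.19, 0.23, 9):
    print("zeta_1 = %.4f : %s" % (z, shoot(z)))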
The bound ζ ≤ ǫ/4 reflects the exact bound for SR disorder, θ < d/2, which simply means that optimization of energy must lower energy fluctuations compared to a simple sum of random numbers. Equality is obtained for the trivial constant eigenmode R(u) = R(0) corresponding to ζ = ǫ/4, associated with the fluctuation of the zero mode of the random potential. We can now discuss our results for the roughness exponent. These are summarized in Table 12 and compared to numerical simulations in d = 3, 2 and the exact result for the directed polymer in d = 1. A first observation is that the corrections compared to the 1-loop result have the correct sign and, further, that they improve the precision of the 1-loop result. Given the difficulties associated with this theory, this is a significant achievement. Second, the error bars given in Table 12 are estimated as half the 2-loop contribution; this should not be taken too literally, as it is difficult to obtain good precision from only two terms of the series, with no information currently available about the large-order behavior of this novel ǫ-expansion. Third, one may try to improve the precision using the exact result ζ = 2/3 in d = 1. Estimating the third-order correction in the three possible Padé approximants so as to match ζ = 2/3 for ǫ = 3, we obtain consistently the values quoted in the fourth column of Table 12. We hope that these predictions can be tested in higher-precision numerics soon. B. Non-periodic systems: random field disorder Let us first recall that at the level of the bare model the static random-field disorder correlator obeys R(u) ∼ −σ|u| at large |u| [46,70], where σ̃ = (ǫĨ₁)σ is proportional to the amplitude of the random field. If one studies the large-u behavior in the FRG equation (3.43) one clearly sees that the non-linear terms do not contribute, thus one has ζ = ǫ/3. Thus for a RF fixed point to exist, the O(ǫ²) correction to ζ has to vanish. This will presumably hold to all orders. Indeed it is clear that if there is a similar β-function to any order, then, since each R carries at least two derivatives and at least one must be evaluated at u = 0, the sum of all non-linear terms to a given finite order decreases at least as R′′(u) ∼ 1/u. (This does not strictly exclude that summing up all orders may yield a slower decay, although this appears far-fetched and does not occur in the non-perturbative large-N limit.) The above value of ζ ensures that m^ǫ R(u) ∼ −σ|u| in the effective action, i.e. non-renormalization of σ. Note that this argument, based on the long-range large-u behavior, is a priori valid for any λ. Since it is made on the R equation (no such argument can be made on the equation for ∆) it uses the property of potentiality. However, from (3.46) with ζ = ǫ/3 one sees that λ ≠ 1 is incompatible with the existence of a fixed point, even of a fixed point with a supercusp. Thus, the only way to satisfy potentiality for the static random field problem seems to be to have σ unrenormalized, ζ = ǫ/3 and λ = 1 (the previous discussion of potentiality in Section III.D assumed short-range disorder). This must be contrasted with the theory of depinning, where we found that ζ acquires a correction beyond ǫ/3, following from λ_dep = −1 in (3.47). Since in that case the RG flow is non-potential, it is clear that no similar argument as above exists to protect the value ζ = ǫ/3. (The force correlator is short range.) The conjecture of [57] thus appears rather unphysical in that respect.
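Returning to the improvement of the random-bond exponent by matching the exact d = 1 value, the simplest (truncated-series) variant of that procedure can be scripted as follows; the coefficients zeta1 and zeta2 below are the values commonly quoted for SR elasticity and are to be treated as assumptions here, and the three Padé variants used in the text are not reproduced:

# Sketch: fix a hypothetical third-order coefficient so that zeta(eps = 3) = 2/3,
# the exact directed-polymer value in d = 1, then re-evaluate in d = 3 and d = 2.
zeta1, zeta2 = 0.20830, 0.00686     # assumed 1- and 2-loop coefficients (literature values)

def zeta_2loop(eps):
    return zeta1 * eps + zeta2 * eps**2

zeta3 = (2.0 / 3.0 - zeta_2loop(3.0)) / 3.0**3   # hypothetical 3rd-order coefficient

def zeta_matched(eps):
    return zeta_2loop(eps) + zeta3 * eps**3

for d in (3, 2, 1):
    eps = 4 - d
    print("d = %d : 1-loop %.3f, 2-loop %.3f, d=1-matched %.3f"
          % (d, zeta1 * eps, zeta_2loop(eps), zeta_matched(eps)))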
Fixed-point function We first study the fixed-point equation for and later use the rescaling freedom to tune the solution to the correct value of σ at large scale u. One can then integrate once with respect to u: There is no integration constant here because the second line precisely vanishes at u = 0 + (absence of supercusp). The 1-loop solution involves the first line only. Dividing by y and integrating over u yields: i.e. an implicit equation for y, which defines y = y 1 (u). It satisfies We can put the 2-loop solution under a similar form. Making the ansatz At this order, one can replace y by y 1 , i.e. use uy = y ′ (y − 1) to eliminate y ′ . This gives, changing variables from u to y: The last term in the brackets is easily integrated. For the remaining terms, we integrate by part and use (4.30) to replace u 2 /2 by y − 1 − ln y: We find: has a quadratic behavior around y = 1, similar to the 1-loop result, and corrects the value of the cusp. Universal amplitude Since we know the exact fixed point function up to a scale factor, we can now fix the scale by fitting the exact large |u| behavior to R(u) ∼ −σ|u| where σ is the amplitude of the random field. The general fixed point solution reads: where ξ can be related to σ as: We need One can now express and thus compute, using (4.4) the universal amplitude (4.3) associated to the mode q = 0 in presence of a confining mass: where one has restored the factors ǫĨ 1 absorbed in∆ andσ. Expanding all factors in a series of ǫ one finds: The lowest order was obtained in Ref. [70] and we have obtained here the next order corrections. It is interesting to compare our result with the exact result in d = 0, which is [69]: While the simple extrapolation setting ǫ = 4 of (4.44) to one loopc(d = 0) = 5.59σ 2/3 is very far off, to two loop it gives c(d = 0) = 0.99σ 2/3 , surprisingly close to the exact result. It was noted in Ref. [70] that extrapolation of the 1-loop result could be considerably improved by not expanding (4.43) in ǫ but instead directly setting ǫ = 4 (with γ 2 = 0) in (4.43). That givesc 1 (d = 0) = 0.821σ 2/3 , an underestimate already reasonably close from the exact result. We extend this procedure to two loop by truncating the ǫ expansion of I The universal amplitude for the massless case (4.7) (or q ≫ m) is obtained from (4.8) with b = −1/3 from Section VI as: and writing c(d) =c(d)/(1 + 1 3 ǫ) should provide a reasonable extrapolation to low dimensions. Finally, we recall that for random field disorder, this coefficient is different for different large scale boundary conditions. The result for periodic boundary conditions can be obtained from formula (4.10). In Ref. [70], the 1-loop result was compared to the result of the Gaussian Variational Method (GVM). It is instructive to pursue this comparison to two loops. We get from [70]: where in the last line we have inserted ζ = ǫ/3 and performed the ǫ expansion. Thus one finds, quite generally that b var = 3b/4. As noted in [70] to one loop the FRG and the GVM give rather close amplitudes (differing by about 5 per cent). We see here that to two loop, i.e. next order in ǫ, the difference increases. Finally, and the coefficient remains rather close to the one in (4.46). C. Generic long range fixed points There is a family of fixed points such that . These fixed points where found for infinite N in any d in Ref. [36,72] (we use the same notations). 
They were studied to first order in ǫ for any N in [47], and argued to be stable only for γ < γ * (d) the value of the crossover to short range identified in [47] as ζ SR = ζ LR (γ * (d)). Here, we have not studied these fixed points in detail but we note that the 2-loop corrections do not change ζ, by the same discussion as for the random field case γ = 1/2. They will however affect the amplitudes. D. Periodic systems 1. Fixed point function For periodic R(u) as e.g. CDW there is another fixed point of (3.43). It is sufficient to study the case where the period is set to unity, all other cases are easily obtained using the reparametrization invariance of equation (4.11). No rescaling is possible in that direction, and thus the roughness exponent is The fixed-point function is then periodic, and can in the interval [0, 1] be expanded in a Taylor-series in u(1 − u). Even more, the ansatz 4.52) allows to satisfy the fixed-point equation (3.43) to order ǫ 2 and will presumably work to all orders. For a more general case of this see Ref. [68]. To gain insight into the more general case, let us write the fixed point for (3.43) with arbitrary λ: One can see on this solution that λ = 1 is the only value which avoids the appearance at two loops of the supercusp, i.e. a cusp in the potential correlatorR(u) rather than in the force correlator∆(u). The same discussion can be made on the the flow equation of∆(u) by taking two derivatives of (3.43). One finds that there is a priori an unstable direction corresponding to a uniform shift in∆(u) →∆(u) + cst. While this is natural in e.g. depinning, it is here forbidden by the potential nature of the problem which requires since in a potential environment, the integral of the force over one period must vanish. This is indeed satisfied for the fixed point for∆(u) The values for depinning are obtained by setting λ = −1: in that case, the problem becomes non-potential at large scales. Universal amplitude This fixed point implies for the amplitude of the zero mode in presence of an harmonic well, defined in (4.3), using (4.4): In the other limit m ≪ q one obtains the amplitude (using b = −1 from Section VI): Note that we prove in Section (VI) that this amplitude is independent of large scale boundary conditions, and is thus identical for e.g. periodic boundary conditions and in presence of a mass. As can be seen from (4.10) this is a consequence of ζ being zero. This can be compared to the GVM method [26,28]: with coefficients surprisingly close to the ǫ-expansion. It is interesting to compare predictions in d = 3. We recall that we are studying a problem where the period is unity, the general case being obtained by a trivial rescaling in u. Since (4.59) has a poor behavior (and so does (4.61) which resums into (4.60)), it is better to use instead (4.58). It was indeed noted in [26,28] that the improved 1-loop prediction c 1 (d = 3) obtained by setting ǫ = 1 and ignoring the ǫ 2 /108 term in (4.58) yields a value rather close to the prediction of the GVM: This is in reasonable agreement with the numerical results of Middleton et al. [84]. They obtained good evidence for the existence of the Bragg glass (i.e. its stability with respect to topological defects predicted in [26,28]). They measure directly the correlation (4.7) and obtain strong evidence for the behavior (4.58) (as well as the correct correction to scaling behavior) with Another interesting observable is the slow growth of displacements characteristic of the Bragg glass: at large x. 
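Before turning to the growth of displacements, the 1-loop content of the periodic fixed point can be checked symbolically. The explicit amplitude used below, ∆(u) = ǫ[1/36 − u(1−u)/6], is a reconstruction from the 1-loop flow together with the potentiality constraint, not a quotation from the text:

import sympy as sp

u, eps = sp.symbols('u epsilon', positive=True)
Delta = eps * (sp.Rational(1, 36) - u * (1 - u) / 6)

# 1-loop flow of the force correlator at zeta = 0:
#   -m dDelta/dm = eps*Delta - d^2/du^2 [ Delta**2/2 - Delta(0)*Delta ]
flow = eps * Delta - sp.diff(Delta**2 / 2 - Delta.subs(u, 0) * Delta, u, 2)

print(sp.simplify(flow))               # -> 0, so the quadratic ansatz is a fixed point
print(sp.integrate(Delta, (u, 0, 1)))  # -> 0, the potentiality constraint over one period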
Performing the momentum integral from (4.7), one obtains: If one expands each factor in ǫ it yields: For comparison, the GVM gives (4.68) Here extrapolation directly setting ǫ = 1 in (4.67) looks possible, and yieldsà 3 = 0.0556 to one loop increasing tõ A 3 = 0.0648 to two loop. On the other hand, setting ǫ = 1 in (4.66) yields insteadà 3 = 0.0707 to one loop decreasing toà 3 = 0.047 at two loops. The GVM gives the result A 3,GVM = 0.0507. Another interesting observable is: where L is the linear system size. In Ref. [84] it was assumed yielding a value of c(d) consistent with the direct measurement of this quantity [104]. This was also done in [85] where it was deduced from a measurement of B d that 0.98 < 2c(d = 3) < 1.11 [105]. Although this is a reasonable approximation, it is not exact. Indeed the quantity B d , contrary to c(d), depends on the (large scale) boundary conditions. It is of course universal, since it does not depend on small scale details. Its value can be computed e.g. for periodic boundary conditions and pinned zero mode, and depends on the whole finite size scaling function (4.9) computed in Section VI: As shown recently, w 2 fluctuates from sample to sample and the full distribution P (w 2 ) averaged over disorder realizations was computed for the depinning problem [86,87]. E. Long range elasticity Let us now consider the case of long range elasticity. There are physical systems where the elastic energy does not scale with the square of the wave-vector q as E elastic ∼ q 2 but as E elastic ∼ |q| α . In this situation, the upper critical dimension is d c = 2α and we define: The most interesting case, a priori relevant to model a contact line is α = 1, thus d c = 2. For calculational convenience, we choose the elastic energy to be This changes the free correlation to: (4.74) The energy exponent in that case is: The changes are very similar to the case of Ref. [67] so we summarize them here only briefly. The β-function is still given by (3.40) but with the integrals replaced by: and thus the β-function is given by (3.43) with: (See appendix F of Ref. [67]). And of course the relation (3.42) between R andR is identical except that ǫĨ 1 must be replaced by ǫĨ (α) 1 . Since X (α) is finite, the β-function is finite; this is of course necessary for the theory to be renormalizable. For the cases of interest α = 1 and α = 2, we find (4.80) The exponent ζ (as a function of ǫ) and the fixed point function is thus changed only at two loops. Let us now give the results in the cases of interest: Random bond disorder The solution of (3.43) with X → X (α) can be written, to second order in ǫ as: and θ = 2ζ. It would thus be interesting to perform numerical simulations in d = 1 for the directed polymer with LR elasticity. This would be another non trivial test of the 2-loop corrections. The 1-loop prediction is ζ = 0.208, significantly smaller than the roughness for SR elasticity ζ = 2/3. The naive 2-loop result is (setting ǫ = 1), ζ ≈ 0.227 ± 0.01. Error bars are estimated by half the difference between the 1loop and 2-loop results. Note that the bound θ < d/2 implies ζ < 1/4 in d = 1, already rather close to the 2-loop result. Random field disorder The exponent is still and was indeed measured in experiments on an equilibrium contact line [30]. It would be of interest to measure the universal distributions there, such as the one defined in [86,87]. The fixed point function is given by (4.30) and (4.34) upon replacing F (y) → X (α) F (y). 
The amplitude of the zero mode in a well c(d) is now given by: 1 ) −1/3 (4.86) and the amplitude of the massless propagator where b α is given in (6.14) setting ζ 1 = 1/3. Periodic disorder The fixed point becomes: For the periodic case, the universal amplitude reads: and Setting ζ 1 = 0 in (6.14) yields (4.91) Using ǫ = 2α − d, this gives which in the case of α = 1 takes the simple form (4.93) A. Summary of possible methods As we have seen above ambiguities arise in computing the effective action at the level of 2-loop diagrams if one uses a non-analytic action. One can see that these arise even at the 1loop level for correlations (see below Section VI). To resolve this issue, our strategy has been to use physics as a guide and require the theory to be renormalizable, potential and without supercusp. This pointed to a specific assignment of values to the "anomalous" graphs. The physical properties of the ensuing theory, studied in the previous section, were found to be quite reasonable. Of course, one would like to have a better, more detailed justification of the used "prescription". Although we do not know at present of a derivation of this theory from first principles, we have developed a set of observations and a number of rather natural and compelling "rules" which all lead to the same theory. We describe below our successful efforts in that direction as well as some unsuccessful ones, which illustrate the difficulty of the problem. A number of approaches can be explored to lift the ambiguities in the non-analytic theory. We here give a list; some of the methods will be detailed in the forthcoming sections. 1) Non-zero temperature: At T > 0 previous Wilson 1loop FRG analysis [58,59,70,88] found that the effective action remains analytic in a boundary layer u ∼T . However, since the rescaled temperature (2.24) flows to zero asT ∼ m θ as m → 0 (temperature being formally irrelevant) all (even) derivatives of R(u) higher than second grow unboundedly as m → 0, for instance R ′′′′ (0) ∼ R * ′′′ (0 + ) 2 /T (in terms of the zero temperature fixed point function). On a qualitative level one can thus see how finite T diagrams such as E in Fig. 3 yielding can build up "anomalous" terms in the β-function, hence confirming what is found here [70]. However, correctly and quantitatively accounting for higher loops is a non-trivial problem as stronger blow-up in 1/T k seem to arise. In fact each new loop brings two derivatives and a propagator, hence an additional factor 1/T . Despite some recent progress, a quantitative finite-temperature approach which would reproduce and justify the present ǫ expansion has proved difficult [74,75]. Not only for technical reasons, as methods using exact RG where found to be appropriate, but also for physical reasons, as an extension to non-zero T must also handle low-lying thermal excitations in the system (e.g. droplets). A theory from first principles at T > 0 is thus presently not available and will not be further addressed here. All other methods use a nonanalytic action. 2) Exact RG: Exact RG methods directly at T = 0 have been studied to one loop [70,89] and two loops [71,90]. Although it does yield interesting insights into the way to handle ambiguities (see below), and confirm the present results, it suffers from basically the same problems as described here. 3) Direct evaluation of non-analytic averages: In this approach one attempts a direct evaluation of non-analytic averages (e.g. in fully saturated diagrams). 
For instance, expanding at each vertex the disorder R(u x a − u x b ) in powers of |u x a − u x b | using the proper non-analytic Taylor expansion: one can try to compute directly all averages in vertex functions and correlations. After performing a few Wick contractions one typically ends up with averages involving sign functions or delta functions. These can be computed in principle using the free Gaussian measure. For instance, using formulae such as: Although promising at first sight, the results are disappointing. Averages over the thermal measure involve many changes of signs which kill all interesting divergences indicating that some physics is missing. The method, briefly described in Appendix B is thus not developed further. A dynamical version of this method which is similar in spirit [66,67], did work for depinning, although there it simply identified with another method used below, the background field (which, for depinning is u xt → vt + u xt see below). 4) Calculation of Γ(u) with excluded vertices and symmetrization: A valid, general and useful observation (not limited to this method) is that if one uses the excluded vertex then all Wick contractions can be performed without ambiguities. The excluded vertex is as good as the non-excluded one since one can always add a constant −nR(0) to the action of the model (2.2). Thus one can compute without any ambiguity the effective action Γ(u) for an "off-diagonal" field configuration since then no vertex is ever evaluated at u = 0. The drawback is that one ends up with expressions containing terms such as a =b,a =c which superficially looks like a three replica term, but due to the exclusions, may in fact contain a 2-replica part which can in principle be recovered from the above by adding appropriate diagonal terms, using that p-replica parts are properly defined as free replica sums e.g. from a cumulant expansion. The 2-replica part of (5.6) thus naively is and one is again faced with the problem of assigning a value to R ′′′ (0). The calculation with excluded vertices thus yields a sum of p-replica terms with p ≥ 2 and to project them onto the needed 2-replica part, one may need to continue these expressions to coinciding arguments u a = u b . The symmetrization method attempts to do that in the most "natural" way. Using the permutation symmetry over replicas and the hypothesis of no supercusp yields a rather systematic method of continuation. Surprisingly, it fails to yield a renormalizable theory at two loops. We identified some difference with methods which do work, but the precise reason for the failure in terms of continuity properties remains unclear. It may thus be that there is a way to make this method work but we have not found it. Being interesting in spirit this method is reported in some details in Appendix A. If one renounces to the projection onto 2-replica terms one can, in a certain sense, obtain renormalizability properties. This generates an infinite number of different replica sums and seems to be not promising, too. It is described in Appendix F. We now come to methods which were found to work, and which will be described in detail in the next section. In all of them one performs the Wick contractions in some given order (the order hopefully does not matter) and uses at each stage some properties. 
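One property used repeatedly below is the Gaussian identity allowing the contractions to be performed one field at a time; presumably this is the integration-by-parts (Stein) identity ⟨u_i W(u)⟩ = Σ_j ⟨u_i u_j⟩ ⟨∂W/∂u_j⟩, which indeed requires almost no smoothness of W. A quick Monte Carlo check with a non-differentiable W illustrates this (the explicit form of the identity and the covariance matrix below are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.5], [0.5, 2.0]])      # covariance of the correlated Gaussians
a = 0.7
u = rng.multivariate_normal([0.0, 0.0], C, size=1_000_000)

W = np.abs(u[:, 0] - u[:, 1] - a)           # non-smooth test function
dW = np.sign(u[:, 0] - u[:, 1] - a)         # dW/du_1 = -dW/du_2 (almost everywhere)

lhs = np.mean(u[:, 0] * W)
rhs = C[0, 0] * np.mean(dW) + C[0, 1] * np.mean(-dW)
print(lhs, rhs)                              # the two agree within Monte Carlo error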
The fact that one can order the Wick contractions stems from the identity, which we recall, for any set of mutually correlated Gaussian variables u i : under very little analyticity assumption for W (u), which can even be a distribution. At each stage one can either use excluded or non-excluded vertices as is found more convenient. 5) Elimination of sloops: We found another method, which seems rather compelling, to determine the 2-replica part of terms such as (5.6). It starts, as the previous one, by computing (unambiguously) diagrams with the excluded vertices. Then instead of symmetrization, one uses identities derived from the fact that diagrams with free replica sums and which contain sloops cannot appear in a T = 0 theory and can thus be set to zero. Further contracting such diagrams generates a set of identities which, remarkably, is sufficient to obtain unambiguously the 2-replica projection without any further assumption. It works very nicely and produces a renormalizable theory, as we have checked up to three loops. In some sense, it uses in a non-trivial way the constraint that we are working with a true T = 0 theory. This method is detailed below. 6) Background field method: This method is similar to method number 3 except that the vertex R(u) at point x is evaluated at the field u a x = u a + δu a x , then expanded in δu a x , which then are contracted in some order. This amounts to compute the effective action in presence of a uniform background field which satisfies (5.5). Thanks to this uniform background and upon some rather weak assumptions, the ambiguities seem to disappear. The method is explained below. 7) Recursive construction: An efficient method is to construct diagrams recursively. The idea is to identify in a first step parts of the diagram, which can be computed without ambiguity. This is in general the 1-loop chain-diagram (3.1). In a second step, one treats the already calculated sub-diagrams as effective vertices. In general, these vertices have the same analyticity properties, namely are derivable twice, and then have a cusp. (Compare R(u) with (R ′′ (u)−R ′′ (0))R ′′′ (u) 2 − R ′′ (u)R ′′′ (0 + ) 2 ). By construction, this method ensures renormalizability, at least as long as there is only one possible path. However it is not more general than the demand of renormalizability diagram by diagram, discussed below. 8) Renormalizability diagram by diagram: In Section III we have used a global renormalizability requirement: The 1loop repeated counter-term being non-ambiguous one could fix all ambiguities of the divergent 2-loop corrections. However, as will be discussed in [73], this global constraint appears insufficient at three loops to fix all ambiguities. Fortunately, one notes that renormalizability even gives a stronger constraint, namely renormalizability diagram by diagram. The idea goes back to formal proofs of perturbative renormalizability in field-theory, see e.g. [91,92,93,94,95,96,97,98]. These methods define a subtraction operator R. Graphically it can be constructed by drawing a box around each sub-divergence, which leads to a "forest" or "nest" of subdiagrams (the counter-terms in the usual language), which have to be subtracted, rendering the diagram "finite". The advantage of this procedure is that it explicitly assigns all counter-terms to a given diagram, which finally yields a proof of perturbative renormalizability. 
If we demand that this proof goes through for the functional renormalization group, the counter-terms must necessarily have the same functional dependence on R(u) as the diagram itself. In general, the counter-terms are less ambiguous, and this procedure can thus be used to lift ambiguities in the calculation of the diagram itself. By construction this procedure is very similar to the recursive construction discussed under point 7. It has some limitations though. Indeed, if one applies this procedure to the 3-loop calculation, one finds that it renders unique all but one ambiguous diagram, namely , (5.9) which has no subdivergence, thus there are no counter-terms, which could lift the ambiguities. Thus this diagram must be computed directly and we found that it can be obtained unambiguously by the sloop elimination method [73]. 9) Reparametrization invariance: From standard field theory, one knows that renormalization group functions are not unique, but depend on the renormalization scheme. Only critical exponents are unique. This is reflected in the freedom to reparametrize the coupling constant g according g −→g(g) whereg(g) is a smooth function, which has to be invertible in the domain of validity of the RG β-function. Here we have chosen a scheme, namely defining R(u) from the exact zero momentum effective action, using dimensional regularization, and a mass. One could explore the freedom in performing reparametrization. In the functional RG framework, reparametrizations are also functional, of the form where B(R, R) is a functional of R. For consistency, one has to demand that B(R, R) has the same analyticity properties as R, at least at the fixed pointR =R * , i.e. B(R, R) should as R be twice differentiable and then have a cusp. A specifically useful candidate is the 1-loop counter-term B(R, R) = δ (1,1) R. One can convince oneself, that by choosing the correct amplitude, one can eliminate all contributions of class A, in favor of contributions of class B. Details can be found in [73]. Apart from methods 3 and 4 which did not work for reasons which remain to be better understood, methods 2,5,6,7,8,9 were all found to give consistent result, making us confident that the resulting theory is sufficiently constrained by general arguments (such as renormalizability) to be uniquely identified. Let us now turn to actual calculations using these methods. B. Calculation using the sloop elimination method 1. Unambiguous diagrammatics Let us redo the calculation of Section III B using excluded vertices. From now on we use sometimes the short-hand notations whenever confusion is not possible. The resulting diagrammatics looks very different from the usual unexcluded one. When making all four Wick contractions of the 2-loop diagrams A, B and C in Fig. 6 between three unsplitted vertices one now excludes all diagrams with saturated vertices, but instead has to allow for more than two connected components and for sloops. The splitted excluded diagrams corresponding to classes A, B and C are given in Fig. 14. There is an additional multiplicative coefficient 1/(m 1 !m 2 !m 3 !m 4 !) in the combinatorics for each pair of unsplitted vertices (say ab and cd) linked by an internal line where m 1 propagators link ac, m 2 link ad, m 3 link bc, m 2 link bd. (This is equivalent to assigning a color to each propagator). Let us denote by δΓ A R the 2-loop contribution of all diagrams of class A to the effective action. 
One finds: coming respectively and in the same order from graphs α, β, γ, δ+η (they are equal) and λ in Fig. 14. The only graph common to excluded and free-sum diagrammatics is α which is graph b of Fig. 7, since all the other graphs in Fig. 7 have saturated vertices. Similarly, the graphs of class B give a total contribution: (5.14) coming respectively and in the same order from graphs α ′ , β ′ , γ ′ , δ ′ in Fig. 14. Again, the only graph common to excluded and free-sum diagrammatics is α ′ which is graph g of Fig. 8, since all the other graphs in Fig. 8 have saturated vertices. The contribution δ C R of the diagrams of class C is given in Appendix C. Note that adding a tadpole does not alter the structure of the summations in the excluded-replica formalism, since a tadpole can never identify indices on different vertices. This indicates that class C does not contain a 2-replica contribution, but starts with a 3-replica contribution (times T ). This is explained in more details in Appendix D One can first check that when R(u) is analytic one recovers correctly the same result as (3.27) setting the last (anomalous) term to zero. Adding and subtracting the excluded terms in (5.13) to build free replica sums (using R ′′′ (0) = 0 in that case), or equivalently lifting all exclusions but replacing everywhere and then expanding and selecting the 2-replica part, one finds the contributions Similarly in (5.14) one obtains (5.17) We now want to perform the same projection for a nonanalytic R(u). The sloop elimination method The idea of the method is very simple. Let us consider the 1loop functional diagram (a) in Fig. 2 which contains a sloop. It is a three replica term proportional to the temperature. In a T = 0 theory such a diagram should not appear, so it can be identically set to zero: It is multiplied by G(x − y) 2 , which we have not written. We will also omit global multiplicative numerical factors. Projecting such terms to zero at any stage of further contractions is very natural in our present calculation (and also e.g. in the exact RG approach, where terms are constructed recursively and such forbidden terms must be projected out). It is valid only when (i) the summations over replicas are free (ii) the term inside the sum is non-ambiguous. These conditions are met for any diagram with sloops, provided the vertices have at most two derivatives. (One can in fact start from vertices which either have no derivative or exactly two.) Let us illustrate the procedure on an example. We want to contract W with a third vertex R at point z, i.e. we first write the product: where implicitly here and in the following the vertices are at points x, y, z in that order. We will contract the third vertex twice, once with the first and once with the second , i.e. look at the term proportional to G(x − y) 2 G(x − z)G(y − z). Note that since we will contract each vertex, we are always allowed to introduce excluded sums (clearly the diagonal terms a = b, a = c or d = e give zero, since R ab and its two lowest derivatives at a = b are field independent constants). Performing the first contraction (i.e. inserting δ ad − δ ae − δ bd + δ be multiplied by the exclusion factors (1 − δ ab )(1 − δ ac )(1 − δ de ) yields (up to a global factor of 2): (5.20) Similarly, the second contraction then yields (up to a global factor of 4): This non-trivial identity tells us that the sum of all terms (or diagrams) generated upon contractions of diagram (a) of Fig. 2 (i.e. 
the 1-loop sloop-diagram equivalent to term W in (5.18)) with other vertices, must vanish. Stated differently: A sloop, as well as the sum of all its descendents vanishes. Note that this is not true for each single term, but only for the sum. A property that we request from a proper p-replica term is that upon one self contraction it gives a (p − 1)-replica term. It may also give T times a p-replica term (a sloop) but this is zero at T = 0, so we can continue to contract. Thus we have generated several non-trivial projection identities. The starting one is that the 2-replica part of (5.18) is zero, since (5.18) is a proper 3-replica term. Thus, (5.19) prior to the exclusions, is a legitimate 5-replica term, and its 4-replica part is zero. Upon contracting once we obtain that the 3-replica part of (5.20) is zero. The final contraction tells us that the 2-replica part of (5.21) is zero. This is what is meant by the symbol "≡" above and the last identity is the one we now use. Indeed compare (5.21) with (5.13). One notices that all terms apart from the first in (5.13) appear in (5.21), and with the same relative coefficients, apart from the third one of (5.13). Thus one can use (5.21) to simplify (5.13): The function R ′′′ (u) 2 , which appears in the last term, is continuous at u = 0. It is thus obvious how to rewrite this expression using free summations and extract the 2-replica part which coincides with the contribution of diagrams A in (3.27) with λ = 1. We can write diagrammatically the subtraction that has been performed δ (2) where the loop with the dashed line represents the subdiagram with the sloop, i.e. the term (5.18) (with in fact the same global coefficient). The idea is of course that subtracting sloops is allowed since they formally vanish. There are other possible identities, which are descendants of other sloops. For instance a triangular sloop gives, by a similar calculation: This however does not prove useful to simplify δ (2) A R. Since the above method generates a large number of identities, one can wonder whether they are all compatible. We have checked a large number of examples (see the 3-loop calculations in [73]) and found no contradictions, although we have not attempted a general proof. The diagrams B and C are computed in Appendix D. One finds by the same procedure confirming our earlier results in section III B 2 and III B 3. C. Background method In the background method, one computes Γ[u] to two loops for a uniform background u such that u ab = 0 for any a = b. We start from Taylor expand in v x , and contract all the v fields keeping only 1PI-diagrams. This is certainly a correct formula for the uniform (i.e. zero momentum) effective action. Then one needs the small |u|-expansion of derivatives of R, i.e. (5.2) as well as Let us start from: We expand in v and of course in diagrams A one must handle terms involving R ′′′ (0) and in diagrams B terms proportional to R ′′′′ (0). Let us start with diagrams A, which come from the following term in the Taylor expansion: (5.31) Here and in the following, we will drop all combinatorial factors. Note that the expectation-values vanish at coinciding replicas so there is no need to specify the values of R ′′′ (u ab ) at a = b. Let us perform the first xy contraction If we now perform a second xy contraction there is a δ aa term which is a sloop and thus should be discarded. The δ ad + δ ba terms build saturated vertices. However the corresponding expectation values contain which is reasonably set to zero. 
Thus the first two contractions have been performed with no ambiguity leading to This term is no more ambiguous. Expanding as in (5.29) the potentially ambiguous part is clearly free of any ambiguity. It yields the result (5.23). The question arises, whether the result may depend on the order. We found that when first contracting xy and xz, one reproduces the result (5.23). However when one first contracts xy and yz (in any order) one encounters a problem, if one wants to contract yz again. The intermediate result after the first two contractions is The next contraction between xy contains one term with a single R ′′′ aa . One would like to argue that this term can be set to 0. Following this procedure however leads to problems. We therefore adopt the rule, that whenever one arrives at a single R ′′′ aa , one has to stop, and search for a different path. Note that this equivalently applies to the recursive constructions method. In 2-loop order, one can always find a path, which is unambiguous. It seems to fail at 3-loop order; at least we have not yet been able to calculate (5.36) using any other than the sloop elimination method. Whether some refinement of the background method can be constructed there is an open question. For diagrams of class B one expands as Again no need to attribute a value to R ′′′′ (u cd ) for c = d since the summand vanishes there. Contract xy: Contracting next xy the danger is the term δ ad yielding a saturated vertex in the middle. But, again if one takes The rest is straightforward. The backgound method thus seems to work properly at two loop order. D. Renormalizability, diagram by diagram In section V A we have stated that renormalization diagram by diagram gives a method to lift the ambiguity of a given diagram, as long as it has sufficient sub-divergences. This method is inspired by formal proofs of perturbative renormalizability; the reader may consult [91,92,93,94,95,96,97,98] for more details. The key-ingredient is the subtraction operator S, which acts on the effective action, i.e. all terms generated in perturbation-theory, which contribute to the renormalized R, and which subtracts the divergences at a scale µ. At 1-loop order, the renormalized disorder R m at scale m is symbolically (with R 0 the bare disorder) where of course the integral depends on m. The operator S rewrites this as a function of the renormalized disorder R µ at scale µ Here, the boxed diagram is defined as The idea behind this construction is that at any order in perturbation theory, any observable in the renormalized theory can be written as perturbative expansion in the bare diagrams, to which one applies S. S reorganizes the perturbative expansion in terms of the renormalized diagrams. The action of S is to subtract divergencies, which graphically is denoted by drawing a box around each divergent diagram or sub-diagram, and to repeat this procedure recursively inside each box. The second line of (5.39) is manifestly finite, since it contains the diagram at scale m minus the diagram at scale µ. This is eas-ily interpreted as the 1-loop contribution to the β-function. The power of this method is not revealed before 2-loop order. Let us give the contribution from the hat-diagram (class A): Using S, this is rewritten as Note that not only the global divergence is subtracted, but also the sub-divergence in the bottom loop; and finally the divergence which remains, after having subtracted the latter (last term). 
Note the factor of 1 = (−1)² in front of the last diagram, which comes with the two (nested) boxes. Let us halt the discussion of the formal subtraction operator at this point, and not prove that the procedure renders all expectation values finite; this task is beyond the scope of this article, even though it is not difficult to prove, e.g. along the lines of [97], once the question of the ambiguities of a diagram is settled. However, let us discuss what the subtraction procedure can contribute to the clarification of the ambiguities. In standard field theory, the main problem to handle is the cancellation of divergences, whereas the combinatorics of the vertices is usually straightforward. This means that the sum of the integrals, represented by the diagrams in the brackets on the r.h.s. of (5.42), is finite. This of course ensures renormalizability, subject to the condition that all diagrams have the same functional dependence on R. Here, the factor R′′(R′′′)² should more completely read (5.43). For the first term, there was no problem. However, we have seen that the last term was more difficult to obtain. If we demand renormalizability diagram by diagram, all diagrams have to give the same factor (5.43). Thus, if at least one of them can be calculated without ambiguity, we have an unambiguous procedure to calculate all of them. We now demonstrate that the boxed diagram is unambiguous. To this aim, we elaborate on the subtraction operator S, whose action is represented by the box. This box tells us to calculate the divergent part of the sub-diagram in the box, and to replace everything in the box by the counter-term, which here is the 1-loop counter-term computed above. In a second step, one has to calculate the remaining diagram, which is obtained by treating the box as a point, i.e. as a local vertex. The idea is of course that the sub-divergence comes from parts of the integral where the distances in the box are much smaller than all remaining distances, such that this replacement is justified. Graphically this can be written as a 1-loop diagram with an effective vertex. It remains to calculate this rightmost term, i.e. the 1-loop diagram built from one vertex R(u) and a second vertex V(u) := R′′(u)² − 2R′′(u)R′′(0). The result is, in straightforward generalization of (3.1), given in (5.46). The omitted terms are proportional to R′′′′R′′, and contribute to class B. We could have avoided their appearance altogether, but this would have rendered the notation unnecessarily heavy. The term which contributes to (5.46) is V′′(u) = R′′′(u)². It has the same analyticity properties as R′′(u); in particular it can unambiguously be continued to u = 0, i.e. V′′(0) = R′′′(0⁺)². Expression (5.46) becomes (5.48) without any ambiguity. [106] To summarize: Using ideas of perturbative renormalizability diagram by diagram, we have been able to compute unambiguously one of the terms in (5.42) and can use this information to make the functional dependence of the whole expression unambiguous. If we were to choose any other prescription, a proof of perturbative renormalizability is doomed to fail, a scenario which we vehemently reject. E. Recursive construction This method is very similar in spirit to the one of Section V D. There we had first calculated a subdiagram, and then treated the result as a new effective vertex. This procedure can be made a prescription, which ensures renormalizability and potentiality, since the 1-loop diagram ensures the latter.
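The analyticity statement underlying both the diagram-by-diagram argument and the recursive construction (effective vertices remain twice differentiable, with a cusp only in the third derivative) can also be checked symbolically on a toy correlator; as before, the test function is an assumption made only for illustration:

import sympy as sp

u = sp.symbols('u', real=True)
a, b, c = sp.symbols('a b c', positive=True)

def W(sign):
    # toy correlator R(u) = a*u**2/2 - b*|u|**3/6 + c*u**4/24 on the branch |u| = sign*u;
    # W(u) = (R''(u) - R''(0)) R'''(u)**2 - R''(u) R'''(0+)**2 is the combination
    # appearing as an effective vertex in the recursive construction.
    R = a*u**2/2 - b*(sign*u)**3/6 + c*u**4/24
    Rpp, Rppp = sp.diff(R, u, 2), sp.diff(R, u, 3)
    return (Rpp - a)*Rppp**2 - Rpp*b**2      # R''(0) = a, R'''(0+)**2 = b**2

for k in (1, 2, 3):
    plus  = sp.limit(sp.diff(W(+1), u, k), u, 0, '+')
    minus = sp.limit(sp.diff(W(-1), u, k), u, 0, '-')
    print(k, sp.simplify(plus), sp.simplify(minus))
# k = 1, 2: the one-sided limits agree (the effective vertex is twice
# differentiable); k = 3: they differ in sign, i.e. the cusp sits in the third
# derivative, exactly as for R(u) itself.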
Only at 3-loop order appears a new diagram, (5.36), which can not be handled that way, but the procedure, which is otherwise very economic, can handle again most diagrams at 4-loop order, using the new 3-loop diagram (5.36). VI. CORRELATION FUNCTIONS Here we address the issue of the calculation of correlation functions. We note that it has not been examined in detail in previous works on the T = 0 FRG. Usually correlations are obtained from tree diagrams using the proper or renormalized vertices from the polynomial expansion of the effective action. Thus in a standard theory one could check at this stage that correlation functions are rendered finite by the above counterterms, compute them and obtain a universal answer. In a more conventional theory that would be more or less automatic Here, as we point out, it is not so easy. Indeed, as we show below if one tries to compute even the simplest 2-point correlation at non-zero momentum one finds ambiguities already at one loop. This is because the effective action (the counterterm) is non-analytic. Again the requirement of renormalizability and independence of short-scale details guide us toward a proper definition of the correlation functions that we can compute. Interestingly, this definition is very similar as the one obtained from an exact solution in the large N limit in [72]. Let us illustrate this on the 2-point function, and at the same time, derive the (finite size) scaling function for any elasticity (not done in [67]) for massive and finite size scheme. A. 2-point function We want to compute the 2-point correlation function at T = 0. In Fourier-representation it is given by (4.1) with: in terms of the quadratic part of the effective action, which reads at any T i.e. by construction here R ′′ (0) gives the exact off-diagonal element of quadratic part of the effective action. Inverting the replica matrix gives the relation, exact to all orders: R(u) is exactly the function entering the β-function (in the rescaled formR(u)). In the second line we have inserted the fixed point form, which thus gives exactly the q = 0 correlations in the small m limit (i.e. up to subdominant terms in 1/m) which are bounded because of the small confining mass. Calculation of scaling function We now compute C(q) for arbitrary but small wave vector q, and to one loop, i.e. to next order in ǫ. One expects the scaling form (4.2) and that the scaling function is independent of the short-scale UV details (i.e. universal), if the theory is renormalizable. It satisfies F (0) = 1 and, from scaling should satisfy F (z) ∼ B/z d+2ζ at large z. In d = 4 one has F 4 (z) = 1/(1 + z 2 ) 2 and we want to obtain the scaling function to the next order, i.e. identify b in B = 1 + bǫ + O(ǫ 2 ). Let us use straight perturbation theory with R 0 , defined as in Section (III C), including the 1-loop diagrams. This amounts to attach two external legs to the 1-loop diagrams in Fig. 5, and use a non-analytic [107] R 0 . Our result is: . There is however an ambiguity in this calculation, i.e. again it is not obvious, a priori, how to interpret the R ′′′ 0 (0) 2 which appear. If one computes the one loop correction using (5.2), one must evaluate: One notes that at the very special point z = t there is no ambiguity, as the interaction term is analytic to this order. Then performing the average amounts to take two derivatives which, to this order, is analytic. In this case this is exactly the same calculation as for the repeated 1-loop counter-term. 
However, the full expression (6.6) integrated over z, t is, itself, ambiguous. Interestingly, this simple ambiguity already to one loop has never been discussed previously. Let us first show that renormalizability fixes the form to be the one written in (6.5). Indeed, let us re-express (6.5) in terms of the renormalized dimensionless disorder in (3.33) and (3.26) As discussed in Section (III C), no ambiguity arises when taking two derivatives of (3.33) at u = 0 + , i.e. the 1-loop counter-term is unambiguous. This gives Thus the substitution (6.7) acts as a counter-term which exactly subtracts the divergence as it should. The result is finite, as required by renormalizability, only with the above choice (6.5). Stated differently, the q = 0 calculation of (6.5) fixes the ambiguity. We show below that the methods described in the previous Section also allow to obtain this result unambiguously. Before that, let us pursue the calculation of the scaling function. Upon using (3.42) and the fixed point equation, we obtain: m ǫ (I(q) − I(0)) . (6.8) Apart from the dependence on ζ, the calculation of the scaling function is very similar to the one given in [67]. We perform here a more general calculation which also contains the case of elasticity of arbitrary range and expand in ǫ = 2α − d. Using that, in that case (6.10) Defining the 1-loop value of ζ ǫ = ζ 1 + O(ǫ), we obtain, to the same accuracy, the scaling function in the form (z = |q|/m): (We have used the variable transformation s = 1/(1 + t)). To obtain b, we need the large z behavior of the scaling function: We want to match at large z: The above result yields for α = 2 (6.14) Lifting the ambiguity Let us now present two additional methods to lift the ambiguity in the 1-loop correction to the 2-point function and recover (6.5). In the background method of Section V C one performs this calculation in presence of a background field, i.e. considering that the field u a x has a uniform background expectation value: with u a = u b for all a = b, and contracting the v a x . Then at T = 0 the sign of any u a − u b is determined, and the above ambiguity in (6.6) is lifted (contracting further the v a x yields extra factors of T and thus is not needed here). Using the background method is physically natural as it amounts to compute correlations by adding a small external field which splits the degeneracies between ground states whenever they occur, as was also found in [72]. On the other hand, performing the calculation in the absence of a background field, in perturbation theory directly of the non-analytic action yields a different result, detailed in Appendix B, which appears to be inconsistent. It presumably only captures correlations within a single well. The second method is sloop elimination. We want to compute contractions of: where the two disorder are at points z and w respectively. Let us restrict to the part proportional to G xz G 2 zw G wy which gives the q dependent part of the two point function. Since a is fixed, we need to extract the "0-replica part" of the expression after contractions (which will necessarily involve excluded vertices). Starting by contracting twice the two R's we get Subtracting the sloop W from (5.18) gives (up to terms which do not depend on both w and z, and which thus disappear after the two remaining contractions) Contracting the external u's with (6.18) we obtain (restoring the correlation-functions) The excluded sum can be rewritten as the sum minus the term with coinciding indices. 
Only the latter is a 0-replica term, which gives the desired result: This result can also be obtained writing directly the graphs with excluded vertices and eliminating the descendants of the sloop. Massless finite size system with periodic boundary conditions The FRG method described here can also be applied to a system of finite size, with e.g. periodic boundary conditions u(0) = u(L), and zero mass, which are of interest for numerical simulations. The momentum integrals in all diagrams are then replaced by discrete sums with q = 2πn/L, n ∈ Z d . One must however be careful in specifying the mode q = 0, i.e. u = 1 L d x u x . The simplest choice is to constrain u = 0 in each disorder configuration, which we do for now. Since the zero mode is forbidden to fluctuate sums over momentum in each internal line exclude q = 0. One then finds that the 2-loop FRG equation remains identical to (3.43), the only changes being that 1. −m∂ mR has to be replaced by L∂ LR . 2. m → 1/L in the definition of the rescaled disorder. 3. The 1-loop integral I 1 = k 1 (k 2 +m 2 ) 2 entering into the definition of the rescaled disorder has to be replaced by its homologue for periodic boundary conditions: used below. Here and below we use a prime to distinguish the different IR schemes. As we have seen X is, to dominant order, independent of the IR cutoff procedure. Thus we can now compute the 2-point function. Following the same procedure as above, we find: withĨ ′ (0) =Ĩ ′ 1 , and, for q = 2πn/L: Thus one finds the finite size scaling function (defined in (4.9)) as function of qL = 2πn. The asymptotic behaviour is which defines b ′ . The corresponding equation (6.13), when regularizing with a mass, holds. Taking the difference between the two equations yields whereĨ(q) := I(q)| m=1 . To leading order in 1/ǫ,Ĩ ′ (q) = I ′ (0) =Ĩ(q) =Ĩ(0), such that this difference takes the simpler form Now observe that for large q the first integral can be bounded by The difference in question is integrated numerically: We thus arrive at (ǫĨ Since the FRG equation and fixed point value (R ′′ ) * (0) is universal to two loops, the final result for the amplitude ratio between periodic and massive boundary conditions is B. 4-point functions and higher Let us now show how to compute higher correlation-functions with no ambiguity using the sloop-method. Let us illustrate the method on e.g. the 4-point function: The following class of diagrams contributes: x,a w,a y,a z,a (6.38) An arrow indicates contraction towards an external field, with position and replica-index as indicated. The combinatorial factors are: 1 4! from the 4 R's. 1 2 4 , the prefactor of the R's. 4! the possibilities to connect the u's to the R's. 3 for the ways to make the loop of R's. When contracting first the u's, there is another 2 4 for the possibilities, to attach the u's to the two replicas of R. Therefore only the factor of 3 remains, which is the combinatorial factor for ordering 4 points on an unoriented ring. We start by contracting the four u's with four R's, schematically: x,a w,a y,a z,a (6.39) and then we perform the four other contractions. Since exclusions at each vertex can be introduced early on, the number of possibilities is not too high and one easily obtains: where all terms have to be summed over with excluded replicas at each vertices. 
Due to the factors of R ′′′ ab with an odd power, it is not trivial how to project this expression onto the space of 0-replica terms to yield the desired expectation value (as in the previous Section a is fixed and thus no free replica sum should remain in the final result). To perform this projection we will first simplify the above expression using sloops. There are a number of possible sloops which can be subtracted. The first one is obtained by starting from . It reads The next sloop is 46) The simplest combination out of F , S 1 , S 2 and S 3 is Other possible contributions are given on figure 15. However none of these diagrams contributes. The reason is that they are all descendents of a sloop. We start by noting that is a true 1-replica term, i.e. a sloop. When constructing a diagram on figure 15, each of the terms in the excluded replica formalism is proportional to (6.49), thus descendant of a sloop. This means, that to any order in perturbation theory, at T = 0, no diagram contributes to a connected expectation value (of a single replica), which has two lines parting from one R towards external points. Thus the leading contribution in R to the connected 4-point function, as determined by the sloop method is the 1-loop diagram If one expressed this result in terms of the force correlator R ′′′ (0 + ) 4 = ∆ ′′ (0 + ) 4 we thus find that this expression is formally identical to the one that we obtained for the same four point function at the T = 0 quasistatic depinning threshold (Equation 5.4 in [87]). This is quite remarkable given that the method of calculation there, i.e. via the non-analytic dynamical field theory, is very different. Of course the two physical situations are different and here one must insert the fixed point value forR ′′′ (0 + ) from the statics FRG fixed point, while in the depinning calculation −∆ ′′ (0 + ) takes a different value at the fixed point. In both problems the connected four point function starts at order O(ǫ 4 ). However in some cases the difference appears only to the next order in ǫ. For instance, we can conclude that the results of [87] still hold here for the static random field to the lowest order in ǫ at which they were computed there (of course one expects differences at next order in ǫ). On the other hand. for the static random bond case, the result for the connected four point function will be different from depinning even at leading order in ǫ. It can easily be obtained from the above formula following the lines of [87]. VII. CONCLUSION In this article we constructed the field theory for the statics of disordered elastic systems. It is based on the functional renormalization group, for which we checked explicitly renormalizability up to two loops. This continuum field theory is novel and highly unconventional: Not only is the coupling constant a function, but more importantly this function, and the resulting effective action of the theory, are non-analytic at zero temperature, which requires a non-trivial extension of the usual diagrammatic formulation. In a first stage, we showed that 2-loop diagrams, and in some cases even 1-loop diagrams, are at first sight ambiguous at T = 0. Left unaddressed, this finding by itself puts into question all previous studies of the problem. Indeed, nowhere in the literature the problem was adressed that even the 1-loop corrections to the most basic object in the theory, the 2-point function, are naively ambiguous in the T = 0 theory. 
Since the problem is controlled by a zero-temperature fixed point there is no way to avoid this issue. An often invoked criticism states that the problems are due to the limit of n → 0 replicas. We would like to point out that even though we use replicas, we use them only as a tool in perturbative calculations, which could equally well be performed using supersymmetry, or, at a much heavier cost, using disorder averaged graphs. So replicas are certainly not at the root of any of the difficulties. Instead, the latter originate from the physics of the system, i.e. the occasional occurrence of quasi-degenerate minima, resulting in ambiguities sensitive to the preparation of the system. How to deal with this problem within a continuum field theory is an outstanding issue, and any progress in that direction is likely to shed light on other sectors of the theory of disordered systems and glasses. The method we have proposed to lift the apparent ambiguities is based on two constraints: (a) that the theory be renormalizable, i.e. yield universal predictions, and (b) that it respect the potentiality of the problem, i.e. the fact that all forces are derivatives of a potential. Each of these physical requirements is sufficient to obtain the β-function at 2-loop order, and the 2-point function and roughness exponent to second order in ǫ. Next, we have proposed several more general, more powerful and mutually consistent methods to deal with these ambiguous graphs, which work even at higher numbers of loops and allow one to compute correlation functions with more than two points. We were then able to calculate from our theory the roughness exponents, as well as some universal amplitudes, for several universality classes to order O(ǫ^2). In all cases, the predictions improve the agreement with existing numerical and exact results, as compared to previous 1-loop treatments. We also clarified the situation concerning the universality (precise dependence on boundary conditions, independence of small-scale details) of various quantities. Another remarkable finding is that the 1-loop contribution to the 4-point function is formally identical to the one obtained via the dynamical calculation at depinning. This hints at a general property that all 1-loop diagrams are indistinguishable in the statics and at depinning. It would be extremely interesting to perform higher-precision numerical simulations of the statics, and to determine not only exponents but universal amplitudes and scaling functions to test the predictions of our theory. We strongly encourage such studies. Thus in this paper we have proposed an answer to the highly non-trivial issue of constructing a renormalizable field theory for disordered elastic systems. In contrast to the closely related field theory of depinning, which we were able to build from first principles, we have not yet found a first-principles derivation of the theory for the statics. However, we have found that the theory is so highly constrained, and the results so encouraging, that we strongly believe that our construction of the field theory is unique. It is, after all, often the case in physics that the proper field theory is first identified by recourse to higher physical principles such as renormalizability or symmetries, as exemplified by the Ginzburg-Landau theory of superconductivity, for which only later a microscopic derivation was found, or by gauge theories in particle physics.
Continuity of the renormalized disorder and summary of the method The first observation is that one expects (if decomposition in p-replica terms is to mean anything) that one can write the (local disorder part of the) effective action as a sum over well defined p-replica terms in the form: where the functions F (p) have full permutation symmetry. The idea of the symmetrization method is that we also expect, even at T = 0 that these functions F (p) should be continuous in their arguments when a number of them coincide. This seems to be a rather weak and natural assumption. Physically these functions can be interpreted as the p-th connected cumulants of a renormalized disorder, i.e. a random potential V R (u, x) in each environment. Discontinuity of the F (p) would mean that the V R (u, x) would not be a continuous function. This is not what one expects. Indeed discontinuity singularities (the shocks) are expected to occur only in the force F R (u, x) = −∇ u V R (u, x) as is clear from the study of the Burgers equation (see e.g. [54] for the discussion of simple case, in the elastic manifold formulation the shocks corresponds to rare ground state degeneracies). One thus expects V R (u, x) to be a continuous function of u. A further and more stringent assumption, discussed above, is the absence of supercusp. A supercusp would mean R ′ (0 + ) > 0. Thus we assume that the non-analyticity in the effective action starts as |u| 3 . The usual interpretation [108] is that there is a finite density of shocks and just counting how many shocks there is in a interval between u and u ′ yields the |u − u ′ | 3 non-analyticity in R(u). Let us summarize the method before detailing actual calculations. We thus define here the symmetrization method assuming no supercusp as a working hypothesis. We then compute corrections to the (local disorder part of the) effective action up to a given order in powers of R, with excluded vertices for any vector such that u a = u b for a = b, thus with no ambiguity. This yields, as in Section V B 1, sums over more than two replicas with exclusions. These exclusions are not permutation symmetric so we first rewrite them in an explicitly permutation symmetric way which can be done with no ambiguity (see below). Thus we have a sum of terms of the form where 2 = 2 is a short-hand notation for a i = a j for all i = j, i.e. symmetrized exclusions. Each function f is fully permutation symmetric, as indicated by the s superscript. Next the non-trivial part is that we explicitly verify that these symmetrized corrections can indeed be continued to coinciding points unambiguously, e.g the limit f s (u 1 , u 1 , u 3 , . . . , u ap ) exist and is independent of the direction of approach. This in itself shows that the continuity discussed above seems to work. The existence of a four replica term obliges us to also consider three coinciding points. This is done by considering f ss (u 1 , u 1 , u 3 , u 4 ), i.e. symmetrizing the result of two coinciding points over u 1 , u 3 , u 4 and then taking u 3 → u 1 . We check explicitly that this again gives a function which can be continued unambiguously. Thus at first sight, it would appear as the ideal method to extract the functions F (p) above to order R 3 . Calculations let us reconsider the diagrams of Fig. 14 Writing for any f (x 1 , ..x p ) symmetric and continuous: and expanding yields, for the three and four replica sums: in shorthand notations such that f abcd = f (u a , u b , u c , u d ). This is just combinatorics. 
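The rewriting referred to here as "just combinatorics" can be spot-checked numerically. The sketch below (an added illustration using an arbitrary symmetric test function) verifies the three-replica case: the sum with pairwise-excluded replicas equals the unrestricted sum minus the appropriate coincidence corrections.

```python
import numpy as np
from itertools import product

# Check of the inclusion-exclusion rewriting of excluded replica sums for a
# symmetric function f_abc = f(u_a, u_b, u_c):
#   sum_{a,b,c pairwise distinct} f_abc
#     = sum_{a,b,c} f_abc - 3 * sum_{a,c} f_aac + 2 * sum_a f_aaa.
# (The four-replica version used in the text carries analogous coincidence terms.)
rng = np.random.default_rng(0)
n = 5                                              # number of replicas
u = rng.normal(size=n)
f = lambda a, b, c: np.cos(u[a] + u[b] + u[c]) + u[a]*u[b]*u[c]   # symmetric test function

excluded = sum(f(a, b, c) for a, b, c in product(range(n), repeat=3)
               if a != b and b != c and a != c)

full = sum(f(a, b, c) for a, b, c in product(range(n), repeat=3))
pair = sum(f(a, a, c) for a, c in product(range(n), repeat=2))
diag = sum(f(a, a, a) for a in range(n))

print(excluded, full - 3*pair + 2*diag)            # the two numbers agree
```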
For the three replica sums the procedure is straightforward, as symmetrization makes manifest the continuity. One easily finds (we drop an uninteresting single replica term): where in the first line we have applied (A.6) to f abc = sym abc R ′′ ab R ′′′ ab R ′′′ ac and so on (we define sym a1..ap as the sum over all permutations divided by p!). For the 4-replica term we find that f abcd = sym abcd R ′′ ab R ′′′ ac R ′′′ ad has the following limits (in a symbolic form, omitting the free summations) where at each step we had to symmetrize before taking coinciding point limits (checking that this limit was unambiguous in each case). The final result is found to be: The same procedure applied to the repeated counter-term confirms that it is unambiguous and give by (3.38). Thus because of the ominous 5/3 coefficient above, rather than the expected 1 the theory, using this procedure, is not renormalizable. Diagrams of class B and C behave properly. One finds with the same method their projections on the 2-replica part: Note the R ′′′′ (0) which here is defined as R ′′′′ (0) = R ′′′′ (0 + ) = R ′′′′ (0 − ) since R ′′′′ (u) can be continued at zero. One has, using the expressions given in Appendix C: These graphs (more precisely their contribution to 2-rep terms) sum exactly to zero: 16) in agreement with the result of the ambiguous diagrammatics in the case of an analytic function. To conclude, although promising at first sight this method is not satisfactory. The projection defined here seems to fail to commute with further contractions. For instance one can check that upon building diagrams A by contracting the subdiagram (a) in Fig. 2 onto a third vertex does give different answers if one first projects (a) or not. Since (a) is the divergent subdiagram this spoils renormalizability. Since the initial assumptions of the method were rather weak and natural, it would be interesting to see whether this problem can be better understood in order to repair this method. APPENDIX B: DIRECT NON-ANALYTIC PERTURBATION THEORY In this Section we give some details on the method where one performs straight perturbation theory using a non-analytic disorder correlator R 0 (u) in the action. Expanding in R 0 (u), this involves computing Gaussian averages of non-analytic functions, thus we start by giving a short list of formula useful for field theory calculation of this Section. One should keep in mind that these formula are equally useful for computing averages of non-analytic observable in a Gaussian (or more generally, analytic) theory. Gaussian averages of non-analytic functions: formulae We start by deriving some auxiliary functions, then give a list of expectation values for non-analytic observables of a general Gaussian measure. We need ∞ 0 dq e iqx + e −iqx e −ηq = 2η Integrating once over x starting at 0 yields The r.h.s. reduces in the limit of η → 0 to π sgn(x), which gives a representation of sgn(x) sgn(x) = lim This formula is easily generalized to higher odd powers of |x|, by integrating more often. The result is q 2n e −ηq cos(qx) n , (B.5) where cos(qx) n means that one has to subtract the first n Taylor-coefficients of cos(qx), such that cos(qx) n starts at order (qx) 2n : We now study expectation values. We use the measure xx xy yx yy from which the general case can be obtained by simple rescaling x → x/ xx 1/2 , y → y/ yy 1/2 . Let us give an explicit example (we drop the convergence-generating factor e −ηq since it will turn out to be superfluous.) 
A more interesting example is Another generally valid strategy is to use a path-integral. We note the important formula An immediate consequence is The very existence of the path-integral representation (B.10) also proves that Wick's theorem remains valid. Let us give an example which can be checked by either using (B.10) or (B.8): x 2 |y| = x 2 |y| + 2 x y x sgn(y) = x 2 |y| + 2 x y 2 δ(y) We finish our excursion by giving a list of useful formulas, which can be obtained by both methods: Perturbative calculation of the 2-point function with a non-analytic action Let us consider the expansion of the two point function We want to evaluate these averages at T = 0 with a non-analytic action R 0 (u). We restrict ourselves to a = b since at T = 0 the result should be the same for a = b, and we drop the subscript 0 from now on. As mentioned above, the Wick theorem still applies, thus we can first contract the external legs. The term linear in R yields the dimensional reduction result (2.14), thus we note u a x u b y = u a x u b y DR + u a x u b y ′ and we find: terms. For peace of mind one can introduce the restrictions c = a, d = b in the first sum and c = d in the second, but this turns out to be immaterial at the end. We need only, in addition to (5.2): since higher order terms in u yield higher powers of T . Using (B.13) to evaluate Gaussian averages this yields: where we denote: Note that the cross terms R ′′ (0)R ′′′′ (0 + ) involve analytic averages [110] and yield zero (a remnant of dimensional reduction). Also, to this order, no terms with negative powers of T survive for n = 0 (see discussion below). Performing the combinatorics in the replica sums, we find for n = 0: It is important for the following to note that cancellations occur in the small argument behavior of these functions, namely one has Φ 1 (s) = −s 3 /π + O(s 5 ) and Φ 2 (s) = s 4 /(4π) + O(s 6 ). In d = 0 it simplifies (setting G xy = 1/m 2 and restoring the subscript) to: with A = (24 − 27 √ 3 + 8π)/(3π). As such, this formula and (B.25) seem fine and it may even be possible to check them numerically in d = 0 for large m using a bare disorder with the proper non-analytic correlator R 0 (u). To obtain the asymptotic m → 0 and large scale behavior in any d, one must resum higher orders and use an RG procedure. The question is whether the above formula (B.25) can be used in an RG treatment. Discussion We found that this procedure does not work and we now explain why. Let us rewrite the result (B.25), including the dimensional reduction term: with i = 1, 2. One notes that if ψ 1 (x) were a constant equal to unity, one would recover the result (6.5) obtained in Section (VI). However, one easily sees that while ψ 1 (x) ≈ 0.346 approaches a constant as x ≪ a where a ∼ 1/Λ is an ultraviolet cutoff, it decreases as ψ 1 (x) ∼ x 2−d at large x, as a result of the above mentioned cancellations in the small argument behavior of the functions Φ i (x). Thus the infrared divergence responsible for all interesting anomalous dimensions in the 2point function as the non-trivial value of ζ is killed, and the method fails. Even more, the theory would not even be renormalizable. We have performed a similar calculation in the dynamical field theory formulation of the equilibrium problem in the limit T → 0, using a non-analytic action. There the method fails for very similar reasons. Only at the depinning threshold we were able to construct the dynamical theory as explained in [66,67]. 
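As an aside added here, the Gaussian averages of non-analytic observables listed above are straightforward to verify by direct sampling. The sketch below checks one such identity, <x^2 |y|> = <x^2><|y|> + 2<xy>^2 <delta(y)>, for a pair of correlated centred Gaussian variables.

```python
import numpy as np

# Monte Carlo check of a Gaussian average of a non-analytic observable:
#   <x^2 |y|> = <x^2><|y|> + 2 <xy>^2 <delta(y)>,
# with <|y|> = sigma_y * sqrt(2/pi) and <delta(y)> = 1/(sigma_y * sqrt(2*pi))
# for centred Gaussian y.  Wick's theorem remains valid for such observables,
# as stated in the text.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])                       # <x^2>, <xy>; <xy>, <y^2>
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000).T

lhs = np.mean(x**2 * np.abs(y))

sig_y = np.sqrt(cov[1, 1])
rhs = cov[0, 0]*sig_y*np.sqrt(2/np.pi) + 2*cov[0, 1]**2/(sig_y*np.sqrt(2*np.pi))

print(lhs, rhs)                                    # agree up to Monte Carlo error
```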
One might suspect that one has to start with a somehow "normal-ordered" theory where self-contractions, i.e. terms proportional to G xx are removed, since they never appear in the T = 0 perturbation theory. We have not been able to find such a formulation. Another problem with direct perturbation theory in a nonanalytic action is that there is a priori no guarantee that it has a well defined T = 0 limit. Let us illustrate this on a simple example in d = 0. The following correlation has been computed exactly by a completely different method [69] for the random field model in d = 0 (Brownian motion plus quadratic energy landscape, · · · 0 indicates averages over all u): with t = (δ ac − δ ad )/ √ 2. In the zero-temperature limit u 2 a σ ≈ − σ √ T m 3 +O(σ 2 ), which is ill behaved. The absence of a well defined Taylor expansion in the zero-temperature limit is of course a sign that the correct result (B.30) is simply non-analytic in σ. Although this solvable example involves a correlator R 0 (u) with a supercusp, it is possible that a similar problem occurs at higher orders (three or higher) in the expansion of the 2-point function in the case of the usual cusp nonanalyticity. There have been conflicting claims in the literature about this question [47,54], i.e. the presence of fractional powers at higher orders of the expansion in a non-analytic disorder, and it may be worth reexamining. It is however important to note that, since the ǫ-expansion proposed in the main text is not based on such a direct expansion, it does not yield fractional powers of ǫ, contrarily to what was conjectured in [47]. Finally, let us point out some properties of non-analytic observables. Let us study e.g. |u x a | . Expansion in powers of R yields a first order term ∼ 1/ √ T . This is the sign of nonanalytic behavior and indeed it is easy to find that: where (u x a ) 2 DR = − q R0(0) ′′ (q 2 +m 2 ) 2 and t = G(y) √ 2G(0) . The first term is obtained by noting that R ′′ 0 (0) acts as a Gaussian random force which can then be separated from the nonlinear force, and the last term, evaluated using the above formula, is the only one which survives at T = 0 to linear order in R 0 . The formula (B.32) is interesting as a starting point to compute universal ratio, such as |u x a | 2 / (u x a ) 2 or |u x a − u y a | 2 / (u x a − u y b ) 2 . Indeed one notes that for d < 4 the integral in the term proportional to R ′′′ 0 (0 + ) is infrared divergent at large y. This is left for future study. APPENDIX C: DIAGRAMS OF CLASS C In this Appendix we give the expression of each of the diagrams of class C represented in Fig. (14) in the excluded (nonambiguous) diagrammatics. One finds, including all combinatorial factors: with: APPENDIX D: SLOOP CALCULATION OF DIAGRAMS B AND C Let us consider the expression δ B R for the B diagrams in the excluded diagrammatics (5.14). Let us start again from a single sloop (5.18) and (5.19) and contract this time between y and z twice to produce a diagram of type B. This yields: the terms R ′′ (0) arise because the first vertex is not contracted in the process so one must separate the (unambiguous) diagonal part to obtain excluded sums. If one subtracts this identity from (5.14) one finds that there remain some improper three replica term (the improper four replica term however cancels). This is because in the process of our last contractions we have generated new sloops, but, since replica were excluded they have to be extracted with care. 
Let us rewrite the two possible "double sloop" from unrestricted sums to restricted: In the process we have set to zero the terms since they are proper three and four replica terms. Defining now: (D.6) The simplest combination which allows to extract the 2replica part is: We now turn to graphs C. The expression for δ C R is given as the sum of all contributions in Appendix C. Within the sloop method it gives immediately zero: δ C R = 0. This is because one can start by contracting the tadpole. Since this is a sloop it can be set to zero: Upon further contractions, proceeding as in Section V B, one obtains exactly that the sum of all graphs C with excluded vertices is identically zero. Graphs C sum to zero since they are all descendants of a sloop. APPENDIX E: CALCULATION OF AN INTEGRAL We will illustrate the universality of X = 2 ǫ(2I A − I 2 1 ) (ǫI 1 ) 2 (E.1) using a broad class of IR cutoff functions, namely a propagator: Here we denote x A(x) ≡ dxg(x)A(x) and we normalize dxg(x) = 1 (consistent with fixing the elastic coefficient to unity). We will show that X = 1 + O(ǫ) independent of g(x). APPENDIX F: SUMMARY OF ALL NON-AMBIGUOUS DIAGRAMS, FINITE TEMPERATURE In this Section we give all 1-loop and 2-loop diagrams including finite T , evaluated with the unambiguous diagrammatics, which have not been given in the text. We use the unambiguous vertex a =b R(u a − u b ), denote R ab = R(u a − u b ), R ′ ab = R ′ (u a − u b ) etc.. The list of all UV-divergent diagrams up to two loops is given in Fig. 19. We write their contribution to the effective action as The total 1-loop contribution is The total 2-loop contribution is: where δ A R is given in (5.13), δ B R is given in (5.14) and δ (2) 1 q 2 1 q 2 2 (q 1 + q 2 ) 2 . (F.7) For an analytic R one substitutes R ab → R ab (1 − δ ab ) in the above formula and selects the 2-replica terms: (F.12) Let us show that if one renounces to the projection onto 2replica terms, one can still obtain some formal renormalizability property, but at the cost of introducing an unmanageable series of terms with more than two replicas. We show how to subtract divergences by adding counterterms of similar form. Let us discuss only T = 0. To cancel the 1-loop divergences we introduce the counter-term:
2018-04-03T01:29:42.754Z
2003-04-27T00:00:00.000
{ "year": 2003, "sha1": "2dab63d3f6809127b436d9fe8d685dc8074be940", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0304614", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2dab63d3f6809127b436d9fe8d685dc8074be940", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Medicine", "Physics" ] }
10141523
pes2o/s2orc
v3-fos-license
Choosing between stairs and escalators in China: The impact of location, height and pedestrian volume Objective This research examines whether Beijing residents are more or less likely than Montréal residents to avoid stair climbing, by replicating a study in Montréal, Canada that measured the impacts of distance between stairs and escalator, height between floors and pedestrian volume on stair climbing rate. Method 15 stairways, 14 up-escalators and 13 down-escalators were selected in 13 publicly accessible settings in Beijing. Distance between the bottom or top of nearest stair and escalator combinations varied from 2.1 m to 114.1 m with height between floors varying from 3.3 m to 21.7 m. Simultaneous counts were conducted on stair and escalator pairs, for a total of 37,081 counted individuals. Results In the ascent model, pedestrian volume accounted for 16.3% of variance in stair climbing, 16.4% when height was added and 45.1% when distance was added. In the descent model, 40.9% of variance was explained by pedestrian volume, 41.5% when height was added and 45.5% when distance was added. Conclusion Separating stairs and escalator is effective in increasing stair climbing in Beijing, accounting for 29% of the variance in stair climbing, compared with 43% in Montreal. As in the Montreal case, distance has less effect on stair use rate when descending. Overall, 25.4% of Beijingers opted for stairs when ascending compared with 20.3% of Montrealers, and for descending 32.8% and 31.1% respectively. Environmental interventions to promote stair use Motivational interventions to promote stair use are relatively effective. A review of 26 intervention studies in stair use in 2012 reported increases in physical activity varying from 2.3% to 4.8% from baseline (Reynolds et al., 2014). The conclusion in this review is that motivational interventions were effective, with maintenance above baseline in some of the studies beyond the period of intervention. However, a review of the findings of the effectiveness of environmental modifications on stair climbing rates found there were insufficient such studies to draw conclusions (Soler et al., 2010). Two studies did report on stair use following non-structural design interventionsnew carpet, artwork, new paint and musicreporting 4.4% increase (Boutelle et al., 2001) and 8.6% increase (Kerr et al., 2004). Before and after studies of major environmental modification are exceedingly rare. Sun et al. (2014) report increased rates of ascent involving stairs when the bus that used to carry passengers upward decreased its service level. Other possible environmental interventions include decreasing the height of the stair run, widening the stairway and separating the stairway from the mechanical alternative. In a review of studies, it was reported that less height between levels was associated with higher levels of use (Dolan et al., 2006), but these studies did not include height as an independent variable. Height was a significant deterrent to stair climbing and descending in a study of 13 stairways and 12 pairs of escalators in a public setting (Zacharias and Ling, 2014), while lower buildings in a worksite also had higher rates of stair climbing (Olander and Eves, 2011). Greater distance between stairway and escalator accounted for higher use of the stairway (Zacharias and Ling, 2014) while proximity to the stairway over the elevator alternative increased stairway use (Olander and Eves, 2011). 
Architects often favor wider stairways to give prominence to a particular ascent into a building or public place, one of the most famous being the Spanish Steps in Rome. It is not known, however, whether stair width alone encourages use. Greater visibility of the stairway option is associated with higher rates of use (Eves et al., 2009) but visibility does not require width. A modeling study suggests greater stairway width may promote greater use by commuters under time pressure (Eves et al., 2008). Devoting more space to the stairway may give it more importance and can create the opportunity to make the ascent and descent more interesting, but controlled studies have yet to reveal they are effective measures. Finally, location of the stairway as a factor in choice when a mechanical alternative is available has been reported in two studies. In a 10-site study of stairs and elevators (Nicoll, 2007), higher rate of use of the stairs could be explained by the stairway's position with respect to the Preventive Medicine Reports 2 (2015) 529-532 centrally positioned, most frequented corridors of the buildings. To evaluate the possibility that stair location might motivate stair climbing and descending, a study was conducted of existing stair and escalator combinations with varying distances between nearest choices, and varying travel heights (Zacharias and Ling, 2014). Pedestrian volume was retained as a control variable. It was found that distance between the stair and escalator choices and height in the ascending model accounted for 71% of the variance in stair climbing and 21% in stair descending. Pedestrian volume had marginal impact on stair use. This last study was conducted in Montréal, Canada, and is replicated here using similar sets of stairs and escalators in Beijing, China. Physical activity and stair climbing in Mainland China There are suggestions in the literature that China's population, under the combined forces of urbanization and rising incomes, is following the trajectory to more sedentary lifestyles of the West. In 1996 in Tianjin, China, 60% of participants did not engage in leisure time physical activity but 91% of males and 96% of females walked or bicycled to work (Hu et al., 2002). The dramatic decline in bicycling since 1986for example, Beijing's bicycle commuting share dropped from 62.7% in 1986 to 13.2% in 2012 (BTRC)has not been replaced by leisure-time or occupationrelated physical activity. Only 13.2% of Chinese men and 8.4% of women declared that they engaged in any leisure-time exercise in 2006 (Ng et al., 2009). The decline in occupation-related physical activity, in particular, has been dramatic compared with declines in other domains such as leisure-related physical activity or transportation (Monda et al., 2007). Overall, the rates of voluntary leisure-related and incidental physical activity are lower in China than those measured in the West. The question is whether this tendency for voluntary physical activity extends to stair choice. We know little about stair climbing behavior in China. Response to stair climbing prompts in Hong Kong was much lower than those recorded in the UK, for example (Eves and Masters, 2006). High temperature and humidity reduced the rates further. Stair climbing may be different in the Mainland compared with Hong Kong, given many other differences in public behavior, but these differences, including differences in stair climbing and escalator riding, remain largely unexplored. 
Overall, active transportation declined in China from 1997, when the question was first included in the China Health and Nutrition Survey. In that survey, active transportation declined from 46-51% in 1997 to 28-33% in 2006 (Ng et al., 2009(Ng et al., , 2014. This survey does not account for the higher rates of stair-climbing and escalator use in mass public transport. Climbing a flight of stairs costs about double the energy for the same time spent walking at typical walking pace (Campbell et al., 2002). The literature suggests that sedentariness in China follows urbanization as it did in the West. However, there are also reasons and evidence why environment may prevail over widely exhibited behaviors in a particular population. With regard to differences across cultural contexts, do separation of stairway and escalator to the same destination, height of the stairway climb and overall pedestrian volume have the same effects on stair climbing? Methods To replicate the conditions of the Montréal study, an exhaustive search of locations in central Beijing was undertaken, since the great majority of shopping centers do not provide open stairways. As a consequence, the locations included 3 stair-escalator sets just outside several major electronics markets (6, 7, 8 in Table 1) and 2 sets in a metro station (9, 10). All other locations were inside shopping centers. Variations in height between floors and distance between stair-escalator combinations were a requirement for the sites. The mechanical alternative was visible in all cases from the foot or top of the stairway with a barrier-free passage between them. Pedestrian volume was included as a control variable since perceived congestion on the mechanical alternative and resulting slower ascent might induce stair climbing or descending. Visible congestion and delay at the foot of the escalator did not occur in the observation study, as might be expected in shopping environments. Although counts were conducted in 2 metro station stairescalator combinations, the associated counts could not be said to generate a wait at the foot of the escalator. This is an important condition because of the observed major positive effect of delay on stair choice. As in the previous study, 5-minute counts were conducted simultaneously or in immediate succession, between 10 a.m. and 5 p.m., with counts conducted to represent variable overall pedestrian flow at each location in the middle of the day. Counts at individual locations were conducted simultaneously, with two and three successive counts conducted at locations 11 and 12, respectively. The researchers also used the same recording devices and software. The independent variables of total pedestrian volume, height between floors and distance from the stairway to the nearest escalator were entered successively in a linear regression, to observe the relative contributions to variance in both the ascent and descent models. Height was transformed by taking its reciprocal, while the natural logarithm of distance was used to reduce the effects of disparity. Results The mean ascent volume was 46.4 persons per 5-minute block while mean descending volume was 48.5, 11.1% and 19.5% respectively higher than in the previous study (Table 1). Stair climbing as a percentage of the total ascending volume was 25.4, with 32.8 descending, 25% higher and 5% respectively than the values in the previous study. 
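To make the model-building procedure concrete, a minimal sketch of such a hierarchical linear regression is given below; the file name and column names are hypothetical placeholders rather than the study's actual data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of the hierarchical linear regression described above: predictors are
# entered successively (pedestrian volume, then 1/height, then log-distance),
# and the increase in explained variance is read off at each step.
# The data frame and column names below are placeholders, not the study's data.
df = pd.read_csv("stair_escalator_counts.csv")        # hypothetical file
df["inv_height"] = 1.0 / df["height_m"]               # reciprocal of floor-to-floor height
df["log_dist"] = np.log(df["distance_m"])             # natural log of stair-escalator distance

y = df["stair_share"]                                  # share of pedestrians using the stairs
steps = [["volume"],
         ["volume", "inv_height"],
         ["volume", "inv_height", "log_dist"]]

for cols in steps:
    X = sm.add_constant(df[cols])
    fit = sm.OLS(y, X).fit()
    print(cols, "R^2 = %.3f" % fit.rsquared)
```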
The Beijing cases had much greater distances between the foot of the stair and its paired escalator, averaging 32.0 m in Beijing compared with 17.4 m in Montréal. Similarly, mean distance between the top of the stair and its corresponding escalator in Beijing, 27.4 m, can be compared with 15.5 m in Montréal. Distances between choices were greater and so were the heights between floors. Mean height between floors in Beijing was 7.6 m compared with 4.2 in Montréal. Overall, greater height reduced stairway use while greater distance to the escalator increased it. The data were entered in a hierarchical linear regression to understand the impacts of each of the three independent variables, presented in Table 2. The Poisson model, normally appropriate for count data, had to be rejected because variances did not match means. Pedestrian volume data were entered first, followed by height and finally distance. In the ascending model, 16.7% of the variance is explained by pedestrian volume, while height alone accounts for 2.0%. Distance between choices raises the explained variance in the model to 50.9%. In the descending model, pedestrian volume accounts for 40.9% of stair choice, 42.0% when height is added. Distance between choices raises the total explained variance in the model to 45.8%. The interaction between pedestrian volume and distance indicates that overall pedestrian volume has less impact as the distance between ascent and descent alternatives increases. Conclusion and discussion Distance between stairway and escalator had similar major, positive effect on stair climbing in Beijing as observed in the Montréal case. Height also had a dampening effect on stair climbing, although an increase in height results in less than proportional declines in numbers in both cases. The greater tendency to take the stairs to descend when pedestrian volume increases, compared with ascending, is also replicated, reflecting the much lower expenditure of energy required to descend. The Beijing case exhibits higher rates of stair use than in Montréal, which can be explained in part by the much greater distances between the manual and mechanical options, and the higher pedestrian volumes. The stairs also offer a faster descent when there is higher passenger volume on the escalator, and when pedestrians are stationary. It is not known whether separating a single, long stairway into two or more shorter stairways affects the likelihood of stair climbing, although it seems a good candidate for evaluation. A smaller number of stairs between floors were associated with more stair climbing in one study (Titze et al., 2001). Most building codes require landings at 12 or 13 stairs but greater separation between successive stairways might inspire a different evaluation of the more modest first stairway, based on the limited evidence. The substantial effect of environment, in this case distance between options, on the decision to ascend stairs rather than use the nearest escalator has immediate implications for the planned public environment. Separating the manual from the mechanical means for changing levels clearly confers different meanings on these devices in the eyes of the users. Given these results, it seems reasonable to consider other environmental variables that have not received adequate treatment, such as the width of the stairway. The limited results on the design aesthetics and lighting of stairways also merit further exploration. 
With concern about rising sedentariness in China, the design of the public environment would appear to offer some opportunities to increase physical activity in everyday experience. The multiple-level city is increasingly the norm as underground development, metro rail and multi-story shopping environments become commonplace. The placement, dimensions and perhaps other attributes of the means to go between levels offer ways to increase daily physical activity.
2018-04-03T00:11:01.691Z
2015-06-10T00:00:00.000
{ "year": 2015, "sha1": "80fe0e51b77093ad49c0819ebd978959a65851f7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.pmedr.2015.06.005", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d245af3dfb790f8746139a4dfb81ede1d98a150", "s2fieldsofstudy": [ "Geography", "Sociology" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
119318576
pes2o/s2orc
v3-fos-license
Superbosonization in disorder and chaos: The role of anomalies The superbosonization formula aims at rigorously calculating fermionic integrals via employing supersymmetry. We derive such a supermatrix representation of superfield integrals and specify integration contours for the supermatrices. The derivation is essentially based on the supersymmetric generalization of the Itzykson-Zuber integral in the presence of anomalies in the Berezinian and shows how an integral over supervectors is eventually reduced to an integral over commuting variables. The approach is tested by calculating both one and two point correlation functions in a class of random matrix models. It is argued that the approach is capable of producing nonperturbative results in various systems with disorder, including physics of many-body localization, and other situations hosting localization phenomena. I. INTRODUCTION Supersymmetry 1,2 deals with Grassmann numbers, which were originally invented in mathematics and later used in quantum field theory as the classical analogues of anticommuting operators. This mathematical construction has proven to be a very useful tool for studies in various fields of physics and in particular in models of quantum chaos, involving random matrix theory and various models of disorder. [3][4][5] One of the prominent methods employing supersymmetry is the non-linear supersymmetric σ-model 3,4 description of disordered metallic conductors. According to this standard formalism, the effective field theory is described by an action with a coordinate-dependent supermatrix field, Q(r), obeying the constraint Q^2(r) = 1. (1) This method has a broad range of applications including the study of Anderson localization, mesoscopic fluctuations, level statistics in a limited volume, and quantum chaos. A general form of the free energy functional F is rather simple, F[Q] = (πν/8) ∫ Str[ D(∇Q)^2 + 2iω ΛQ ] dr, (2) containing the classical diffusion coefficient D, the one-particle density of states ν and the frequency ω. Although the free energy F[Q], Eq. (2), is written in the limit of weak disorder, it can be used for strongly disordered samples by replacing the gradient by finite differences. At very low energies the effective free energy functional is dominated in a finite volume by the zero spatial mode, Q(r) = Q_0, which is independent of r. In this limit the model is especially simple, containing only the second term in Eq. (2). The spectral properties of the theory are universal and coincide with those of Wigner-Dyson random matrix ensembles with corresponding symmetries [3][4][5][6][7] . The derivation of the σ-model, Eq.
(2), from microscopic models is not exact and is based on a saddle point method applicable at weak disorder or the large size of the matrices in the Wigner-Dyson ensembles. At the same time, the "diffusive" σ−model, Eq. (2) is not applicable for, e.g. description of electron motion in ballistic regime, where characteristic spatial scales are much smaller than the mean free path. Another important problem, that is known to be out of the reach of nonlinear σ-model, is random matrix models with finite range correlations between the matrix elements that are do not belong the Wigner-Dyson ensembles. One of the examples are models of weakly non-diagonal matrices. 8,9 . Of course, there are many other models that cannot be reduced to the σ-model, Eq. (2). In many of those models correlation functions of interest can be expressed from the beginning in terms of integrals over supervectors and the problem arise due to absence of a possibility of using the saddle-point approximation leading to Eqs. (1) and (2). Therefore, it is natural to try to generalize the σ-model, Eq. (2) to a model containing supermatrices but without the constraint, Eq. (1). In such a model, the generating functional Z(J) would be expressed in terms of an integral over unconstrained supermatrices, and having calculated this integral, one would be able to compute correlation functions of interest. It should be noticed here that usually many-level or many point correlation functions are really interesting. One-level or one-point averages (average density of states) usually do not bring an interesting information about the systems (in the problem of Anderson localization, the average density of states cannot help to distinguish between the metal and insulator). In principle, some dual representations of a generating functional, initially given by an integral over N × N Hermitian matrices (color space), are known as colorflavor transformations 10 . They transform the original integral in "color space" to an integral over certain supermanifolds, which are acting in the dual space (flavor space). However, being interesting on its own, this trans-formation has not yet evolved into a new computational tool. Trying to find a new method of studying non-standard problems of the supersymmetry method a kind of bosonization procedure to the original fermionic functional Z(J) has been suggested some time ago 11,12 . As a result, the partition function has been represented in a supermatrix action formulation without any constraint what soever; this approach was claimed to be applicable to the physics of electron motion at all scales. It seemed that the limitations due to the non-linearity of the conventional σ-model representation were overcame. Nevertheless the formula of superbosonization was not well understood from the practical point of view, namely the integration method was not specified. More precisely, in Ref. 12 a new superbosonization formula that allowed the field-integral over supervectors be expressed through a supermatrix integral has been derived, where H n is the linear space of n-dimensional complex supermatrices, ψ ∈ U (n, 1|n, 1) andψ ∈ U (n, 1|n, 1) are supervectors, and F : H n → G is a formal map with G representing a superspace. 13 Importantly, the righthand-side of Eq. (3) could be evaluated under general conditions, without reducing it to any mean-field manifold. For this reason, it was suggested that Eq. (3) could be capable of producing non-perturbative results in various models of disorder. 
One can imagine that Eq. (3) can represent a promising approach for non-perturbative studies in physics of many-body localization 14,15 and other situations where disorder plays an important role. 16 Originally 12 , Eq. (3) has been derived rather schematically without discussing contours of integration over the commuting elements of the supermatrix A. An attempt to specify contours of integration has been undertaken in Ref. 17. Roughly, speaking it was suggested to integrate over the eigenvalues of the boson-block from −∞ to ∞, while the integration over the eigenvalues of the fermionfermion block has to be performed over a compact domain (a circle in the simplest case). Surprisingly, it turned out that such an integration was well defined only in rather uninteresting cases. In particular, it worked perfectly well for correlation functions that required a sufficiently small number q ≤ n of the bosonic components, where n was a number of artificial "orbitals". In other words, one could use Eqs. (3) for calculation of the density of states in case of the unitary ensemble, while one encountered a singularity of the type ∞ × 0, when trying to calculate a two-level correlation functions. The situation for, e.g. orthogonal ensemble was even worse and one could not calculate even the density of states in this case. The situation was better when using a sufficiently large number of the "orbitals" n but this could be efficiently closer to results obtained using the standard saddle-point method and therefore less interesting. These findings have been confirmed rigorously in Ref. 18 but the case q > n was not resolved and it was even concluded that the superbosonization formula, Eq. (3), was not correct for this case. This was a serious obstacle in using the superbosonization for applications to interesting unsolved problems. In this paper we resolve this long standing problem of the integration in Eq. (3) for the case of hermitian matrices with an arbitrary correlation between the matrix elements. Of course, the suggested approach can be used for disordered systems with a broken time-reversal invariance. We do it integrating over the eigenvalues of the fermion-fermion block along the imaginary axis from −i∞ to i∞ instead of the integration along the circle adopted in Refs. 17,18. This does not make a difference in the results for q ≤ n but it makes the integral, Eq. (3), well defined for q > n and computation of many point correlation functions feasible, thus establishing a new method of calculations for interesting problems. The paper is organized as follows. In Section II we set the basis for the subsequent analysis of the bosonization procedure of Ref. 12 by calculation of a supersymmetric generalization of Itzykson-Zuber (IZ) integral. In Section III we show how the formulated supermatrix representation of integrals over supervectors (the so called bosonized representation) can be evaluated. In particular, we derive the domains of integration, for which the bosonization formula is exact. It is remarkable that this regularized scheme leads to an effective reduction of dimensionality of the domain of integration, which is noncompact. The proof is essentially based on the results discussed in Section II: the supersymmetric generalization of the Itzykson-Zuber (IZ) integral, [19][20][21][22][23][24] in situations when a boundary term is crucial due to the presence of singularities in the Berezinian. 
Emergence of this boundary term in the IZ integral ensures that both representations of the generating functional coincide. In Section IV we apply the regularized superbosonization formula to calculation of correlation functions in random matrix models. We derive both one and two point correlation functions for Hermitian diagonal random matrices with continuously distributed components and correction to the density of states for weakly nondiagonal random matrices. 8,9 Technical details of some of the derivations are presented in Appendices A, B, and C. A. Supersymmetric Itzykson-Zuber integral In this section we present useful formulae, that will be applied in subsequent sections. Let us note that in all future considerations the integration over the linear space of complex supermatrices, Hn DA, with flat Berezin measure 1,2 is always performed first by diagonalizing the matrix A and then by integrating over the eigenvalues. We distinguish between "fermion-fermion (FF)" and "boson-boson (BB)" blocks of the matrix A corresponding respectively to products ψ F ⊗ψ F and ψ B ⊗ψ B of anticommuting and commuting components of the supervectors. After the diagonalization of the supermatrix A one half of the eigenvalues will be in the FF-block, and the other part will be in the BB-block. We will call these eigenvalues FF-and BB-eigenvalues respectively. We will demonstrate that the integration over the BBeigenvalues should be performed in the infinite interval R ≡ {−∞, ∞}, while the integration over the FFeigenvalues should be performed in the infinite interval {−i∞, i∞}. This contrasts the integration rules of Refs. 17,18, where the integration over the FF-eigenvalues was performed along the unit circle. Note, that any complex 2n × 2n supermatrix, A, can be diagonalized as (1) are diagonalization matrices restricted correspondingly to the unitary supergroup and its subspace with removed phases. Here we are interested in Itzykson-Zuber integral of the type } are the FF and BB eigenvalues of supermatrices B and Q respectively and U ∈ U (n | n),V ∈ U (n | n)/U 2n (1). Then, the result of integration reads 20,22,23 Here Γ 0 {b j ,b j } | {λ j ,λ j } is the result of the bulk integration without accounting for the singularity in the Berezinian (if there is such). It has the form is the supersymmetric Vandermonde determinant and J 0 [b p λ q ] is the zero-order Bessel function. The term η {λ i ,λ i } is the boundary term arising from the singularities of the Berezinian (This type of the boundary term in the integrals over supermatrices has been found in Refs. 3,25 and is sometimes called Efetov-Wegner boundary term 26,27 ). It originates from the regularization of the anomaly in the Berezinian and is given by 23 One can easily check, that the expression (7) for IZ in- (9) gives rise to the appearance of the boundary term in Γ {b j ,b j } | {λ j ,λ j } . B. Origin of the boundary term in the Itzykson-Zuber integral Our aim in this section is to underline the origin of the anomaly of the Berezinian and the implication for the supersymmetric Itzykson-Zuber integral. For this purpose for any given diagonal complex supermatrix Q d consider the Gaussian integral, The integral, Eq. (13), is originally gaussian and integrating separately over all matrix elements of the supermatrix B gives unity. It is clear that changing the variables of the integration cannot modify this result and one must obtain unity also integrating over the eigenvalues. 
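The consistency requirement just stated, namely that a Gaussian integral must give the same answer whether one integrates over matrix elements or over eigenvalues with the appropriate Jacobian, has an elementary non-supersymmetric analogue. The sketch below (an added illustration, not part of the derivation) checks it for 2 x 2 Hermitian matrices, where the Jacobian is the ordinary squared Vandermonde factor; in the supermatrix case the corresponding Berezinian is singular, which is what generates the boundary term discussed in the text.

```python
import numpy as np

# For the weight exp(-Tr B^2) on 2x2 Hermitian matrices, <Tr B^2> must come out
# the same whether one averages over matrix elements or over eigenvalues
# weighted by the Vandermonde-squared Jacobian (l1 - l2)^2.
rng = np.random.default_rng(2)
N = 1_000_000

# (i) average over matrix elements: B11, B22 ~ N(0, 1/2); B12 complex, <|B12|^2> = 1/2
b11 = rng.normal(0, np.sqrt(0.5), N)
b22 = rng.normal(0, np.sqrt(0.5), N)
re = rng.normal(0, 0.5, N)
im = rng.normal(0, 0.5, N)
tr_b2_elements = np.mean(b11**2 + b22**2 + 2*(re**2 + im**2))

# (ii) average over eigenvalues with Jacobian (l1 - l2)^2 exp(-l1^2 - l2^2)
l = np.linspace(-6, 6, 801)
l1, l2 = np.meshgrid(l, l)
w = (l1 - l2)**2 * np.exp(-l1**2 - l2**2)
tr_b2_eigen = np.sum((l1**2 + l2**2) * w) / np.sum(w)

print(tr_b2_elements, tr_b2_eigen)   # both approach the exact value 2
```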
However, one can easily check, that the "naive" expression for Itzykson-Zuber integral Γ 0 {b j ,b j } | {λ j ,λ j } , Eq. (6, 7), is not equal to unity. It is the singularity of the Berezinian ∆ 2 ({b 2 j , b 2 j }) in Eq. (9), that gives rise to the appearance of boundary term η {λ i ,λ i } Eq. (8), in Γ {b j ,b j } | {λ j ,λ j } , that was found in Ref. 23. Existence of this boundary term ensures the condition that the integral Eq. (9) is unity. Hence, the correct answer for supersymmetric Itzykson-Zuber integral Γ has the form Eq. (5). The following remark is in order. The result, Γ 0 {b j ,b j } | {λ j ,λ j } , of the evaluation of the supersymmetric IZ integral in the absence of singularities was derived by solving the supersymmetric heat equation 20,22 ; technique, that was developed in Ref. 19 for conventional matrices. It is straightforward to check that the boundary term ∝ (1 − η) in Eq. (5) also satisfies the heat equation. III. SUPERBOSONIZATION: PROOF AND INTEGRATION CONTOURS In this section we present a derivation of the superbosonization formula and, in particular, of the bosonized σ-model for random matrices. The derivation is similar to the procedure developed in Refs. 3-5,20, but here instead of the Hubbard-Stratonovich transformation, which in the standard scheme follows the averaging over random matrices, we use the identities from the above section. Actually, the scheme of the derivation is very close to that of Ref. 12 but is more rigorous. It is useful to recall, that formal sums of formal products Ψ ⊗Ψ, where Ψ ∈ U (n, 1|n, 1) and Ψ ∈ U (n, 1|n, 1) are supervectors, constitute a vector space. This vector space is defined, up to isomorphism, by the condition that every antisymmetric, bilinear map f : U (n, 1|n, 1) ×Ū (n, 1|n, 1) → G determines a unique linear map g : U (n, 1|n, 1) ⊗Ū (n, 1|n, 1) → G with f (Ψ,Ψ) = g(Ψ ⊗Ψ). This implies that if we consider a map, F : H n → G, then the integral is now well defined. From now on we will restrict ourselves to the case of maps, F , such that the integral I F in Eq. (10) is convergent. As the first step we make use of the identity derived in Appendix A to rewrite the field integral in the left-handside of Eq. (3) as where δ is an infinitely small variable that ensures the convergence of the integral over the variable B in Eq. (11); it can be dropped once the integral over B is convergent. Now, due to the convergence of the integral in Eq. (11) and the presence of δ, we are free to change the order of the integration over the supermatrix B and the supermatrices ψ i ⊗ψ i . The integration over the supervectors ψ,ψ leads to Then, the integral over B acquires the form where we dropped δ due to the convergence of the integral Eq. ( we obtain The coefficient C n is calculated in Appendix B, yielding C n = 1. As a result, one arrives at the bosonized representation for the integral Eq. (11) To finalize this section we remind the reader that in Eq. (16) the integration over the linear space of Hermitian supermatrices, Hn DA, with Berezin measure is understood here as follows: (i) First we diagonalize the matrix A and then integrate over the eigenvalues. (ii) Integration over "boson-boson" eigenvalues is performed in the infinite interval {−∞, ∞}, whereas the integration over the "fermion-fermion" eigenvalues is performed (in contrast to Refs. 17,18) in the non-compact interval {−i∞, i∞}. In this way, the integral in Eq. (3) over supervectors is reduced to an integral over commuting variables. 
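A toy commuting analogue, added here purely as an illustration, may help visualize this reduction: for a single complex bosonic component, the integral of a rotationally invariant weight over the vector collapses to a one-dimensional integral over the invariant, which the following sketch checks numerically.

```python
import numpy as np
from scipy import integrate

# Toy commuting analogue of the reduction above (illustration only): for one
# complex "bosonic" component psi = x + i*y, an integral of a rotationally
# invariant weight over the vector collapses to an integral over rho = |psi|^2:
#   int dx dy F(x^2 + y^2) = pi * int_0^infty drho F(rho).
# The supermatrix identity, Eq. (3), plays the same role for supervectors,
# with the choice of FF-eigenvalue contour being the delicate point.
F = lambda rho: rho**2 * np.exp(-rho)          # arbitrary invariant test weight

# left-hand side: genuine two-dimensional integral over (x, y)
lhs, _ = integrate.dblquad(lambda y, x: F(x*x + y*y),
                           -30, 30, lambda x: -30, lambda x: 30)

# right-hand side: one-dimensional integral over the invariant
rho_int, _ = integrate.quad(F, 0, np.inf)
rhs = np.pi * rho_int

print(lhs, rhs)                                # both equal 2*pi
```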
It is worth mentioning that the presence of SdetA −1 in Eq. (3) leads to a singular product iλ −1 i , which make the integral very sensitive to the contour of the integration over the FF-eigenvaluesλ i . Representing the integral over supervectors in terms of an integral over the supermatrices is more than just changing the variables of the integration. Usually, the term bosonization is used for the procedure of a replacement of an electron model by a model describing collective bosonic excitations. For example, the traditional σ-model describes so called diffusion modes instead of electrons in a random potential. As our transformation is exact and is based on the supersymmetry, we find it proper to use the word "superbosonization" for the transformation, Eq. (3), complemented by the rules of the integration over the eigenvalues of the supermatrices. with the distribution functions P ij (i, j = 1 . . . N ) equal to Then Eqs. (17), (18) unambiguously define statistical properties of the matrix entries as H ij = 0, H 2 ii = A 0 , and H 2 ij = A ij for i = j. The Wigner-Dyson unitary ensemble is obtained putting A ij = const independent on i, j. A. Correlation functions in the superbosonized representation: General framework We begin with the generating functional for n-point correlation functions with the matrix M J i,j is defined as In Eq. (19) ψ i are supervectors with n bosonic and n fermionic components, the source terms , J i (i = 1 . . . n), in Eq. (20) are real parameters multiplied by diagonal 2n × 2n matrices,ŝ, which break the fermion-boson (FB) symmetry. Parameter E stands for the energy and ω is the frequency. The 2n-dimensional supermatrices L, Λ andŝ are defined as with n-dimensional unity matrix, id n , and n-dimensional diagonal matrix,k = diag(1, −1). Purpose of introducing the matrix,k, is that it distinguishes between the advanced and the retarded (A/R) Green functions. To derive the supersymmetric action for RMT, one has to perform averaging in the generating functional, Z(J 1 . . . J n ), over realizations of the entries of the random matrix, H. Carrying out such an averaging with the probability distribution defined in Eqs. (17,18), one obtains where we have definedψ i = ψ + i L. At this point we note that for the constituent terms of the action (expressions in exponent), Eq. (22), the following identities hold The crucial step towards calculation of the correlation functions in Gaussian random matrix theory under consideration, is the evaluation of the super-integrals in Eq. where each of the integrals over the linear space of complex Hermitian supermatrices, H n , should be performed first diagonalizing matrices Q i and then integrating over their eigenvalues. As was mentioned in Inroduction, integration over BB-eigenvalues is performed along the real axis, (−∞, ∞), whereas integration over FF-eigenvalues is performed along the imaginary axis, (−i∞, i∞). In conclusion of this subsection we note that the derivatives of the averaged generating functional, Z(J 1 . . . J n ) , taken at zero source, J = 0, define the advanced and retarded Green functions in RMT 5,20 . The n-point Green functions can be expressed via the derivatives of Z(J 1 . . . J n ) functional in a standard way, which define the universal characteristics of RMT. As usual, the sign "+" in the denominator corresponds to the retarded Green function G R , while the sign "−" to the advanced one G A . B. Correlation functions for diagonal random matrices. 
Let us first show how the method developed here works for diagonal random matrices. Although this case is not the most interesting one, it allows one to understand how the method works. We remind the reader that the conventional non-linear σ-model [3][4][5] is not applicable in this case. For diagonal random matrices we have A ij = 0 for i = j, and thus the averaged generating functional Eq. (24) acquires the form where supermatrices H J i are given by Eq. (20),ŝ is given by Eq. (21) and Q i are Hermitian supermatrices with n bosonic and n fermionic entries. Calculation of Z 0 (J 1 . . . J n ) can be performed in a similar way, as the calculation of C n in Appendix B. Namely, first we diagonalize the supermatrices Q i , and afterwards perform IZ-type integration. Since the supermatrices Q i in Eq. (26) are Hermitian, they can be diagonalized upon the rotation by the elements of the unitary supergroup, SU (n | n). Substituting transformation Q = U Q d U + , where U ∈ SU (n | n), into Eq. (26) we arrive to the following form of the generating where the integration over bosonic eigenvalues of Q i , λ i,α (α = 1, · · · n), should be carried out along the real axis, (−∞, ∞), and the integration over fermionic eigenvalues, λ i,α (α = 1, · · · n), should be carried out along the imaginary axis, (−i∞, i∞). The infinitesimally small terms ±i0 in Eq. (27) arise after removing ±i0 from H J i in Eq. (26) by shifting the variable of the integration Q. Berezinian, ∆ 2 {λ j,α ,λ j,α } , is the Jacobian of the diagonalization given by It is transparent that the zero order generating functional Eq. (27) has a factorized form and can be represented as where We see that the calculation of the generating functional for diagonal random matrices reduces to the calculation of the IZ integral. This integral can be calculated employing the result of Section II for the unitary supergroup, 20,23 U ∈ SU (n, n): where the components h α andh α , (α = 1 · · · n) are BB and FF eigenvalues of H J d , respectively, and the ∆functions are defined by Eq. (28). The boundary term, η, is given by Eq. (8) and reads Here, the matrix µ αβ = µ h β ,h α is given by Now, with the help of the IZ integral, Eq. (31), we can perform integration over U and U + , namely the parameter space of the unitary supergroup, in the expression for Z 0 (J), Eq. (30). Then, taking into account the determinant form of the super-Vandermonde determinant Eq. (28), we obtain With this expression for Z 0 (J) we are ready to calculate one and two point (both level-level and eigenfunctioneigenfunction) correlation functions for Gaussian ensemble of unitary diagonal random matrices. These calculations are presented in the next two subsections. Density of states for diagonal random matrices The averaged density of states is expressed in terms of the imaginary part of the one-point Green function, G A (E), as follows The function G A (E) is related to the averaged generating functional via Eq. (25). Employing the factorization property Eq. (29) for one point Green function, one is led to evaluate the integral in Eq. (34) for n = 1, which means that all the supermatrices are two dimensional and thus have one bosonic and one fermionic eigenvalue. Then the Bosonic eigenvalue of the supermatrix H J d will have the form h = E + ω + J, while the fermionic eigenvalue will have the formh = E + ω − J. Without loose of generality we can set ω = 0. 
For one point Green function one has to take a derivative of the generating functional, G A 0 (E) = (1/2π)∂ Z 0 (J) /∂J| J=0 , which, as follows from Eqs. Evaluation of the integral in Eq. (36), presented in Appendix C, leads to where erfi(x) is the imaginary error function Eq. (37) exactly reproduces the averaged advanced Green function of the Gaussian unitary ensemble of diagonal random matrices (see for example Refs. 8,9). Substituting Eq. (37) into Eq. (35) we find the density of states ρ 0 (E) for diagonal random matrices, Two point correlation function for diagonal random matrices In this subsection we show how the superbosonization formula with the flat integration measure, as defined above, works for four-dimensional supermatrices. Namely, we employ the developed technique of superbosonized generating functional Eqs. (24), (25) for calculation of a two-level correlation function of diagonal random matrices. For simplicity we will concentrate on a level-level correlation function having the following form where N is the size of the matrices. We are aware of the fact that the correlation function K A 0 (E 1 , E 2 ) containing the product of two advanced Green functions is not the most interesting function characterizing the level correlations. However, the computation of this function presented here serves merely as a demonstration of how the method works. We emphasize that the method of integration adopted in Refs. 17,18 does not work when applied to this problem. For the case of diagonal random matrices the two-point function Eq. (40) can be derived upon evaluating Eqs. (24), (25). This can be done making use of the factorization property Eq. (29) with Z 0 (J) given by (34). The calculation is straightforward. Since the Vandermonde determinant, ∆ {h β ,h α } , in Eqs. (32) and (34) is always inverse proportional to the source terms, J 1 and J 2 , it is easy to see, that only the ∆ {h α ,h β } −1 term will contribute to double derivative in Eq. (25) taken at J 1 = J 2 = 0. The double derivative of the Vandermonde determinant is equal to Therefore, we have for the function where µ {h β ,h α } is defined by Eq. (33) and h 1,2 = h 1,2 = E 1,2 . Analysis of the integral over λ andλ under the second determinant in Eq. (42) is presented in Appendix C. Result of the integration can be represented as the sum, (43) Substituting now Eqs. (32) and (43) into the determinants in Eq. (42), we come to the result where the function G A 0 (E) is given by Eq. (37). The result for the level-level correlation function for diagonal random matrices, Eq. (44), coinciding with Eq. (40), together with Eq. (37) agrees with the one found in Ref. 28. C. Non-diagonal contributions to the density of states for almost diagonal matrices In order to show how the superbosonization technique works for less trivial random matrix theories, we calculate in this section a correction to the density of states in the model of almost diagonal matrices 8,9 up to the second order in the bandwidth, b. By definition, statistical properties of non-diagonal matrices are described by a single, always positive function, F (r), as A ij = b 2 F (|i − j|) , i = j. Function F (r) can adopt any form provided that it has a maximum at the center of the band, r = 0, and decays with the bandwidth, b, as r becomes large. For small b we have the ensemble of almost diagonal random matrices, while for large b we approach the Wigner-Dyson Gaussian Unitary Ensemble (GUE). 
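The limits just described are easy to probe numerically. In the sketch below, Hermitian matrices are drawn with independent Gaussian entries, variance A0 on the diagonal and b^2 F(|i - j|) off the diagonal; at b = 0 the eigenvalue density is simply the Gaussian distribution of the diagonal entries, and switching on a small b broadens it slightly. The band profile F(r) = exp(-r) and all numerical parameters are illustrative assumptions and are not taken from the text; the sketch does not reproduce the b^2 correction derived below, only the qualitative trend.

import numpy as np

def sample_spectra(N=256, A0=1.0, b=0.0, band_profile=lambda r: np.exp(-r),
                   n_realizations=100, seed=1):
    # Hermitian matrices with <H_ii^2> = A0 and <|H_ij|^2> = b^2 F(|i-j|) for i != j
    rng = np.random.default_rng(seed)
    r = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    sigma = b * np.sqrt(band_profile(r))            # std of the off-diagonal entries
    eigs = []
    for _ in range(n_realizations):
        g = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
        H = np.triu(sigma * g, k=1)
        H = H + H.conj().T                          # Hermitian off-diagonal part
        H[np.diag_indices(N)] = rng.normal(scale=np.sqrt(A0), size=N)
        eigs.append(np.linalg.eigvalsh(H))
    return np.concatenate(eigs)

A0 = 1.0
for b in (0.0, 0.1, 1.0):
    E = sample_spectra(b=b, A0=A0)
    rho0 = np.mean(np.abs(E) < 0.05) / 0.1          # empirical density near E = 0
    print(f"b = {b:3.1f}:  rho(0) ~ {rho0:.3f},  spectral std = {E.std():.3f}")

At b = 0 the printed density at E = 0 approaches 1/sqrt(2*pi*A0) ~ 0.399, the Gaussian value, while increasing b broadens the spectrum, consistent with the crossover toward the Wigner-Dyson regime described above.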
We consider the case of b ≪ 1 when the standard nonlinear σ-model is not applicable. Then, expanding the exponent in Eq. (24) in b, we have where zero order in b 2 contribution, Z 0 (J) , corresponds to diagonal random matrices considered in the previous subsection. Technically, calculation of the correction, b 2 Z 1 (J) , is similar to that of Z 0 (J) . It is determined by the form of A ij for almost diagonal matrices as follows where, as usual, integration goes over the linear space H n with the flat measure. Then, the correction to the advanced Green function, b 2 G A 1 (E), is expressed in terms of the correction, b 2 Z 1 (J) , to the averaged generating functional, In Eq. (47) the averaging, . . . , is defined as Averaging in the right hand side of Eq. (47) can be performed using the identity, Q = where, according to Eq. (48), we have As described above, now again, one has to diagonalize the supermatrix Q and reduce the expression Eq. (46) to IZ integral. For that purpose, we first notice that the only difference between the expressions for Str[Q] J and Z 0 (J) is the presence of the term Str[Q] under integral, which, after diagonalization for the one-point Green functions (n = 1 case), produces an additional λ−λ term under the integral in Eq. (34). Secondly, the boundary 1 − η term does not contribute here, because the presence of δ functions in Eq. (31) together with λ −λ in the integral makes it zero. Repeating now the calculation for G 0 (E) and keeping in mind the two observations above, one finds Substituting now Eqs. (49) and (51) into the expression for the first order correction to the Green function G 1 (E), Eq. (47), we obtain Then, for the first order in b 2 correction to density of states, ρ 1 (E) = π −1 ImG A 1 (E), one easily finds which exactly reproduces the results first obtained with the help of the virial expansion 8,9 . V. OUTLOOK We have presented a new scheme of computations using the superbosonization formula, Eq. (3), first proposed in Ref. 12. We have proven that this formula is exact and have given a precise recepy for the performing integration for many point correlation functions for the unitary ensemble. In contrast to a previous study 17,18 the integration over the eigenvalues in the fermion-fermion block of the supermatrices is performed from −i∞ to i∞ and not along a circle. This way of the integration has allowed us to obtain regular integrals and calculate them in several cases. The proof of our approach and proposed method of computation of the integrals is heavily based on the supersymmetric extension of the Itzykson-Zuber integral. This integral in known only for systems with broken timereversal symmetry (unitary ensemble) and this why we consider here only such systems. At the same time, the proposed method of the integration over the eigenvalues of the supermatrices when one integrates over the eigenvalues in the boson-boson block from −∞ to ∞ and over the eigenvalues in the fermion-fermion block from −i∞ to i∞ looks very general. This encourages us to make a guess that this way of the integration can also be used for time reversal invariant ensembles. Of course, such a guess must be checked and proven in the future. We have demonstrated that the application of the bosonization formula to random band matrix (RBM) 29-34 models with small bandwidth b reproduces the perturbative expansion of DOS obtained by virial expansion 8 . 
We have also computed the simplest twopoint correlation function containing a product of two advanced Green functions for the ensemble of diagonal matrices. Of course, calculating an average product of both retarded and advanced Green functions would be a more interesting task but we leave it for future study. It is important at the moment that our method allows us to calculate many-point correlations functions for cases where the way of the integration developed in Refs. 17,18 does not work. We have made comparison with the known results only for checking our approach and demonstration of details of the computation. Eq. (3) complemented by our recipe of the integration is exact and most general representation of the integrals over supervectors in terms of integrals over supermatrices. The traditional non-linear σ-model, Eqs. (1) and (2), can be obtained using the saddle-point approximation for calculation of the integral over the supermatrix Q is less general. Taking into account a success of the latter in solving numerous problems (see, e.g. Ref. 3) we believe that its generalization can also bring new interesting results. Finally, we would like to mention that to this point disordered systems have been actually successfully studied using supersymmetric σ-model (including statistical properties of the energy levels in small metallic disordered grains), and we mostly focused here on a field theory for random matrix ensembles and nonperturbative effects therein. Another field of great interest of course is the non-perturbative study of various correlation functions in strongly interacting systems. Examples of such systems that potentially can be studied non-perturbatively using superbosonization include among others (i) the field theory of many-body localization in random spin chains 15 ; (ii) quantum phase transitions at the boundary of topological superconductors in two and three dimensions, which have been argued to support supersymmetry at long distances and times 35 . where we have Tailor expanded the function F (Q + A) around A = 0. We note, that such an expansion exists due to the specific constraints on the function F , outlined in Section III. To finalize our proof, it is left to show that lim η→0 1 4πη which coincides with Eq. (B2) in the case when Q d is an identity matrix. Before setting Q d = id, first let us note that for any complex 2n × 2n supermatrix of the following diagonal form: where id n is the n × n identity matrix, the η term has the form The term, Γ 0 {b j ,b j } | x, y , [see Eq. (5)] corresponding to the matrix Λ vanishes. This is because the Vandermonde determinant, ∆ Λ (x, y), in the denominator will cancel one of determinants involving Bessel function in the nominator. However the next determinant, which is equal to zero, remains. Thus, we see that if our 2n dimensional complex supermatrix Q d = id (which means x = y above), then the corresponding η term η id (1, 1) = 1 − e − 1−1 2t n = 0. Therefore, from Eq.(5), we obtain Substituting Eq. (B5) into Eq. (B2), where as Q d a unity matrix is taken with t = 1, one obtains The last equality holds, since our integration contours are shifted by an infinitesimal δ and iδ with respect to the imaginary and real axis correspondingly. This completes the computation of C n . Making use of the decoupling we represent the double-integral, I(h,h), as the sum I(h,h) = I 1 (h,h) + I 2 (h,h), where In the following two subsections we will evaluate integrals I 1 (h,h) and I 2 (h,h) respectively. 
Calculation of I1 In order to evaluate I1(h, h̄) we recall that 1/(λ − i0) = P(1/λ) + iπδ(λ), where the symbol P denotes the principal value of the integral. Then for I1(h, h̄) we have Eq. (C7). The presence of the principal value in Eq. (C7) ensures the possibility of bringing the integral to Gaussian form, first by taking the derivative over h̄. Then the function Ĩ1 itself has the form of Eq. (C9), with C = (A0/2π) exp(h²/2A0) Ĩ1(A0, h, 0). On the other hand, we have a relation suggesting that C = 0. Substituting Eq. (C9) into Eq. (C6), we obtain the final result for I1(h, h̄).
2017-05-22T18:01:17.000Z
2017-05-22T00:00:00.000
{ "year": 2017, "sha1": "c0213448eb5c705e37002de47509646763aa8f84", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1705.07915", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "93de596b932fd3673a80e8562f01c6b415afcf60", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
259374107
pes2o/s2orc
v3-fos-license
A method for the geometric calibration of ultrasound transducer arrays with arbitrary geometries Geometric calibration of ultrasound transducer arrays is critical to optimizing the performance of photoacoustic computed tomography (PACT) systems. We present a geometric calibration method that is applicable to a wide range of PACT systems. We obtain the speed of sound and point source locations using surrogate methods, which results in a linear problem in the transducer coordinates. We characterize the estimation error, which informs our choice of the point source arrangement. We demonstrate our method in a three-dimensional PACT system and show that our method improves the contrast-to-noise ratio, the size, and the spread of point source reconstructions by 80±19%, 19±3%, and 7±1%, respectively. We reconstruct the images of a healthy human breast before and after calibration and find that the calibrated image reveals vasculatures that were previously invisible. Our work introduces a method for geometric calibration in PACT and paves the way for improving PACT image quality. Introduction Photoacoustic computed tomography (PACT) [1][2][3] is an emerging hybrid medical imaging modality that combines the molecular specificity of optical imaging and the low tissue scattering property of ultrasound to provide deep tissue imaging with optical absorption contrast. A typical PACT system consists of a laser for light delivery to the target, an ultrasound transducer array for acoustic detection, and a data acquisition system for recording and digitizing the photoacoustic (PA) signals. The ultrasound transducer arrays used in PACT come in various geometries and have different operating frequencies and bandwidths [4][5][6][7][8]. Knowledge of the exact locations of the transducers in these arrays is crucial to reconstructing the high-contrast images that PACT is known to produce. However, due to manufacturing errors, the positions of the transducers in the manufactured array do not exactly match those in the design, which degrades the reconstructed image quality. Correcting these errors is essential for maximizing the potential of PACT systems. The problem of position estimation has been studied extensively in fields such as global positioning systems (GPS) [9,10], wireless sensor networks [11][12][13], and microphone arrays [14][15][16]. It is typically formulated as estimating the position of an object given the times-of-arrival (ToAs) of waves (e.g., electromagnetic waves, or acoustic waves) from a few sources to the object. Generally, the positions of the sources and the wave propagation speed are assumed to be known. In the context of the geometric calibration of ultrasound transducer arrays, the major distinction is that the speed of sound in the medium is unknown. Ultrasound transducer position estimation with an unknown wave propagation speed has been studied in the context of ultrasound computed tomography [17,18] and underwater ultrasound imaging [19,20]. However, they also consider the element receive and transmit delays to be unknown, which leads to more involved solution strategies. In contrast, in PACT, the element receive delays can be assumed to be known since they can be found separately by diffusing laser light onto the array (which generates a strong PA signal at the instant of the light emission). In the PACT literature, the problem of geometric calibration of transducer arrays has not been studied widely. 
In [21], the authors proposed a non-linear least-squares algorithm to solve for the transducer positions from ToA data collected by scanning a point source using a robotic gantry. However, they did not address the speed of sound estimation. In [8], the authors used an iterative method based on point source responses to simultaneously estimate the transducer positions, the point source positions, and the speed of sound in the medium. However, simultaneously approximating all three quantities leads to a scale ambiguity between the speed of sound and the coordinate system. Additionally, due to the non-convexity of the problem formulation, if the initial guesses of the unknowns are inaccurate, the algorithm can converge to a local minimum. Further, their method only calibrates the transducer coordinates in the radial direction and is therefore not applicable to arrays with arbitrary geometries. More recently, in [22], the authors proposed a global optimization algorithm to find the optimal location for each transducer in their 28-element transducer array by maximizing the sharpness of the reconstructed image. This method also suffers from convergence to local minima, and it scales poorly with the number of transducers. Moreover, it could lead to an unphysical situation, where different imaging targets result in different values of learned transducer coordinates. Finally, in [23], the authors circumvent the problem of geometric calibration by using deep learning-based frameworks that reduce the image artifacts resulting from the errors in the transducer positions. However, such methods suffer from a lack of interpretability [24] and result in the loss of linearity of the image reconstruction process. In this work, we present a geometric calibration method that overcomes all the limitations stated above. We start with the point sourcebased formulation in [8] and reduce it to a linear system of equations in the transducer coordinates by using alternate methods to obtain the other unknown quantities in the formulation. In doing so, we overcome both the scale ambiguity between the unknowns as well as the non-convexity of the problem. Owing to the linearity of the resulting formulation, we can also derive error estimates for the estimated transducer locations. These are useful for determining the number and locations of the point sources needed to calibrate a transducer array within a given error tolerance. The paper is structured as follows. In Section 2, we elucidate the importance of geometric calibration through numerical simulations and introduce our solution strategy. In Section 3, we apply our method to an experimental PACT system and show an improvement in the reconstructed image quality due to our method. In Section 4, we end with a discussion of our results. Motivation and theory Before presenting our method, we demonstrate the need for a sound geometric calibration method for PACT systems through numerical simulations. Motivation for geometric calibration Using the k-wave MATLAB package [25], we simulate a 512-element circular ultrasound transducer array with isotropic point transducers and a radius of 10 cm. Each element of the array has a Gaussian frequency response with a center frequency of 2 MHz and an ∼ 80 % one-way 6 dB bandwidth. Next, we perturb the x and y coordinates of each transducer with a uniformly distributed random variable in the range [ − 0.5λ 0 ,0.5λ 0 ], where λ 0 is the wavelength corresponding to the center frequency. 
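A minimal sketch of this perturbed-array construction (leaving the acoustic propagation itself to k-Wave) is given below; the 1500 m/s speed of sound used to convert the 2 MHz center frequency into λ0 is an assumption, chosen to be consistent with the 375 μm half-wavelength bound quoted below.

import numpy as np

rng = np.random.default_rng(42)

n_elements = 512
radius = 0.10                        # ring radius [m]
f0 = 2e6                             # center frequency [Hz]
c0 = 1500.0                          # assumed speed of sound [m/s]
lam0 = c0 / f0                       # ~0.75 mm

theta = 2 * np.pi * np.arange(n_elements) / n_elements
designed = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# independent uniform jitter in [-0.5*lam0, 0.5*lam0] on the x and y coordinates
perturbed = designed + rng.uniform(-0.5 * lam0, 0.5 * lam0, size=designed.shape)

print("max per-axis shift [um]:", 1e6 * np.abs(perturbed - designed).max())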
This imitates the real-world situation where the actual transducer locations in an array do not exactly match the designed locations due to manufacturing errors. A schematic of this simulation setup is shown in Fig. 1(a). We simulate the propagation of ultrasound waves due to an initial pressure distribution defined by a vessel-like numerical phantom (shown in Fig. 1(a)) and record the propagated waves at the perturbed transducer coordinates. Then, we reconstruct the images of the phantom using the designed coordinates and the perturbed coordinates, shown in Figs. 1(b) and (c), respectively. We can interpret the image obtained with the perturbed coordinates as the one obtained after geometric calibration (i.e., the calibrated image), and the image obtained with the designed coordinates as the uncalibrated one. Even for a maximum perturbation of 0.5λ0 (or 375 μm), there is a significant degradation in the quality of the uncalibrated image (Fig. 1(b)) compared to that of the calibrated one (Fig. 1(c)) in terms of the sharpness of the reconstruction and the background artifacts. To quantify this degradation, we compute the contrast-to-noise ratios (CNRs) of the two images in Figs. 1(b) and (c) to be 17 and 36, respectively, thus indicating a CNR reduction of as much as 50%. We also extract two line profiles from the images at locations "A" and "B". Proposed method Our method for geometric calibration is based on acquiring point source responses at various locations within the field of view (FOV) of the array. The ToA of the PA signal originating from a point source at x′ = [x′, y′, z′] and recorded by a transducer at x = [x, y, z] can be written as t = ||x − x′||/c, where t denotes the ToA of the signal, c is the speed of sound in the medium (water in this case), and ||⋅|| denotes the Euclidean norm. While the objective of geometric calibration is to estimate the transducer locations, due to the problem formulation in Eq. (1), we end up with three unknowns (the transducer location, x, the point source location, x′, and the speed of sound, c) that are related in a non-convex fashion. In addition, this problem is ill-posed because scaling both the coordinate system and the speed of sound by a constant factor will result in the same ToAs. To overcome these issues, in our approach, we obtain the point source locations and the speed of sound in water through surrogate methods. To obtain c, we leverage the fact that the variation of the speed of sound in water with temperature has been studied extensively in the literature [26][27][28][29]. We measure the water temperature accurately and infer the speed of sound from it. Next, instead of solving for the point source locations, we use a high-precision (3-axis) translation stage to move the point source to different locations within the FOV of the array. Thus, we have a coordinate system defined by the translation stage with the origin at the initial position of the stage. Having obtained the speed of sound and the point source locations, we reformulate the problem so that it becomes linear in the transducer coordinates. To do this, consider the ToA relations (Eq. (1)) for two point sources at x′_1 and x′_2 and a transducer at x = [x, y, z], square them and take their difference, which yields 2(x′_2 − x′_1)⋅x = d_1² − d_2² + ||x′_2||² − ||x′_1||², where d_i = ct_i, i = 1, 2, are the distances between the transducer and the two point sources, respectively. Stacking one such equation for every pair of point sources, where (i, 1) and (i, 2) represent the indices corresponding to the i-th pair of point sources out of the N_c pairs, gives a linear system in the transducer coordinates.
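The pair-wise differencing above, together with the least-squares estimate described in the next sentence, can be sketched on synthetic data as follows. The 6 × 6 × 3 source grid with 0.254 mm pitch and the 1482.9 m/s speed of sound are taken from the experiment reported below; the transducer position, the use of all possible source pairs, and the absence of ToA noise are illustrative assumptions.

import numpy as np
from itertools import combinations

c = 1482.9                                   # speed of sound in water [m/s]
x_true = np.array([0.05, -0.03, 0.11])       # "unknown" transducer position [m]

# known point-source positions: 6 x 6 x 3 grid moved by the translation stage (0.254 mm pitch)
ii, jj, kk = np.meshgrid(np.arange(6), np.arange(6), np.arange(3), indexing="ij")
sources = 0.254e-3 * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)

t = np.linalg.norm(sources - x_true, axis=1) / c   # noiseless ToAs, Eq. (1)
d = c * t                                          # distances d_i = c t_i

rows, rhs = [], []
for i, j in combinations(range(len(sources)), 2):
    # subtracting the squared relations of a source pair removes the quadratic ||x||^2 term,
    # leaving an equation that is linear in the transducer coordinates x
    rows.append(2.0 * (sources[j] - sources[i]))
    rhs.append(d[i] ** 2 - d[j] ** 2 + sources[j] @ sources[j] - sources[i] @ sources[i])

A = np.asarray(rows)
b = np.asarray(rhs)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares estimate of x

print("estimated [mm]:", 1e3 * x_hat)
print("true      [mm]:", 1e3 * x_true)             # identical for noiseless ToAs

With noisy ToAs the same least-squares step simply returns the best fit over the N_c pair equations rather than an exact recovery.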
Finally, we estimate the transducer location using the This process is repeated for each transducer independently. A graphical illustration of the proposed method is shown in Fig. 2. Posing the problem as described above allows us to characterize the error in the estimated transducer positions in a straightforward manner, as shown in Appendix A. By doing so, we can systematically choose the number and locations of point source measurements needed to calibrate the transducers within a pre-determined error tolerance. Methods We experimentally demonstrate our method using the 3-dimensional (3D) PACT system described in [8]. The system consists of a hemispherical array housing with four arc-shaped 256-element ultrasound transducer arrays uniformly distributed along the azimuthal direction (see Fig. 3). Each transducer element has a center frequency of 2.25 MHz and an ∼ 98% one-way 6 dB bandwidth. The array is rotated by 90 • to achieve an ∼ 2π steradian solid angle coverage. The signals from each transducer are amplified and digitized by a one-to-one mapped pre-amplification and data acquisition system, and the digitized data are streamed to the computer via USB 3.0. Finally, we reconstruct the images from the raw PA signals using the universal back-projection (UBP) algorithm [30]. For the demonstration in this paper, we only consider one of the arcs. We operate the system in two configurations. In configuration #1, meant for point source imaging, we couple 532 nm light from a laser (IS8-2-L, Edgewave) to an optical fiber (FG050LGA, Thorlabs; core diameter: 50 μm) terminated with a light-absorbing material (carbon nanopowder), which acts as a point source for PACT. In configuration #2, meant for human breast imaging, we use a laser (LPY7875, Litron; pulse repetition frequency: 20 Hz, maximum pulse energy: ∼ 2.5 J) to deliver 1064 nm light to the tissue through an engineered diffuser (EDC 40, RPC Photonics Inc.) installed at the intersection of the four arcs to expand the beam. We ensure that the optical fluence on the tissue surface is within the American National Standards Institute (ANSI) safety limit at 1064 nm [31]. The two configurations are illustrated in Fig. 3. For calibrating the array, we operated the system in configuration #1 and acquired 108 point-source responses using a high-precision 3-axis translation stage (PLS-85, Micos; bidirectional repeatability: 0.2 μm) in a 6 × 6 × 3 arrangement with a pitch of 0.254 mm. We used deionized water (resistivity: 2 MΩ⋅cm) in the experiment to ensure that we can accurately estimate the speed of sound. During the experiment, we measured the temperature of the water using a thermocouple (HH303, Omegaette) and inferred the speed of sound from it to be 1482.9 m/s. Note that the point source we used is not perfectly isotropic. However, this anisotropy does not affect our method as long as the signal-to-noise ratio (SNR) of the acquired data permits accurate ToA estimation. There are several ways to estimate the ToAs of the point source signals. For instance, we can compute the noise statistics of a signal and estimate the ToA as the first instant when the signal exceeds a predefined amplitude threshold above the noise. However, the true firstarrival signal might be buried in noise, which leads to erroneous ToA estimates, especially in low SNR situations. 
Alternatively, we can experimentally acquire a reference PA signal with a known ToA (for e.g., by accurately measuring the distance between the source and the transducer), and use it to estimate the ToAs of the point source signals relative to the reference signal [32]. However, acquiring such a signal with a known ToA is challenging. Instead, we combine these two approaches of using noise statistics and leveraging the structure of an experimentally acquired signal. First, we find the maxima of the acquired signals, which is usually well above the noise. There is a delay between the maximum of the signal and the first-arrival due to the finite bandwidth of the transducers. To find this delay, we align the maxima of all the acquired signals and compute the average of the signals (this boosts the SNR). Then, we estimate the ToA of the averaged signal as the first instant when it exceeds a predefined amplitude threshold (three times the standard error of the noise in this case) and compute the delay between the ToA and the time when the maximum of the averaged signal occurs. Finally, we estimate the ToA of each individual signal by subtracting this delay from the time corresponding to the maximum of the signal. We estimated the ToAs using this approach (see Supplementary Fig. Fig. 2. Graphical illustration of our geometric calibration method. Fig. 3. A schematic of the 3D PACT system. The system consists of four arc-shaped 256-element ultrasound transducer arrays in a hemispherical array housing, which is filled with water for acoustic coupling. The array is rotated by 90 • to achieve a solid angle coverage of ∼ 2π steradian. The system is operated in two configurations. Configuration #1 for point source imaging: light from a 532 nm laser is coupled to an optical fiber which is terminated on the other end with an optically absorptive material, which acts as a point PA source. Configuration #2 for human breast imaging: light from a 1064 nm laser is delivered to the tissue through a diffuser (placed at the intersection of the arcs) to expand the beam. 1) and applied our calibration method to estimate the locations of the transducers. The designed and calibrated locations of the transducers and their relative shifts are plotted in Supplementary Fig. 2. Comparison of the images reconstructed before and after calibration Having obtained the calibrated coordinates, we proceed to evaluate the improvement in the reconstruction quality due to the geometric calibration using two data sets. The first one consists of point source responses recorded at five different locations within the FOV of the array. Note that these data are not part of the 108 point-sources used for the calibration. The second data set is obtained by imaging the breast of a healthy adult subject lying down in a prone position within a single breath-hold of 10 seconds (to minimize motion artifacts), and with the system being operated in configuration #2. Point source reconstruction results The reconstructed images of one of the five point-sources using the uncalibrated (designed) and calibrated coordinates are shown in Figs. 4 (a) and (b), respectively. The images are maximum amplitude projections (MAPs) of the reconstructed volume along the x, y, and z directions. From the images, we see a clear improvement in the calibrated image ( Fig. 4(b)) compared to the uncalibrated image ( Fig. 4(a)) in terms of the improved sharpness of the reconstruction and the suppressed artifacts in the background. 
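The ToA estimation procedure described earlier (locate each signal's maximum, align and average the signals, threshold the average at three times the noise standard error, and subtract the resulting onset-to-maximum delay from each individual maximum) can be sketched on synthetic waveforms as follows. The pulse shape, sampling rate, and noise level are illustrative assumptions and do not model the actual transducer response.

import numpy as np

rng = np.random.default_rng(7)
fs = 40e6                                    # assumed sampling rate [Hz]
t_axis = np.arange(4000) / fs                # 100 us long records

def synthetic_response(t_arrival, f0=2.25e6, cycles=3.0):
    # toy band-limited pulse whose first arrival is t_arrival and whose
    # maximum occurs roughly `cycles` periods later (illustrative only)
    width = cycles / f0
    tau = t_axis - t_arrival
    envelope = np.exp(-0.5 * ((tau - width) / (width / 2)) ** 2) * (tau > 0)
    return envelope * np.sin(2 * np.pi * f0 * tau)

true_toas = 80e-6 + 5e-6 * rng.random(20)    # 20 simulated point-source arrivals
signals = np.array([synthetic_response(t) + 0.05 * rng.normal(size=t_axis.size)
                    for t in true_toas])

i_max = signals.argmax(axis=1)               # 1) index of each signal's maximum

half = 400                                   # 2) align on the maxima and average
aligned = np.array([s[i - half:i + half] for s, i in zip(signals, i_max)])
template = aligned.mean(axis=0)

noise_se = template[:100].std()              # noise level of the averaged signal
i_onset = np.argmax(template > 3 * noise_se) # 3) first crossing of the 3x threshold
delay = (half - i_onset) / fs                # onset-to-maximum delay

toa_est = i_max / fs - delay                 # 4) per-signal ToA estimate
print("mean absolute ToA error [ns]:", 1e9 * np.abs(toa_est - true_toas).mean())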
We identify three locations in the MAPs (marked as points A, B, and C in Figs. 4(a) and (b)) where the difference between the two images is prominent and extract line profiles of the volumes at each of these locations in the direction perpendicular to their corresponding MAPs. The profiles at points A, B, and C are plotted in Figs. 4(c), (d), and (e), respectively, and they also show that the background in the calibrated image is lower than the uncalibrated one. Finally, to better appreciate the differences between the two images, we provide a video that toggles between the two images consecutively (Supplementary Video 1) and a video that shows the reconstructed 3D volumes of the uncalibrated and calibrated point sources (Supplementary Video 2). To quantify the improvement in the reconstructed point source images, we compute their CNRs. Additionally, we compute two other metrics that characterize the resolution of the system. These metrics are based on the power-RMS width (RMS stands for root-mean-square) defined in Appendix A.2 of [33]. The power-RMS width of a complex-valued function, f(t), is the standard deviation of the probability density function given by |f(t)| 2 . We extend this definition to 3D by computing the covariance matrix, Σ V 2 ∈ R 3×3 , of the 3D distribution defined by the square of the reconstructed volume, V. From the covariance matrix, we define the following quantities, and where Det(⋅) and Tr(⋅) denote the determinant and the trace of a matrix, respectively. The size and the spread of the reconstructed point source are a measure of its volume, and its spread in the radial direction, respectively. The advantage of using these covariance matrix-based measures over conventional metrics (such as the full width at half maximum) is that they are axis-invariant and do not place assumptions on the polarity or the shape of the reconstruction (such as Gaussianity). The three metrics are computed for the reconstructions of all five pointsources and their mean and standard errors are reported in Table 1. We also report the relative improvement for each metric, defined as, Relative improvement in metric = |Metric of calibrated image − Metric of uncalibrated image| Metric of uncalibrated image . From Table 1, we see that there is an (80 ± 19)% improvement in the CNR, a (19 ± 3)% improvement in the size, and a (7 ± 1)% improvement in the spread of the point source reconstruction. We also reconstruct the image of a simulated point source (see Supplementary Fig. 3). The size and spread of the simulated point source are 0.1 mm 3 and 0.81 mm, respectively, and they are very close to the size and spread of the calibrated point source reconstruction. In-vivo reconstruction results Next Supplementary Fig. 4. Further, to quantify the improvement, we compute the CNRs at five points within the region of interest (the green dashed box) and compute their mean and standard error. These quantities are presented in Table 2 and they show that the CNR in this region improves by (25 ± 7)% due to the geometric calibration. Discussion In this paper, we have presented a method for the geometric calibration of the ultrasound transducer arrays used in PACT. The method is versatile in that it can be used for any ultrasound array, provided the point source measurements are made within the FOV of the array. The method also overcomes the ill-posedness and non-convexity of the original formulation in Eq. 
(1) by using surrogate methods to estimate the speed of sound and the point source locations, leading to a linear system of equations in the transducer coordinates. We applied our method to a 3D PACT system and showed that using the estimated transducer locations obtained from our method resulted in a significant improvement in the reconstructed point sources and the in vivo human breast image in terms of the CNR and the resolution. Our method would be particularly useful in situations where precisions in the transducer positions are difficult to control, such as when arrays are constructed using individual ultrasound transducers [7,22]. A notable advantage of our formulation is that it is linear in the transducer coordinates. In addition to simplifying the optimization, the linearity also allows for a straightforward characterization of the error in the estimated transducer coordinates, as shown in Appendix A. Characterizing the error is particularly important for practical considerations such as choosing the number and positions of point sources needed to calibrate an array within a given error tolerance. For instance, for the demonstration in Section 3, let the error tolerance be λ 0 /5, where λ 0 ≈ 0.67 mm is the wavelength corresponding to the center frequency of the array. For the point source arrangement that was used, as shown in Appendix A, we estimate the errors along the three coordinate axes defined by the three-axis translation stage as 0.03 mm, 0.03 mm, and 0.07 mm, respectively, which is well within our error tolerance. If our error tolerance is even lower, we can either increase the pitch between the point sources or increase the number of point source measurements to satisfy the requirement. The ability to systematically choose the point source arrangement based on an error tolerance distinguishes our method from the existing approaches in the literature [8,21,22]. In our method, we estimate the speed of sound in water by measuring the temperature of the water. To ensure that this estimate is accurate, a few points must be considered. Firstly, the speed of sound in water does not just depend on the temperature of the water but also its purity [34]. In our experiment, we used deionized water with a resistivity 2 of 2 MΩ⋅ cm to ensure that our speed of sound estimate is accurate. Secondly, since we assume that the speed is homogeneous, we must make sure that the water temperature is uniform and constant throughout the experiment. One way to do this is to start the experiment only after the water temperature has reached a steady state as monitored at several locations and at regular intervals in time. Our method also requires an accurate estimate of the ToAs of the point source signals. While estimating the ToAs, it is crucial to account for any delays in the data acquisition pipeline such as the element receive delays. In PACT systems, these delays can be found by diffusing the laser light onto the array which generates a strong PA signal (termed the transducer surface signal) at the instant of laser emission. The ToA estimation approach from Section 3.1 can be used to estimate the firstarrival time of the transducer surface signal. This synchronizes the laser emission with the data acquisition system. It is important to note that the ToA estimation approaches described in Section 3.1 are valid only when the point source response does not change significantly within the measurement region. 
If it does, then the spatial impulse response of the transducers [37] has to be incorporated into these approaches for accurate ToA estimation. Table 1 A quantitative comparison of the uncalibrated and calibrated point source reconstructions using three metrics: CNR, size, and spread. The reported quantities are the mean ± standard errors of the respective metrics for the reconstructed volumes of five different point sources. The size and spread of the point source reconstruction are defined in Eqs. (3) and (4) We concede that despite accounting for several factors in the estimation of the speed of sound and the ToAs, there may still be some error in these estimates. For instance, changes in the temperature that are smaller than the measurement resolution of 0.1 • C could result in some error in the estimated speed of sound. Similarly, since we define our ToA based on an amplitude threshold, we ignore the part of the point source response that occurs prior to this instant, which introduces some error in our ToA estimates. We account for such errors in our error analysis in Appendix A, where we assume that our speed of sound error is 0.3 m/s (based on the resolution of the temperature measurement) and the ToA estimation error is approximately 0.45 μs (based on the center frequency of the array). While our method is readily applicable to any ultrasound transducer array, a practical concern arises when working with 2-dimensional (2D) PACT systems (for example, a ring array, or a linear array). In this case, it is crucial to ensure that the point sources and the transducer array lie in the same plane. Otherwise, the ToAs of the point source responses acquired by the PACT system do not accurately reflect the true distances between the point sources, leading to erroneous results. To overcome this, if we perform 3D geometric calibration for a 2D array, then it is necessary to account for the changes in the spatial impulse response of the transducer and the decrease in the signal-to-noise ratio while estimating the ToAs for out-of-plane point sources. In conclusion, we presented a method for the geometric calibration of the ultrasound transducer arrays used in PACT systems, demonstrated the method in a 3D PACT system, and discussed several practical considerations in implementing the method. We hope that our work will standardize the practice of geometric calibration in PACT and lead to improved image quality in PACT systems. A demo code for our method has been posted on GitHub. 3 Imaging protocols All human imaging experiments were performed with the relevant guidelines and regulations approved by the Institutional Review Board of the California Institute of Technology (Caltech). The human experiments were performed in a dedicated imaging room. Written informed consent was obtained from all the participants according to the study protocols. Data and code availability The data that support the findings of this study are provided within the paper and its Supplementary materials. A demo code for the calibration method has been posted online at https://github.com/ karteekdhara98/PACT-geometric-calibration. The reconstruction algorithm and data processing methods can be found in the paper. The reconstruction code is not publicly available because it is proprietary and is used in licensed technologies. Declaration of Competing interest L.V.W. has a financial interest in Microphotoacoustics, Inc., Cal-PACT, LLC, and Union Photoacoustic Technologies, Ltd., which, however, did not support this work. 
The other authors declare no competing interests. Data Availability The reconstruction algorithm and the data that support the findings of this study are provided within the paper and its Supplementary materials. A demo code for our method has been posted online. Appendix A. Error analysis As described in Section 2.2, in our method, the transducer locations are estimated as x = ( To compute the error in x, we treat the elements of b as random variables whose mean is equal to their true value and standard deviation is equal to the experimental error in their measurement. Assuming that the high-precision translation stage has an accuracy much smaller than the wavelength of the transducers being calibrated, we can treat the matrix A as a deterministic quantity with negligible error. Let the covariance matrices of b and x be Σ b ∈ R Nc×Nc and Σx ∈ R 3×3 , respectively. Then, we have the relation, where A † = ( A T A ) − 1 A T is the pseudoinverse of A, and Σ b has the following form: Here, σ c and σ t are the uncertainties in the measurements of the speed of sound and the ToAs, respectively, c is the speed of sound, and t 0 is the ToA of a point source response at the transducer. Since the speed of sound is derived from temperature measurements, σ c depends on the error in the temperature measurement, T. Assuming that the temperature measurement is unbiased and has an uncertainty of σ T , we have σ c = ⃒ ⃒dc dT ⃒ ⃒ σ T . The uncertainty in the ToA, σ t , can be estimated as 1/f 0 , where f 0 is the center frequency of the array, because the bandwidth is usually given as a fraction of the center frequency. Finally, note that although the ToA from every point source is different, for simplicity, we only consider a representative ToA, t 0 , for calculating σ 0 . Having calculated Σ b using Eqs. (6) and (7), we can use Eq. (5) to compute Σx. The square roots of the diagonal elements of Σx give us the estimation error in the transducer positions along the three coordinate axes. To illustrate how these quantities are computed in a practical situation, we estimate the error for the demonstration in Section 3. Note that this calculation is typically performed before the experiment. Firstly, the bidirectional repeatability of our translation stage (PLS-85, Micos) is 0.2 μm, which is much less than the wavelength corresponding to the center frequency of the array (670 μm). Therefore, matrix A can be treated as a deterministic quantity. Assuming that the water temperature is 20 • C, we infer the speed of sound to be, c = 1482.3 m/s, and dc dT ≈ 3 m/s. Since the readings from our thermocouple (HH303, Omegaette) are sufficiently precise, we determine that the maximum error in our temperature measurement is equal to the resolution of the thermocouple, i.e., σ T = 0.1 • C. Thus, σ c ≈ 0.3 m/s. The center frequency of the array is 2.25 MHz. Therefore, σ t = 1 2.25×10 6 ≈ 4.5 × 10 − 7 s. t 0 is estimated as the time taken for the acoustic wave to travel a distance of 13 cm (from the center of the array to the transducer), i.e., t 0 = 0.13 1482.3 ≈ 8.8 × 10 − 5 s. Substituting these values in Eq. (7), we get σ 0 ≈ 5 × 10 − 3 . Next, we construct Σ b for the point source arrangement in Section 3 using Eq. (6). Then, we substitute it into Eq. (5) to obtain the errors in the estimated transducer locations along the three coordinate axes as 0.03 mm, 0.03 mm, and 0.07 mm, respectively. We study the dependence of the estimation error on the number of point sources and the point source locations in Supplementary Fig. 
5.
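The covariance propagation of Appendix A can also be scripted directly. The relation Σx̂ = A†Σb(A†)ᵀ is stated above; because the detailed construction of Σb in Eqs. (6)-(7) is not reproduced here, the sketch below uses a placeholder diagonal Σb and should be read as an assumption about its form, not as the exact model of the paper.

import numpy as np
from itertools import combinations

def pair_matrix(sources):
    # design matrix A of the pair-difference equations (one row per source pair)
    rows = [2.0 * (sources[j] - sources[i])
            for i, j in combinations(range(len(sources)), 2)]
    return np.asarray(rows)

def position_covariance(A, Sigma_b=None):
    # Sigma_x = A^+ Sigma_b (A^+)^T, with A^+ the pseudoinverse (Appendix A);
    # Sigma_b = None stands for the identity
    A_pinv = np.linalg.pinv(A)
    if Sigma_b is None:
        return A_pinv @ A_pinv.T
    return A_pinv @ Sigma_b @ A_pinv.T

# 6 x 6 x 3 source grid with 0.254 mm pitch, as in the experiment
ii, jj, kk = np.meshgrid(np.arange(6), np.arange(6), np.arange(3), indexing="ij")
sources = 0.254e-3 * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
A = pair_matrix(sources)

# placeholder: independent, identical uncertainty sigma_b on every pair equation;
# the paper instead builds Sigma_b from sigma_c and sigma_t via Eqs. (6)-(7)
sigma_b = 1.0
Sigma_x = sigma_b ** 2 * position_covariance(A)
print("per-axis standard deviation (sigma_b = 1):", np.sqrt(np.diag(Sigma_x)))

Even with this placeholder, the 6 × 6 × 3 arrangement makes the error along the 3-layer axis larger than along the other two, qualitatively mirroring the 0.03 mm, 0.03 mm, and 0.07 mm pattern reported above; the absolute values scale linearly with the assumed uncertainty of the b entries.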
2023-07-10T05:03:51.930Z
2023-06-07T00:00:00.000
{ "year": 2023, "sha1": "e7f9e38ac29047e55f85568ea4ac266360fce6d5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.pacs.2023.100520", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e7f9e38ac29047e55f85568ea4ac266360fce6d5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
245512056
pes2o/s2orc
v3-fos-license
GENETIC RELATIONSHIPS BETWEEN COMMERCIALLY PRODUCED AND NATURAL POPULATIONS OF BOMBUS TERRESTRIS DALMATINUS IN TERMS OF MITOCHONDRIAL COI AND CYTB Bombus terrestris dalmatinus is naturally common in many countries, including Turkey, and is also used commercially for the pollination of greenhouse plants. Intensive commercial production and international trade in many countries are considered as reasons for the disappearance of some natural populations. Hybridization of native bumble bees with those produced commercially, but having escaped from greenhouses and colonization of these commercial bees in natural habitats are cause for concern. In order to assess this concern, B. t. dalmatinus workers were collected from twelve different populations: five commercial producers, three surrounding greenhouse centers, three natural areas at least 30 km away from greenhouses, and one more recent greenhouse zone in Antalya, Turkey. The genetic variations and relationships among the twelve populations were estimated using SNP haplotypes determined in mitochondrial COI and CytB. Twenty and sixteen haplotypes were obtained for COI and CytB, respectively. A single haplotype, H1, was widespread with a high frequency in all individuals for both genes. Individuals collected from around greenhouse centers and commercial companies had more common haplotypes. The genetic variations of intra-populations were higher than the inter-populations in both COI (65.41%>34.59%) and CytB (72.47%>27.53%). The natural and commercial populations were genetically more distant from each other considering F st values. However, samples from near the greenhouses had a higher similarity with the commercially produced samples, while the natural populations far away from greenhouses still retained their genetic distinctiveness. INTRODUCTION Bombus species are adapted to different climatic and environmental conditions, having a wide range of habitats from sea level up to 5800 m. Bombus terrestris (Linnaeus, 1758) shows abilities for high migration, early colony development, to benefit from a large number of flowers and to withstand low temperatures (Goka et al., 2001;Goulson et al., 2008). This species is one of the most important pollinators of both natural and cultivated plants (Michener, 2000) and constitutes about 90% of commercially produced colonies (Velthuis & van Doorn, 2006). It is now used for pollination in over fifty-seven countries including Turkey and has not been found naturally in sixteen of them. Bombus terrestris dalmatinus Dalla Torre is one of the most suitable subspecies for commercial breeding because of its high adaptability and reproductive performance under captive conditions (Velthuis & van Doorn, 2006). The many benefits of commercially produced Bombus colonies including increased yield and quality of fruit in undergrowth cultivation and reduced use of hormones and chemical drugs have increased demand globally since the 1980s (Ono, 1998;Gürel et al., 1999;Goulson & Hanley, 2004;Hingston, 2006;Velthuis & van Doorn, 2006;Inoue et al., 2008;Williams & Osborne, Genetic relationships among Bumblebee populations 2009). However, bees emerging from commercial colonies are reported to cause problems (Goulson & Hughes, 2015) such as competition with local Bombus species for nesting and ecotypes (Goulson, 2003;Inoue & Yokoyama, 2010;Aizen, 2018) because it has escaped from greenhouses (Seabra et al., 2019) and colonized the environment (Kraus et al., 2011). Trillo et al. 
(2019) reported the presence of managed bumblebees escaping from greenhouses and feeding on the same flowering plants with the native ones, especially in winter. Thus, there has likely been genetic contamination due to hybridization (Rhymer & Simberloff, 1996;Ings et al., 2006;Kanbe et al., 2008;Goulson, 2010). In addition, this species, which is highly invasive (Hingston et al., 2002), is thought to cause the spread of exotic diseases (Whitehorn et al., 2013). Due to such reasons, natural bumblebee populations are thought to be negatively affected and even at risk of extinction in some places (Goulson, 2003;Cejas et al., 2018;Tsuchida et al., 2019). However, there is a lack of knowledge about the current state of most populations (Chandler et al., 2019). Genetic studies on the detection of introgression between commercial and native Bombus populations are still scarce (Kraus et al., 2011;Seabra et al., 2019;Cejas et al., 2020). Each year, approximately two million commercially produced Bombus colonies are estimated to be shipped globally (Lecocq et al., 2016), and almost 300,000 colonies are used per year, especially in tomato pollination, in Turkey alone (Cilavdaroglu & Gurel, 2020). In greenhouses in Turkey there are no measures in place to prevent the escape of bumblebees into the natural habitat. Furthermore, as is also usually practiced in Portugal, hive boxes at the end of their useful life are left outside of greenhouses with bees still in residence (Seabra et al., 2019). In many species, mitochondrial DNA (mtDNA) is used for identifying the inter and intra specific differences due to high evolution ratio compared to nuclear DNA and to non-recombining and maternal inheritance (Rubinoff & Holland, 2005). In this context, animal mtDNA is a reliable and valid marker for population genetics and phylogenetic studies. Although the B. terrestris subspecies differ morphologically, reliable molecular tools are needed because the features are insufficient in some subtype and ecotype level definitions. mtDNA is widely used in phylogenetic studies, in particular, COI and CytB sequencing data accumulated for bumblebee species are used to reveal genetic relationships between bumblebee populations (Pedersen, 1996;Widmer & Schmid-Hempel, 1999;Yoon et al., 2003;Murray et al., 2008;Kim, 2009;Tokoro, 2010;Williams et al., 2012;Han et al., 2019). The genetic structure of commercial colonies taken from companies in Antalya and their possible genetic effects on local genotypes are not fully known. Therefore, the genetic diversity of B. t. dalmatinus populations in habitats may be at risk of being lost without being identified. The information obtained about the genetic basis and the population structure of the geographical variation will serve as an essential resource for the use and safeguarding of bumblebees, but for now molecular genetic studies on bumblebees are limited in Turkey. In order to assess this concern, this study was carried with the use of COI and CytB haplotypes to evaluate whether there has been a hybridization between commercially reared and indigenous populations of B. t. dalmatinus in Antalya. Sample Collection Fifteen B. t. dalmatinus workers from each of the twelve different locations (five commercial producers, seven collected from natural fields including three nearby greenhouse centers and four natural districts) were sampled from the Antalya province. Three foreign and two domestic commercial companies engaged in B. t. dalmatinus breeding. 
Three of the seven different fields are greenhouse centers (Aksu, Demre, and Kumluca) where intensive greenhouse activities are carried out and a large number of commercially produced B. t. dalmatinus colonies are used. Bees easily escape from these greenhouses because, in general, there are no preventive measures. Bumblebees were collected at a distance of about 200 meters from the greenhouses in these centers. The other three of the seven field collections are natural districts (Bayatbademler, Phaselis, and Termessos), which are at least 30 km away from the nearest greenhouse centers and isolated by natural barriers. Bayatbademler is a plain surrounded by mountains, Phaselis an ancient city at sea level, and Termessos a national park at the top of a mountain. The last field site, Geyikbayiri, is a plateau where some greenhouse activities have started in recent years. Also, an outgroup (Lecocq et al., 2013) was used as a reference to determine the phylogenetic relationships of the populations. The bumblebees were caught with the use of an entomological net. The coordinates and altitude information of the sampling locations are given in Fig. 1. The samples were stored in collection tubes with pure ethanol at +4°C until DNA extraction. DNA extraction, amplification, and sequencing The thorax of each bumblebee was placed in an individual tube and crushed after the application of cold nitrogen. The CTAB method was used to extract DNA (Doyle, 1990). The quantities and qualities of DNA were determined with a BioDrop spectrophotometer. Processing of data and statistical analyses The COI and CytB gene region sequences of mtDNA were examined for each individual with the use of ChromasPro version 1.7.4 (Technelysium Pty. Ltd., Australia), and incorrect and uncertain reads from the sequencer were edited based on visual examination of the chromatograms. The ends of the reads presenting lower quality bases were trimmed, and thus the length of the sequence used in further analysis was shortened. Analyses were continued with DNA sequences from 157 individuals for COI (204 base pairs, bp) and 170 individuals for CytB (373 bp). At several base positions there were double peaks in the chromatograms, and examples are shown in Sup. Fig. 1. Since all suspicious artifacts had been removed from the sequences, the remaining sequences with double peaks were considered heteroplasmic. Thus, sequences belonging to all individuals were used twice, whether heteroplasmic or not, in order not to introduce a proportional error. So, including these sites, haplotype analysis was performed on a total of 314 (2 × 157) DNA sequences for COI and 340 (2 × 170) DNA sequences for CytB. Sequenced DNA regions were compared against reference sequences using BLAST (GenBank: JQ820651.1 for COI and GenBank: JQ820853.1 for CytB). Sequence alignment was conducted with MEGA 6 (Molecular Evolutionary Genetics Analysis, version 6.06) software (Tamura et al., 2013). SNPs were identified, and the haplotypes for each region were generated with DnaSP (version 5.10.01) software (Rozas, 2010). Also, diversity parameters such as haplotype (gene) diversity (Hd) and nucleotide diversity (π) were calculated for the studied gene regions. Arlequin (version 3.11) software was used to calculate genetic differentiation within and among the populations (Excoffier et al., 2007) with the unbiased fixation index (F_ST) based on haplotype information (p < 0.01, p < 0.05).
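The haplotype (gene) diversity (Hd) and nucleotide diversity (π) mentioned above follow standard population-genetic formulas, so the calculation can be illustrated directly. The following R sketch uses hypothetical haplotype sequences and counts, not data from this study, and is not the authors' code; it simply shows how the quantities reported by programs such as DnaSP are typically obtained.

```r
# Hedged illustration with made-up data: Hd and pi for one population.
haps   <- c("ACGTACAT", "ACGTATAT", "ACGAACAT")  # distinct aligned haplotype sequences
counts <- c(10, 4, 1)                             # number of individuals carrying each haplotype

n  <- sum(counts)
p  <- counts / n
Hd <- n / (n - 1) * (1 - sum(p^2))                # unbiased haplotype (gene) diversity

site_mat <- do.call(rbind, strsplit(haps, ""))    # one row per haplotype, one column per site
L  <- ncol(site_mat)
pi <- 0
for (i in 1:(length(haps) - 1)) {
  for (j in (i + 1):length(haps)) {
    d_ij <- sum(site_mat[i, ] != site_mat[j, ]) / L   # proportion of differing sites
    pi   <- pi + 2 * p[i] * p[j] * d_ij               # frequency-weighted pairwise differences
  }
}
pi <- pi * n / (n - 1)                            # small-sample correction
c(Hd = Hd, pi = pi)
```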
Analysis of molecular variance (AMOVA) was performed to calculate genetic variation values for both studied gene regions through the use of the haplotype sequences detected in COI and CytB. Phylogenetic analyses were made with TASSEL (version 5.2.43) software (Bradbury et al., 2007) using the genetic distance matrices among populations. Mutational relationships among haplotypes were represented by median-joining (MJ) Network (version 5.0.0.3) software (Bandelt et al., 1999). Sequence analysis and haplotype distributions In all analyzed individuals, the COI fragment sequence had 204 base pairs, and the CytB fragment sequence had 373 base pairs. According to the results of the BLAST analyses, the DNA nucleotide sequences obtained from the mtDNA COI and CytB gene fragments were determined to belong to B. t. dalmatinus. Other haplotypes were restricted to one, two or five populations and had a low frequency (Tab. 2 and Tab. 3). For example, H3, H4, H6, H9, and H10 in terms of COI and H2, H7, H8, H9, and H19 in terms of CytB were found only in samples taken from companies (Tab. 2 and Tab. 3). After the haplotype (gene) and nucleotide diversity parameters were calculated, the highest values were found in natural areas for both genes (Tab. 4 and Tab. 5). The distribution of haplotypes was visualized with the Median-Joining Network (Fig. 2a-b), and both network patterns showed that there was some differentiation between groups in general (Fig. 2a-b). These figures (2a-b) show that greenhouse areas and commercial companies shared more haplotypes, whereas natural areas were visibly separated from the others. Bees collected from greenhouse areas were genetically closer to commercially produced bees according to the network pattern. Genetic relationships As a result of AMOVA (Tab. 4-5), the percentage of variation within populations was higher than among populations for both the COI and CytB gene regions in all populations and each group. The mean F_ST values were calculated as 0.346 for COI and 0.275 for CytB. Also, the highest and the lowest pairwise F_ST values for COI were detected as 0.78 between Aksu and Phaselis and as 0.00 between Company_2 and Termessos, respectively (Tab. 6). The highest and the lowest F_ST values for CytB were detected as 0.50 between Company 4 and Termessos and as 0.02 between Geyikbayiri and Phaselis, respectively (Tab. 6). Genetic distances between populations were lower among individuals that were commercially produced and those caught around the greenhouse regions. In order to determine the genetic relationships among the twelve different populations of B. t. dalmatinus, a phylogenetic tree was constructed based on the Neighbor-Joining method (Saitou & Nei, 1987) with the use of TASSEL software v. 5.2.59. Bees from the twelve different sources were clearly distinguished according to haplotypes (Sup. Fig. 2a-b). Commercially produced bumblebees were more closely related to the bees caught in the greenhouse regions than to those caught in natural areas. Seabra et al. (2019) support this result, reporting a relatively common haplotype in commercial individuals and in wild individuals collected near the greenhouses where the commercial hives are used. The presence of jointly shared haplotypes indicates an affinity between commercially produced individuals and individuals collected from the greenhouse environment. This result suggests that individuals can leave greenhouses and survive in the natural environment around them.
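The pairwise F_ST comparisons reported above can be made concrete with a toy calculation from haplotype frequencies. The sketch below is a hedged conceptual approximation, not a reimplementation of Arlequin's estimator (which includes additional corrections), and the haplotype counts are hypothetical.

```r
# Hedged illustration with made-up haplotype counts for two populations.
pop_A <- c(H1 = 12, H2 = 2, H3 = 1)    # e.g. a commercial population
pop_B <- c(H1 = 5,  H2 = 0, H3 = 10)   # e.g. a natural population

freq <- function(x) x / sum(x)
het  <- function(p) 1 - sum(p^2)        # expected haplotype diversity

H_S  <- mean(c(het(freq(pop_A)), het(freq(pop_B))))   # mean within-population diversity
H_T  <- het((freq(pop_A) + freq(pop_B)) / 2)          # diversity at the mean (pooled) frequencies
F_ST <- (H_T - H_S) / H_T                              # proportion of diversity among populations
F_ST
```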
Candidate hybrids were detected in the wild, as well as putatively escaped commercial bumblebees, some being potentially fertile males (Seabra et al., 2019). In another study, commercial individuals escaping from greenhouses were shown to hybridize with individuals in the vicinity. In addition, if natural bees are far enough from greenhouse areas, they show the ability to preserve their original genetic structure (Kraus et al., 2011). Cejas et al. (2018) presented a 16S haplotype as evidence of integration between B. terrestris subspecies. As seen in Tables 2 and 3, similar haplotype numbers (COI = 16 and CytB = 20) were determined in the present study. The similarity in the number of haplotypes supports its use as evidence of integration among sub-populations in the Antalya region. Pedersen (1996) identified the nucleotide sequence of a 532 bp fragment of the mitochondrial COI gene of bumblebees and revealed the phylogenetic relationships of eleven Bombus species. Tokoro et al. (2010) studied the mitochondrial COI gene (1048 bp) and identified fifteen haplotypes in order to determine the risk of genetic breakdown in local Japanese bumblebees and gene flow from foreign sources in samples collected from Japan, China and Korea. When the literature is examined, the number of haplotypes calculated in this study is notably higher than that found in other studies. This situation is due to Turkey's very varied geographical and climatic conditions, its position at the intersection of Europe, Asia and Africa, and the high genetic diversity of Turkey's B. terrestris populations. The AMOVA analysis revealed that genetic variation within populations is greater than among populations. One reason for this situation may be hybridization between populations. Individuals carrying genomic material from one population to another may have increased intra-population variation due to hybridization, while reducing genetic variation between populations. The B. t. dalmatinus bees in Antalya may constitute a single large population genomically, with the differences between sub-populations still low. Genetic similarities and differences were calculated through the pairwise F_ST values for the mtDNA COI and CytB gene fragments of the twelve populations. According to these results, individuals gathered from around the greenhouses were genetically closer to commercially produced individuals than to natural ones. When Tab. 4 and Tab. 5, which show the results of AMOVA, were examined, the remarkable result was that the intra-population genetic variation among the commercial populations and among the populations from near the greenhouses was lower than in the natural populations far from greenhouse areas. Understandably, the commercial populations produced through culturing and the populations from near the greenhouses showed a more homogeneous genetic structure. Moreira et al. (2015) investigated the genetic structure in twenty-two natural and two commercial B. terrestris L. populations using eight microsatellites and two mitochondrial genes (COI and CytB). The barrier formed by the Irish Sea and the dominant south winds is thought to prevent gene flow between Ireland and Britain, and heterozygosity in Ireland and the Isle of Man was reported to be lower than in the European continent and in commercial populations. The data suggest that B. terrestris individuals in the west of Ireland, where the use of managed bumblebees is rare, associated with populations from the European mainland in terms of COI. Estoup et al. (1996) stated that B.
terrestris populations in the European continent have a relatively homogeneous genetic structure due to the low overall geographical barriers. Based on this idea, we think that individuals who do not encounter any barrier in greenhouses without precautions easily affect the genetics of natural bees. Although few studies have been conducted in Turkey, up to fifty different Bombus species have been identified. Based on world-species distribution, Turkey is understood to be an important genetic resource in terms of bumblebees (Özbek, 1997;Barkan & Aytekin, 2013;Meydan et al., 2016), but it is not clear whether there is a hybridization between natural populations and commercial ones. Thus, the genetic diversity of Turkey's local B. t. dalmatinus populations must be determined and utilized in protection strategies as the source of genes. The data revealed by this completed study will be useful for this purpose. The Antalya province is where B. t. dalmatinus is widely found in natural habitats, and many sub-species are also encountered (Barkan & Aytekin, 2013), which is where very intensive greenhouse activities are conducted. Intensive use of commercial B. t. dalmatinus colonies are thought to increase the numbers of escaped bumblebees from greenhouses, but the spread of a large number of commercially produced individuals leads to increased competition with natural populations for nesting and feeding, spread of diseases and uncontrolled crossbreeding with local genotypes (Dafni, 1998;Goka et al., 2001;Whitehorn et al., 2013;Aizen, 2018). Furthermore, the introduction of foreign Bombus terrestris (Linnaeus) has resulted in a decline in native bumblebee populations in such countries as Japan, Chile and Argentina (Cejas et al., 2018). Although it is difficult to prove that commercially produced bees and wild bees are hybridized, bees fleeing from greenhouses can compete with natural bees in terms of nest and nutrient sharing. The invasive potential of commercially produced bumblebees has been clearly identified in several studies. Ings et al. (2006) used a paired design to compare the nectar-foraging performance and reproductive outputs of commercial and native colonies growing under identical field conditions, and they found that the commercial colonies have high reproductive success, superior foraging ability and large colony size. In a study conducted in Turkey, Gösterit (2017) determined that the colonies founded by commercial queens produced more than twice the number of gynes (82. 11±9.32) than colonies founded by native queens (32.85±3.99). Pirounakis et al. (1998) reported that the CytB gene could be successfully used to identify geographical subspecies of bumblebees. In another study using the CytB gene, the genetic structure of B. terrestris bees on two islands in the African Gulf was found to be heterogeneous (Widmer et al., 1998). Koulianos and Hempel (2000) investigated the genetic differences and kinship relationships using mitochondrial CytB and COI genes in nineteen bombus species collected from sixteen regions in Europe and three areas in North and South America. Morath (2007) used the mtDNA CytB gene to determine genetic variation in Bombus impatiens. In addition, the genetic structure of seven mainland and island Asian populations of Bombus ignitus was investigated with the use of nine microsatellite markers, and the sequences of part of the mitochondrial CytB gene were determined. 
The results of this study, carried out with samples collected from most of the companies in Antalya and from the areas where the bumblebees are concentrated, overlap with many of the previous studies and summarize the genetic relationships among the populations in the Antalya region. Although the investigation of heteroplasmy was not the subject of this study, we surprisingly encountered individuals with double peaks, which were too obvious to be considered artifacts in sequencing of both directions (Sup. Fig. 1). As mentioned in previous similar studies (Wernick et al., 2016; Williams et al., 2019; Pizzirani et al., 2020; Ricardo et al., 2020; Tikochinski et al., 2020), double peaks on sequence traces, with both alleles present in one individual, were identified as mitochondrial heteroplasmy, i.e. a single individual carrying two haplotypes. At least one heteroplasmic site was detected in 93 of 157 individuals for COI and in at least 105 of 170 individuals for CytB in the current study. Ricardo et al. (2020) reported that heteroplasmy was detected in individuals from all ten sampled locations, with an average of six heteroplasmic haplotypes per individual. Moreover, they found that some of these heteroplasmic haplotypes are shared between individuals from different locations. The heteroplasmy observed in this study suggests that new studies are needed to confirm it. Similarly, after the initial description of heteroplasmy for Bombus morio (Francoso et al., 2016), further investigation was necessary (Ricardo et al., 2020). The bee samples from near the greenhouses were determined to have high similarity with the commercially produced individuals. These results suggest that commercial colonies escape from greenhouses and can cause genetic hybridization with local populations around greenhouses. However, the natural populations collected from different native habitats far away from greenhouses constitute branches of the phylogenetic tree distinct from the commercial ones. This shows that they can still retain their genetic structure, but uncontrolled and imprudent use of commercial colonies may soon be a serious threat to the original genetic stock of B. t. dalmatinus. In areas such as Antalya, where hundreds of thousands of commercial colonies are produced every year and used in greenhouses, very serious measures must be taken. Disclosure statement The authors declare that they have no conflict of interest.
A New Freshwater Biodiversity Indicator Based on Fish Community Assemblages Biodiversity has reached a critical state. In this context, stakeholders need indicators that both provide a synthetic view of the state of biodiversity and can be used as communication tools. Using river fishes as model, we developed community indicators that aim at integrating various components of biodiversity including interactions between species and ultimately the processes influencing ecosystem functions. We developed indices at the species level based on (i) the concept of specialization directly linked to the niche theory and (ii) the concept of originality measuring the overall degree of differences between a species and all other species in the same clade. Five major types of originality indices, based on phylogeny, habitat-linked and diet-linked morphology, life history traits, and ecological niche were analyzed. In a second step, we tested the relationship between all biodiversity indices and land use as a proxy of human pressures. Fish communities showed no significant temporal trend for most of these indices, but both originality indices based on diet- and habitat- linked morphology showed a significant increase through time. From a spatial point of view, all indices clearly singled out Corsica Island as having higher average originality and specialization. Finally, we observed that the originality index based on niche traits might be used as an informative biodiversity indicator because we showed it is sensitive to different land use classes along a landscape artificialization gradient. Moreover, its response remained unchanged over two other land use classifications at the global scale and also at the regional scale. Introduction In 2002, the 188 countries that are signatories to the Convention on Biological Diversity committed themselves to "achieve by 2010 a significant reduction of the current rate of biodiversity loss at the global, regional and national level" [1,2]. Even though this target was not achieved (the new target is 2020), research in the field of biodiversity indicators has been growing during the last decade [3,4]. As biodiversity is a complex object and subject, a first step for improving conservation plans is to build indices, which are intended to synthesize and simplify data in quantitative terms. Indices vary depending on the biological level quantified, i.e. from genes to biomes. Such a variety of biodiversity levels respond to the numerous ways of examining biodiversity, as defined by the Convention of Biological Diversity [5]. As indices quantify an aspect of biodiversity, they can become useful indicators if they tell us about the impact of human pressures on biodiversity. Facing global changes, species responses are not uniform [6]. Although a few species are not negatively affected by human activity and are flourishing, many are declining or will become extinct in the next century [7]. In this sense, the evaluation of biodiversity needs to move away from a reliance on species lists and case-by-case approaches to give a more global picture of what happens for most species in an ecosystem. Up until ten years ago, all river ecosystem indicators were assessed on their hydromorphological, chemical, and biological characteristics (e.g. IBGN [8] or EPT [9]). Because of their ability to integrate environmental variability at different spatial scales, fish assemblages have been studied and new indicators of river ecosystems have been developed (e.g. IPR [10], [11]). 
Although these indicators encompass the relative importance of geographic, ecoregional, and local factors, they were developed using the reference condition of pristine ecosystems without human impacts. As Baker and King (2013) point out, there is a crucial need for new aquatic indicators based on other criteria than biotic indices or summary metrics (e.g. taxon richness, ordination scores) especially in assessment and management [12]. Here, we develop a new approach, specifically dedicated to evaluate different functional approaches of fish biodiversity. It is the first study to synthesize comparisons between a large variety of fish traits: life history traits, morphological traits linked to habitat or diet, habitat niche, and an integrative approach based on abiotic habitat specialization. Community indices consider upper biological organization levels beyond the species level. They take into account the relationship between species inside the community, sometimes explicitly, such as in trophic networks [13,14], and sometimes implicitly, such as in niche or habitat specialization approaches [15]. Even if all community indices are species-based, they incorporate more complexity than species indicators because of these interactive approaches. They thus correspond more closely to a primary objective for indicators, producing a synthetic representation of biodiversity. Indeed, these indicators should help tackle the problem of maintaining the entire community integrity despite global changes by providing decision makers with more accurate information about human impacts on a global scale. In this respect, they are closer to the steady-state perspective, a frequently mentioned policy objective. It is easy to address the functional facet of biodiversity in this way and quite popular nowadays in the ecosystem "services" context [16]. However, basic summary metrics at the community level lose valuable information and non-linear declines should be undetected with aggregate responses [17]. In function of the study objectives it may be important, especially in conservation, to analyze the dataset species by species (or see TITAN, [17]) and it is always helpful to carefully interpret the community results. Finally, the criterion to create an indicator is to build good communication tools that are easy to understand and friendly to use, adapted to the context and scale of needs [18,19]. Indicators provide information to fuel dialogue between different scientific disciplines and stakeholders involved in biodiversity conservation. However, indicators also try to reach new targets identified as extremely important for the preservation of biodiversity. Indicators in this case attempt to open a dialogue and convince people not already involved in conservation including policy makers (local to international), judges, industrials, and farmers. In this paper we aim to develop indices to better understand functional patterns in space and time of river fish communities and to evaluate their potential as biodiversity indicators for environmental policy makers. First, we quantified the spatial and temporal changes in composition of French fish communities with two different approaches: originality and specialization indices. For the originality indices, we used four sets of functional traits (habitat niche, life history, diet-linked morphology, and habitat-linked morphology) and the phylogeny to obtain five matrices for the twenty-six common fish species considered. 
We first used the metric of originality defined as the rarity of species traits to obtain scores for each species [20]. Thus, the whole contribution of species to trait community depends on its originality. More precisely, as integrative community-traits indices, we computed the average value of the originality score depending on the density of species locally present. The second approach was based on niche theory and species specialization such as it has been done for birds [21,22]. To carefully interpret the community results we also explored spatio-temporal analyses at the species level. We identified regions of low originality or specialization communities at the national scale and explored the temporal changes through nineteen years. For the first time, we explored potential congruent or mismatched patterns between different functional traits approaches. Next, we evaluated the link between these community indices and human pressures via land use. We used land use as our proxy because threats to global freshwater biodiversity are mainly due to industrial and agricultural impacts [23]. We tested the sensitivity of each of the six indices to human pressures using habitat modification data sets, and used these results to select biodiversity indicators. Finally, we discussed the choice of indicators selected by communication criteria to give a clear message for stakeholders and, especially in our case, for environmental policy makers. Fish database We worked with the database of the French National Agency for Water and Aquatic Environments (Onema), which contains records of standardized electrofishing protocols performed between 1990 and 2009 during low-flow periods (May-October). Electrofishing is considered the most effective nondestructive sampling procedure for describing fish assemblage structure [24]. Sampling protocols were defined depending on river width and depth. Streams were sampled by wading (mostly two-pass removal), while fractional sampling strategies were undertaken in larger rivers. Since the implementation of the EU Water Framework Directive's surveillance monitoring, protocols follow the recommendations of the European Committee for Standardization [25]. To compare inter-annual densities, however, only surveys performed with the same sampling protocol were selected in the whole data set. Fishes were identified to species level, counted and then released. We worked with the 26 species for which trait data were available (around 90% of the total abundance catch) ( Table 1). We extracted collecting events from Onema's fish database using two different criteria: (i) All sites regardless of temporal coverage, which yielded 5 403 sites with 1-20 years of sampling and a total of 13 076 sampling occasions (Dataset 1). (ii) Only sites with at least 8 years of data, which yielded 557 sites with 8-20 years of sampling from 1990 to 2009 and a total of 6942 sampling occasions (Dataset 2; see [26]). Trait dataset (i): Habitat use. This dataset consists of five parameters describing the habitat use of a river species: foraging habitat, reproductive habitat, position in the water column, salinity tolerance and rheophily (the ability to live in fast moving water). The information has been gathered from different sources [27,28]. (ii): Life history traits (LHT). 
The life history traits included in the study were: maximum lifespan, female age at maturity, number of spawns per year, logarithm of the maximum body length, relative fecundity (number of oocytes per gram), egg diameter, and parental care [27][28][29]. (iii): Morphological database. Fourteen morphological traits related to two different axes of the niche (diet and habitat) were used ( Figure 1, Table 2) [30][31][32]. Traits were measured from pictures collected mainly from FishBase [33,34]; for details see [35]. All traits were standardized to account for differently sized photographs and species (e.g. standard length). (iv): Phylogenetic dataset. We retrieved molecular data from three mitochondrial genes from GenBank (cytochrome b, cytochrome oxidase I and the ribosomal 16S subunit). We inferred the best evolutionary model for each gene using maximum likelihood methods implemented in PAUP* 4.0b10 [36]. The best model of molecular evolution was obtained using Modeltest based on the AIC criterion [37] (for more details see [38]). Human Pressure dataset The dataset of human pressures was provided by the European land-cover database CORINE, which classifies landscape units larger than 25 ha into one of 44 classes [39] on the basis of satellite digital images (e.g. SPOT and LANDSAT). We used the 2000 update and considered three alternative groupings of habitat classes: (i) The CORINE Land Cover (CLC) yields 5 habitat classes: "Forest", "Meadow", "Farming", "Urban", and a "Mix" (i.e. a mix between agricultural and urban habitats); (ii) the EUROWATER classification (a special variant of CLC for freshwater common to the European scale) yields 6 habitat classes with the addition of the "Intensive Urban" habitat class; and (iii) the ONEMA land use classification (a special variant of CLC and EUROWATER for freshwater common to the national scale) yields 7 habitat classes with the addition of the "Intensive Farming" class. Only the latter two classifications are used to test the reproducibility of our indicator. Here, we consider land use classification as a gradient of landscape artificialization under human pressures. Land use is a common proxy for human pressures in terrestrial communities [6,22]. The link between land use and human pressures in rivers has been reviewed at a global scale [40], but also at regional scales in North America [41,42] and Europe [43]. Marzin et al. (2013) showed a clear link between the CORINE Land Cover (CLC) dataset and different pollution and physical modifications at both local and regional scales [43]. While both types of human pressure are correlated with CLC, water quality parameters are more strongly correlated with land use than physical modifications are. Statistical Analyses All statistical analyses were performed using R 2.15.1 (R Development Core Team, 2012), and more particularly the ade4 and nlme packages [44,45]. We calculated one index for each kind of data set, thus a total of 6 indices: 4 functional originality indices, 1 phylogenetic originality index, and 1 specialization index. (i): Functional and phylogenetic originality. To characterize the functional originality of each species, we used the mean of a set of functional traits from the different datasets described above. For each dataset, a distance matrix was created using Gower's dissimilarity index to allow the treatment of various statistical types of variables when calculating distances [46].
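The distance-matrix construction, together with the clustering and originality scoring described in the next paragraph, can be sketched as a short pipeline. This is an illustrative R example with a made-up trait table, not the authors' code, and the daisy(), hclust(), as.phylo() and evol.distinct() functions are simply one possible toolchain for these steps.

```r
library(cluster)   # daisy(): Gower dissimilarity for mixed variable types
library(ape)       # as.phylo(): convert an hclust tree into a phylo object
library(picante)   # evol.distinct(): equal-splits originality scores

# Hypothetical mixed-type trait table for four species (illustration only).
traits <- data.frame(
  max_length    = c(12, 45, 30, 8),
  fecundity     = c(200, 1500, 800, 90),
  parental_care = factor(c("yes", "no", "no", "yes")),
  row.names     = c("sp1", "sp2", "sp3", "sp4")
)

d    <- daisy(traits, metric = "gower")                       # Gower distance matrix
dend <- hclust(d, method = "average")                         # UPGMA functional dendrogram
soi  <- evol.distinct(as.phylo(dend), type = "equal.splits")  # per-species originality (SOI)
soi
```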
A hierarchical clustering (the unweighted pair-group method using arithmetic averages: UPGMA) of the distance matrix produced a functional dendrogram comprising all the species. For each functional tree and the phylogenetic tree we used the procedure of Pavoine et al. [20] to estimate the biological originality of each species using the quadratic entropy of Rao [47]. Branch lengths and tree topology are jointly taken into account in the calculation of this index of originality. We computed both the Equal-split index [48] and the QE-based index [20]. The Equal-split index is more influenced by unique traits (trait states observed in a single species) than rare traits (trait states shared by a few species), whereas the reverse is true for the QE-based index. However, as both indices yielded similar results, we retained the equal-split index only, which is subsequently referred to as the Species Originality Index (SOI). When possible, we explored the sensitivity of our SOI to the addition of species to the data set [see the File S1]. (ii): Species Specialization Index. Ideally, specialization should be measured as the multi-dimensional breadth of a species' ecological niche. An integrative index of habitat specialization (Species Specialization Index, SSI) was developed for birds [21], as the coefficient of variation (standard deviation/average) in average density of a species across habitats. We tested the relevance of this index in fishes. Because ecological habitat classes were missing for several species, we used habitat traits and four abiotic variables: temperature (sum of January to June air temperatures), longitudinal gradient, log of elevation, and slope (see [13] for more details). We had to take into account the geographical bias in the data set. This bias was linked to an over-sampling of headwaters. We therefore reassigned all the sampled points into 7 habitat classes with an approximately equal number of samples in each habitat class. (iii): Community Indices. Each species can be ranked along a continuous gradient from the least to the most original or specialized species (X_1, …, X_i). Any species assemblage at time t can be characterized by the average specialization or originality taken across all individuals in the assemblage. These community-level indices are simple weighted averages, i.e. ∑(a_i,t X_i)/∑a_i,t, where a_i,t is the relative abundance of species i in the assemblage at time t and X_i is the originality or specialization of species i. In the following, CSI_t = ∑(a_i,t SSI_i)/∑a_i,t is the Community Specialization Index and COI_t = ∑(a_i,t SOI_i)/∑a_i,t the Community Originality Index at time t. We explored the temporal and spatial variation of both community indices, COI and CSI, using mixed-effects linear models with sampling site as a random effect [49,50] and Akaike Information Criteria (AIC) model selection. (Table 2. Description of functional traits related to the habitat and diet niche axes [30-32], from Schleuter et al. [35].) We also explored the relationships among the community indices through their pairwise correlations using linear models. To ensure the statistical independence of the data, spatio-temporal effects and their interactions were taken into account, and we selected the model according to its AIC. (iv): Community Indicators.
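Before the indicator analysis, the arithmetic of the species-level and community-level indices defined in (ii) and (iii) above is simple enough to illustrate directly. The R sketch below uses hypothetical densities and abundances, not values from this study: SSI is the coefficient of variation of a species' density across habitat classes, and CSI is the abundance-weighted mean of the species SSI values (a COI is obtained in exactly the same way with SOI scores in place of SSI).

```r
# Hypothetical mean densities of three species across 7 habitat classes.
dens_by_habitat <- rbind(
  sp1 = c(0.1, 0.2, 8.0, 0.0, 0.1, 0.0, 0.2),   # habitat specialist
  sp2 = c(1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1),   # habitat generalist
  sp3 = c(0.0, 0.0, 0.0, 5.5, 4.8, 0.1, 0.0)
)
ssi <- apply(dens_by_habitat, 1, function(x) sd(x) / mean(x))  # Species Specialization Index

# One assemblage sampled at time t: abundances a_i,t of the same species.
abund_t <- c(sp1 = 12, sp2 = 150, sp3 = 3)
csi_t   <- sum(abund_t * ssi[names(abund_t)]) / sum(abund_t)   # CSI_t = sum(a_i,t * SSI_i) / sum(a_i,t)
csi_t
```

The same weighted-average template applies to each of the five originality scores, yielding the four functional and one phylogenetic COI analyzed in the following sections.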
We tested the relationships between CSI and five COI (four functional and one phylogenetic originality indicators) and landscape variables using mixed-effects linear models with sampling site as a random effect. Temporal (year) and spatial effect (geographical coordinates and watersheds) with their interactions were also taken into account. Because no R-squared can be calculated with random effect, we only obtained a proxy of the R-squared with the same model without the random effect. We used the CORINE Land Cover dataset ( Figure 2) and its two variations to evaluate the reproducibility of our results and thus the sensitivity of each community index through habitat classifications. Then we studied the scale dependence of the community index response by exploring the relationship at the regional watershed scale. (i). Species Originality and Specialization Indices The four trait distance matrices can be visualized using trees ( Figure 3). Trees based on life-history traits (Figure 3a), functional niche (Figure 3b), and diet-linked morphological traits (Figure 3c) were well balanced in the sense of Blum et al. [51]. These authors defined the balance of a tree as the average balance of its nodes, "assuming that a given node is completely balanced if it splits the sample into two subsamples of equal size". At the opposite, the tree based on habitat-linked morphological traits (Figure 3d) was highly unbalanced by the European eel (Anguilla anguilla), and, to a lesser extent by the groups common bream (Abramis brama), crucian carp (Carassius sp.), ninespine stickleback (Pungitus pungitius), three spines stickleback (Gasterosteus aculeatus). Using the equal-split metric, we computed four originality indices for each species to evaluate the three functional datasets and the phylogeny ( Figure 4). As expected, A. anguilla was characterized by a high originality score for the habitat-linked morphological trait matrix (SOI = 0.81 compared to a mean of 0.24). The two other imbalanced nodes had smaller originalities (Abramis group = 0.46 and Pungitius group = 0. 37). The species specialization index ranked the European eel as the most specialist species and the common bleak, Alburnus alburnus as the most generalist species (Figure 4). At the community level, the habitat-linked morpho-COI was sensitive to the presence of the European eel and to a lesser extent to the presence of the common bream, Abramis brama, and the crucian carp, Carassius sp. We tested the sensitivity of the Species Originality Index (SOI) based on traits to the addition of species in the initial dataset. The life history traits index and the habitat-linked morphological index were strongly correlated (respectively R 2 =81, R 2 =86). The diet-linked morphological index and the niche index were less correlated (respectively R 2 = 65 and R 2 =68) and thus more sensitive to the addition of fish species in the initial dataset [see the File S1]. (ii). Community Indices: spatial and temporal patterns All statistical models retained by the AIC, with both datasets, contained the same variables: geographic coordinates, year, and their interaction, except for the Diet-COI where the watershed effect gave a better model (AIC). Because Corsica appeared to be an outlier ( Figure 5), we re-ran all analyses excluding data from this area. With Corsica excluded, we found that watershed was a better spatial effect than geographic coordinates (AIC). Corsica clearly comes out as a hotspot of fish originality and specialization ( Figure 5). 
In contrast, the Seine watershed presented the lowest originality and the most generalist fish communities. The CSI, habitat-linked morpho-COI and LHT-COI presented limited variation among sites and appeared to be ineffective at discriminating sites. In contrast, the diet-linked morpho-COI highlighted a strong originality in all small rivers, especially mountainous streams. Although the temporal effect was always retained in statistical models based on the AIC, it was not always significant. However, the Niche and both Morpho-COI increased significantly over the study years (Table 3). The CSI was strongly correlated with the LHT-COI (R2 = 0.74, F(8, 12829) = 3.43 × 10^4, P < 0.001) and the habitat-linked morpho-COI (R2 = 0.47, F(8, 12829) = 1397, P < 0.001), but weakly with the niche-COI (R2 = 0.11, F(8, 12829) = 208, P < 0.001), and not with the diet-linked morpho-COI (R2 = 0.04, F(8, 12829) = 65, P < 0.001) or the phylo-COI (R2 = 0.08, F(8, 12829) = 149, P < 0.001). It is important to note that the level of specialization measured here is more relevant to describing the fish life history traits component than the habitat niche component. The sensitivity to human pressures of the six community indices was evaluated with respect to land use data and two variations of CORINE Land Cover (Table 4). All indices correlated with land use (Tables 4-6), but with some variation. For example, some indices were sensitive to the different human pressures (farming or urban) represented here by an artificialization gradient ( Figure 6). In contrast, the CSI was significantly higher for urban areas than for agricultural or natural habitats ( Figure 6). We used two variations of CORINE Land Cover (ONEMA and EUROWATER) to estimate the reproducibility of the community indices as a function of the arbitrary habitat classifications [Tables 4-6, see the File S2], and only one COI was robust to the effect of habitat classifications: the Niche-COI (Tables 4-6). The response of this latter index was also significantly sensitive to the different types of human pressures, with a consistent behavior at national and regional scales (Figure 7). Within each watershed and over all watersheds, the relationship between human pressures and the Niche-COI is negative where it is expected to be negative (e.g. under human pressures such as farmland and urban habitat) and positive where it is expected to be positive (e.g. under natural habitat). Discussion The aim of this study was to develop new functional indices for river fish and to evaluate their potential application as functional biodiversity indicators. For the first time in fish communities, we examine spatio-temporal patterns of six functional facets of biodiversity relying on two different theoretical approaches: specialization and originality. We identified common conservation priorities but also spatial mismatching depending on the trait considered. Then, we linked them with human land use pressures and identified the community functional originality index based on niche traits as the most likely to become a functional biodiversity indicator. Its sensitivity to the nature and intensity of human disturbance, represented here by an artificialization gradient, at both regional and national scales, results in a simple message to communicate to policy makers and biodiversity managers. I. Community Indices There is a growing consensus that functional diversity based on species traits is a better predictor of ecosystem functioning than species number per se [53].
Species richness is currently the most used biodiversity index (and indicator) but it is highly scale-dependent, with local increases that are often accompanied by regional or global decreases and increases in between-site similarity [54]. Particularly, fish species richness tends to increase from upstream to downstream [55]; and the upstream part of many French rivers sustain only a few fish species (< 5 fish species, [56]). Species richness is thus an inadequate surrogate in the context of ecosystem function unlike community traits approaches, which appear more and more relevant in the literature especially to examine ecosystem integrity [16,22,57]. Community-trait indices take into account the species present in the area considered, species being grouped depending on their ecological or phylogenetic affinity. These indices compute the average value for a trait or character, depending on the frequency of species locally present. So, we did not consider intra-specific variability, which sometimes represents a significant proportion of the variability and complex spatio-temporal dynamic [58,59]. Even though specialization community indices seem to give the same message with presence/absence data [60], it is not always the case when a process or a function is measured using functional traits [19]. Moreover one of the most interesting points to use common species is based on the assumption that abundance plays an important role in ecosystem functioning [57,61]. Community Specialization Index (CSI) is a different approach than Community Originality Index (COI) approaches. If CSI is not clear on the underlying mechanisms explaining the precise ecosystem function, its well-known power comes from its holistic habitat approach. The central focus of CSI is not the species feature but its interaction with the environment by the habitat approach. This statement is closer to the Grinnell niche theory approach than the Hutchinson one [62]. And thus, in the CSI approach, the crucial point is the relevance of habitat description, not the species traits data set. On the other side, with the COI we study the distinctiveness of precise species traits and lineages, and thus we postulated that trait variation among species variation relates to functional differences in the ecosystem, which allows an interpretation in terms of ecosystem function or "services". The set of traits selected is a crucial step toward this goal, especially if we want the dynamics of the indices to reflect ecosystem function [57]. Here, we worked only with traits having a demonstrated functional role in fish biology. For example, morphological traits such as the mouth position or the length of barbell are linked to the diet and food acquisition [29,35]. Because our originality index is based on distance metrics we verified that it was not species richness dependant of the initial dataset. The life history traits index and the habitat-linked morphological index are less sensitive (R 2 >80, see S1) than the diet-linked morphological index and the niche index (respectively R 2 = 65 and 68, see S1). Moreover, some community indices could be especially sensitive to one or a few species with extreme values of originality. In this study, it was the case with the European eel, which disproportionately increased the original community indices based on phylogeny and on habitat-linked morphology. This is the main purpose of this index, to weight unique species. 
However, because the European eel is classified as critically endangered at the national and global scales by the IUCN, these originality indices also align, in this particular case, with the red-list species indicator. Moreover, the European eel is a patrimonial species: there is a strong cultural heritage in France associated with this species, for its fishing and cooking and thus for its taste, but also for its unique form and shape. For this last aspect, which relies on human perception, an originality index based on morphology could offer a common avenue for "objectively" quantifying, across all species, the arguments developed by naturalists trying to preserve the unique forms and shapes that have emerged on Earth. II. Spatio-temporal patterns We found that, for the river fish communities, mapping each diversity component separately reveals partially congruent patterns between functional or phylogenetic originality and specialization. All indices highlight Corsica, and to a lesser extent the Channel watersheds, as hotspots of originality and specialization. In contrast, the Seine watershed presents a less original and specialized fish community. For all the other watersheds, the different functional and phylogenetic COI and the CSI are not congruent. For example, the diet-linked and habitat-linked morphological COI present completely different patterns. The former highlights a strong originality in all small rivers, especially mountainous streams, whereas the latter does not present any strong variation pattern at all, except for Corsica Island. These common patterns suggest that species occurring locally may be derived from regional species pools with similar biogeographical and evolutionary histories [63,64]. Moreover, for a given regional pool, species may respond to environmental gradients in different ways, affecting the spatial distribution of the different biodiversity components and generating a spatial mismatch between the functional and phylogenetic COI and the CSI [64,65]. These results challenge the use of a single component as a surrogate for the others, and stress the need first to understand the different processes underlying each index and second to adopt a more integrative approach for conservation. One option to deal with the different messages given by the functional properties of communities and the resulting set of measurements is to provide a hierarchy of their meanings depending on the context and perspectives, or, more reasonably, to use only the common patterns. A temporal effect was detected for three originality community indices (Table 3). Both morphological indices and the Niche community index significantly increased over time. This temporal dynamic may result from a global increase in total species abundance, which has been shown (t = 5.09; p < 0.001), and thus may reflect a global improvement of the entire river ecosystem. Indeed, global water quality has improved compared to the last century thanks to significant efforts to decrease organic substances [26,66]. Moreover, fish populations in Europe are still re-colonizing since the last glaciations and some species are expected to extend their geographical range [67]. More precisely, each index is most influenced by the population dynamics of a few species presenting a high originality value.
The European eel has a very high originality value, especially for both the habitat-linked morphological and niche indices, even though this species is declining (t = -6.53; p < 0.001), which implies a substantial compensating increase by other species. For the niche-COI, the global increase could be mainly linked with the population expansion of the European perch, northern pike, and minnow. The northern pike is very popular with anglers, favoring its introduction, which may have a positive impact on its population dynamics [26]. In the case of the diet-linked morphological COI, the increase could be explained by three increasing species: the three-spined stickleback, the common nase, and again the minnow. In the case of the habitat-linked morphological COI, the temporal increase could be mainly linked with the increase of two introduced species: the crucian carp and the pumpkinseed sunfish. Introduced species may increase faster than native species due to their rapid spread and their repeated introductions (accidental or deliberate). III. Community Indicators Although the originality and specialization approaches are completely different in their mathematical calculations, they describe complementary components of the functional properties of communities, with similar expectations for their roles in ecosystem services. It has been theoretically and experimentally shown that the alteration of biodiversity disrupts ecological functions performed by species assemblages [68], and we know that species niche partitioning is fundamental to ecosystem properties [69]. Thus, the more specialist or original species we lose, the more irreplaceable functions we lose in the ecosystem [6,70]. The theoretical background underlying the link between COI, CSI, and ecosystem functioning is growing in the literature, but this does not yet mean that both types of indices are relevant as functional biodiversity indicators. Community indicators have to be sensitive to anthropogenic pressures and give a clear and simple message to be technically and practically used by the targeted audience. Thus, the interpretation of indicators has to be as simple as possible, and some communication qualities have to be accounted for. An indicator that is well recognized by all biodiversity stakeholders is more likely to be used in the future. According to the Millennium Ecosystem Assessment [17], one of the most important direct drivers of biodiversity loss and ecosystem service changes is land use modification, including the physical modification of rivers. In this study we build innovative indices for river fish communities in order to open the way for a new generation of indicators based on traits or niches linked with ecosystem functions. We compared each of our community indices with the land use dataset ( Figure 6). Interestingly, the CSI presents a very high score in urban areas. Fish communities are composed of specialist species in this artificialized habitat, probably because the environmental filter is very strong and species need to be specialists of this disturbed area, such as the black bullhead (Ameiurus melas), an invasive species which has a high SSI. This pattern of urban specialist species in fish seems to be similar to that in birds, with urban specialist species such as pigeons (Columba livia) or house sparrows (Passer domesticus) [21]. One COI appears particularly relevant to become a functional indicator of river fish communities: the COI based on species niche.
This functional COI is sensitive to the different kinds of human disturbance with a simple interpretation: higher disturbance correlates with lower indicator values. Indeed, when we sorted land use along an artificialization gradient from natural to farming to urban areas, we observed a decrease in the niche-COI ( Figure 6). Moreover, the three different variations of the CORINE Land Cover data set give exactly the same results for the niche-COI, and thus we can be confident in its reproducibility across classification criteria. Finally, we evaluated its sensitivity to spatial scale. Is the pattern observed at the national scale still present at the regional scale? We observed a consistent pattern over the different watersheds, with the exception of the Mediterranean region (Figure 7), which may be due to the sample size. More interestingly, but less useful for an indicator, it may be due to a local-scale dependence or an eco-regional dependence. Indeed, the Mediterranean watershed is very small and lacks any large river. In addition, all the urban areas are concentrated along the coast in this part of the Mediterranean eco-region. Further investigations are needed to confirm the context of use of this potential bio-indicator. Moreover, the Niche-COI as a functional biodiversity indicator encompasses species indicators like the CBD headline indicator "trend of selected species" because it considers a complete ecological group with its functions and common dynamics. As a result, it carries more significant ecological information, from which expectations and objectives for biodiversity stakeholders can be derived. Originality indices have already been used as indicators in biodiversity conservation contexts. Isaac et al. [71] built one called EDGE (Evolutionarily Distinct and Globally Endangered) based on the phylogenetic originality and conservation status of species. At the community level, Mouillot et al. [70] have suggested functional and phylogenetic COI to evaluate conservation action areas. Because interspecific competition is more intense among species sharing common traits, due to the limiting similarity principle [72], they expected that in protected areas competition might drive the better colonization or persistence of the most original species because of niche complementarity. We believe that even if one of the goals of this study was to develop functional biodiversity indicators for environmental policy makers, the Niche-COI could also be used at the scale of conservation reserves and may be used by managers of protected areas. Finally, we have to keep in mind that biodiversity indicators help to prioritize conservation actions to conserve ecological functions and, ultimately, ecosystem "services". However, evaluation and measurement alone are not sufficient to stop biodiversity loss; human pressures must also be limited. Supporting Information File S1. Complementary analysis: Sensitivity of the SOI to the number of species. Table S1, List of the studied species. (DOC) File S2. Complete sensitivity analysis results of community indices to human pressures. Table S2, Complete results of the relation between community indices and land use. (PDF)
Characterization of a Novel Esterase Rv0045c from Mycobacterium tuberculosis Background It was proposed that there are at least 250 enzymes in M. tuberculosis involved in lipid metabolism. Rv0045c was predicted to be a hydrolase by amino acid sequence similarity, although its precise biochemical characterization and function remained to be defined. Methodology/Principal Findings We expressed the Rv0045c protein to high levels in E. coli and purified the protein to high purity. We confirmed that the prepared protein was the Rv0045c protein by mass spectrometry analysis. Circular dichroism spectroscopy analysis showed that the protein possessed abundant β-sheet secondary structure, and confirmed that its conformation was stable in the range pH 6.0–10.0 and at temperatures ≤40°C. Enzyme activity analysis indicated that the Rv0045c protein could efficiently hydrolyze short chain p-nitrophenyl esters (C2–C8), and its suitable substrate was p-nitrophenyl caproate (C6) with optimal catalytic conditions of 39°C and pH 8.0. Conclusions/Significance Our results demonstrated that the Rv0045c protein is a novel esterase. These experiments will be helpful in understanding ester/lipid metabolism related to M. tuberculosis. Introduction Mycobacterium tuberculosis (M. tuberculosis), firstly discovered by Robert Koch [1], is a pathogenic species and the causative agent of most tuberculosis [2]. The World Health Organization (WHO) has recognized the global threat imposed by M. tuberculosis, and statistics show that about one-third of the world's population has been infected. It was reported by the WHO that the increasing rate of new clinical cases was 8 million each year, with at least 3 million people deaths [3,4,5]. M. tuberculosis has an unusual, waxy coating on the cell surface (primarily mycolic acid), which highlights that there must be a large number of enzymes involved in lipid metabolism. In 1998, the whole genome of M. tuberculosis H37Rv strain was sequenced by the Sanger Center and the Institut Pasteur, showing at least 250 enzymes related to lipid metabolism including extracellular secreted enzymes, integrated cell wall enzymes and intracellular esterases/lipases, compared with about 50 enzymes in E. coli [6,7]. The genomic organization and gene functionality of M. tuberculosis are invaluable for understanding the slowly growing pathogen. Mycobacterial genes that are involved in lipid metabolism, cell division chromosomal partitioning, and secretion are more likely to be required for survival in mice [8,9]. Lamichhane and colleagues detected 31 M. tuberculosis genes that were found to be required for in vivo survival in mouse lungs. Mutation of six of the Mycobacterial membrane protein (mmpL) family genes severely compromised the ability of the respective mutants to multiply in mouse lungs [9]. In 2007, a M. tuberculosis CDC1551 (or Rv2224c of H37Rv) gene, MT2282, was identified as a virulence gene belonging to the microbial esterase/lipase family with an active site consensus sequence of G-X-S-X-G. In fact, the esterase was a cell wall-associated carboxyl esterase rather than a protease as initially annotated. Further research found that the MT2282 esterase was required for bacterial survival in mice and full virulence of M. tuberculosis [10]. The Rv0045c protein is a putative hydrolase, probably involved in ester/lipid metabolism of M. tuberculosis. 
Alignment among amino acid sequences showed that the Rv0045c protein shares little amino acid sequence similarity with members of the esterase/lipase family identified in Bacteria, Archaea, Eukaryotes and some viruses [11], such as the Aes acetyl-esterase from E. coli [12] and the mosquito carboxylesterase Estα2¹ (A2) [13]. Here, we experimentally characterized the Rv0045c protein via protein expression, purification, biochemical characterization and enzyme activity analysis, and finally demonstrated that Rv0045c is a novel esterase in M. tuberculosis. Expression and purification of the Rv0045c protein In order to allow easy purification and to attenuate the effect of a large tag on the biological activity of the Rv0045c protein, a 6×His-tag was chosen and added to its N-terminus. The fusion protein was overexpressed at 37°C and induced with 1 mM IPTG. SDS-PAGE analysis showed a major protein band of the expected 35.5 kDa size, but the recombinant Rv0045c protein was in the form of inclusion bodies (data not shown). To make purification easy and to maintain the biological activity of the recombinant protein, the expression conditions were optimized so that the major fraction was obtained as soluble protein by induction with 0.3 mM IPTG at 16°C (Fig. 1, lane 3). First, we purified the soluble protein from the supernatant using Ni²⁺-affinity chromatography (Fig. 1, lanes 5 to 8). Subsequently, the eluted protein was concentrated and loaded onto an anion exchange chromatography column (Fig. 2A) and a cation exchange chromatography column (Fig. 2B). Finally, the protein was further purified through gel filtration chromatography to >98% purity (Fig. 2C). MALDI-TOF mass spectrometry analysis of the Rv0045c protein We analyzed the purified Rv0045c protein by mass spectrometry. The MALDI-TOF MS spectrum of the digested protein is shown in Fig. 3. The peptide mass fingerprint (PMF) of the protein was obtained and submitted to Mascot. Only the NP_214559 protein from M. tuberculosis was returned, with a score of 112. These results provided convincing evidence that the purified protein is the Rv0045c (NP_214559) protein from M. tuberculosis. Circular dichroism spectroscopy analysis of the Rv0045c protein To gain insight into the secondary structural elements of the Rv0045c protein, circular dichroism (CD) spectra were collected in the wavelength range from 240 to 190 nm at room temperature (25°C) over the pH range 2.0–12.0 (with an interval of 1.0, except pH 5.0, at which the protein precipitates, presumably because this pH is too close to its pI). The curves converged in the range pH 6.0–10.0, but were distorted and disordered at extreme pH (≤pH 4.0 and ≥pH 11.0). Near physiological conditions (at pH 7.0 and pH 8.0) the protein was much more stable, and the negative trough at 216 nm with a crossover at 195 nm is the characteristic feature of β-sheet secondary structure. The native state of the protein was estimated to contain 11–14% α-helix, 54–60% β-sheet, 4–8% turn and 24–26% random coil, measured according to the method of Yang and colleagues [14]. The high β-sheet content suggested that the Rv0045c protein possesses abundant β-sheet secondary structure, which is in accordance with the α/β-hydrolase fold [15] and implies that the Rv0045c protein may fit the description of the α/β-hydrolase fold given by Nardini and colleagues [16].
In the ranges pH 2.0–4.0 and pH 11.0–12.0 the structure of the protein was denatured, showing conformations quite different from that at pH 7.0 (as shown in Fig. 4A). In order to assess the thermal stability of the protein, CD spectra were also collected at various temperatures (ranging from 10°C to 70°C, with an interval of 10°C) with the pH fixed at 7.5. The conformation of the Rv0045c protein was stable at temperatures ≤40°C and the curves converged. The proportions of α-helix and β-sheet secondary structure at 30°C and 40°C (at 30°C: α = 10.0%, β = 61.3%, turn = 4.0%; at 40°C: α = 11.0%, β = 58.1%, turn = 7.6%) were similar to those at pH 7.0 and pH 8.0 at room temperature (25°C). When the temperature was lowered to ≤20°C, the folding of the protein was consistent with inactivity (data not shown), although the percentages of α-helix and turn (at 20°C: α = 16.1%, turn = 14.8%; at 10°C: α = 20.5%, turn = 21.1%) increased notably. It has been reported that the active site is fully available for substrate binding only when the protein is in the active, open conformation [16]; hence the Rv0045c protein appears to adopt an inactive, closed conformation at low temperatures, causing the enzyme activity to be extremely low. In contrast, when the temperature was increased to ≥50°C, the α-helical secondary structure was lost (e.g. α = 4.7% at 60°C and α = 4.8% at 70°C) and the curves began to deviate from those for temperatures ≤40°C (as shown in Fig. 4B), showing that the structure of the protein had been partially or largely denatured. Enzyme activity analysis of the Rv0045c protein Based on the above results, and in order to test whether the Rv0045c protein has esterase activity, we experimentally analyzed the enzyme activity of the Rv0045c protein using p-nitrophenyl derivatives (p-nitrophenyl acetate (C2), butyrate (C4), caproate (C6), caprylate (C8), laurate (C12), myristate (C14) and palmitate (C16)) as substrates, according to previously described methods [11,17,18]. As shown in Table 1, at pH 7.0 and 37°C the Rv0045c protein could hydrolyze a wide range of p-nitrophenyl derivatives (C2–C14) as substrates, of which p-nitrophenyl caproate (C6) was the most effectively hydrolyzed. The substrates p-nitrophenyl acetate (C2) and p-nitrophenyl myristate (C14) were also visibly hydrolyzed, with more than 50% of the maximal activity. In contrast, no enzyme activity towards the longer p-nitrophenyl ester (C16) was detected (Table 1). M. tuberculosis is known to present a certain degree of resistance to aberrant pH. Activity of the Rv0045c protein was therefore examined over a broad pH range from pH 2.0 to pH 12.0. No or poor activity was detected at ≤pH 4.0 and ≥pH 11.0 (data not shown). Based on the CD spectroscopy data, the enzyme displays a conformation-dependent esterase activity, with activity declining dramatically or almost completely lost at ≤pH 4.0 and ≥pH 11.0 as a result of the enzyme becoming denatured. Activity was also too low to be detected at pH 9.0 and pH 10.0, because the substrates spontaneously decomposed, causing a high background (data not shown). To determine the dynamic activity of the enzyme, we tested the activity using p-nitrophenyl caproate (C6) as substrate at mild pH (pH 6.0–8.0) over the temperature range around body temperature (36°C to 40°C). As shown in Fig. 5, the highest enzyme activity at pH 6.0 occurred at 37°C.
At both pH 7.0 and pH 8.0, however, the optimal temperature for enzyme activity was shown to be 39°C. In addition, both the overall activity and the highest activity at the optimal temperature increased rapidly and dramatically with pH, suggesting that the Rv0045c protein exhibits pH-dependent activity in the range from pH 6.0 to 8.0, which may be explained by the electrostatic potential distribution on the enzyme surface at alkaline pH making substrate binding and/or hydrolysis more effective [19]. Discussion Esterases and lipases are types of hydrolases that are widely distributed from prokaryotes to eukaryotes and are involved in lipid metabolism. As previously reported, M. tuberculosis is understood to contain more than 250 enzymes related to ester/lipid metabolism [6,7]. In this study, we confirmed that the M. tuberculosis Rv0045c protein is a novel esterase. Among other esterases in the α/β-hydrolase fold family, the two M. tuberculosis esterases Rv3487c [20] and Rv1399c [21], both of which have been functionally characterized, share no obvious sequence identity with the Rv0045c protein in a multiple sequence alignment calculated using ClustalW (data not shown). All esterases in the α/β-hydrolase fold family have a nucleophile-histidine-acid catalytic triad that has evolved to operate efficiently on substrates with diverse chemical compositions or physicochemical properties [22,23,24]. Alignment among amino acid sequences showed that the active-site G-X-S-X-G sequence motif of esterases is highly conserved (data not shown), and that the main catalytic residues (Ser89, Asp113, Ser206, His234) of the esterase ybfF [25] are also well conserved in the Rv0045c protein sequence. However, the Rv0045c protein shares only 23% amino acid sequence identity with ybfF. Additionally, the residues around the active site in ybfF are quite divergent from those in the Rv0045c protein, suggesting that the Rv0045c protein has distinct substrate specificity and catalytic properties that set it apart from other esterases. As with the proteins Rv3487c [20] and Rv1399c [21], the Rv0045c protein can efficiently catalyze short-chain synthetic substrates (C2–C8), but it can also hydrolyze p-nitrophenyl myristate (C14) with more than 50% of the maximal relative activity (Table 1). Being the causative agent of most cases of tuberculosis, M. tuberculosis infects the lungs of the mammalian respiratory system and can persist in the human body at normal temperature (36°C–37°C) and pH (pH 7.3–7.4) for many decades. Thus, p-nitrophenyl caproate (C6) was used to determine the dynamic activity of the enzyme at mild pH (pH 6.0–8.0) over the temperature range from 36°C to 40°C, which is around body temperature. Compared with the optimum reaction temperature of 30°C for Vlip509 [26], a new esterase from the strictly marine bacterium Vibrio sp. GMD509, the optimal temperature for Rv0045c activity turned out to be 37°C at pH 6.0 and 39°C at both pH 7.0 and pH 8.0 (Fig. 5). This is probably because M. tuberculosis commonly lives in the bodies of humans or animals, whereas the marine bacterium Vibrio sp. GMD509 parasitizes the eggs of the sea hare, a cold-blooded animal living at relatively low temperatures. It has also been observed that the average and the highest activity of the enzyme increased rapidly and dramatically with increasing pH (Fig.
5), indicating that the metabolism of esters/lipids in this pathogen is more active when conditions become less favorable, especially more basic, and further suggesting that M. tuberculosis may become more pathogenic at alkaline pH. M. tuberculosis is pathogenic to humans and to some extent shows resistance to aberrant pH. In our research, enzyme activities were determined over a broad pH spectrum (pH 2.0–12.0), yet little or no activity was detectable at extreme pH (≤pH 4.0 and ≥pH 11.0, data not shown). Results from the CD spectroscopy analysis also indicated that, at extreme pH (≤pH 4.0 and ≥pH 11.0), the enzyme is partially or almost completely denatured, with the α-helical secondary structure affected in particular. These data suggest that the enzyme activity of the Rv0045c protein is conformation-dependent. Data from the CD spectroscopy analysis showed that the Rv0045c protein is rich in β-sheet secondary structure, indicating that the enzyme possesses a very stable and substantial β-sheet core that helps to stabilize the architecture of the enzyme, thus helping the pathogen to survive in harsh environments. However, at extreme pH (≤pH 4.0 and ≥pH 11.0) the α-helical secondary structure of the enzyme was mostly denatured, and simultaneously the activity of the enzyme was not detectable. Based on the above evidence, it can be deduced that the β-sheet comprises the skeleton and backbone of the enzyme, while the α-helices and other secondary structure elements, e.g. turns, are required for the catalytic reaction. In addition, the conformation of the enzyme is very stable at temperatures ≤40°C, and the thermal denaturing temperature of the Rv0045c protein was determined to be 50°C, information that could be utilized in dry heat sterilization to deactivate the enzyme and possibly even the pathogen. The Rv0045c protein is just one of the hydrolases involved in ester/lipid metabolism in M. tuberculosis, many of which have not yet been identified or studied. Biochemical characterization and functional analysis of those undefined esterase/lipase members should help to reveal the mechanism of ester/lipid metabolism of M. tuberculosis. In order to illustrate the relationship between the tertiary structure and function of the Rv0045c esterase, and to explain the molecular mechanism and principles by which the Rv0045c protein participates in hydrolyzing esters, crystallographic analysis of the protein is in progress. Protein expression Using the whole genome of M. tuberculosis as template, the Rv0045c gene was amplified by a standard PCR procedure with primers R1 (5′-CGCGGATCCCTATCTGACGACGAACTGACC-3′, containing a BamH I site) and R2 (5′-TCCGCTCGAGTCAGCGTGTGTCGAGCACCCC-3′, containing a Xho I site), and subcloned into the BamH I and Xho I sites of the pET28a vector (Novagen), which provides an N-terminal 6×His-tag. The Rv0045c protein was overexpressed in the E. coli BL21 (DE3) strain (Novagen) as a fusion protein with the 6×His-tag. Briefly, E. coli BL21 (DE3) carrying the Rv0045c gene was grown in LB medium with 50 μg/mL kanamycin at 37°C until the OD600 reached 0.6–0.8, and then induced with 0.3 mM IPTG at 16°C for 20 h. Protein expression was verified by SDS-PAGE analysis. Protein purification For 1 L of culture, the cells harvested by centrifugation were homogenized in 80 mL of buffer A (20 mM Tris, 150 mM NaCl, 10 mM imidazole, pH 7.5) and disrupted by ultrasonication (400 W, 3 s on/3 s off, 4°C).
Cell debris was removed by centrifugation at 15,000 rpm for 30 min at 4°C. The collected supernatant was loaded onto Ni Sepharose™ 6 Fast Flow resin (GE Healthcare), which had been pre-equilibrated with buffer A. The resin was washed with buffer B (20 mM Tris, 150 mM NaCl, 20 mM imidazole, pH 7.5), and the target protein was eluted sequentially with buffer C (20 mM Tris, 150 mM NaCl, 200 mM imidazole, pH 7.5) and buffer D (20 mM Tris, 150 mM NaCl, 500 mM imidazole, pH 7.5). The collected fractions were verified by SDS-PAGE analysis. The target protein was dialyzed against buffer E (20 mM Tris, pH 7.5) at 4°C to remove the imidazole and salt, and then concentrated using a 10 kDa Centricon concentrator (Millipore). The concentrated protein was successively applied to Resource Q and Resource S 1 mL columns (GE Healthcare), and the protein was eluted from the column using buffer E with a gradient of NaCl from 0 M to 2 M. Finally, the protein was loaded onto a Superdex 75 10/300 GL column (GE Healthcare) in buffer F (10 mM Tris, 150 mM NaCl, 2 mM DTT, pH 7.5). All peak fractions were collected, and the protein purity was analyzed by SDS-PAGE. Mass spectrometry analysis The gel strip was removed from the SDS-PAGE gel, cut into small pieces, and washed twice with 100 μL of 25 mM ammonium bicarbonate (pH 8.0) containing 50% acetonitrile for 15 min with vortexing. Gel pieces were dehydrated with 100 μL of acetonitrile and completely dried in a Speed-Vac before tryptic digestion. The volume of the dried gel was estimated and three volumes of 12.5 ng/μL trypsin (Promega) in 25 mM NH4HCO3 (freshly diluted) were added. The digestion was performed at 30°C overnight, and then the mixture was sonicated for 10 min and centrifuged. The supernatant was collected for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis. For MALDI-TOF MS analysis, 1 μL of the digested sample was spotted onto the MALDI target plate, coated with 1 μL of matrix solution (5 mg/mL α-cyano-4-hydroxycinnamic acid in 50% (v/v) acetonitrile and 0.1% (w/v) trifluoroacetic acid), and then left to air-dry. Mass data were analysed with a prOTOF™ 2000 mass spectrometer interfaced with TOFWorks™ software (PerkinElmer/SCIEX). In this study, a 2-point external calibration of the prOTOF instrument was performed before acquiring the spectra from samples. Protein identification was performed by searching against bacteria in the NCBI non-redundant database using the Mascot search engine (Matrix Science) with the following parameters: monoisotopic masses; mass accuracy, 0.1 Da; missed cleavages, 1. Circular dichroism spectroscopy analysis For circular dichroism (CD) spectroscopy analysis, purified 6×His N-terminally tagged Rv0045c protein (0.35 mg/mL) was solubilized in 20 mM Tris (pH 7.5) and measured at room temperature at different pH values (pH 2.0–12.0), and at pH 7.5 at different temperatures (10°C–70°C). UV CD spectra between 190 and 250 nm were collected on a JASCO 715 spectropolarimeter (JASCO) using 1 mm quartz cuvettes containing 200 μL of the protein solutions, with a data pitch of 0.1 nm, a bandwidth of 2.0 nm and a scanning speed of 50 nm/min. Every sample was measured in triplicate, and the data were analyzed using the Jasco JWSSE 32 secondary structure estimation software.
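To make the secondary-structure estimation step more concrete, the sketch below illustrates one common way such fractions can be derived from a far-UV CD spectrum: the measured spectrum is decomposed as a non-negative combination of reference basis spectra for α-helix, β-sheet, turn and random coil. This is only an illustration of the general principle, not the algorithm implemented in the JASCO software, and the basis arrays are placeholders that would have to be replaced with real reference spectra (e.g. from Yang et al.).

```python
import numpy as np
from scipy.optimize import nnls

# Wavelength grid (nm) matching the measured range of the spectrum.
wavelengths = np.arange(190, 251, 1)

# Placeholder reference spectra for the four secondary-structure classes.
# Real reference sets must be substituted for these dummy arrays.
rng = np.random.default_rng(1)
basis = {
    "alpha-helix": rng.random(len(wavelengths)),
    "beta-sheet": rng.random(len(wavelengths)),
    "turn": rng.random(len(wavelengths)),
    "random coil": rng.random(len(wavelengths)),
}

def estimate_fractions(measured_spectrum, basis):
    """Fit the measured CD spectrum as a non-negative mix of basis spectra
    and normalize the weights so the fractions sum to 1."""
    A = np.column_stack(list(basis.values()))          # (n_wavelengths, 4)
    weights, _residual = nnls(A, measured_spectrum)    # non-negative least squares
    fractions = weights / weights.sum()
    return dict(zip(basis.keys(), fractions))

# Synthetic "measured" spectrum built from known fractions; the fit should recover them.
true_mix = (0.12 * basis["alpha-helix"] + 0.57 * basis["beta-sheet"]
            + 0.06 * basis["turn"] + 0.25 * basis["random coil"])
print(estimate_fractions(true_mix, basis))
```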
Enzyme activity analysis Enzyme activity of the Rv0045c protein was measured as previously described [11,17,18] using seven substrates: p-nitrophenyl acetate (C2), butyrate (C4), caproate (C6), caprylate (C8), laurate (C12), myristate (C14) and palmitate (C16). The activities were determined by applying 10 mM p-nitrophenyl esters (C2–C16) as substrates at different pH values (pH 6.0–9.0) and different temperatures (36°C–40°C). p-Nitrophenyl caproate (C6) was also used to estimate the dynamic activity of the enzyme from pH 6.0 to 8.0 at mild temperatures (36°C–40°C). For each standard assay, 50 μL of 10 mM sodium taurocholate, 20 μL of 10 mM substrate (dissolved in chloroform), and 420 μL of Britton-Robinson buffer at different pH values (pH 6.0–9.0) were mixed in separate 1.5 mL Eppendorf tubes, and then 10 μL of protein (0.2 mg/mL) was added to each tube. After incubation at different temperatures for 15 min, the reaction was terminated by adding 700 μL of 5:2 (v/v) acetone/hexane solution. The mixture was then centrifuged at 4,600 × g for 2.5 min at room temperature and the OD405 of the lower phase was measured. In parallel, three controls were prepared: one in which the Rv0045c protein was added after the acetone/hexane solution, to measure instantaneous hydrolysis; another in which the substrate solution was replaced with chloroform; and a third in which the Rv0045c protein was replaced with 20 mM Tris (pH 7.5). Five replicate tests were performed for every substrate at each pH and temperature.
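As a worked illustration of how such assay readings can be turned into the relative activities reported in Table 1, the short sketch below subtracts control absorbances from the OD405 readings and expresses each substrate's activity as a percentage of the highest value. The numbers are hypothetical and the background-correction scheme is an assumption; it is meant only to show the arithmetic, not to reproduce the authors' exact data processing.

```python
# Hypothetical OD405 readings for three p-nitrophenyl substrates (illustrative only).
readings = {
    "C2": {"assay": 0.62, "no_enzyme": 0.08, "instant_hydrolysis": 0.10},
    "C6": {"assay": 1.15, "no_enzyme": 0.07, "instant_hydrolysis": 0.09},
    "C16": {"assay": 0.09, "no_enzyme": 0.08, "instant_hydrolysis": 0.09},
}

def corrected_activity(r):
    """Subtract the larger of the two backgrounds (no-enzyme control or
    instantaneous-hydrolysis control) from the assay reading; clamp at zero."""
    background = max(r["no_enzyme"], r["instant_hydrolysis"])
    return max(r["assay"] - background, 0.0)

corrected = {sub: corrected_activity(r) for sub, r in readings.items()}
top = max(corrected.values())

# Relative activity (%) with respect to the best substrate, as tabulated in Table 1.
for sub, value in corrected.items():
    print(f"{sub}: {100.0 * value / top:.1f}% of maximal activity")
```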
2014-10-01T00:00:00.000Z
2010-10-01T00:00:00.000
{ "year": 2010, "sha1": "9bcdb67ca504cdbd1767c56326ea43183da2c6f7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0013143&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9bcdb67ca504cdbd1767c56326ea43183da2c6f7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
259677431
pes2o/s2orc
v3-fos-license
Second row of eyelashes with lower extremity edema HISTORY A 71-year-old male presented for a full-body skin examination due to a history of basal cell and squamous cell carcinoma. Physical examination revealed a second row of eyelashes emerging from the meibomian glands on the bilateral upper and lower eyelids, which the patient stated had been present his entire life (Fig 1). No trichiasis was noted. Further inspection showed pitting edema with mild fibrosis in the bilateral lower extremities (Fig 2). The patient stated that his daughter had lower extremity lymphedema and distichiasis as well. A. Ocular Cicatricial Pemphigoid – Incorrect. This is a mucous membrane pemphigoid characterized by bilateral conjunctivitis. Distichiasis in this condition is acquired and is not associated with lymphedema. B. LDS – Correct. The patient's presentation is consistent with LDS (lymphedema-distichiasis syndrome). LDS is a rare condition in which patients develop lower extremity lymphedema and a second row of eyelashes emerging from what would have been the meibomian glands (distichiasis). 1 Distichiasis affects 94% to 100% of patients and is often present at birth. 2,3 While it can be asymptomatic, in 75% of cases it causes chronic keratitis, conjunctivitis, and photophobia. 2 Lymphedema involving the lower extremities and external genitals can affect up to 80% of patients and typically develops during late childhood or puberty. 1 Diagnosis can be made clinically or through genetic testing for a FOXC2 mutation. C. PRS (Pierre Robin sequence) – Incorrect. While PRS can rarely be associated with LDS (approximately 0.5% of PRS cases), the two conditions more often present separately. Classic findings, including micrognathia, cleft palate, and tongue displacement, were not present. D. Milroy disease – Incorrect. It can also present with lower extremity lymphedema and atypical eyelashes, although the lymphedema is usually present at or near birth. C. FLT4 – Incorrect. FLT4 mutations are associated with Milroy disease. This gene encodes a transmembrane receptor for vascular endothelial growth factor C and vascular endothelial growth factor D. D. FOXC2 – Correct. Loss-of-function mutations in the FOXC2 gene cause LDS. It is inherited in an autosomal dominant manner, and 75% of those with LDS have an affected parent. 3 This gene plays a role in embryogenesis and in the development of lymphatic vessels, veins, lungs, the cardiovascular system, and the kidneys. 2 When mutated, it prevents the development of lymphatic valves and increases recruitment of mural cells to lymphatic capillaries, resulting in insufficient movement of lymphatic fluid and subsequent lymphedema.
Additionally, it is highly expressed in venous valves; when mutated, this leads to venous insufficiency and varicose veins. 4 While the pathogenesis of distichiasis remains unclear, mutations have been shown to interfere with the interaction of the FOXC2 protein with the Wnt4 promoter, which has been hypothesized to result in abnormal signaling from the Wnt4-Frizzled-RYK signaling pathway. This may cause abnormal differentiation of the meibomian gland into hair follicles, leading to distichiasis. 5 E. SOX18 – Incorrect. Variants in the SOX18 gene cause hypotrichosis-lymphedema-telangiectasia. This gene plays a role in the development of the lymphatic system. Answers: A. IPL (intense pulsed light) – Incorrect. IPL is used to treat meibomian gland dysfunction, which can lead to acquired distichiasis. IPL is thought to increase the skin temperature of the eyelid, making the meibum less viscous and unclogging the gland. It also helps to reduce inflammation and the risk of infection. B. Warm Compress – Incorrect. A warm compress can be used to treat blepharitis, which can lead to acquired distichiasis. It increases circulation and helps to increase secretion production from the meibomian glands; however, it would not treat the distichiasis itself. C. Electrolysis – Correct. While distichiasis is congenital in LDS, several other variants of distichiasis can be acquired. Distichiasis in both congenital and acquired forms is treated in a similar fashion, with both surgical and nonsurgical approaches; treatment is often aimed at removal of the second row of eyelashes, as they can cause chronic trauma and inflammation to the conjunctiva. Surgical options include partial tarsal plate excision, wedge resection, and palpebral conjunctival resection. 3 Nonsurgical options include electrolysis, epilation, and cryotherapy. D. Topical Cyclosporine – Incorrect. Topical cyclosporine is used in individuals with blepharitis who have not responded to standard treatments. It increases meibomian gland expressibility and tear break-up time. E. Monitor – Incorrect. While not a definitive treatment for congenital distichiasis, monitoring an asymptomatic patient for eye irritation is acceptable management.
2023-07-12T08:09:51.609Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "aab5406661b2d0b7fcb246a073557f24773f6abf", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jdcr.2023.06.023", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82f15769c9384a3d2155df5cc3be2b7cf7370c18", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
232289260
pes2o/s2orc
v3-fos-license
Biomimetic oxygen delivery nanoparticles for enhancing photodynamic therapy in triple-negative breast cancer Background Triple-negative breast cancer (TNBC) is an aggressive form of breast cancer with a high rate of metastasis, poor overall survival time, and a low response to targeted therapies. To improve the therapeutic efficacy and overcome the drug resistance of TNBC treatments, here we developed the cancer cell membrane-coated oxygen delivery nanoprobe CCm–HSA–ICG–PFTBA, which can alleviate hypoxia at tumor sites and enhance the therapeutic efficacy of photodynamic therapy (PDT), thereby restraining tumor growth in TNBC xenografts. Results The size of the CCm–HSA–ICG–PFTBA was 131.3 ± 1.08 nm. The in vitro ¹O₂ and ROS concentrations of the CCm–HSA–ICG–PFTBA group were both significantly higher than those of the other groups (P < 0.001). In vivo fluorescence imaging revealed that the best time window was at 24 h post-injection of the CCm–HSA–ICG–PFTBA. Both in vivo 18F-FMISO PET imaging and ex vivo immunofluorescence staining results showed that the tumor hypoxia was significantly improved at 24 h post-injection of the CCm–HSA–ICG–PFTBA. For in vivo PDT treatment, the tumor volume and weight of the CCm–HSA–ICG–PFTBA with NIR group were both the smallest among all the groups and significantly decreased compared to the untreated group (P < 0.01). No obvious biotoxicity was observed up to 14 days after injection of CCm–HSA–ICG–PFTBA. Conclusions By using the high oxygen solubility of perfluorocarbon (PFC) and the homologous targeting ability of cancer cell membranes, CCm–HSA–ICG–PFTBA can target tumor tissues, mitigate the hypoxia of the tumor microenvironment, and enhance the PDT efficacy in TNBC xenografts. Furthermore, the HSA, ICG, and PFC are all FDA-approved materials, which render the nanoparticles highly biocompatible and enhance the potential for clinical translation in the treatment of TNBC patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12951-021-00827-2. TNBC is characterized by a high rate of metastasis and recurrence, poor overall survival time, and a lack of options for targeted therapy [3]. Beyond the conventional therapies for breast cancer patients, including chemotherapy, endocrine therapy, and targeted therapy, photodynamic therapy (PDT) is a new option to improve therapeutic efficacy and overcome drug resistance in TNBC treatment [4]. PDT is a rapidly developing and clinically approved cancer treatment [5] whose anti-tumor effects depend on the reactive oxygen species (ROS) and singlet oxygen (¹O₂) generated from oxygen in the photodynamic reaction to enhance cell killing during PDT [6,7]. However, PDT consumes oxygen and initiates vascular shutdown, which translates to less oxygen and worsening hypoxia [8,9]. Therefore, the development of effective strategies to overcome a hypoxic tumor microenvironment is highly sought after to achieve excellent anti-tumor therapy efficacy. Perfluorocarbons (PFCs), which can extend the half-life of ¹O₂ to approximately 10⁵-fold that in other solvents [10], are ideal carriers for oxygen delivery [11]. Further, PFC is a highly biocompatible and inert chemical compound that has been widely used in the clinic for contrast-enhanced ultrasound imaging and for prevention of ischemia/reperfusion injury in tissues and organs [12]. PFC is therefore expected to enhance the efficacy of PDT by delivering oxygen.
A proper photosensitizer is required to perform PDT. For deep tissue penetration and low autofluorescence, near-infrared (NIR) light (700–900 nm) is usually preferred as the excitation wavelength range [13]. Indocyanine green (ICG), a NIR photosensitizer, is a U.S. Food and Drug Administration (FDA)-approved dye for blood volume measurement [14]. Nevertheless, fluorescence quenching often occurs because of ICG aggregation and short blood circulation time [15]. To mitigate this obstacle, human serum albumin (HSA) has been employed to increase the stability of ICG and prolong its circulation time [16,17]. In addition, PFC, ICG, and HSA are all FDA-approved for human use, thereby facilitating the clinical usage of these materials. Immune evasion and specific tumor-targeting characteristics remain challenges to tackle in order to maximize the efficacy of PDT. Our previous work revealed that cancer cell membranes (CCm) might possess the desired immune evasion and homologous targeting characteristics [18,19]. Here, we designed the biomimetic oxygen delivery nanoprobe, namely cancer cell membrane-coated HSA-ICG-doped perfluorotributylamine (CCm-HSA-ICG-PFTBA), for homologous targeting and hypoxia relief at tumor sites. Non-invasive and dynamic 18F-fluoromisonidazole (18F-FMISO) positron emission tomography/computed tomography (PET/CT) imaging was performed to measure hypoxia levels at tumor sites in vivo [20,21]. We concurrently used CCm-HSA-ICG-PFTBA for PDT in 4T1 mouse xenografts to observe the therapeutic efficacy enhancement resulting from the relieved hypoxia at the tumor sites (Scheme 1). Preparation and characterization of CCm-HSA-ICG-PFTBA HSA was used as a carrier to stabilize ICG and PFTBA. The HSA-ICG-PFTBA was prepared by stirring and ultrasonication. CCm were processed from 4T1 cells using a procedure described in our previous study [18]. CCm-HSA-ICG-PFTBA was produced via physical extrusion [18]. Dynamic light scattering (DLS) showed that the hydrodynamic size of HSA-ICG-PFTBA was 98.11 ± 6.99 nm, while that of CCm-HSA-ICG-PFTBA was 131.3 ± 1.08 nm (Fig. 1a, b). The zeta potential results revealed that the surface potential of CCm-HSA-ICG-PFTBA was similar to that of the CCm (Fig. 1c), indicating that CCm had been successfully coated onto the surface of the HSA-ICG-PFTBA. Both CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA exhibited good hydrodynamic size stability when stored in phosphate-buffered saline (PBS) for 5 days (Fig. 1d). The structures of both CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA were verified by transmission electron microscopy (TEM) (Fig. 1e-i). The characteristic peak of ICG was observed in the CCm-HSA-ICG-PFTBA by UV-vis and fluorescence spectra (Fig. 2a), suggesting that the CCm coating had no impact on the optical properties of the ICG. Under dark conditions, the ICG peaks of the CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, and HSA-ICG showed nearly no degradation, while 63% degradation was observed in the ICG water solution (Additional file 1: Fig. S1a-e), demonstrating that ICG degradation was overcome with the help of HSA. The release study was conducted in serum at 37 ℃, where approximately 70% of the ICG was released from the HSA-ICG-PFTBA after 12 h of incubation, significantly more than from the CCm-HSA-ICG-PFTBA (approximately 20%, P < 0.001, Fig. 2b), indicating that the cancer cell membrane coating conferred stability and lowered ICG leakage from this nanoprobe.
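To illustrate how release percentages such as the 70% and 20% values above can be obtained from UV-vis readings, the sketch below fits a linear ICG standard curve and converts absorbance measurements of the release medium into cumulative release fractions. All numbers are hypothetical placeholders, and the assumed 1 mL sample volume loaded into the dialysis bag is not stated in the text; the sketch only shows the arithmetic behind a standard-curve-based release calculation.

```python
import numpy as np

# Hypothetical ICG standard curve: absorbance at known concentrations (ug/mL).
std_conc = np.array([0.0, 2.0, 5.0, 10.0, 20.0])
std_abs = np.array([0.01, 0.11, 0.26, 0.52, 1.03])

# Linear fit A = slope * C + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

def absorbance_to_conc(a):
    """Invert the standard curve to get ICG concentration (ug/mL)."""
    return (a - intercept) / slope

# Hypothetical absorbances of the 15 mL release medium at 2, 4, 8 and 12 h.
timepoints_h = [2, 4, 8, 12]
medium_abs = [0.015, 0.025, 0.045, 0.062]
medium_volume_mL = 15.0
total_icg_ug = 80.0 * 1.0   # 80 ug/mL sample, assumed 1 mL loaded into the bag

for t, a in zip(timepoints_h, medium_abs):
    released_ug = absorbance_to_conc(a) * medium_volume_mL
    print(f"{t} h: cumulative release = {100 * released_ug / total_icg_ug:.1f}%")
```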
To verify the oxygen-carrying ability, the oxygen concentrations in different solutions were measured. A higher oxygen concentration and a faster rate of oxygen increase were observed when preoxygenated CCm-HSA-ICG-PFTBA was added (from 9 to 22.23 mg/L within 100 s) than when the same amount of preoxygenated water was added (from 9 to 17.23 mg/L within 150 s) (Fig. 2c). These results showed that CCm-HSA-ICG-PFTBA has the ability to enhance oxygen concentration. (Scheme 1: Illustration of the biomimetic oxygen-delivery nanoprobe, the cancer cell membrane-coated, indocyanine green-doped perfluorocarbon CCm-HSA-ICG-PFTBA, designed for homologous targeting and improving oxygen concentration at tumor sites. 18F-FMISO PET/CT imaging was performed to measure hypoxia in vivo. CCm-HSA-ICG-PFTBA was injected into 4T1 xenografts and photodynamic therapy was then performed. Tumor volume was measured to evaluate the therapeutic efficacy enhancement.) In vitro ¹O₂ and ROS evaluation To measure the ¹O₂ generation ability of the CCm-HSA-ICG-PFTBA, a ¹O₂ indicator, Singlet Oxygen Sensor Green (SOSG), was used. Upon NIR laser irradiation, the fluorescence of the CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA increased significantly more than that of the HSA-ICG (P < 0.001, Fig. 2d), demonstrating that the higher ¹O₂ generation capability resulted from the PFTBA. To measure the ROS concentration, we used dichloro-dihydro-fluorescein diacetate (DCFH-DA) as an indicator. 4T1 cells were incubated with DCFH-DA, which is cleaved by intracellular esterases into 2,7-dichlorofluorescin (H2DCF) [22]. After NIR laser irradiation, the generated ROS oxidized H2DCF to DCF, producing green fluorescence with an intensity directly proportional to the ROS concentration [22]. As shown in Fig. 3a, with NIR laser irradiation strong green fluorescence was observed in the CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA groups, while that of the HSA-ICG and saline groups was negligible. These results coincided with those from flow cytometry, where markedly higher fluorescence intensity was observed in the CCm-HSA-ICG-PFTBA (94.5%) and HSA-ICG-PFTBA (89.0%) groups compared with the HSA-ICG (28.3%) and saline control (20%, Fig. 3b). The high ¹O₂ and ROS concentrations indicated that CCm-HSA-ICG-PFTBA could enhance PDT efficacy in vitro through the high oxygen capacity of PFTBA. In vitro cytotoxicity To evaluate the cytotoxicity of the PDT enhancement, 4T1 cells were incubated with the samples at different concentrations with or without NIR irradiation, and cell viabilities were measured with a cell counting kit-8 (CCK-8). As shown in Fig. 3c, all concentrations without NIR irradiation exhibited negligible toxicity to the cells. For the groups that received NIR irradiation, the toxicity of CCm-HSA-ICG-PFTBA was significantly higher than that of the HSA-ICG-PFTBA and HSA-ICG when the ICG concentration was higher than 7.5 μg/mL, and the toxicity increased significantly with increasing concentration. In vivo fluorescence imaging To determine the best time window for PDT, we examined the in vivo tumor distribution of CCm-HSA-ICG-PFTBA in 4T1 mouse xenografts. The ICG fluorescence signals were obtained at 0, 3, 6, 9, 12, 24, 36, and 48 h post-injection of CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, HSA-ICG, and saline, all via tail veins.
We found that the tumor fluorescence of the CCm-HSA-ICG-PFTBA group was stronger than that of the other groups and lasted until 48 h post-injection (Fig. 4a). The tumor fluorescence of the HSA-ICG group rapidly faded away, and the residual signal at the tumor site was ascribed to blood pool emissions. The liver is well known as one of the primary sites of the phagocyte-enriched reticuloendothelial system (RES) [23] and hence can accumulate most exogenous materials [24]. Liver accumulation of the CCm-HSA-ICG-PFTBA was much lower than that of the HSA-ICG-PFTBA, indicating that the cancer cell membrane coating decreased RES uptake. At 48 h post-injection, the main organs and tumors were collected for ex vivo fluorescence imaging. As shown in Fig. 4b, the tumor fluorescence was higher and the liver and spleen fluorescence was lower in the CCm-HSA-ICG-PFTBA group compared with the other groups. The fluorescence imaging results indicated that CCm-HSA-ICG-PFTBA could homologously target tumors and enhance immune evasion. In vivo and ex vivo tumor oxygenation enhancement After verifying the in vivo distribution of these nanoprobes, we aimed to confirm whether the hypoxia of the tumor microenvironment was relieved. 18F-FMISO PET imaging is widely used for measuring in vivo tumor hypoxia [25,26]. Tumor hypoxia is heterogeneous and exhibits complex dynamic changes during tumor growth [27]. For 18F-FMISO PET/CT imaging, higher uptake corresponds to a higher hypoxia level. The 18F-FMISO PET/CT imaging prior to the injection of the nanoprobes showed high tumor radioactivity uptake in all groups (Fig. 5a). Then, CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, HSA-ICG, and saline were administered to the mice via tail veins. At 24 h post-injection, 18F-FMISO PET/CT imaging was performed a second time (Fig. 5b). A global decrease of radioactivity across the whole body, including the tumor site, was seen after the injection of the CCm-HSA-ICG-PFTBA. The tumor uptake showed no obvious changes after the injection of the HSA-ICG-PFTBA, while there was a slight increase in tumor uptake after the injection of HSA-ICG and saline, which was due to the fast tumor growth of 4T1 xenografts (Fig. 5a). Considering the fast tumor growth and the imaging results of the HSA-ICG and saline groups, tumor hypoxia was slightly relieved after the injection of the HSA-ICG-PFTBA. After drawing the ROIs and quantitatively analyzing the SUVmax at the tumor sites, the post-injection tumor SUVmax (0.33 ± 0.09) was significantly lower than the pre-injection tumor SUVmax (1.25 ± 0.11) in the CCm-HSA-ICG-PFTBA group (P < 0.001, Fig. 5d). The liver radioactivity uptake was also significantly reduced at 24 h post-injection of CCm-HSA-ICG-PFTBA (P < 0.001, Additional file 1: Fig. S2a, b); no comparable significant differences were found in the other groups. Ex vivo tumor slices from each group were obtained to further confirm the oxygen concentration by using a hypoxia probe (pimonidazole hydrochloride) for immunofluorescence staining. The hypoxic area showed an obvious reduction, from 87.4% before injection to 8.3% at 24 h post-injection of the CCm-HSA-ICG-PFTBA (Fig. 5c, e), a roughly tenfold decrease. Less improvement of hypoxia was observed in the HSA-ICG-PFTBA and HSA-ICG groups (Additional file 1: Fig. S3). Notably, the fluorescence of both the hypoxic areas (green) and the blood vessels (red) decreased, which was due to the vascular shutdown effect during PDT [8].
These results indicated that CCm-HSA-ICG-PFTBA could relieve tumor hypoxia and therefore could be an ideal strategy to enhance PDT efficacy. In vivo PDT efficacy evaluation The in vivo PDT efficacy evaluation was performed on 4T1 xenografts. The mice were randomly divided into eight groups and injected with CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, HSA-ICG, or saline (Day 0), with or without NIR laser irradiation at 24 h post-injection (Day 1). 18F-FDG PET imaging was performed to monitor tumor burden on Day 2, Day 7, and Day 14 (Additional file 1: Fig. S4). On Day 7 and Day 14, the tumor-to-muscle radioactivity (T/M) ratio of the CCm-HSA-ICG-PFTBA with NIR group was significantly lower than that of the saline group (P = 0.03 and P = 0.04 on Day 7 and 14, respectively, Fig. 6a) and gradually decreased, while that of all the other groups increased (Fig. 6b). The tumor volumes were normalized to their initial size. As shown in Fig. 6c, the normalized tumor volumes of the CCm-HSA-ICG-PFTBA with NIR group increased significantly more slowly than those of the HSA-ICG with NIR group (P = 0.01) and the saline control (P < 0.001) on Day 14 (Fig. 6c). There were no significant differences among the CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA groups without NIR, or among the HSA-ICG and saline groups with or without NIR. On Day 14, all the mice were sacrificed and the tumors were weighed. The mean tumor weight of the CCm-HSA-ICG-PFTBA with NIR group was significantly lower than that of the HSA-ICG with NIR group and the saline control (P = 0.038 and P = 0.002, respectively; Fig. 6d). There were no significant differences between the other groups. Photographs of the tumors are shown in Fig. 6e. These results suggested that CCm-HSA-ICG-PFTBA could enhance PDT efficacy and restrain tumor growth. Neither death nor a significant decrease in body weight was observed in any group during the 14-day experiment (Fig. 7a). On Day 14, all the mice were euthanized and their blood and major organs were collected for blood tests and hematoxylin and eosin (H&E) staining. There was no significant difference between the treatment and the saline control groups in blood parameters and blood chemistry indicators (Fig. 7b-e). Furthermore, no noticeable organ damage was observed on H&E-stained slices (Fig. 7f). Hence, these results showed no obvious toxicity of CCm-HSA-ICG-PFTBA in vivo. Discussion In this study, we designed a cancer cell membrane-coated oxygen delivery nanoprobe, CCm-HSA-ICG-PFTBA, which exhibited appropriate structural characteristics, stable optical properties, high biocompatibility, and no obvious biotoxicity, making it well suited to biomedical applications. With the homologous targeting and immune evasion abilities conferred by the cancer cell membrane coating, and the oxygen delivery function of the PFTBA, the CCm-HSA-ICG-PFTBA alleviated hypoxia in the tumor microenvironment and enhanced the therapeutic efficacy of PDT in TNBC xenografts, indicating that CCm-HSA-ICG-PFTBA can contribute to the development of TNBC treatment. In addition, coating with a cell membrane can also stabilize the nanoprobes. In this study, the release of ICG from the bare HSA-ICG-PFTBA was 70%, 3.5-fold that of the CCm-HSA-ICG-PFTBA (20%) at 12 h of dialysis in serum. Wu et al. also reported that cancer cell membrane coating can suppress the release of doxorubicin and icotinib loaded into nanoparticles [28].
The slow release of drug encapsulated in the nanoparticles can ensure long circulation in the bloodstream and reduce systemic toxicity, which makes biomimetic nanoparticles a powerful vehicle for drug delivery in cancer treatment. 18F-FMISO PET/CT imaging was performed to measure hypoxia at tumor sites in vivo. The radioactivity decreased throughout the whole body, including the tumor and liver, after injection of CCm-HSA-ICG-PFTBA. This can be explained as follows: with the cancer cell membrane coating, CCm-HSA-ICG-PFTBA was more stable and its blood circulation time was prolonged, so that part of the dose was captured by the RES while the rest circulated in the bloodstream throughout the whole body, leading to an increase in overall oxygenation levels. Although the cancer cell membrane coating can reduce RES uptake to some extent, the excretion pathway of CCm-HSA-ICG-PFTBA is mostly hepatic. Owing to the oxygen delivery ability of PFTBA, the liver oxygenation level increased, resulting in the markedly decreased liver radioactivity uptake of 18F-FMISO. The tumor radioactivity uptake showed no obvious changes after the injection of HSA-ICG-PFTBA. Given the fast tumor growth and the imaging results of the HSA-ICG and saline groups, tumor hypoxia can be considered slightly relieved after the injection of the HSA-ICG-PFTBA. This can be explained by the fact that HSA-ICG-PFTBA cannot distribute uniformly throughout the whole body owing to the instability of the nanoprobe and its clearance by the immune system in the absence of cancer cell membrane coating, which is in accordance with the results of the ICG release study (Fig. 2b). The slight improvement of tumor hypoxia after the injection of the HSA-ICG-PFTBA can be attributed to the enhanced permeability and retention (EPR) effect of solid tumors, which in any case was still less effective than the homologous targeting of the CCm-HSA-ICG-PFTBA. It should be noted that, in the immunofluorescence staining, the fluorescence of the hypoxic areas (green) and blood vessels (red) both decreased, which was due to the vascular shutdown effect during PDT. The ROS generated during PDT can damage vascular endothelial cells and cause vascular shutdown, which is an important PDT mechanism in tumor treatment [8]. There are still some limitations to this study. Although we achieved the aim of partly restraining tumor growth and enhancing the therapeutic efficacy of TNBC treatment, tumor growth was not completely inhibited, nor did the tumors regress. Here, we performed PDT only as a monotherapy. As reported, the efficacy of combination therapy is better than that of monotherapy [29][30][31]. Hence, it would be more effective to combine the biomimetic oxygen delivery PDT strategy with other therapies, such as chemotherapy, gene therapy, and immunotherapy. The therapeutic efficacy of these combination therapies still needs to be further validated.
The therapeutic efficacy of PDT was further enhanced after the administration of the CCm-HSA-ICG-PFTBA because of the oxygen delivery, without causing notable additional side effects to the treated animals. Conclusions In summary, we successfully designed a biomimetic oxygen delivery nanoprobe, CCm-HSA-ICG-PFTBA, in which the PFTBA core can dissolve a large amount of oxygen and the cancer cell membrane coating confers homologous targeting and immune evasion abilities. We used non-invasive and dynamic 18F-FMISO PET/CT imaging to measure hypoxia levels in vivo, and demonstrated a prominent hypoxia reduction at tumor sites. Since the HSA, ICG, and PFTBA used in the nanoprobe are all FDA-approved and highly biocompatible, the nanoprobe may have potential for clinical translation as an effective oxygen delivery agent to relieve tumor hypoxia. In addition, many other therapies are influenced by oxygen levels, such as radiotherapy [32], immunotherapy [33], chemotherapy [34], and sonodynamic therapy [35]. This biomimetic oxygen delivery nanoprobe strategy could be a promising method to enhance the efficacy of hypoxia-limited therapies. All of the aqueous solutions were prepared using deionized water (DI water) purified with a purification system. The other reagents used in this work were purchased from Aladdin-Reagent (Shanghai, China). Preparation of CCm-HSA-ICG-PFTBA HSA (20 mg) was mixed in deionized water (1 mL) with stirring for 10 min. ICG dissolved in DI water (1 mg/mL) was then dispersed in the HSA solution and shaken for 30 min at 37 ℃ to obtain HSA-ICG. PFC (0.1 mL) was added gradually under sonication at 300 W in an ice bath for 8 min (7 s of ultrasound followed by 3 s of rest in every 10 s) to formulate HSA-ICG-PFTBA. Free ICG was removed with an ultrafiltration centrifuge tube (Millipore, molecular weight cutoff = 30 kDa). Cancer cell membranes were derived by emptying harvested 4T1 cells of their intracellular contents using a combination of hypotonic lysis, mechanical membrane disruption, and differential centrifugation, according to a previous report [18]. The CCm coating on the surface of HSA-ICG-PFC was fabricated by the approach reported in our previous study [18]. HSA-ICG-PFC solution (1 mL) was mixed with the prepared CCm vesicles at different proportions. The mixture was subsequently extruded 11 times through a 400 nm porous polycarbonate membrane. The resulting CCm-HSA-ICG-PFC was kept in PBS at 4 ℃ for further use. Characterization of CCm-HSA-ICG-PFC The hydrodynamic diameter and zeta potential were measured by dynamic light scattering (DLS; Man 0486, Malvern, UK). The morphology and structure of HSA-ICG-PFC, CCm-HSA-ICG-PFC and CCm vesicles were characterized by transmission electron microscopy (TEM; Talos F200X, FEI, Netherlands). TEM samples were prepared by placing a droplet containing HSA-ICG-PFC, CCm-HSA-ICG-PFC or CCm vesicles on copper grids for 60 s, negatively staining with 1% phosphotungstic acid for 30 s and drying with absorbent paper before characterization. Stability experiments were carried out by monitoring the hydrodynamic diameter of HSA-ICG-PFC and CCm-HSA-ICG-PFC in 1× PBS over 5 days by DLS. The fluorescence of ICG was measured with the multifunctional microplate reader; the excitation wavelength was 710 nm and the emission wavelength range was 740–850 nm. The photostability of ICG was measured by irradiating the different samples (ICG, 2 μg/mL) with an 808 nm laser (1 W/cm²) and recording the absorption every 10 s for 1 min. The storage stability of ICG in the different samples was assessed by UV-vis spectra under dark conditions for up to 60 h. The release of ICG from CCm-HSA-ICG-PFTBA and HSA-ICG-PFTBA (80 μg/mL) was determined by placing the two samples into dialysis bags (MWCO 10 kDa), which were then immersed in 15 mL of plasma as release medium.
The release of ICG into the plasma was detected at 2, 4, 8, and 12 h by UV-vis spectra and calculated based on the standard curve. The oxygen release experiment was performed with a dissolved oxygen meter to measure the oxygen concentrations in different solutions. Sample solutions (10 mL) were preoxygenated and added into 50 mL of deoxygenated water. The oxygen concentration in the water was monitored and recorded every 5 s for 800 s. In vitro ¹O₂ and ROS evaluation SOSG was applied to detect the ¹O₂ generation of these samples. 100 μL of the different samples with the same ICG concentration (50 μg/mL) and 20 μL of SOSG (50 μM) were added into a black 96-well plate. Under 808 nm laser irradiation, the fluorescence of oxidized SOSG (Ex/Em = 504/525 nm) was recorded every 10 s with the multifunctional microplate reader. DCFH-DA (Ex/Em = 495/529 nm) was used as the ROS indicator for confocal laser scanning microscopy (CLSM). 4T1 cells were seeded in confocal glass-bottom dishes at a density of 1 × 10⁴ cells. After 24 h of incubation, medium containing CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, HSA-ICG or PBS was added to the dishes at a concentration of 10 μg/mL ICG for a 3 h incubation. After washing three times with PBS, medium containing DCFH-DA (25 μM) was added and incubated with the cells for 30 min. After washing three times with PBS, the cells were divided into two arms, with or without 808 nm laser irradiation (2 W/cm²) for 20 s (30 s pause after each 10 s of irradiation). The cells were then fixed with 4% paraformaldehyde and the cell nuclei were labeled with 4′,6-diamidino-2-phenylindole (DAPI). CLSM was used to detect the green fluorescence of DCF. Flow cytometry was applied to quantify ROS generation. The procedure was similar to that for fluorescence imaging. The 4T1 cells were seeded in 6-well plates at a density of 1 × 10⁵ cells and stained with DCFH-DA (25 μM) for 30 min. After 808 nm laser irradiation (2 W/cm²) for 20 s (30 s pause after each 10 s of irradiation), the cells were centrifuged, re-suspended in 300 μL of PBS and analyzed by flow cytometry. The green fluorescence was detected on the FL1 channel (Ex/Em = 488/525 nm). In vitro cytotoxicity A CCK-8 assay was used to evaluate the enhanced PDT efficacy of CCm-HSA-ICG-PFTBA. 4T1 cells were seeded in 96-well plates at a density of 5 × 10³ cells per well and cultured for 12 h. CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, and HSA-ICG were added and incubated with the cells for 3 h at various ICG concentrations (1.25, 2.5, 5, 7.5, 10, 20, and 40 μg/mL). The saline group was used as a control. The cells were then irradiated with an 808 nm laser (2 W/cm²) for 20 s (30 s pause after each 10 s of irradiation). After 2 h of co-incubation, the cells were washed with PBS and fresh culture medium was added. After a further 24 h of incubation, fresh serum-free culture medium (90 μL) mixed with CCK-8 (10 μL) was added to the wells and the plates were incubated for another 2 h. Finally, the absorbance of each well was determined with a microplate reader (Bio-Rad, Hercules, CA, USA) at 450 nm to analyze cell viability. The background absorbance of the well plate was measured and subtracted. Animals and tumor models Animals received care following the Guidance Suggestions for the Care and Use of Laboratory Animals. Female Balb/c mice (6 weeks old) were purchased from Beijing HuaFuKang Bioscience Co. Ltd (China). To obtain tumor-bearing mice, the hair on the upper limb was removed.
Then, 1 × 10⁷ 4T1 cells were subcutaneously injected into the right upper limb of each mouse. The tumor-bearing mice were used for further experiments when the tumor volume reached 60–250 mm³. In vivo fluorescence imaging When the tumor volumes reached 100–150 mm³, the BALB/c mice were randomly divided into four groups. CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, HSA-ICG (200 μL, 0.8 mg/kg ICG), or saline was intravenously injected into the tumor-bearing mice via the tail vein. All mice were anesthetized with isoflurane. Fluorescence images of the mice at different time points (0, 3, 6, 12, 24, 36, and 48 h) were obtained with an imaging system (Ex/Em = 710/790 nm). Then all mice were sacrificed to obtain the major organs (heart, lung, liver, spleen, and kidney) and tumors for ex vivo fluorescence imaging. In vivo micro PET/CT imaging PET/CT imaging was performed on a micro PET/CT scanner (Trans-PET Discoverist 180, Raycan Technology Co., Ltd., Suzhou, China). For 18F-FDG PET imaging, mice in each group were randomly selected and injected with 5.55 MBq (150 μCi) of 18F-FDG via the tail vein. Static 10-min scans were acquired at 1 h post-injection. All the mice for 18F-FDG PET imaging were fasted overnight prior to the probe injection, maintained under isoflurane anesthesia and kept warm during the injection, waiting phase, and scanning periods. The images were reconstructed using the ordered-subset expectation maximization (OSEM) algorithm. For each micro PET image, 3.0 mm diameter spherical regions of interest (ROIs) were drawn over the liver, tumor, and contralateral muscle on the decay-corrected images using AMIDE to obtain the percentage of injected dose per gram of tissue (%ID/g), measure the SUVmax of the tumor and liver, and calculate the tumor-to-contralateral-muscle (T/M) ratio. The highest uptake point of the entire tumor and liver was included in the ROI, and no necrotic areas were included. Ex vivo immunofluorescence staining A Hypoxyprobe Plus kit was used to stain tissues and detect hypoxia. Tumor-bearing mice were injected with CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA, or HSA-ICG (200 μL, 0.8 mg/kg ICG) via the tail vein and divided into six groups (0, 6, 12, 24, 36, and 48 h). Pimonidazole hydrochloride (60 mg/kg, Hypoxyprobe Plus kit) was then injected into the mice via the tail vein. Ninety minutes later, all mice were sacrificed to obtain tumors for immunofluorescence staining following the protocols [36]. Hypoxic areas were stained green, cell nuclei were stained with DAPI and showed blue fluorescence, and blood vessels were stained with anti-CD31 and showed red fluorescence. All slices were examined by CLSM. In vivo photodynamic therapy and systemic toxicity When the tumor size reached about 60 mm³, the mice were randomly divided into eight groups (n = 6). The treatment groups were as follows: CCm-HSA-ICG-PFTBA (NIR), CCm-HSA-ICG-PFTBA, HSA-ICG-PFTBA (NIR), HSA-ICG-PFTBA, HSA-ICG (NIR), HSA-ICG, saline (NIR), and saline. On Day 0, the groups were injected with the respective samples (200 μL, 0.8 mg/kg ICG) via the tail vein. Twenty-four hours later, on Day 1, all NIR groups were treated with 808 nm laser irradiation (2 W/cm²) for 2 min (1 min pause after each 30 s of irradiation). 18F-FDG PET imaging and photography were performed on Days 2, 7, and 14 to evaluate the tumor burden. The length and width of the tumor and the mouse body weight were recorded every 2 days over 14 days.
The tumor volumes were calculated according to the formula V = D × d²/2, where D is the longest diameter of the tumor and d is the shortest diameter. The relative tumor volume was calculated as V/V₀, where V₀ is the original tumor volume on Day 0. On Day 14, the mice were sacrificed and the tumors were weighed and photographed.
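A minimal sketch of these tumor-burden calculations is given below: it applies the stated volume formula V = D × d²/2 and normalizes each measurement to the Day 0 volume. The caliper readings are hypothetical and serve only to illustrate the arithmetic described above.

```python
def tumor_volume(longest_mm: float, shortest_mm: float) -> float:
    """Tumor volume (mm^3) from caliper measurements: V = D * d^2 / 2."""
    return longest_mm * shortest_mm ** 2 / 2.0

# Hypothetical caliper measurements (longest, shortest diameter in mm) over 14 days.
measurements = {0: (5.0, 4.9), 2: (5.4, 5.0), 7: (6.1, 5.4), 14: (6.8, 5.9)}

v0 = tumor_volume(*measurements[0])   # baseline volume on Day 0 (~60 mm^3 here)
for day, dims in measurements.items():
    v = tumor_volume(*dims)
    print(f"Day {day:2d}: V = {v:6.1f} mm^3, relative volume V/V0 = {v / v0:.2f}")
```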
2021-03-21T13:38:05.240Z
2021-02-09T00:00:00.000
{ "year": 2021, "sha1": "660063d2c3f7d6ffcf7154adb3c4ec2ea51fda1c", "oa_license": "CCBY", "oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/s12951-021-00827-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "660063d2c3f7d6ffcf7154adb3c4ec2ea51fda1c", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
130130023
pes2o/s2orc
v3-fos-license
THE CHARACTERISTICS OF THE BIOGENIC ELEMENT OF THE RUNOFF FROM THE DRAINAGE AREAS OF THE GULF OF FINLAND BASIN EXPERIENCING A LIMITED ANTHROPOGENIC IMPACT This article analyses the characteristics of 25 rivers of the Gulf of Finland basin where monitoring of the streamflow chemical composition was performed. The authors consider the dynamics of biogenic element content in the streamflow, the relation of the drainage areas to certain landscapes, the share of agricultural lands and tillage in the drainage areas, the forest-land percentage, the rural population density, and the forest age and type. Key words: drainage area, landscape composition, share of agricultural lands, percentage of tillage, forest-land percentage, density of rural population, forest age and type, biogenic element concentration dynamics. An important feature of the Baltic Sea basin is the abundance of flowing water body systems connected by watercourses. The ecosystem of flowing water bodies transforms the chemical composition of the river flow. The anthropogenic discharge of biogenic elements combines with the biogenic element inflow from natural landscapes. Thus, in order to identify ecologically justified norms of anthropogenic pressure on water bodies, one should take into account the intra-annual natural dynamics of biogenic element discharge. To this effect, we focus on the dynamics of discharge composition and the structure of the drainage basins.
We used the data of the Federal Service for Hydrometeorology and Environmental Monitoring for the 1950s-1980s. The monitoring sites were chosen according to the following criteria:
- the absence of anthropogenic regulation of the river;
- maximum homogeneity of the landscape structure of the drainage basin;
- a small degree of anthropogenic transformation of the drainage basin;
- continuous monitoring over at least ten years.
We chose 25 out of 115 possible monitoring sites. The scheme of drainage area locations and the monitoring sites is presented in figure 1. In order to establish a possible link between irregularities in the concentration dynamics and an increase in fertiliser amounts, the cumulative sum method was applied to the amounts of mineral fertiliser used in the Russian national economy. The cumulative sum chart shows steep increases in nitrogen fertiliser amounts in 1969, 1976, and 1984 and in phosphorus fertiliser in 1967, 1972, and 1979 (fig. 2). The cumulative sum charts for annual biogenic concentrations in the Asilanjoki, the Golokhovka, the Sinyaya, the Sharya, the Berezaika, the Kunya, the Tohmajoki, the Nemina, the Vazhina, the Valya, the Volozhba, the Vidlitsa, the Tuksa, the Unitsa, the Kimsa, the Pchyovzha, the Pyalma, the Svyatreka, and the Tigoda rivers show irregularities over the selected years. This may indicate an influence of the increase in mineral fertiliser amounts on the concentration of biogenic elements in the water flow. To test this assumption, we calculated pair correlation coefficients between the average annual content of biogenic elements in the river flow for a given year and the amount of fertilisers used in agriculture during that particular year and the previous ones. In most cases, the modulus of the pair correlation coefficient was less than 0.5, indicating a weak connection between the amount of fertiliser and the content of biogenic elements in the rivers studied. We carried out a statistical analysis of the homogeneity of average annual concentrations of mineral forms of biogenic elements (fig. 3). The observations were divided into intervals before and after the shifts in the cumulative sum charts. Since the number of elements did not exceed 25 in either sample, we used the nonparametric Mann-Whitney-Wilcoxon and Siegel-Tukey tests. Both tests showed the absence of statistically significant differences in all cases. Consequently, no changes in the concentration of biogenic elements in the waters of the studied rivers over the period under consideration have been detected. The shifts in the charts of cumulative sums of the average annual biogenic element concentrations took place only in those years when only a couple of measurements were performed in the high-water period. In order to determine the degree of anthropogenic pressure on the drainage basin territory, we calculated the coefficient of anthropogenic pressure according to the formula devised by Prof. G. T. Frumin [7; 8]. The results are given in the table below.
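The cumulative-sum screening, the lagged pair correlations, and the before/after homogeneity check described above can be illustrated with a short, hedged Python sketch. The exact cumulative-sum variant used by the authors is not stated; the version below accumulates deviations from the long-term mean, only the Mann-Whitney-Wilcoxon test is shown (scipy has no Siegel-Tukey test), and all series values are synthetic rather than taken from the monitoring data.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
years = np.arange(1951, 1991)
# Hypothetical annual series: fertiliser use and mean biogenic concentration.
fertiliser = np.cumsum(rng.normal(1.0, 0.3, years.size))
concentration = rng.normal(0.05, 0.01, years.size)

def cusum(series):
    # Cumulative sum of deviations from the mean; kinks suggest possible shifts.
    return np.cumsum(series - series.mean())

fert_curve = cusum(fertiliser)
conc_curve = cusum(concentration)

# Pair correlation between concentration in year t and fertiliser in year t - lag.
for lag in (0, 1, 2):
    r = np.corrcoef(concentration[lag:], fertiliser[:years.size - lag])[0, 1]
    print(f"lag {lag}: r = {r:+.2f} ({'weak' if abs(r) < 0.5 else 'notable'})")

# Homogeneity check of annual means before and after a candidate shift year.
shift_year = 1969
before = concentration[years < shift_year]
after = concentration[years >= shift_year]
stat, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")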
[Table: coefficient of anthropogenic pressure for each river; column headings preserved from the source layout: River; Population density, people/km.]
In most cases, the coefficient of anthropogenic pressure is less than 0.5, and in all cases it is below 1.0. It means that the anthropogenic pressure on the drainage areas is below the world average and, in most cases, 1.5-2 times lower. We compared the concentrations of biogenic elements in the rivers under consideration and in rivers whose drainage areas are exposed to significant anthropogenic stress: the Velikaya and the Luga. The distribution of biogenic elements in the waters of these rivers is close to normal. Applying tests based on Student's distribution, Fisher's exact test, and the Siegel-Tukey and Mann-Whitney-Wilcoxon tests, we compared the biogenic element content in all 25 rivers with that of the Velikaya and the Luga. In all cases we detected a significant difference in the content of biogenic elements between each of the studied rivers and the Velikaya and Luga rivers. Thus, one can assume that the anthropogenic pressure on the chosen drainage areas is much lower than the pressure on the drainage areas of the Velikaya and the Luga. In order to identify the anthropogenic component of the studied river flow, we used the values of background concentrations from the following sources [2; 3; 5]. The concentration of biogenic elements in the rivers studied does not exceed the background levels. There are several exceptions relating to samples taken during flood periods, when the inflow of biogenic elements increases within all drainage areas, including unimpaired ones. In order to identify the features of the drainage areas of the studied river basins, their boundaries were superimposed, with the help of GIS ArcView software, on maps of the landscape provinces of the North-West, the share of agricultural lands, the share of tillage areas, the forest-land percentage by geographical mesoregions and timber industry facilities, and the forest age and type [1]. We identified the following features of the drainage areas. The Karelian south taiga subprovince includes the drainage areas of the Seleznyovka, the Asilanjoki, and the Volchya rivers; the Karelian middle taiga subprovince - the drainage areas of the Vidlitsa, the Nemina, the Kumsa, the Lososinka, the Pchyovzha, the Tohmajoki, the Unitsa, the Tuksa, the Pyalma, and the Svyatreka rivers; the north-western south taiga subprovince - the drainage areas of the Mshaga, the Volozhba, the Tigoda, the Sharya, the Vazhina, the Valya, and the Golokhovka rivers; the north-western sub-taiga subprovince - those of the Sorot, the Severka, the Sinyaya, the Berezaika, and the Kunya rivers. In most cases, the drainage basins are characterised by an insignificant degree of agricultural cultivation (not more than 20 %), except the drainage basins of the Kunya, the Sinyaya, the Sorot, the Severka, and, partially, the Mshaga rivers. The volume of biogenic element discharge is in direct proportion to the share of agricultural lands [6]. The share of agricultural lands does not exceed 40 % in the studied drainage basins. The cartographical analysis of the forest-land percentage of the drainage areas was carried out on the basis of forest-land maps by mesoregions and by timber industry enterprises [1]. The greater part of the drainage basins is characterised by a high percentage of forest lands - not less than 50 %. Exceptions are the drainage basins of the Sorot, the Sinyaya, the Kunya, and, partially, the Severka rivers.
The share of tillage lands does not exceed 10 % in most studied drainage areas, except those of the Sorot, the Sinyaya, the Severka, the Kunya, and, partially, the Mshaga rivers, where the share of tillage lands does not exceed 20 %. Young or mature coniferous species prevail in most drainage basins, except those of the Mshaga, the Severka, the Berezaika, the Golokhovka, the Tigoda, and, partially, the Sorot and the Pchyovzha rivers. We divided the drainage basins into several groups according to the following parameters: 1) association with one landscape province; 2) forest-land percentage; 3) the share of agricultural lands; 4) the share of tillage lands; 5) the density of rural population; 6) the age of forests; 7) the prevailing tree species. The key feature is the association with one landscape province; the other characteristics are considered afterwards. For drainage basins to be assigned to different groups, they should differ in at least two parameters. The drainage areas are divided into four groups: the northern group consists of the Kumsa, the Nemina, the Pyalma, and the Unitsa rivers; the Karelian group - the Asilanjoki, the Seleznyovka, and the Volchya rivers; the central group - the Berezaika, the Vazhina, the Valya, the Vidlitsa, the Volozhba, the Golokhovka, the Lososinka, the Pchzhyova, the Svyatreka, the Tigoda, the Tuksa, and the Unitsa rivers; the southern group - the Kunya, the Severka, the Sinyaya, and the Sorot rivers; the Mshaga drainage area cannot be assigned to any of these groups.
Fig. 2. The cumulative sum of mineral fertilisers used in agriculture in Russia in 1951-1990: a - nitrogen fertilisers, b - phosphorus fertilisers.
Fig. 3. Examples of charts based on the cumulative sum method for the cases of stable and significantly altered average annual content (the Lososinka (a) and the Sharya (b) rivers, respectively).
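The grouping rule described above (basins assigned to different groups only if they differ in at least two of the listed parameters, with the landscape province as the key feature) can be expressed as a small comparison function. The sketch below is one plausible, schematic reading of that rule with invented attribute values; it is not a reconstruction of the authors' actual classification.

# Schematic sketch of the grouping rule; attribute values are invented placeholders.
PARAMETERS = ["province", "forest_pct", "agricultural_pct", "tillage_pct",
              "rural_density", "forest_age", "tree_species"]

def differing_parameters(basin_a, basin_b):
    return [p for p in PARAMETERS if basin_a.get(p) != basin_b.get(p)]

def same_group(basin_a, basin_b):
    # The landscape province is treated as the key feature; fewer than two
    # differing parameters keeps two basins in the same group.
    if basin_a["province"] != basin_b["province"]:
        return False
    return len(differing_parameters(basin_a, basin_b)) < 2

kumsa = {"province": "Karelian middle taiga", "forest_pct": "high",
         "agricultural_pct": "low", "tillage_pct": "low",
         "rural_density": "low", "forest_age": "mature", "tree_species": "coniferous"}
sorot = {"province": "north-western sub-taiga", "forest_pct": "low",
         "agricultural_pct": "high", "tillage_pct": "moderate",
         "rural_density": "moderate", "forest_age": "mixed", "tree_species": "mixed"}
print(same_group(kumsa, sorot))  # False: different province and several other parameters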
2019-04-25T13:10:52.451Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "6fe6a210b945c25c4abfd4d98bd4a38a8ed22968", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5922/2074-9848-2011-1-8", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "6fe6a210b945c25c4abfd4d98bd4a38a8ed22968", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
6414423
pes2o/s2orc
v3-fos-license
Risk of Nonlower Respiratory Serious Adverse Events Following COPD Exacerbations in the 4-year UPLIFT® Trial Introduction Chronic obstructive pulmonary disease (COPD) exacerbations are associated with systemic consequences. Data from a 4-year trial (Understanding Potential Long-term Impacts on Function with Tiotropium [UPLIFT®], n = 5,992) were used to determine risk for nonlower respiratory serious adverse events (NRSAEs) following an exacerbation. Methods Patients with ≥1 exacerbation were analyzed. NRSAE incidence rates (incidence rate [IR], per 100 patient-years) were calculated for the 30 and 180 days before and after the first exacerbation. NRSAEs were classified by diagnostic terms and organ classes. Maentel-Haenszel rate ratios (RR) (pre- and postexacerbation onset) along with 95% confidence intervals (CI) were computed. Results A total of 3,960 patients had an exacerbation. The mean age was 65 years, forced expiratory volume in 1 s (FEV1) was 38% predicted, and 74% were men. For all NRSAEs, the IRs 30 days before and after an exacerbation were 20.2 and 65.2 with RR (95% CI) = 3.22 (2.40–4.33). The IRs for the 180-day periods were 13.2 and 31.0 with RR (95% CI) = 2.36 (1.93–2.87). The most common NRSAEs by organ class for both time periods were cardiac, respiratory system (other), and gastrointestinal. All NRSAEs as well as cardiac events were more common after the first exacerbation, irrespective of whether the patient had cardiac disease at baseline. Conclusions The findings confirm that, after exacerbations, serious adverse events in other organ systems are more frequent, particularly those that are cardiac in nature. Introduction Exacerbations of chronic obstructive pulmonary disease (COPD) impair health status [1,2] and appear to accelerate the progression of the disease [3]. An exacerbation can lead to profound symptoms, disrupt the ability to engage in activities of daily living, and may take several weeks or months to resolve [4]. Of note, some patients may not fully return to baseline function following an exacerbation of COPD. Exacerbations are associated with a considerable early mortality [5], but the frequency and severity of exacerbations are also associated with long-term mortality, independent of age, forced expiratory volume in 1 s (FEV 1 ), body mass index, and the presence of comorbidities [6]. Comorbidities are common in people with COPD, and they may be worsened by the occurrence of an exacerbation and potentially worsen the severity and impact of the exacerbation itself. The Understanding Potential Long-term Impacts on Function with Tiotropium (UPLIFT Ò ) trial was designed to determine the long-term efficacy and safety of tiotropium, a once-daily, inhaled anticholinergic, in patients with COPD, with the rate of decline in FEV 1 being the primary end point. Although there was no difference in the rate of FEV 1 decline over the control group, patients receiving tiotropium had significant improvements in lung function and health-related quality of life and a reduced risk for exacerbations, associated hospitalizations, and episodes of respiratory failure, as well as a reduced all-cause mortality [7]. Patients who participated in the UPLIFT Ò study were carefully observed over a period of 4 years, and the occurrence of exacerbations and adverse events were recorded throughout the period in which patients were receiving the study drug. 
We therefore examined this large clinical trial database to assess the relationship between exacerbations and the occurrence of nonrespiratory morbidity recorded as adverse events.
Methods
Details of the UPLIFT® study design and results of the primary and secondary end points have been reported previously [7,8]. All patients gave written informed consent, and the study was approved by local ethical review boards and conducted in accordance with the Declaration of Helsinki.
Study Design
The study was a 4-year, randomized, double-blind, placebo-controlled, parallel-group trial of patients with COPD. Patients received either tiotropium 18 µg once daily or a matching placebo delivered via the HandiHaler® inhalation device (Boehringer Ingelheim International GmbH, Ingelheim, Germany). Patients were recruited from 490 investigational centers in 37 countries. Criteria for participation included a diagnosis of COPD, age at least 40 years, a smoking history of at least 10 pack-years, postbronchodilator FEV1 ≤70% of the predicted normal, and FEV1 ≤70% of forced vital capacity. Postrandomization clinic visits occurred at 1 and 3 months and then every 3 months throughout the 4-year treatment period. All respiratory medications, other than inhaled anticholinergics, were permitted during the trial.
Exacerbations
Exacerbations were defined as an increase in, or the new onset of, more than one respiratory symptom (cough, sputum, sputum purulence, wheezing, or dyspnea) lasting 3 days or more and requiring treatment with an antibiotic or a systemic corticosteroid. Data regarding exacerbations and related hospitalizations were collected on study-specific case-report forms at every visit.
Adverse Events
Adverse events, including those deemed serious and fatal, were coded using the Medical Dictionary for Regulatory Activities (MedDRA) ver. 11.1. Diagnostic terms are referred to as preferred terms. These are totaled in higher categories, including organ systems (referred to as system organ classes [SOCs]). Additional prespecified categories of preferred terms were also formed prior to unblinding of the trial, where several preferred terms described a similar clinical event. Because numerous categories exist, those that are representative of major groupings and are of significant public health concern have been chosen for display. An individual patient may contribute several terms but will be represented only once in a category, such as a SOC. Serious adverse events were identified according to the standard definition: "A serious adverse event (experience) or reaction is any untoward medical occurrence that at any dose: results in death, is life-threatening, requires inpatient hospitalization or prolongation of existing hospitalization, results in persistent or significant disability/incapacity, or is a congenital anomaly/birth defect" [9].
Data Analysis
The population included in the current analysis was restricted to only those patients with an exacerbation of COPD. Only patients who survived their first exacerbation (i.e., the investigator indicated resolution of the exacerbation prior to a fatal event, if present) were included, in order to determine the frequency of nonrespiratory events following the onset of an exacerbation. Serious adverse event incidence rates (IRs) (per 100 patient-years) were calculated for time periods limited to 30 and 180 days before and after the first recorded exacerbation (using a standardized definition).
In patients who had more than one exacerbation, the analysis was restricted to before and after the first exacerbation. IRs were calculated from the number of patients experiencing an event divided by the person-years at risk. Maentel-Haenszel rate ratios (RR) (pre-and postexacerbation onset) were computed along with the associated 95% confidence interval (CI). Only people who remained exacerbation-free for at least 30 or 180 days were included in the analyses. Events occurring on the same day as the onset of the exacerbation were included in the ''after'' period as they were considered to be related to the occurrence of the exacerbation. The IRs and RRs for nonlower respiratory serious adverse events (NRSAEs) were also calculated separately for patients who did or did not have a cardiac disorder present at entry into the study. In order to examine the influence of season on the relationship between exacerbations and subsequent adverse events, the IRs and RRs before and after the first exacerbation were calculated separately according to whether the first exacerbation occurred in the winter (October-March for northern hemisphere countries and April-September for southern hemisphere countries) or summer period. For this analysis, each patient was included in the preexacerbation period for as long as he/she did not experience an exacerbation or the occurrence of the respective adverse event. Each patient was included in the postexacerbation period, starting with the onset of the first exacerbation and for as long as they were in the study until 30 days after treatment or the occurrence of the respective adverse event. Study Population The UPLIFT Ò population consisted of 5,992 randomized patients who received the study drug (3,006 to placebo and 2,986 to tiotropium). The baseline demographics have been previously reported [7]. The mean age was 65 ± 8 years, 75% of the patients were men, and 30% were smoking at randomization. Mean prebronchodilator FEV 1 was 1.10 ± 0.40 L (39% predicted) and postbronchodilator FEV 1 was 1.32 ± 0.44 L (48% predicted). Approximately 45% of the control population prematurely discontinued placebo compared with 36% of patients treated with tiotropium. At baseline, approximately 62% of patients used an inhaled steroid, 60% used a long-acting b-agonist, and 23% used theophylline-containing preparations. Serious adverse events were reported by 52% in the tiotropium group and 50% in the placebo group [7]. Serious adverse events reported by more than 1% of patients in either study group were cardiac, respiratory, or neoplastic [7]. A total of 3,960 patients had a nonfatal exacerbation during the 4-year follow-up period. Table 1 shows the baseline characteristics of patients who had an exacerbation. Serious adverse events were reported by 52% in the tiotropium group and 50% in the placebo group [7]. The IRs (per 100 patient-years) and incidence RRs for 30 days before and after the first exacerbation for the NRSAEs by organ class where at least five people experienced an event are shown in Table 2 (sorted by the IR in the postexacerbation period). The most common prespecified adverse event categories where at least five people experienced the event during this time period are also listed in Table 2. Cardiac disorders and gastrointestinal (GI) disorders were the most commonly occurring serious nonrespiratory adverse event organ classes. In all organ classes, the RR (i.e., risk) of an event was higher after an exacerbation. 
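As a concrete illustration of the incidence-rate and rate-ratio computations described in the data analysis section above, the hedged Python sketch below computes rates per 100 patient-years and a simple unstratified rate ratio with a Wald-type confidence interval on the log scale. The paper itself used Mantel-Haenszel rate ratios, and the event counts and person-time here are invented for illustration only.

import math

def incidence_rate(events, patient_years):
    return 100.0 * events / patient_years          # per 100 patient-years

def rate_ratio(ev_after, py_after, ev_before, py_before, z=1.96):
    rr = (ev_after / py_after) / (ev_before / py_before)
    se_log = math.sqrt(1.0 / ev_after + 1.0 / ev_before)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

ir_before = incidence_rate(events=55, patient_years=272.0)
ir_after = incidence_rate(events=170, patient_years=261.0)
rr, lo, hi = rate_ratio(170, 261.0, 55, 272.0)
print(f"IR before = {ir_before:.1f}, IR after = {ir_after:.1f} per 100 patient-years")
print(f"RR (after/before) = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")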
For five of the 13 affected organ classes (Table 2), the lower limits of the 95% CI exceeded "1." Cardiac failure, ischemic heart disease, myocardial infarction (MI), angina, atrial fibrillation/flutter, and stroke were the most common prespecified events overall. For all nine of the categories, the risk of an event was again higher after an exacerbation, and for four of these categories (all cardiac), the lower limits of the 95% CI for the RR exceeded "1." The IRs (per 100 patient-years) and incidence RRs for 180 days before and after the first exacerbation for the NRSAEs by organ class where at least 10 people experienced an event during the period are shown in Table 3 (sorted by the IR in the postexacerbation period). Cardiac disorders and GI disorders were again the most commonly occurring serious nonrespiratory adverse event organ classes. In all organ classes, the risk of an event was higher after an exacerbation. For seven of the 11 affected organ classes, the lower limits of the 95% CI for the RR exceeded "1."
[Table 1 (excerpt): Smoking history, pack-years: 49 (28); COPD duration, years: 10 (7); Baseline LABA: 64%; Baseline ICS: 65%; Baseline ICS + LABA: 53%; Baseline anticholinergic: 46%; SGRQ total score, units: 47 (17). Data are mean (SD) where shown.]
Cardiac failure, ischemic heart disease, MI, angina, atrial fibrillation/flutter, and stroke were again the most common prespecified events overall. For all six of the common events, the risk of an event was higher after an exacerbation, and for five of these categories (all cardiac), the lower limits of the 95% CI for the RR exceeded "1." The IRs and RRs for NRSAEs in the 30 and 180 days before and after the first exacerbation, according to whether patients did or did not have a cardiac disorder present at entry to the study, are shown in Tables 4 and 5. Overall, serious adverse events were more common in people who had cardiac disease at baseline. Ischemic heart disease, MI, angina, cardiac failure, atrial fibrillation/flutter, nonventricular tachycardia, and stroke were all more common in the 30- and 180-day periods after an exacerbation than before, irrespective of the presence of cardiac disease at baseline. In people who did not have cardiac disease at entry to the study, the lower limits of the 95% CI exceeded "1" only for ischemic heart disease and cardiac failure for the 30-day period, and for ischemic heart disease, MI, angina, and cardiac failure for the 180-day period. The IRs and RRs before and after the first exacerbation, for first exacerbations that occurred in the winter and summer periods, are shown in Table 6.
[Table footnotes: IR, incidence rate; RR, rate ratio (after/before); NRSAE, nonlower respiratory serious adverse event; CI, confidence interval; SOC, system organ class; NE, not estimable as pre-exacerbation IR = 0; MI, myocardial infarction; SVT, supraventricular tachycardia; MedDRA, Medical Dictionary for Regulatory Activities. (a) IR per 100 patient-years. (b) All primary SOCs are defined by MedDRA with the exception of "Respiratory, thoracic, and mediastinal disorders," which has been divided into separate classes of respiratory system disorders: Lower, Upper, and Other. (c) The SOC "General disorders and administration site conditions" includes the cardiac preferred terms chest discomfort, chest pain, edema peripheral, sudden death, edema due to cardiac disease, and cardiac death. (d) Preferred terms with a secondary relationship to the MedDRA SOC respiratory, thoracic, and mediastinal disorders are not included.]
The CIs of the RRs (before/after an exacerbation) for the two periods overlapped, indicating that the relationship between adverse events and exacerbations did not differ according to whether the exacerbation occurred in the summer or winter.
Discussion
The UPLIFT® trial was designed to assess the effect of tiotropium on the clinical course of patients with COPD who were permitted to use all respiratory medications throughout the trial, other than inhaled anticholinergics. The study showed that use of tiotropium was associated with improvements in lung function and quality of life and a reduction of 14% in the risk for an exacerbation (p < 0.001) [7]. The incidence of serious adverse events was also lower in patients receiving tiotropium. In addition to providing data on the clinical effects of tiotropium, the UPLIFT® study provided information on the relationship between exacerbations and NRSAEs. The results of this analysis show that exacerbations were associated with an increased risk of serious events in other organ systems, most commonly cardiac. This was true both in patients treated with tiotropium and in those receiving the placebo in addition to their usual medication, and in patients who did or did not have cardiac disease at entry to the study. A previous smaller observational study of the temporal relationship between exacerbations and cardiovascular events using the Health Improvement Network database suggested a 2.3-fold increase in the risk of an MI 1-5 days after an exacerbation; however, there did not appear to be an association at any other time following an exacerbation. The risk for stroke was increased by 1.3-fold within 1-49 days following an exacerbation [10]. Exacerbations may be triggered by bacteria, viruses, and noninfective stimuli such as air pollution. These stimuli appear to amplify the inflammatory process present in the stable state [11]. Exacerbations increase the level of systemic inflammation [12,13] and oxidative stress [14,15], which can have adverse effects on other organs. For example, troponin T is elevated during exacerbations and is associated with increased mortality [16]. Similarly, renal endothelin-1 production is increased during exacerbations [17], which may underpin some of the vascular consequences of exacerbations. There is an increased prothrombotic state in patients with COPD during acute exacerbations, as shown by increased circulating fibrinogen levels [12], and there is evidence of increased endothelial dysfunction during and after exacerbations of COPD [18], increasing the risk of cardiovascular morbidity. It is also possible that the factors that lead to exacerbations may themselves have systemic consequences. MIs, pulmonary emboli, and venous thromboses are significantly more common immediately after respiratory infections [19], which are associated with peripheral acute-phase responses, including the production and release of TNF-α, IL-6, and CRP. Particulate air pollution may also trigger a systemic inflammatory response by inducing oxidative stress in the airways [20].
In addition to the fact that exacerbations may worsen the systemic effects of COPD, the presence of systemic effects or comorbidities may also worsen the severity of an exacerbation and lead to worse outcomes. In order to study the effect of exacerbations on these effects it is important to have prospective data from large cohorts such as the UPLIFT Ò trial. The data provide important evidence for the relationship between exacerbations and serious nonrespiratory outcomes, although they cannot prove causality. The conclusions are strengthened by the duration of the study and the use of a standardized definition of an exacerbation. Nevertheless, there are still some potential limitations to the analysis. It could be argued that the patients who took part were selected for involvement in a clinical trial and therefore may not be fully representative of patients seen in practice, particularly with regard to disease severity and the presence of comorbidities. However, the inclusion and exclusion criteria were relatively liberal and recruitment included a broad selection of COPD patients with multiple comorbidities. Time-based analyses are potentially subject to biases, such as the need to survive long enough to be included in the analysis, and these need to be considered when interpreting the results. Events occurring on the same day as the onset of the exacerbation were included in the ''after'' and, although it is possible that the adverse event in the other organ system triggered the COPD exacerbation, we believe that it is unlikely from a clinical perspective that an exacerbation would be regarded as starting on the same day. It is more likely it would be reported as starting after the nonrespiratory adverse event. Another possible confounder is the potential for detection bias as a result of additional tests being ordered as part of an evaluation for an exacerbation or the possibility that other medical conditions are identified when the patients are admitted to hospital with an exacerbation. These factors are likely to have only a limited role given that the current analysis is based on serious adverse events and not concomitant nonserious events that may be incidental findings as part of a broader medical evaluation. The frequency of hospitalization is too low to allow a meaningful analysis of differences in the occurrence of adverse events after hospitalized and nonhospitalized exacerbations. Finally, it is also possible that the treatment given to the patient at the time of the exacerbation may have led to the development of the adverse event. In addition to the difficulty of proving a causal link, there are other limitations to the analysis of the relationship between exacerbations and systemic effects. To be recorded as a serious adverse event, the systemic effect must lead to death or be judged life-threatening, have required inpatient hospitalization or prolongation of existing hospitalization, or result in persistent or significant disability/ incapacity. While this could be considered restrictive, the advantage is that only the most clinically important events are part of the definition. In conclusion, this analysis confirms that besides worsening respiratory outcomes, the risk of systemic events is increased after exacerbations, particularly shortly after the event. It further reinforces the importance of preventing or reducing exacerbation rates as an aim of COPD management. 
Treating physicians must also be vigilant for concomitant disease in other organ systems which may follow an exacerbation. Increased awareness and prompt treatment may contribute to reductions in morbidity associated with COPD.
2014-10-01T00:00:00.000Z
2011-06-16T00:00:00.000
{ "year": 2011, "sha1": "2dec1dd35c427541bda4c1f707871862f7394f81", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00408-011-9301-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2dec1dd35c427541bda4c1f707871862f7394f81", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260433100
pes2o/s2orc
v3-fos-license
Evoked compound action potentials during spinal cord stimulation: effects of posture and pulse width on signal features and neural activation within the spinal cord Objective. Evoked compound action potential (ECAP) recordings have emerged as a quantitative measure of the neural response during spinal cord stimulation (SCS) to treat pain. However, utilization of ECAP recordings to optimize stimulation efficacy requires an understanding of the factors influencing these recordings and their relationship to the underlying neural activation. Approach. We acquired a library of ECAP recordings from 56 patients over a wide assortment of postures and stimulation parameters, and then processed these signals to quantify several aspects of these recordings (e.g., ECAP threshold (ET), amplitude, latency, growth rate). We compared our experimental findings against a computational model that examined the effect of variable distances between the spinal cord and the SCS electrodes. Main results. Postural shifts strongly influenced the experimental ECAP recordings, with a 65.7% lower ET and 178.5% higher growth rate when supine versus seated. The computational model exhibited similar trends, with a 71.9% lower ET and 231.5% higher growth rate for a 2.0 mm cerebrospinal fluid (CSF) layer (representing a supine posture) versus a 4.4 mm CSF layer (representing a prone posture). Furthermore, the computational model demonstrated that constant ECAP amplitudes may not equate to a constant degree of neural activation. Significance. These results demonstrate large variability across all ECAP metrics and the inability of a constant ECAP amplitude to provide constant neural activation. These results are critical to improve the delivery, efficacy, and robustness of clinical SCS technologies utilizing these ECAP recordings to provide closed-loop stimulation. Introduction Evoked compound action potential (ECAP) recordings have been utilized as a quantitative measure of the neural response to spinal cord stimulation (SCS) for treating specific chronic pain conditions [1][2][3]. The ECAP is a bioelectrical signal reflecting the summation of action potentials generated by SCS and thus represents a quantitative measure of neural recruitment in the spinal cord. By automatically adjusting stimulation parameters to maintain a relatively constant ECAP amplitude, the ostensible goal of this approach is to maintain a consistent level of neural activation in the spinal cord despite changes in body position and activity [3,4]. Unlike cardiac pacemakers, which routinely incorporate electrogram sensing to inform a more physiologic, automatic control of the heart rhythm (so-called 'closed-loop' control) [5,6], SCS has traditionally been programmed in an 'open-loop' manner [3]. That is, the stimulation parameters are fixed unless manually changed. This approach may lead to sub-optimal therapeutic outcomes as the SCS is under-or over-delivered due to frequent movement of the spinal cord relative to the SCS electrodes [7,8]. To address this challenge, one approach is to utilize spinal ECAPs recorded from inactive (nonstimulating) electrodes as a feedback signal to provide real-time, closed-loop control of the stimulation [2,3]. As described above, these recordings can theoretically be used to maintain more consistent neural activation despite changes in body position and activity. However, several factors present challenges to the utility of this ECAP-based approach. 
First, the microvolt-amplitude ECAP must be distinguished from several sources of electrical (e.g., stimulation artifact) and biological noise in the recorded signal that are typically several orders of magnitude larger than the underlying neural response [9,10]. Second, this approach relies on the fundamental assumption that a constant ECAP amplitude equates to a constant degree of neural activation or dosing in the spinal cord [3,4]. This assumption may be an oversimplification as the ECAP is influenced by motion of the spinal cord relative to not only the stimulating electrodes, but the recording electrodes as well [11]. Finally, it remains unclear how ECAP recordings vary over stimulation parameters and across patients and postures. These unanswered questions complicate the broad application of ECAP-controlled, closed-loop SCS and its utility to contemporary SCS modalities. To answer these questions, we utilized a combined experimental and computational modeling approach. We used experimentally acquired ECAP recordings in 56 patients undergoing a trial of SCS to quantify the variability in ECAP features. Then, we used computational models to interpret the observed experimental trends and to investigate the assumption that a constant ECAP amplitude equals a constant level of neural activation across various spinal cord positions. Our experimental results demonstrate dramatic variability in ECAP recordings across patients that must be accounted for in clinical systems utilizing this closed-loop approach. Furthermore, our computational modeling results demonstrate that maintaining a constant ECAP amplitude, the approach currently utilized in ECAP-controlled, closed-loop SCS [4], may not provide a constant degree of neural activation or dosing in the spinal cord. Materials and methods In this study, we acquired a library of spinal ECAP recordings and perceptual thresholds (PTs) from subjects already undergoing commercial SCS trials according to approved labeling. We performed recordings over a wide assortment of postures and stimulation parameters. We then processed the recordings to reduce stimulation artifact (a common noise signal associated with ECAP recordings) and assess relevant ECAP features. Finally, we used a computational model of ECAPs to simulate the effects of SCS pulse width (PW) and dorsal cerebrospinal fluid (dCSF) thickness on the ECAP recordings. We then related our model findings back to the experimentally acquired ECAP dataset. We provide the in-depth methodology for these steps below. Experimental data acquisition In this study, we analyzed data obtained from 56 subjects. These experimental data were acquired as part of a non-significant risk feasibility trial assessing the effects of stimulation parameters, electrode choice, and subject activity on both spinal ECAPs and the control characteristics of an investigational closed-loop SCS system. For these ECAP recordings, we used a custom, investigational research system to connect to the subjects' conventional, eight-electrode, 60 cm long, percutaneous leads (Model #977D260, Medtronic plc). The research system delivered cathodic-leading, symmetric, biphasic stimulation with an interphase interval of 30 µs and recorded the ECAP elicited by the stimulus pulse. The details of the stimulating and recording functions of this system are described elsewhere [12]. 
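The stimulus described above (cathodic-leading, symmetric, biphasic, with a 30 µs interphase interval) can be visualized with a short waveform sketch; the sampling rate and amplitude below are arbitrary illustrative choices, not parameters of the research system.

import numpy as np

def biphasic_pulse(amplitude_ma=1.0, pw_us=150, interphase_us=30, fs_hz=1_000_000):
    # Cathodic-leading, charge-balanced biphasic pulse sampled at fs_hz.
    n_pw = int(pw_us * 1e-6 * fs_hz)
    n_gap = int(interphase_us * 1e-6 * fs_hz)
    cathodic = -amplitude_ma * np.ones(n_pw)
    gap = np.zeros(n_gap)
    anodic = amplitude_ma * np.ones(n_pw)      # symmetric recovery phase
    return np.concatenate([cathodic, gap, anodic])

pulse = biphasic_pulse()
print(f"{pulse.size} samples, net charge = {pulse.sum():.1f} (charge-balanced)")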
Similar to other studies [9,11], we used the research system to acquire ECAPs from the subjects' trial leads at the end of their commercial trial and immediately prior to removal of the leads. The ECAPs were elicited at a fixed rate of 50 Hz with a selection of PWs (90-300 µs) while the subject assumed various postures (seated, supine, right and/or left lateral recumbency, standing) (see table S1 for descriptions of the subsets of trials for each experimental condition). The lead position was selected by the implanting physicians to optimize the subjects' therapeutic outcomes during the trial and in general consisted of two staggered leads near the T9 spinal level. However, one subject had two staggered leads placed near C4 and another subject had a single lead implanted. We delivered stimulation on either end (rostral or caudal) of one lead via a guarded cathode configuration, or adjacent and/or skipped electrode bipolar electrode pairs. The bipolar recording electrodes were allocated to the opposite end of the same lead. The stimulation consisted of 'growth curve sweeps' in which the SCS amplitude was ramped up from either 0 mA or from a stimulation amplitude below subject perception in 0.1 mA increments until the subject perceived the stimulation (perception threshold, or PT). We then ramped the stimulation up further until the subject reported discomfort (discomfort threshold, or DT). We defined the DT as the stimulation amplitude at which the subject would not want to experience the stimulation for more than 30 s. The time at each stimulation amplitude interval varied, with a median time of 0.64 s (range: 0.08-50.66 s). The dwell time at each amplitude step was chosen collaboratively between the subject and the researcher. For instance, subjects often needed very little dwell time to assess if there was perceptible stimulation with very low SCS amplitudes. However, they requested longer intervals with supra-threshold SCS when describing the perception of the stimulation. We did not assess all postures, PWs, and electrode choices in each subject owing to time constraints, the inability of the subjects to comfortably assume all postures, and the desire to limit subject fatigue. We performed all measurements and data analyses identically between subjects; no specific randomization or investigator blinding was employed. Following data collection, we disconnected the subjects' leads from the research system and the subjects exited the study. ECAP processing Stimulation artifact is an electrical noise contaminant that manifests both coincident with and shortly after the delivery of a stimulation pulse. Reducing this artifact is critical for accurate and consistent ECAP measurement and characterization [9]. In this study, we adopted previously described methods for reducing artifact and quantifying ECAP amplitudes (figure 1) [11]. Briefly, we averaged 50 consecutive recordings in a growth curve sweep (corresponding to 1 s of data) to limit non-synchronous noise (figure 1(B)), then we used an artifact model (AM) scheme to reduce the stimulation artifact [9]. The AM scheme consisted of fitting an exponentially shaped 'model' to the stimulation artifact, and then digitally subtracting the model from the recording to isolate the neural response ( figure 1(C)). 
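A hedged sketch of the artifact-reduction step described above: average repeated sweeps, fit a decaying-exponential artifact model to the averaged trace, and subtract it. The exact model form and fitting window used by the authors are given in their appendix and are not reproduced here; the exponential below and the synthetic recording are illustrative assumptions only.

import numpy as np
from scipy.optimize import curve_fit

fs = 50_000                                    # assumed sampling rate (Hz)
t = np.arange(0, 0.003, 1 / fs)                # 3 ms window after the pulse

rng = np.random.default_rng(2)
# 50 synthetic sweeps: decaying artifact plus asynchronous noise.
sweeps = 120e-6 * np.exp(-t / 4e-4) + rng.normal(0, 5e-6, (50, t.size))
averaged = sweeps.mean(axis=0)                 # averaging suppresses asynchronous noise

def artifact_model(t, a, tau, c):
    # Exponentially decaying stimulation artifact plus a baseline offset.
    return a * np.exp(-t / tau) + c

params, _ = curve_fit(artifact_model, t, averaged, p0=(1e-4, 5e-4, 0.0))
denoised = averaged - artifact_model(t, *params)
print(f"fitted amplitude {params[0] * 1e6:.1f} uV, tau {params[1] * 1e3:.2f} ms")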
For the ECAP amplitude estimate (the voltage difference between the N1 and P2 features of the polyphasic ECAP), we defined N1 as the minimum ECAP amplitude in the window from 0.75 ms to 1.05 ms following the leading edge of the stimulation waveform. Similarly, we defined P2 as the maximum ECAP amplitude in the window from 1.05 ms to 1.45 ms following the leading edge of stimulation. As changes in SCS parameters influence the ECAP latency [12], we shifted our N1 and P2 search windows as needed to account for delayed ECAP initiation with widened PWs. Following ECAP denoising and amplitude estimation, we plotted growth curves representing the ECAP amplitudes for each sweep of the stimulation amplitude (figure 1(E)) [13]. Growth curves are a convenient tool for summarizing the changes in neural activation reflected by the ECAP as stimulation amplitudes are varied. The curvilinear transition point in the growth curve from sub-threshold stimulation (no neural activation) to supra-threshold stimulation (linear growth) is of particular interest. By fitting the growth curves, we were able to derive a parameter at this curvilinear transition point, the ECAP threshold (ET) ( figure 1(E)). The ET tightly tracks with the patient PT across SCS PWs and postures [11]. The ET and the supra-threshold slope of the growth curve (ECAP growth rate) depend on a multitude of factors, including variable thickness of the cerebrospinal fluid (CSF) between both the recording and stimulating electrodes and the spinal cord [11], the distribution and type of fibers contributing to the ECAP [14,15], and the selected stimulation parameters [16]. See the appendix for a detailed description of the methods used in the AM, growthcurve fitting, and ET calculation. Computational modeling We improved upon a previously published computational modeling infrastructure of spinal ECAPs generated during SCS [14]. This modeling infrastructure included a finite element method (FEM) model consisting of a spinal cord (both gray and white matter), CSF, dura, epidural tissue, spine, and surrounding bulk tissue previously described by Anaya et al [14]. We defined the gray and white matter boundaries of the spinal cord model using human cadaver samples of the lower thoracic spinal cord [17]. We did not include explicit representations of the dorsal roots because a previous study found that the inclusion of dorsal roots had a minimal effect on simulation results [18]. Our initial dCSF thickness was 3.2 mm [14,19,20]. Within the epidural tissue along the anatomical midline, we embedded a cylindrical SCS electrode array with a total length of 75 mm. The distal end included eight electrodes, each with a length of 3 mm, diameter of 1.3 mm, and an edgeto-edge spacing of 4 mm. This electrode design was the same design used in our experimental dataset. We surrounded the electrode by a 0.3 mm thick encapsulation layer [14]. We discretized our FEM model into tetrahedral elements using 3-matic (Materialise NV, Belgium) and exported the final volume mesh into COMSOL Multiphysics (COMSOL Inc., USA). Next, we assigned electrical properties to the tissues using the conductivities described in Anaya et al [14]. To match the average bipolar electrode impedance measured in our experimental dataset, we set the electrical conductivity of the encapsulation layer to be 0.089 S m −1 [21,22]. We used the conjugate gradient method to calculate the electrostatic potential fields generated in the FEM model. 
As described below, we used these potential fields to estimate both the neural response to SCS as well as the corresponding ECAP recording.
[Figure 1 caption, continued: (C) The stimulation artifact was estimated and subtracted from the averaged raw recording to produce the predicted ECAP. (D) Experimental ECAPs at various stimulation amplitudes; note the P1, N1, and P2 peaks in the waveform. As the stimulation amplitude increases, the timing of the peaks remains relatively constant, yet the ECAP amplitude increases. (E) Experimental growth curve generated from the ECAP recordings shown in (D): the experimental growth curve (blue) and a curve fit (brown) to the experimental data using the method described in the appendix. We used this curve fitting to estimate the ECAP threshold (ET).]
In contrast to the previous version of this model [14], we divided our population of axon models into 100 unique fiber-diameter groups (one fiber-diameter group for every 0.1 µm between 6 and 15.9 µm), for a total of 10 000 fibers per simulation. As axons in the ventral half of the white matter of the spinal cord did not contribute significantly to ECAP recordings (data not shown), we exclusively distributed axons in the dorsal half of the spinal cord using Lloyd's algorithm (figure 2) [18,23]. We performed axon simulations using the software package NEURON in a Python programming environment [24]. Our axon models consisted of multicompartment cable models of dorsal column sensory axons described in previous studies [14, 25-28]. Unless otherwise stated, we evaluated models using a pulse frequency of 50 Hz, a biphasic, cathodic-leading guarded cathode (E5+/E6−/E7+) stimulation configuration, a PW of 150 µs, and a bipolar recording configuration. We utilized a reciprocity-based approach to calculate the SCS-induced ECAPs recorded from the implanted electrode array (figure 2) [29-31]. To calculate model ECAP recordings, we scaled the signal contribution of each fiber-diameter group according to the physiological densities observed in the cadaveric human spinal cord [14,15,18]. Following previous SCS computational modeling studies, for each model we defined the model PT (mPT) as the minimum stimulation amplitude needed to activate ⩾10% of dorsal column axons [32,33]. We defined the model DT (mDT) as 1.4 × mPT [19,32,33]. For each set of stimulation parameters, we generated a model growth curve by incrementing the applied stimulation amplitude from 0.1 mA to the mDT in 0.1 mA increments (figure 3).
[Figure 2 caption: (1) Stimulation is applied to the spinal cord; a finite element method model was used to estimate the potential fields generated during SCS (isopotential lines shown near the proximal electrodes). (2) Stimulation induces action potentials in spinal neurons (only a single axon is shown for clarity); multicompartment cable models of axons were distributed in the dorsal white matter and their responses to the applied stimulus were simulated. (3) Action potentials propagate rostrally and caudally from the site of initiation. As the action potentials travel past the recording electrodes, the voltage difference between the recording electrodes measures a (4) spinal ECAP, calculated with a reciprocity-based approach. Importantly, the measured ECAP represents the summation of all active fibers passing by the recording configuration, rather than the single fiber shown.]
ECAP variability due to PW
Previous experimental work has shown that increasing the stimulus PW linearly increases the delay of the ECAP relative to the stimulus onset [12].
In our experiments, we built on this work by characterizing the effect of different PWs (90-300 µs) on the growth curve. We compared experimental growth curves for multiple PWs obtained from within a single subject with consistent posture, stimulation configuration, and recording configuration. If a subject had measurements for at least three independent PWs, we generated both strength-duration and charge-duration curves using the ET from each trial. Then, we estimated the chronaxie and rheobase from the charge-duration curve (see the appendix for details describing the charge-duration curve fitting) and analyzed fits with an R² above 0.50. We performed simulations for eight PWs ranging from 90 µs to 300 µs in 30 µs increments. We calculated the ECAP amplitude, the growth curve, and the corresponding metrics (e.g., ECAP amplitudes, model ET (mET), ECAP growth rate), and directly compared these model predictions to trends in the experimental dataset. Finally, we generated model-based strength-duration and charge-duration curves and compared these curves to the corresponding experimental data.
ECAP variability due to posture
dCSF thickness, a parameter that varies by millimeters during postural changes, strongly influences the ECAP amplitudes and the perception of the stimulation (figure 4) [7,14,34].
[Figure 3 caption: The evoked compound action potential (ECAP) growth curve, i.e., the relationship between the stimulation amplitude and the corresponding ECAP amplitude. The left column shows the applied stimulus (top), example neural activation (second row), and simulated ECAP response (third row) at a low stimulus amplitude (1.7 mA) at which no neural activity/ECAP is generated. The middle and right columns show the same panels for a moderate stimulus (6.0 mA) and a high stimulus (10.8 mA), respectively. The bottom row shows the growth curve summarizing the relationship between the stimulation amplitude and the ECAP response; each dashed vertical line marks the point on the growth curve corresponding to the column above.]
Therefore, we focused our analyses on differences in ET and ECAP growth rate for alternate postures in our experimental dataset and, analogously, for various dCSF thicknesses in our computational models. In the experimental dataset, our analysis exclusively compared seated versus supine postures due to a limited number of experimental trials for other postures. For consistency in the applied waveform, we restricted this analysis to trials with guarded cathode stimulation with PWs ranging from 100 to 200 µs. To simulate the effects of changes in posture, we developed two computational models with dCSF thicknesses of 2.0 mm and 4.4 mm (in addition to our 3.2 mm dCSF model) (figure 4) [7,14,19,20]. We characterized the model ECAP recordings by calculating the ECAP amplitude, the growth curve, and the corresponding metrics (e.g., ECAP amplitudes, mET, ECAP growth rate).
[Figure 4 caption: Growth curves summarizing the relationship between the stimulation amplitude and the ECAP amplitude; each growth curve corresponds to a different dorsal CSF thickness. Postures that result in more dorsal CSF (e.g., prone) exhibit a less steep growth curve that is shifted to the right (i.e., more current is required to generate a given ECAP amplitude). Conversely, postures that result in a smaller dCSF thickness (e.g., supine) exhibit a steeper growth curve and require less current for ECAP generation. This example data was generated with our computational modeling infrastructure.]
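The charge-duration fitting described earlier in this section can be sketched by fitting Weiss' equation, in which the threshold charge grows linearly with pulse width, Q(PW) = I_rheobase × (PW + chronaxie). The sketch below uses synthetic thresholds rather than values from the study and is only meant to show how the rheobase and chronaxie fall out of the linear fit.

import numpy as np

pw_us = np.array([90, 120, 150, 180, 210, 240, 270, 300], dtype=float)
et_ma = np.array([5.2, 4.3, 3.8, 3.4, 3.1, 2.9, 2.75, 2.6])   # hypothetical ETs (mA)

charge_nc = et_ma * pw_us                      # charge per phase: mA * us = nC
# Weiss' equation: Q(PW) = I_rh * (PW + chronaxie)  ->  linear in PW.
slope, intercept = np.polyfit(pw_us, charge_nc, 1)
rheobase_ma = slope                            # slope of charge vs. PW
chronaxie_us = intercept / slope               # PW at which threshold is twice rheobase
print(f"rheobase ~ {rheobase_ma:.2f} mA, chronaxie ~ {chronaxie_us:.0f} us")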
We then compared these model predictions to trends in the experimental data. Using these models, we also evaluated changes in neural activation resulting from the alternate dCSF thicknesses.
Statistical analysis
We quantified variations in ETs and suprathreshold ECAP growth rates using a linear mixed effects model. We fit the statistical model using fitlme in MATLAB (MathWorks, USA). The statistical model had fixed effects for posture and PW and a random effect to account for variability between subjects.
Results
From our experimental data, we considered a total of 479 trials from 56 subjects. Of these 479 trials, 195 did not have sufficient maximum ECAP amplitude (i.e., >4 µV) to be used for growth curve fitting or had poor growth curve fits (28 subjects had at least one trial that was included in analysis and one that was excluded; 11 subjects had all trials excluded from analysis). Therefore, our analyses considered 284 datasets obtained from 45 subjects. For the subjects with low maximum ECAP amplitudes observed during our measurements, it is important to note that these subjects were not entirely without ECAPs. Detectable ECAPs may have manifested with other electrode configurations or during activities, such as a back arch, that transiently resulted in a detectable ECAP.
ECAP strength- and charge-duration relationships
We examined how the stimulus PW affected the ECAP growth curves with respect to both the stimulation amplitude and the charge per phase. In both our experimental and modeling data, increasing the PW shifted the growth curve to the left, with longer PWs requiring lower stimulation amplitudes to elicit an equivalent ECAP amplitude (figures 6(A) and (C)). With regard to charge per phase, increasing the PW shifted the ECAP growth curve to the right, with longer PWs requiring higher charge per phase to elicit an equivalent ECAP amplitude (figures 6(B) and (D)). We also used our ET calculations to examine the rheobase and chronaxie in both our experimental data and modeling predictions. To estimate the rheobase and chronaxie, we fit the charge-duration plots with Weiss' equation (see the appendix for details regarding the charge-duration curve fitting) (figures 6(E) and (F)) [35]. For a stimulation amplitude corresponding to ET, the experimental group had a median rheobase of 1. (R² = 0.964). Across subjects considered in our PW analysis, we observed a general decrease in PTs, DTs, and ETs as the PW increased (figure S1).
ECAP variability due to posture
We evaluated how the experimental ET and ECAP growth rate changed for recordings performed in seated versus supine positions. For 12 of the 45 subjects, growth curve sweeps were acquired in both seated and supine postures. In the seated group, the median ET was 4.5 mA with a median ECAP growth rate of 18.4 µV mA⁻¹ (figures 7(A) and (B)). In contrast, the supine position had a median ET of 2.0 mA with a median ECAP growth rate of 32.0 µV mA⁻¹ (figures 7(A) and (B)). When using a linear mixed effects model to account for inter-subject variability, we found that the ET for the supine posture was estimated to be 65.7% lower than the ET for the seated posture (95% confidence interval 59.1%-73.0%). The ECAP growth rate was also 178.5% larger for the supine position versus the seated position (95% confidence interval 157.1%-202.7%). In our computational model, we evaluated three spinal cord positions to mimic different postures.
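The linear mixed effects analysis described in the statistical analysis subsection above was run with fitlme in MATLAB; a roughly equivalent, hedged sketch in Python using statsmodels is shown below, with a synthetic data frame standing in for the per-trial ET values (posture and PW as fixed effects, subject as the random grouping factor). This is a sketch of the general approach, not a reproduction of the authors' exact model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for subject in range(1, 13):
    subject_offset = rng.normal(0, 0.2)        # per-subject random intercept
    for posture in ("seated", "supine"):
        for pw in (100, 150, 200):
            base = 1.5 if posture == "seated" else 0.7   # log-mA scale
            rows.append({"subject": subject, "posture": posture, "pw": pw,
                         "log_et": base - 0.002 * pw + subject_offset
                                   + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# Fixed effects for posture and pulse width, random intercept per subject.
model = smf.mixedlm("log_et ~ posture + pw", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())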
We generated model ECAP recordings for dCSF thickness of 2.0, 3.2, and 4.4 mm (figure 7(C)) [20]. The model with 2.0 mm of dCSF had a 50.0% decrease in mET and a 92.5% increase in the ECAP growth rate relative to the base model with 3.2 mm of dCSF. In contrast, the model with 4.4 mm of dCSF increased mET by 78.1% and decreased the neural slope by 42.0% relative to the base model with 3.2 mm of dCSF. Overall, our modeling trends showed strong alignment with the experimental trends. In addition to evaluating growth curve parameters, we used our computational model to estimate the number of fibers that must be activated to generate the same ECAP amplitude for different postures. We focused our analyses on two ECAP amplitudes: 4 µV (representing stimulation near PT/ET) [11], and 25 µV (an approximate amplitude of a paresthesia-centric closed-loop SCS system) [1]. To generate a 4 µV ECAP, we applied Model predicted neural activation (G) and corresponding percent differences (H) throughout the spinal cord with an equivalent ∼25 µV ECAP recording. At physiological fiber densities, 1467 and 2847 fibers were activated for a dorsal CSF thickness of 2.0 and 4.4 mm, respectively. This difference corresponded to a 94.1% increase in the number of fibers necessary to generate the same ECAP amplitude. In C and F, darker colors indicate increased neural recruitment of fibers with diameters between 7.0 and 15.9 µm. Note, fibers smaller than 7.0 µm were excluded from this visualization as they were not activated at the given stimulation amplitudes. Discussion The electrophysiologic insight afforded by spinal ECAP sensing holds promise as a tool for optimizing the configuration and delivery of SCS. However, realizing this promise requires an understanding of the technical challenges, biophysical factors, and realworld effects influencing the acquisition and interpretation of the spinal ECAP. In our work, we used findings from 45 human subjects in conjunction with computational modeling to develop insight into these phenomena and the clinical utility of ECAP sensing with SCS. We discuss these considerations below. ECAP variability Unlike other bioelectric signals (e.g., the electrocardiogram) that are consistently observable and exhibit comparatively similar attributes (i.e., signal amplitudes and morphologies) between patients, spinal ECAPs exhibit both inter-and intra-patient variability that depend on several factors. These factors include the choice of stimulating and recording configurations, lead position, the patient's perception of SCS, the artifact rejection capability of the recording system, the type of SCS therapy selected, prescribed medication, and the patient's posture and activity [9,11,12,36]. This variability is plainly evident in figure 5 which provides an aggregate view of the ECAP growth curves and properties from our experimental dataset. With regards to the ECAP amplitude at DT (figure 5(C)), the median ECAP amplitude was 19.4 µV with a range from 3.4 µV to 235.4 µV. This observation speaks clearly to the need for robust systems that can consider this variability and customize the stimulation for the unique needs of each patient. Of equal interest are the cases in which we detected no measurable ECAP above the 4 µV noise floor for a particular stimulation configuration or posture (156/479 trials; 33% of the trials in our dataset), or subjects in which we detected no measurable ECAP for any stimulation configuration or posture (4/56 subjects in our dataset). 
Several possible explanations exist for this observation. First, we included allcomers irrespective of SCS trial success or lead location. The lead placement was at the discretion of the implanting physician, and we did not enforce a strict midline placement (i.e., to maximize ECAP amplitudes). We believe that collecting ECAPs from such a heterogeneous population affords a more comprehensive view of what may be reasonably anticipated when acquiring ECAPs in a 'real-world' clinical setting. Second, for some patients receiving SCS, the PT is either the same as, or closely approximates, the DT. With these patients, the perception of any sensation associated with SCS for a given posture/electrode combination may be unacceptable and represent a potential limitation for ECAP-controlled, closed-loop SCS systems that rely on the continuous delivery of paresthesia-centric SCS [3]. Finally, the ability to resolve the µV-level ECAP is influenced by the signal shunting effect of the CSF layer, and potentially other factors, such as the proximity of the electrodes to the laminar bone [14]. For patients with comparatively large spinal canals and dCSF thickness, insufficient ECAP signal-to-noise ratios may mean that ECAP-controlled, closed-loop SCS is not possible in some patients. Relating experimental observations to the computational model Our experimental ECAP data also provided a means to assess the validity of our computational model. We compared ECAP measures (e.g. ET, growth curve slope, ECAP amplitude) as a function of posture/dCSF thickness between the experimental results and the corresponding modeling predictions, as well as the effect of the stimulus PW on the growth curves and strength-and charge-duration curves. In both our experimental ECAP recordings and simulations, we noted a clear dependence of the ECAP on posture and dCSF thickness. Growth curves in the supine posture demonstrated lower ETs and higher growth rates when compared to a seated posture ( figure S2). Furthermore, the ECAP amplitudes at DT were higher in the supine versus seated position. These trends likely stem from smaller separation between both the stimulating and recording electrodes and the spinal cord when supine versus upright. Our computational model showed similar trends to these experimental findings when we decreased the dCSF thickness (figure 7). Increasing the dCSF thickness, as happens with shifting from a supine to prone posture, exhibited an opposite effect, with larger mETs and reduced growth rates. Does a constant ECAP amplitude imply constant neural recruitment? Because our computational model predictions matched the trends observed in our experimental dataset as described above, we could then employ our computational model to answer additional questions regarding the ECAP recordings. One such question is whether adjusting the stimulation amplitude to maintain a constant ECAP amplitude results in a constant level of neural recruitment. A constant degree of neural activation is the ostensible goal employed by some ECAP-controlled, closed-loop SCS systems as a method to improve the comfort and efficacy of paresthesia-centric SCS [3,4]. With our computational model, we demonstrated that the relationship between ECAP amplitude and neural recruitment is more nuanced than previously suggested in the literature. 
To help illustrate the relationship between the ECAP recording and the underlying neural recruitment, we analyzed a series of computational models representing different postures by varying the dCSF thickness. In each model, we set the stimulation amplitude to elicit equivalent ECAP amplitudes. We found that each model required vastly different degrees of neural activation to produce equivalent ECAP amplitudes ( figure 8). For instance, to maintain a constant ECAP amplitude of 25 µV, 94% more axons had to be activated in the prone position (dCSF = 4.4 mm) relative to the supine position (dCSF = 2.0 mm). These results suggest that using an ECAP amplitude as a 'target' for closed-loop SCS does not guarantee consistent neural recruitment. This phenomenon results from the fact that the ECAP amplitude is influenced not only by variable distance between the stimulation electrodes and the spinal cord, but also by the variable distance between the recording electrodes and the spinal cord [11]. As the dCSF thickness increases, smaller ECAP amplitudes are noted due to increased distances between the recording electrodes and the neural sources in the spinal cord [14]. Although our models predicted that approximately twice as many fibers were activated in a prone versus supine posture for both target ECAP amplitudes, it is important to consider the change in the absolute number of fibers activated for each target ECAP amplitude. To maintain a target ECAP amplitude of 4 µV, our model predicted that 135 fibers had to be activated in the supine position and 238 fibers in the prone position, corresponding to an absolute difference of 103 fibers ( figure 8(D)). To maintain a constant target ECAP amplitude of 25 µV, our model predicted that 1467 fibers had to be activated in the supine position and 2847 fibers in the prone position, corresponding to an absolute difference of 1380 fibers ( figure 8(H)). This result means that 13 times as many fibers were activated to maintain the larger target ECAP amplitude of 25 µV relative to the lower target ECAP amplitude of 4 µV. Recognizing that perceived intensity of stimulation grows linearly with fibers activated, our modeling suggests a supine-to-prone postural shift would result in a larger change in perceived stimulation intensity with a closed-loop SCS system configured to maintain a fixed 25 µV ECAP versus a 4 µV ECAP. Further, differential changes in neural activation (figure 8(D)) with the modeled postural shifts in this work are constrained to the dorsal columns when stimulation is configured to maintain a constant 4 µV ECAP. In contrast, the differential neural activation with a 25 µV ECAP (figure 8(H)) spreads laterally past the dorsal columns toward the dorsal root entry zone. The clinical implications of this phenomenon are unknown but may be related to the stimulation-related events seen in some ECAP sensing SCS systems [2]. Clinically, these results suggest that system operation at a lower target ECAP amplitude, such as near ET (which closely tracks PT)-an approach employed with some contemporary SCS therapies [37], and potentially enhanced further with closedloop control using ECAPs [8]-would provide more consistent dosing in the spinal cord and better approximate perceptual and electrophysiologic equivalence over posture and activity. 
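To make the fiber-count comparison concrete, the absolute and relative differences quoted above can be reproduced directly from the model-predicted counts (a trivial check, using only numbers stated in the text):

```python
def recruitment_difference(supine_fibers, prone_fibers):
    """Absolute and percent difference in activated fibers between postures."""
    absolute = prone_fibers - supine_fibers
    percent = 100.0 * absolute / supine_fibers
    return absolute, percent

# Model-predicted fiber counts for equivalent ECAP amplitudes (values from the text).
print(recruitment_difference(135, 238))     # 4 uV target:  103 more fibers (~76% increase)
print(recruitment_difference(1467, 2847))   # 25 uV target: 1380 more fibers (~94% increase)
print(1380 / 103)                           # the absolute change is ~13x larger at 25 uV
```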
Furthermore, it may be an option for patients that prefer paresthesiafree stimulation and help avoid complications associated with SCS-induced paresthesia, which can disturb sleep, or be experienced as excessive or uncomfortable [38,39]. Study limitations and future work This study had some potential limitations that should be noted. One potential limitation was that we only performed acute experimental recordings during the externalized trial phase of SCS. It is possible that the results may differ for ECAP recordings performed with chronically implanted SCS systems. Another potential limitation of our modeling work is the absence of sources of stimulation artifact and biological noise. This lack of noise sources causes the model growth curves to have two distinct regions: a sub-threshold stimulation region with no neural activation (i.e., ECAP amplitude of 0 µV below mET) and a supra-threshold stimulation region. This shape differs from the experimental growth curves in the sub-threshold region which contain both an offset and a non-zero slope due to stimulation artifact. Additionally, in the supra-threshold stimulation region, there were often differences between the experimental and model ECAP growth rates. These differences may be due to several factors, such as variations in anatomy, lead placement, and recruitment profiles across individual subjects relative to the generalized model. However, it is important to note that we observed a large amount of variability in the experimental ECAP growth rates (figures 5(A) and (F)), and our model growth rates fell within the experimental range. Finally, the geometrical and electrical properties of our computational model were defined using averaged geometrical values from literature. This generalized approach can be used to investigate technical and anatomical factors (e.g., lead lateralization, dCSF thickness) [14], but future work should examine these factors in more detail and consider a patient-specific modeling approach to fully characterize the influence of sources of interpatient variability on ECAP-based closed-loop SCS [40,41]. Conclusions Spinal ECAP sensing affords an unprecedented opportunity to optimize SCS therapies by directly assessing the neural response elicited by the stimulation. To further improve this therapy, it is imperative that we fully understand the anatomical and technical factors that influence these ECAP recordings. Therefore, we performed a combined experimental and computational modeling study to address these knowledge gaps. In our experimental data, we found high inter-subject variability across ECAP metrics, and this variability needs to be considered for robust and consistent closed-loop implementations. ECAP-based, closed-loop SCS was developed to provide consistent neural dosing or recruitment by maintaining a consistent ECAP amplitude. However, our computational modeling results demonstrate that maintaining a constant ECAP amplitude does not guarantee constant neural recruitment in the spinal cord and highlights a potential limitation in this closed-loop approach, particularly with paresthesiacentric SCS. These results are critical to improve the delivery, efficacy, and robustness of closed-loop SCS techniques. Data availability statement The data cannot be made publicly available upon publication because they contain commercially sensitive information. The data that support the findings of this study are available upon reasonable request from the authors. ) . 
The neural transition equation assumes no neural activity below I_thr and a linear rate of change of neural activity significantly above I_thr. The transition between these two linear regimes is described by the curvature term, σ. In some trials, the subject-derived growth curve may predominantly be a result of improper artifact removal. To assess the quality of the artifact removal, we fit a line (with y-intercept = N) to the last three quarters of the data from each subject's growth curve. If the growth curve model did not reduce the average error of the linear model by 50%, we assumed that the resulting data were primarily a result of improper artifact removal, and the trial was discarded. Details regarding this growth-curve fitting have been previously published [11]. ET calculation We calculated the ET from the growth curve using the following equation: Based on the results of Pilitsis et al., we assumed a value of G = 1.5 [11].
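The artifact-quality screen described in this appendix can be sketched as follows. This is one reading of the procedure, not the published implementation: the fixed intercept N is taken here to be the recording noise floor (an assumption), the "last three quarters" of the sweep are selected by index, and the growth-curve model prediction is assumed to be available from the earlier fit.

```python
import numpy as np

def passes_artifact_check(stim_ma, ecap_uv, growth_curve_pred, noise_floor_uv=4.0):
    """Heuristic artifact-quality check sketched from the appendix description.

    Fit a line with a fixed intercept N (assumed here to be the noise floor) to the
    last three quarters of the growth-curve data, then keep the trial only if the
    growth-curve model reduces the mean absolute error by at least 50%.
    """
    stim_ma = np.asarray(stim_ma, dtype=float)
    ecap_uv = np.asarray(ecap_uv, dtype=float)
    pred = np.asarray(growth_curve_pred, dtype=float)
    i0 = len(stim_ma) // 4                                      # last three quarters
    x, y, p = stim_ma[i0:], ecap_uv[i0:], pred[i0:]
    slope = np.sum(x * (y - noise_floor_uv)) / np.sum(x * x)    # least squares, fixed intercept
    linear_err = np.mean(np.abs(y - (slope * x + noise_floor_uv)))
    model_err = np.mean(np.abs(y - p))
    return model_err <= 0.5 * linear_err

# Illustrative use with a synthetic sweep (all values are made up).
stim = np.linspace(1, 10, 16)
ecap = np.maximum(0.0, 20.0 * (stim - 4.0)) + 4.0
pred = np.maximum(0.0, 20.0 * (stim - 4.0)) + 4.0
print(passes_artifact_check(stim, ecap, pred))
```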
2023-08-04T06:17:42.838Z
2023-08-02T00:00:00.000
{ "year": 2023, "sha1": "979ce08f3eea28fc2105f60fee9c048a02c7e4c3", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1741-2552/aceca4/pdf", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "514c0092e132f65dec67111c4663e86a00a1def8", "s2fieldsofstudy": [ "Biology", "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
131778229
pes2o/s2orc
v3-fos-license
Growth Arrest Triggers Extra-Cell Cycle Regulatory Function in Neurons: Possible Involvement of p27kip1 in Membrane Trafficking as Well as Cytoskeletal Regulation Cell cycle regulation is essential for the development of multicellular organisms, but many cells in adulthood, including neurons, exit from cell cycle. Although cell cycle-related proteins are suppressed after cell cycle exit in general, recent studies have revealed that growth arrest triggers extra-cell cycle regulatory function (EXCERF) in some cell cycle proteins, such as p27(kip1), p57(kip2), anaphase-promoting complex/cyclosome (APC/C), and cyclin E. While p27 is known to control G1 length and cell cycle exit via inhibition of cyclin-dependent kinase (CDK) activities, p27 acquires additional cytoplasmic functions in growth-arrested neurons. Here, we introduce the EXCERFs of p27 in post-mitotic neurons, mainly focusing on its actin and microtubule regulatory functions. We also show that a small amount of p27 is associated with the Golgi apparatus positive for Rab6, p115, and GM130, but not endosomes positive for Rab5, Rab7, Rab8, Rab11, SNX6, or LAMTOR1. p27 is also colocalized with Dcx, a microtubule-associated protein. Based on these results, we discuss here the possible role of p27 in membrane trafficking and microtubule-dependent transport in post-mitotic cortical neurons. Collectively, we propose that growth arrest leads to two different fates in cell cycle proteins; either suppressing their expression or activating their EXCERFs. The latter group of proteins, including p27, play various roles in neuronal migration, morphological changes and axonal transport, whereas the re-activation of the former group of proteins in post-mitotic neurons primes for cell death. INTRODUCTION Cell cycle regulation is fundamental for normal development and homeostasis in multicellular organisms. Deregulation of cell cycle causes many severe diseases, such as developmental abnormalities and cancer. Cyclin-dependent kinases (CDKs) and CDKinhibitory proteins (CKIs) positively and negatively control cell cycle progression as an accelerator and brake, respectively, that make it possible to tightly regulate the cell cycle. Many CDKs, including Cdk1, 2, and 4, are activated via binding to cyclins, whereas Cdk5 is mainly activated by p35 and p39 (Kawauchi, 2014). CKIs are classified into cip/kip and ink4 families. The cip/kip family contains p21 cip1 , p27 kip1 , and p57 kip2 , whereas the ink4 family consists of p16 Ink4a , p15 Ink4b , p18 Ink4c , and p19 Ink4d (Sherr and Roberts, 1999;Lu and Hunter, 2010). Like the other cip/kip proteins, p27 kip1 (hereafter, p27) binds to a cyclin-CDK complex to suppress the kinase activity of CDKs in general, although p27 does not inhibit the Cyclin D-Cdk4 complex in proliferating cells, where the Cyclin D/Cdk4 enhances the phosphorylation of p27 at Tyr88, resulting in its degradation (Chu et al., 2007;Grimmler et al., 2007;James et al., 2008;Ou et al., 2011). Both cyclin-and CDK-binding domains of p27 are located in the N-terminal. A crystal structure of the N-terminal region of p27, cyclin A and Cdk2 shows that the 3 10 helix (residues 85-90 in human) of p27 binds deep within the catalytic cleft of Cdk2 and occupies its ATP-binding site (Russo et al., 1996). This structural inhibition of the kinase activity of CDK requires strong binding between p27 and the cyclin-CDK complex, because mutations in the cyclin-binding region of p27 reduce the CDK inhibitory activity of p27 (Vlach et al., 1997). 
In the neural progenitor cells in the developing cerebral cortex, p27 controls the lengths of the G1 phase and cell cycle exit (Mitsuhashi et al., 2001;Tarui et al., 2005). In the postnatal and adult subventricular zones, which provide new neurons that migrate to the olfactory bulb, p27 negatively regulates neurogenesis (Doetsch et al., 2002;Li et al., 2009). Thus, many studies have indicated that p27 is essential for cell cycle regulation in proliferating cells, including neural progenitors. In addition to its cell cycle-related roles, accumulating evidence indicates that p27 has extra-cell cycle regulatory function (EXCERF) in growth-arrested cells (Kawauchi et al., 2013). Here, we introduce the roles of p27 in cytoskeletal organization and cell migration and discuss its possible involvement in membrane trafficking pathways, with particularly focusing on immature neurons in the developing cerebral cortex. EXCERF of Cytoplasmic p27 in Neuronal Migration p27 is mainly localized in the nucleus to inhibit the activity of cyclin-CDK complexes and this nuclear p27 acts as a tumor suppressor (Blain et al., 2003;Roininen et al., 2019). However, in breast cancer cells, p27 becomes relocalized in the cytoplasm in an Akt-mediated phosphorylation at Thr157-dependent manner, and appears to lose its tumor suppressive activity (Liang et al., 2002;Shin et al., 2002;Viglietto et al., 2002). Interestingly, many cancers, such as ovarian cancer and melanoma, show a correlation between cytoplasmic p27 and malignancy (Rosen et al., 2005;Denicourt et al., 2007). In addition, cytoplasmic p27 promotes the migration of HepG2 hepatocarcinoma cells (McAllister et al., 2003), suggesting that p27 has functionally significant roles outside of the nucleus. In primary cortical neurons, p27 is mainly localized in the nucleus but also exhibits punctate localization in the cytoplasm (Kawauchi et al., 2006). In vivo, some p27 is localized in the cytoplasm of the immature neurons in the developing cerebral cortex and postnatal subventricular zone (Kawauchi et al., 2006;Li et al., 2009). It has been reported that p27 promotes the migration of immature excitatory and inhibitory neurons in the developing cerebral cortex (Kawauchi et al., 2006;Nguyen et al., 2006;Godin et al., 2012;Nishimura et al., 2014Nishimura et al., , 2017; Figure 1A). p27 regulates immature neurite formation in multipolar-shaped immature excitatory neurons (Kawauchi et al., 2006) and neurite branching in immature inhibitory neurons (Godin et al., 2012). These effects on cell migration and morphological changes are at least in part dependent on cytoplasmic p27, because p27 is shown to regulate actin and microtubule organization to promote migration (Kawauchi et al., 2006;Godin et al., 2012; Figure 1A). Furthermore, a recent paper shows that p27 controls the acetylation of microtubules via stabilization of α-tubulin acetyltransferase 1 (ATAT1), which modulates axonal transport (Morelli et al., 2018). EXCERF of Nuclear p27 in Neuronal Differentiation and Migration Nuclear p27 also participates in EXCERF, because p27 regulates transcription factors in neurons. p27 interacts with p300 and E2F4 and recruits histone deacetylases and mSIN3A to repress transcription of target genes (Pippa et al., 2012). Furthermore, p27 is shown to regulate gene expression of cell adhesion molecules, including protocadherin-9 and Ncam1 (Bicer et al., 2017). Thus, p27 is associated with various chromatin regions to control transcription. 
In the developing cerebral cortex, p27 stabilizes Neurogenin2, a bHLH transcription factor, in cortical neural progenitors and promotes neuronal differentiation (Nguyen et al., 2006; Figures 1A,B). It is also reported that p27 regulates neuronal migration as well as cell cycle exit in cooperation with Rp58, a transcriptional repressor (Clement et al., 2017), but it is unclear whether p27 interacts with Rp58. In the adult hippocampus, neural stem cells give rise to granule neurons of the dentate gyrus throughout life. While p27 is involved in the regulation of stem cell quiescence, this may not result from the EXCERF of p27 (Andreu et al., 2015). The negative regulation of the hippocampal stem cell proliferation depends on the cyclinand CDK-binding domain of p27. p27 suppresses the kinase activity of Cdk6, which promotes the expansion of hippocampal progenitors (Caron et al., 2018). However, it is unclear whether like in the developing cerebral cortex, p27 also promotes FIGURE 1 | Schematics depicting the roles of p27 in mitotic neural progenitors and post-mitotic neurons. (A) p27 controls G1 length and cell cycle exit in neural progenitors. In addition, p27 exhibits many extra-cell cycle regulatory function (EXCERF) in growth-arrested neurons. p27 promotes immature neurite formation, neuronal migration and axonal transport. p27 is also required for dendritic spine maturation and long-term memory in adult hippocampus. (B) p27 regulates actin reorganization through the suppression of RhoA and the activation of an actin-binding protein, cofilin. p27 also interacts with microtubules and ATAT1 to control microtubule organization and axonal transport, respectively. (C) In post-mitotic cells, cell cycle proteins select two different fates. In general, cell cycle-related proteins, such as cyclin A and PCNA, are suppressed after cell cycle exit and the re-activation of these proteins in growth arrested cells induces cell cycle events, which are priming events to cell death. In contrast, accumulating evidence indicate that other cell cycle-related proteins, including p27, p57, and cyclin E, maintain their expression levels after growth arrest. Growth arrest may switch on extra-cell cycle regulatory functions (EXCERFs) in these proteins. neurogenesis in the adult dentate gyrus through the regulation of transcription. In differentiating chick retinal ganglion cells (RGCs), p27 is involved in the prevention of extra-DNA synthesis (Ovejero-Benito and Frade, 2015). The differentiating chick RGCs contain tetraploid cells. While the nuclei of some invertebrate neurons, including Aplysia californica giant neurons, contain 200,000fold of the normal amount of haploid DNA, chick RGCs remain tetraploid (or diploid). This may be mediated by the EXCERF of p27, because knockdown of p27 promotes extra-DNA synthesis, which cannot be suppressed by Cdk4/6 inhibition (Ovejero-Benito and Frade, 2015). Upstream Factors of p27 As described above, the protein stability of p27 is controlled by its phosphorylation. Cdk5 is an atypical CDK that is activated in post-mitotic neurons in a cyclin-independent manner, whereas Cdk2 binds to cyclin E and controls G1/S transition. Cdk5 is shown to directly phosphorylate p27 at Ser10, which protects it from proteasome-dependent protein degradation (Kawauchi et al., 2006; Figure 1B). In a Cdk5-deficient cerebral cortex, p27 protein levels are reduced in the cytoplasm and nucleus (Zhang et al., 2010), suggesting that Cdk5 regulates the stability of both cytoplasmic and nuclear p27. 
Ser10 on p27 is also phosphorylated by other kinases, including Dyrk1A and Dyrk1B (Deng et al., 2004;Soppa et al., 2014). Dyrk1A stabilizes p27 and induces cell cycle exit and neuronal differentiation in SH-SY5Y neuroblastoma cells, but the in vivo function of this Dyrk1Amediated regulation of p27 is still unclear. The Cdk5-p27 pathway plays roles in not only cortical neurons but also non-neuronal cultured cells, including migrating endothelial cells (Li et al., 2006;Liebl et al., 2010). However, p27 can also act upstream of Cdk5 in the cultured neurons treated with Aβ 1−42 peptide that is a major cause of Alzheimer's disease. In brains with Alzheimer's disease, the expression of several cell cycle proteins is abnormally induced (Yang and Herrup, 2007). In response to treatment with Aβ 1−42 peptide, p27 promotes the formation of a Cdk5-Cyclin D1 complex that dissociates the Cdk5-p35 complex, resulting in neuronal cell death (Jaiswal and Sharma, 2017). It is consistent with previous reports revealing that the induction of Cyclin D1 in post-mitotic neurons leads to cell death (Ino and Chiba, 2001;Koeller et al., 2008), although its underlying mechanism is unclear because Cyclin D cannot activate Cdk5 (Lee et al., 1996). In contrast, the binding of p27 to Cdk5 in the nucleus has been reported to protect neurons from cell death via the suppression of cell cycle events (Zhang et al., 2010). The disruption of the p27 and Cdk5 interaction in nuclei enhances the nuclear export of Cdk5, which deactivates the cell cycle events. Thus, Cdk5 and p27 have multiple functions in neurons and may activate several distinct downstream pathways that are associated with neuronal cell death. In addition to these kinases, it has been reported that connexin-43 (Cx43), a component of the gap junction, acts as an upstream regulator of p27. Knockdown of Cx43 reduces the protein levels of p27 in cortical neurons and disturbs the formation of immature neurites in cortical migrating neurons (Liu et al., 2012). Consistently, suppression of Cx43 expression perturbs the neuronal positioning in the developing cerebral cortex (Elias et al., 2007;Qi et al., 2016). However, it is unclear whether this regulation is mediated by Cdk5 or not. Downstream Factors of p27: Regulation of Cytoskeletal Organization What are the underlying mechanisms of the EXCERF of p27? Accumulating evidence indicates that a major downstream pathway targeted by p27 in EXCERF is cytoskeletal regulation. It has been reported that p27 promotes cofilin-mediated actin reorganization in neurons (Kawauchi et al., 2006; Figure 1B). Cofilin severs actin filaments and enhances the depolymerization of actin filaments (Moriyama and Yahara, 1999). Cofilin directly binds to actin filaments but the phosphorylation at Ser3 by LIM kinase decreases its actin-binding affinity (Moriyama et al., 1996;Arber et al., 1998;Yang et al., 1998). LIM kinase is activated by RhoA-Rho kinase/ROCK and Rac1-PAK1 pathways. p27 negatively regulates Ser3-phosphorylation of cofilin via the suppression of RhoA, rather than Rac1, in cortical immature neurons, resulting in the activation of cofilin (Kawauchi et al., 2006). p27 can directly bind to RhoA to inhibit the interaction between RhoA and its activators in nonneuronal cells (Besson et al., 2004). However, the binding affinity of p27 for RhoA is low (Phillips et al., 2018), implying that some upstream signals strengthen the binding of these proteins or that p27 indirectly suppresses RhoA activity in neurons. 
Interestingly, RSK1 is reported to phosphorylate Thr198 of p27, resulting in enhanced binding between p27 and RhoA (Larrea et al., 2009). The p27-RhoA-cofilin pathway is important for not only neuronal migration and morphological changes in the developing cerebral cortex but also the establishment of long-term memory in the adult brain. Cks1 knockout mice have increased p27 protein levels and decreased Ser3-phosphorylation of cofilin (that is, cofilin activity is abnormally increased), and impairment of learning and long-term memory (Kukalev et al., 2017). Cks1 is strongly expressed in the hippocampus and required for latephase long-term potentiation (late LTP) and proper maturation of dendritic spines. A recent report indicates that p27 binds to another actinbinding protein, Cortactin, in non-neuronal cells (Jeannot et al., 2017). p27 promotes the interaction between Cortactin and PAK1, and PAK1-mediated phosphorylation of Cortactin enhances the turnover of invadopodia. Thus, it is possible that FIGURE 2 | Subcellular localization of p27. (A,B,E-K) Primary cortical neurons from E15 cerebral cortices incubated for 2 days in vitro and stained with the indicated antibodies. Immunocytochemical analyses were performed as described previously (Shikanai et al., 2018a). Fluorescence images were obtained by A1R laser scanning confocal microscopy with a high sensitivity GaAsP detector (Nikon) using the narrow pinhole size (0.3) and subjected to deconvolution processing with the Richardson-Lucy algorithm in NIS-ER software (Nikon). The graph in (K) shows the colocalization efficient (Pearson's correlation) of the indicated proteins with p27, as determined using NIS elements software (Nikon). Significance was determined by Kruskal-Wallis test with post hoc Steel-Dwass test [< the critical value at 1% (Rab6 vs. SNX6 or LAMTOR1 or Rab5 or Rab7 or Rab8 or Rab11 or β-tubulin; p115 vs. SNX6 or LAMTOR1 or Rab5 or Rab7 or Rab8 or Rab11 or β-tubulin; GM130 vs. SNX6 or LAMTOR1 or Rab5 or Rab7 or Rab8 or Rab11 or β-tubulin. Dcx vs. SNX6 or LAMTOR1 or Rab5 or Rab7 or Rab8 or Rab11 or β-tubulin; Rab5 vs. LAMTOR1 or Rab7; LAMTOR1 vs. Rab8 or Rab11 or β-tubulin)]. (C,D) Primary cortical neurons from E15 cerebral cortices transfected with control vector or shRNA-expressing vector targeting for Rab7 (Rab7-sh108 (Kawauchi et al., 2010)) plus pCAG-EGFP (Kawauchi et al., 2003) and incubated for 2 days in vitro. Immunoblot analyses were performed as described previously (Kawauchi et al., 2006). The graph in (D) shows the ratios of immunoblot band intensities of p27/β-actin ± s.e.m. (n = 6). No significant differences (n.s.) between control and Rab7-sh108-transfected neurons were found by Student's t-test (P = 0.3232). in addition to cofilin, p27 may also regulate the actin-binding protein(s) in cortical neurons. In the inhibitory neurons in the developing cerebral cortex, p27 interacts with microtubules and promotes its polymerization (Godin et al., 2012). Furthermore, p27 binds to ATAT1 to increase the acetylation of microtubules, as described above (Morelli et al., 2018). Thus, p27 regulates multiple downstream events to control both actin and microtubule cytoskeletal organization ( Figure 1B). Downstream Factors of p27: Possible Involvement in Membrane Trafficking A major EXCERF of p27 is cytoskeletal organization. In addition to this, several reports suggest roles for p27 in membrane trafficking. 
In cultured cell lines, including NIH-3T3 and HeLa cells, p27 is colocalized with SNX6, a sorting nexin family protein that controls retrograde vesicular transport from early endosomes to trans-Golgi networks (TGNs), and LAMP2, a marker for lysosomes (Fuster et al., 2010). Although p27 is generally degraded in proteasomes, a small fraction of p27 may undergo lysosomal degradation when cells reenter the cell cycle in response to serum stimulation (Fuster et al., 2010). Furthermore, p27 is reported to interact with p27RF-Rho/p18/LAMTOR1 (hereafter, LAMTOR1), which is localized in late endosomes and lysosomes Hoshino et al., 2011;Takahashi et al., 2012). However, our high-resolution microscopy analyses revealed no colocalization of p27 with SNX6 or LAMTOR1 in primary cortical neurons (Figures 2A,B,K), although some localization of p27 and LAMTOR1 is observed together in the same vesicular components (Figure 2B). In addition, inhibition of lysosomal degradation pathways by knockdown of Rab7 did not significantly affect the protein levels of p27 in cortical neurons (Figures 2C,D). Given that the degradation of p27 is not dependent on lysosomes, we examined the possible association of p27 with other endosomal pathways. Rab5, Rab7, and Rab11, markers for early, late and recycling endosomes, respectively, are known to regulate cortical neuronal migration, similar to p27 (Kawauchi et al., 2010;Kawauchi, 2012). However, we observed little to no colocalization between p27 and these Rab proteins in primary cortical neurons (Figures 2E,K). A similar result was found with Rab8, a regulator of secretion pathways (Henry and Sheff, 2008;Shikanai et al., 2018b;Figures 2E,K), indicating low association of p27 with endosomal and Rab8-dependent pathways. In contrast, a small percentage of Rab6, a marker for Golgi, seems to be associated with p27. High-resolution microscopy analyses revealed that some p27 associates with Rab6-positive compartments (Figures 2F,K). In addition, a small percentage of p27 is observed at the tubular compartments positive for GM130 or p115, markers for Golgi apparatus (Figures 2G,H,K). These data suggest that p27 may preferentially associate with Golgi apparatus in cortical neurons. It is unclear whether p27 is associated with the Golgi membrane or not. Considering that p27 binds to microtubules and its associated proteins, stathmin and ATAT1, and regulates axonal transports (Baldassarre et al., 2005;Godin et al., 2012;Morelli et al., 2018), it is possible that p27 regulates microtubuleassociated motor proteins to control the intracellular transport of the Golgi and other endosomes/organelles. In fact, some p27-positive puncta were observed along the microtubules ( Figure 2I). Furthermore, p27 partially colocalizes with Dcx, a microtubule-regulatory protein that is associated with human X-linked lissencephaly (Figures 2J,K). Interestingly, knockout of p27 enhances the trafficking of CTxB, a marker for GM1 ganglioside-positive lipid rafts, in cultured fibroblasts possibly due to altered stathmin-mediated regulation of microtubule stability (Belletti et al., 2010), suggesting that p27 negatively regulates lipid raft-mediated endocytosis. In cortical neurons, CTxB is internalized via caveolin-1-mediated endocytosis at least in part (Shikanai et al., 2018a). 
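The colocalization values reported for Figure 2K above are Pearson's correlation coefficients between the p27 channel and each marker channel. A generic sketch of that kind of pixel-intensity correlation is given below; it is not the NIS Elements implementation, and the image names, synthetic data, and optional cell mask are illustrative assumptions.

```python
import numpy as np

def pearson_colocalization(ch1, ch2, mask=None):
    """Pearson's correlation between two aligned single-channel images.

    ch1, ch2 : 2-D arrays of pixel intensities (e.g., p27 and a marker channel)
    mask     : optional boolean array restricting the analysis to the cell region
    """
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    if mask is None:
        mask = np.ones(ch1.shape, dtype=bool)
    return np.corrcoef(ch1[mask].ravel(), ch2[mask].ravel())[0, 1]

# Illustrative use with random images standing in for the p27 and marker channels.
rng = np.random.default_rng(1)
p27_img = rng.random((256, 256))
marker_img = 0.5 * p27_img + 0.5 * rng.random((256, 256))   # partially correlated channel
print(pearson_colocalization(p27_img, marker_img))
```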
In addition, a recent report indicates that caveolin-1 enhances the elimination of the immature neurites in cortical neurons (Shikanai et al., 2018a), which is opposite to the effect of p27 that promotes immature neurite formation (Kawauchi et al., 2006). Thus, the observation of p27-mediated microtubule regulation in lipid raft trafficking in non-neuronal cells may be consistent with in vivo function of p27 and caveolin-1 in immature neurons in the developing cerebral cortex. CONCLUSION AND FUTURE DIRECTION In this paper, we introduce the EXCERF of p27, such as cytoskeletal organization and membrane trafficking. Other cell cycle-related proteins also exhibit EXCERF (Frank and Tsai, 2009;Kawauchi et al., 2013). For example, p21 and p57 are known to regulate neurite extension and neuronal migration, respectively (Tanaka et al., 2002;Itoh et al., 2007). E2F3, a transcription factor that is negatively controlled by Rb, is also reported to control neuronal migration (McClellan et al., 2007). Cdh1-Anaphase promoting complex (APC) and Cdc20-APC, both of which are E3 ubiquitin ligases, regulate axonal growth and dendrite morphogenesis (Konishi et al., 2004;Kim et al., 2009). Furthermore, cyclin E regulates synapse number and synaptic plasticity through the restraining of Cdk5 activity (Odajima et al., 2011). Thus, many cell cycle-related proteins have functions in G0-arrested neurons ( Figure 1C). Alternatively, expression of other cell cycle-related proteins is suppressed during cell cycle exit. Re-expression of these cell cyclerelated proteins, including cyclin A and PCNA, activates cell cycle events in post-mitotic neurons, and leads to cell death and neurodegenerative diseases (Yang and Herrup, 2007; Figure 1C). Consistently, knockdown of p27 in post-mitotic supporting cells in postnatal cochleae induces cell cycle re-entry, but these cells eventually undergo apoptosis (Ono et al., 2009). Thus, we could classify cell cycle-related proteins into two categories ( Figure 1C). One set exhibits EXCERF even in G0arrested cells, such as post-mitotic neurons. The other set is normally suppressed soon after growth arrest and their reactivation is a trigger to cell death. Growth arrest may be an important signal that activates the EXCERF of the former group of proteins, including p27, p57, APC, and cyclin E, and silences proteins in the latter group. This concept raises many questions to be solved. What happens at the timing of growth arrest? What are the differences between cell cycle proteins with or without EXCERF? What are the global picture and regulatory mechanisms of EXCERF? More specifically, what are the physiological roles of the puncta-like cytoplasmic p27? Future studies will answer these fundamental questions and will consolidate this new concept of EXCERF in cell biology and neuroscience. ETHICS STATEMENT For primary culture experiments of mouse embryonic cortical neurons, pregnant ICR mice were purchased from SLC Japan or Animal Facility of RIKEN-BDR. Animals were handled in accordance with guidelines established by RIKEN-BDR and Institute of Biomedical Research and Innovation, FBRI. There is no data using human subjects in this manuscript. AUTHOR CONTRIBUTIONS TK conceived the project, performed experiments and wrote the manuscript. YN administrated the experimental environments and provided helpful comments. FUNDING The authors' research group was funded by JSPS KAKENHI Grant No. 
JP26290015 (to TK), JP26110718 (to TK), JP26115004 (to YN) and Grant-in-Aid for Scientific Research on Innovation Areas "Dynamic regulation of Brain Function by Scrap and Build System" (JP17H05757 to TK) from The Ministry of Education, Culture, Sports, Science, and Technology of Japan (MEXT), and by grants from AMED under grant number JP18gm5010002 (to TK) and the Takeda Science Foundation (to TK).
2019-04-26T13:08:21.962Z
2019-04-26T00:00:00.000
{ "year": 2019, "sha1": "67cd1c7ef2ef62ab40883cc7c4c5d74dc5790af0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2019.00064/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67cd1c7ef2ef62ab40883cc7c4c5d74dc5790af0", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
257932317
pes2o/s2orc
v3-fos-license
Preparation of Crosslinked Poly(acrylic acid-co-acrylamide)-Grafted Deproteinized Natural Rubber/Silica Composites as Coating Materials for Controlled Release of Fertilizer The crosslinked poly(acrylic acid-co-acrylamide)-grafted deproteinized natural rubber/silica ((PAA-co-PAM)-DPNR/silica) composites were prepared and applied as coating materials for fertilizer in this work. The crosslinked (PAA-co-PAM)-DPNR was prepared via emulsion graft copolymerization in the presence of MBA as a crosslinking agent. The modified DPNR was mixed with various contents of silica (10 to 30 phr) to form the composites. The existence of crosslinked (PAA-co-PAM) after modification provided a water adsorption ability to DPNR. The swelling degree values of composites were found in the range of 2217.3 ± 182.0 to 8132.3 ± 483.8%. The addition of silica in the composites resulted in an improvement in mechanical properties. The crosslinked (PAA-co-PAM)-DPNR with 20 phr of silica increased its compressive strength and compressive modulus by 1.61 and 1.55 times compared to the unloaded silica sample, respectively. There was no breakage of samples after 80% compression strain. Potassium nitrate, a model fertilizer, was loaded into chitosan beads with a loading percentage of 40.55 ± 1.03% and then coated with the modified natural rubber/silica composites. The crosslinked (PAA-co-PAM)-DPNR/silica composites as the outer layers had the ability of holding water in their structure and retarded the release of fertilizer. These composites could be promising materials for controlled release and water retention that would have potential for agricultural application. Introduction The controlled release system is an effective strategy for various applications such as biomedical [1,2] and agricultural [3,4] applications that can enable the release of the active ingredients to achieve a desired response. For the agricultural field, water and nutrients are essential for the growth of plants. The fertilizer is the substance that is added to improve agricultural productivity. The fertilizer is usually in a soluble salt form and can dissolve quickly by water. So, it can be leached out from the soil. The leaching of the fertilizer can contaminate natural resources and leads to environmental pollution. Moreover, the addition of a high amount of fertilizer may not be consistent with plant growth and may cause damage to the root system of a plant. Therefore, the design of materials for the coating of fertilizer to allow for controlled release behavior was developed [5,6]. For the controlled release fertilizer, the coating materials should be natural, nontoxic and environmentally friendly materials. They should release nutrients along with the growth rate of plants to support the plant growth. They should retain a lot of water to increase the moisture in the soil and reduce the soil compaction [7]. Therefore, they could improve the physical quality of soil and slow down soil deterioration. Moreover, they should contain functional groups capable of absorbing fertilizer for holding nutrients in the soil and reducing the loss of nutrients. Bio-based polymeric materials are interesting and have advantages for several fields including agricultural application because of their low cost, low toxicity and environmental friendliness. Natural rubber (NR) is a biopolymer, obtained from Hevea Brasiliensis rubber trees. NR has high elasticity and film-forming ability. The excellent properties of NR make it suitable for coating substances [8]. 
NR can be applied as a film barrier to prevent the release of water-soluble molecules due to its hydrophobic characteristics [9,10]. It contains polyisoprene chains with a low glass transition temperature that is favorable to form rubbery film. However, in order to prepare the natural-rubber-based coating materials with controlled release behavior and water retention ability, the combination and modification with other functional components to have such characteristics are interesting strategies. Superabsorbent polymer materials are promising materials for the agricultural sector. They are composed of a three-dimensional polymeric network structure that can absorb and hold water and nutrients during the growth period. Poly(acrylic acid-co-polyacrylamide) is a copolymer that possesses hydrophilic properties with a large number of polar groups [11]. J. Zhu et al. successfully prepared poly(acrylic acid-co-polyacrylamide)-based superabsorbent materials through the graft copolymerization of okara with acrylic acid and acrylamide [12]. They exhibited an enhanced water retention ability, depending on the contents of the grafted polymer. Their water adsorption capacities were found to be in the range of 120 to 200 g/g in tap water. The enhancement in plant growth was observed, presenting more than 80% determined by the weight and leaf area of the plant. Thus, poly(acrylic acid-co-polyacrylamide)-based materials were used as soil supplements for plant growth application under a water-limited condition. According to the our previous work, poly(acrylic acid-co-acrylamide)-grafted deproteinized natural rubber ((PAA-co-PAM)-DPNR) was successfully prepared via emulsion graft copolymerization which was a water-based process [13]. This system was environmentally friendly and safe without using organic solvents. The PAA-co-PAM was grafted on the particle surface of natural rubber. The monomer contents were varied at 10 and 30 phr. The grafting efficiency and grafting percentage were found in the range of 20.8-38.9 and 2.1-9.9%, respectively. However, when the products were used in the wet condition, the ungrafted polymers could be dissolved in the aqueous medium. So, the undesired performance in application was achieved. Moreover, the modified natural rubber was then mixed with silica. The modified natural rubber/silica composites showed enhanced mechanical properties due to strong interactions between the silica and modified natural rubber. Natural rubber/filler composites have been studied for improving the properties of natural-rubber-based materials, including mechanical and thermal properties to promote application efficacy meaning they can be used in a wide range of applications. The silica-reinforced natural rubber composites were used for many rubber applications such as tire manufacturing [14]. The composites can be used as the supporting frameworks for the adsorption process [15]. N.C. Oliveira et al. prepared the composites of natural rubber/fluorescent silica particles [16]. The composites exhibited good structural stability under fluid flow and high ionic strength conditions (NaCl, 0.85% w/v). Therefore, the natural rubber/silica-based materials could be applied for use in various coating applications such as biomedical devices, microelectronics and the automobile industries. P. Boonying et al. prepared the biocomposite coating films from lignin and natural rubber grafted with polyacrylamide (NR-g-PAM) for slow release fertilizer [17,18]. 
The lignin/NR composites used as the outer coating shell of fertilizer can enhance mechanical resistance from the osmotic pressure building up. NR-g-PAM acts as the compatibilizer to enhance the stability of lignin/NR dispersions. It can enhance the hydrophilicity to the composites and allow water transport through the lignin/NR composite film. The Li/NR-g-PAM-coated urea revealed a slow release of N, but only 60% of the total N after 112 days. Therefore, the natural rubber/filler composites could have potential for various applications. In this work, to prepare the natural-rubber-based composites for coating fertilizer, the deproteinized natural rubber was modified via emulsion graft copolymerization with acrylic acid and acrylamide. In order to enhance the grafting of PAA-co-PAM on the DPNR, the crosslinked (PAA-co-PAM)-DPNR was prepared using N',N'-Methylenebisacrylamide (MBA) as a crosslinking agent. The crosslinked (PAA-co-PAM)-DPNR latex was then mixed with silica to prepare the natural rubber/silica composites and applied as coating materials for fertilizer. The crosslinked (PAA-co-PAM)-DPNR/silica composites were characterized and coated on chitosan beads. The release behavior and water retention ability of the crosslinked (PAA-co-PAM)-DPNR/silica-covered chitosan beads were investigated. Materials Natural rubber latex preserved with high ammonia (NR; 60% of dry rubber content) was purchased from Chemical and Materials Co., Ltd. (Bangkok, Thailand). Acrylic acid (AA) and cumene hydroperoxide (CHP) were obtained from Aldrich (St. Louis, MO, USA). The purification of the AA monomer was performed by passing through a column packed with alumina adsorbent before polymerization [19]. Acrylamide (AM) monomer was obtained from Loba Chemie Pvt. Ltd. (Mumbai, India). Tetraethylene pentamine (TEPA) and chitosan (CS, molecular weight of 100,000-300,000 g/mol) were purchased from Acros organics (Geel, Belgium). Terric16A (10% w) was obtained from Rubber Authority of Thailand (Bangkok, Thailand). N',N'-Methylenebisacrylamide (MBA) and sodium tripolyphosphate (TPP) were purchased from Alfa Aesar (Haverhill, MA, USA). Silica was prepared via the precipitation method from rice husk ash (byproduct from biomass power plants from Chia Ment Co., Ltd., Nakhon Ratchasima, Thailand) with an average size of 44.02 ± 5.06 nm, determined via SEM. Potassium nitrate (KNO 3 ) was obtained from Kemaus (New South Wales, Australia). Deionized (DI) water was used throughout the study. Preparation of Crosslinked Poly(acrylic acid-co-acrylamide)-Grafted Deproteinized Natural Rubber The crosslinked (PAA-co-PAM)-DPNR via emulsion graft copolymerization was performed according to a previous study with some modifications [13]. The deproteinized natural rubber (DPNR) latex was prepared via treatment with urea in the presence of sodium dodecyl sulfate [20]. The DPNR latex and 5 phr of Terric16A were added into the three-necked reactor and stirred at 100 rpm using a mechanical stirrer under nitrogen atmosphere for 45 min. After that, the chemicals were injected into the reactor as follows: CHP, acrylic acid (40 mol% of acrylic acid was neutralized with 20% w of NaOH solution), acrylamide, MBA and TEPA. The CHP and TEPA were fixed as 1 phr. The content of the comonomer was used at 30 phr with 50:50 by weight ratio of acrylic acid and acrylamide. The MBA contents were varied as 0.25 and 0.50% w of monomer. The total solid content was kept constant at 15 wt%. The polymerization was carried out at 50 • C for 6 h. 
To determine the monomer conversion, the dried samples were weighed before and after immersion in ethanol for 24 h. The samples were then dried at 60 • C for 24 h. The monomer conversion was calculated as follows [21]: Monomer conversion (%) = Weight of polymer formed Weight of total monomer added × 100. For the calculation of grafting efficiency and grafting percentage, the ungrafted PAAco-PAM was removed via extraction with DI water. The dried samples were immersed with DI water for 72 h. The medium was changed every 8 h. The samples were then dried at 60 • C for 24 h. The grafting efficiency and grafting percentage were calculated as follows [22]: Grafting efficiency (%) = Weight of (PAA − co − PAM) grafted Weight of total polymer formed × 100 (2) Grafting percentage (%) = Weight of (PAA − co − PAM) grafted Weight of DPNR used × 100. Morphology The morphology of samples was studied using transmission electron microscopy (TEM). A drop of diluted latex was put onto a carbon-coated copper grid. The sample was stained with osmium tetroxide in the carbon-carbon double bonds of natural rubber for 24 h to increase the contrast [23]. The morphology was observed through a transmission electron microscope using Talos F200X (Thermo Fisher Scientific, Waltham, MA, USA) at 120 kV. Gel Fraction The dried samples were weighed and then immersed in DI water to extract the sol fraction from the matrix at room temperature for 72 h. The medium was changed 3 times a day. The samples were subsequently freeze-dried overnight. The samples were weighed after drying and the gel fraction was calculated as follows [24]: where Wi is the initial weight of dried samples and Wf is the weight of dried samples after immersion in water and freeze drying. Preparation of Crosslinked (PAA-co-PAM)-DPNR/Silica Composites The crosslinked (PAA-co-PAM)-DPNR/silica composites were prepared by mixing the crosslinked (PAA-co-PAM)-DPNR latex with different silica contents. The silica was prepared via the precipitation method by treatment with rice husk ash with 1 M of HCl solution for 3 h to remove the metallic oxide. The ash was then filtered and washed with DI water until the pH was neutral. The purified ash was dried in the hot air oven at 110 • C for 12 h. The dried sample was added into 1 M of NaOH solution and stirred for 12 h at 90 • C to obtain the sodium silicate. The undissolved product was removed via filtration. To prepare the silica, 1 M of acetic acid was dropped into the sodium silicate solution until the pH was neutral under stirring at room temperature. The precipitate silica was obtained and then separated via filtration followed by washing with the excess of DI water. The resulting silica was dispersed in DI water with the solid content of 3%. The silica dispersion was dropped into the latex and stirred at 100 rpm for 3 h at room temperature. The silica contents were varied as 10, 20 and 30 phr. The chemical compositions are shown at Table 1. The mixture was poured into the plastic mold and dried at 60 • C for 24 h. The samples were kept for further characterization. Fourier-transform infrared spectroscopy (FTIR) with an attenuated total reflection (ATR) mode was used to study the chemical structure of the crosslinked (PAA-co-PAM)-DPNR/silica composites using a Tensor 27 FTIR spectrometer (Bruker, Billerica, MA, USA). The scanning of each spectrum was performed with 64 scans at a resolution of 4 cm −1 . The FTIR spectra of all samples were recorded in the range of 4000-400 cm −1 . 
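Returning to the gravimetric quantities defined earlier in this section, the monomer conversion, grafting efficiency, grafting percentage, and gel fraction are simple weight ratios and can be computed as sketched below. The gel-fraction equation itself is not reproduced in the text, so the conventional Wf/Wi × 100 form consistent with the stated variables is assumed; all example weights are hypothetical.

```python
def monomer_conversion(w_polymer_formed, w_total_monomer_added):
    # Polymer formed relative to the total monomer charged, in percent
    return 100.0 * w_polymer_formed / w_total_monomer_added

def grafting_efficiency(w_grafted_paa_pam, w_total_polymer_formed):
    # Grafted PAA-co-PAM relative to all polymer formed, in percent
    return 100.0 * w_grafted_paa_pam / w_total_polymer_formed

def grafting_percentage(w_grafted_paa_pam, w_dpnr_used):
    # Grafted PAA-co-PAM relative to the DPNR used, in percent
    return 100.0 * w_grafted_paa_pam / w_dpnr_used

def gel_fraction(w_initial_dry, w_dry_after_extraction):
    # Assumed conventional form: insoluble (gel) weight relative to the initial dry weight
    return 100.0 * w_dry_after_extraction / w_initial_dry

# Hypothetical weights in grams, for illustration only.
print(monomer_conversion(0.41, 0.45))
print(grafting_efficiency(0.32, 0.41))
print(grafting_percentage(0.32, 1.50))
print(gel_fraction(2.00, 1.72))
```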
Morphology The morphology of samples was observed using a scanning electron microscope (SEM). The samples were frozen and broken in liquid nitrogen. The samples were fixed on the stub using conductive carbon tape and kept in a desiccator overnight. The samples were sputter-coated with gold under a vacuum for 3 min. The cross-section of samples was observed through a JSM-6010LV (JEOL, Tokyo, Japan). Moreover, the element compositions of the crosslinked (PAA-co-PAM)-DPNR/silica composites were determined via energydispersive spectroscopy coupled with SEM (SEM/EDS). Swelling Degree To determine the water absorption ability of samples, the swelling test was carried out via immersion in DI water for 24 h. The swollen samples were taken out from the medium and wiped with filter paper. Then, the swollen samples were weighed. The swelling degree was calculated as follows [25]: where Wi is the initial weight of dried samples and Ws is the weight of swollen samples. Contact Angle The contact angle measurement was carried out by dropping water via a microsyringe on the rubber sample. After 20 s, the water droplet on the surface was recorded and the angle formed between the interface was measured using ImageJ software (IJ 1.46r image analyzer software). Thermogravimetric Analysis The thermal properties of the composites were traced via thermogravimetric analysis using a TGA/DSC1 (Mettler Toledo, Columbus, OH, USA). A total of 10 mg of the dried sample was added in a sample pan. The pan without the addition of the sample was used as a reference. The reference and sample pans were placed into the furnace. The measurement was performed in the temperature range of 50 to 600 • C at a heating rate of 10 • C/min under nitrogen atmosphere. Compressive Properties The compressive properties of the crosslinked (PAA-co-PAM)-DPNR/silica composites were determined using a TA.XT plus texture analyzer (Stable Micro systems Ltd., Surrey, UK). The sample was immersed in DI water before measurement and then cut into a cylindrical shape with a diameter of approximately 10 mm and a thickness of 2 mm. The swollen sample was compressed to 80% strain with a fixed strain rate at 0.05 mm/s at room temperature. The compressive modulus was determined from the slope of the stress-strain curve (5-10% strain) [26]. The measurement was repeated six times for each sample. The chitosan beads were prepared following the ionic gelation technique according to J.J. Perez et al. [27]. The 10 g of 3% w/w of chitosan (CS) solution in acetic acid (1% v/v) was dropped into 50 mL of sodium tripolyphosphate (TPP) solution using a syringe with a needle. The concentration of TPP was kept constant at 1% w/v. The sample was continuously stirred at 200 rpm for 4 h. The CS beads were removed from the solution and washed with DI water. The beads were then dried at 35 • C for 24 h. The crosslinked (PAA-co-PAM)-DPNR/silica latex with various silica contents was dropped on CS beads. The weight ratio of crosslinked (PAA-co-PAM)-DPNR/silica and CS was kept constant at 1:15. The sample was dried at 60 • C for 24 h to obtain crosslinked (PAA-co-PAM)-DPNR/silica-coated CS beads. Morphology The morphology of the prepared beads was observed using a scanning electron microscope (SEM). Before SEM observation, the crosslinked (PAA-co-PAM)-DPNR/silicacoated CS beads were immersed in DI water for 1 h. The samples were subsequently freeze-dried overnight. The freeze-dried beads were fixed on a stub using conductive carbon tape. 
Then, they were sputter-coated with gold under vacuum for 3 min. The samples were examined with an FEI Quanta 450 SEM (Philips, Hillsboro, OR, USA).

Water Retention
The water retention test was performed by measuring the change in the remaining weight of water in the sample container. The sample container was filled with the sample and sand (Wo). Sand with a size in the range of 425-625 µm, determined by passing it through a sieve mesh, was dried at 105 °C for 24 h before testing. The different types of beads (5% w) were buried in 30 g of sand in a plastic cup, followed by the addition of DI water (30% w) (Ws). The weight of the sample container was recorded at certain time intervals (Wt). The measurement was performed at 25 and 45 °C. The water retention was calculated as follows [28,29]: Water retention (%) = ((Wt - Wo) / (Ws - Wo)) × 100.

Loading Percentage
To encapsulate potassium nitrate in the CS beads, the dried CS beads were immersed in 20% w potassium nitrate solution for 4 h. Subsequently, the samples were dried at 35 °C for 24 h [27]. The loading percentage was calculated from the weights before and after loading, where Wa is the weight of the dried sample after loading and Wb is the initial weight of the dried beads.

Release Behavior
The release experiment was performed by placing the potassium-nitrate-loaded beads in a dialysis bag (CelluSep T4, molecular cut-off 6-8 kDa), which was then immersed in 100 mL of DI water. The test was performed at 25 °C. The released amount of potassium nitrate was determined by measuring the conductivity of the aqueous medium at various time intervals.
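The swelling degree and water retention defined above can be evaluated with a short script. The sketch below assumes the expressions Swelling degree (%) = (Ws - Wi)/Wi × 100 and Water retention (%) = (Wt - Wo)/(Ws - Wo) × 100 as reconstructed from the variable definitions; all weights are hypothetical example values.

import numpy as np

def swelling_degree(w_dry, w_swollen):
    # (Ws - Wi) / Wi * 100, using the variable names defined in the text
    return 100.0 * (w_swollen - w_dry) / w_dry

def water_retention(w_t, w_0, w_s):
    # remaining fraction of the added water, in percent
    return 100.0 * (np.asarray(w_t, dtype=float) - w_0) / (w_s - w_0)

# hypothetical weights (g): a dried film before/after immersion ...
print(swelling_degree(0.050, 5.50))   # ~10,900 %
# ... and a sample container weighed at several times over 0-96 h
print(water_retention([54.0, 51.8, 49.9, 48.1, 46.9], w_0=45.0, w_s=54.0))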
Conversion, Grafting Efficiency and Grafting Percentage
The MBA-crosslinked (PAA-co-PAM)-DPNR was prepared via graft copolymerization of a comonomer of acrylic acid and acrylamide. The MBA crosslinking agent was introduced during the polymerization process and its content was varied at 0.25 and 0.50 weight percent of the comonomer; the resulting samples were denoted M0.25/P30-DPNR and M0.50/P30-DPNR, respectively. The added crosslinking agent can link the PAA-co-PAM chains and form a crosslinked network of hydrophilic polymers within the natural-rubber-based matrix, making these potential materials for application [30]. The properties of the crosslinked (PAA-co-PAM)-DPNR with different MBA contents were compared to those of the uncrosslinked sample (P30-DPNR). From the results shown in Figure 1, the conversion was found in the range of 89.0 ± 2.0 to 94.1 ± 3.2%. When the crosslinking agent was added, the grafting efficiency and grafting percentage increased compared to P30-DPNR. The grafting efficiency increased from 38.9 ± 2.1 to 81.5 ± 3.5%, and the grafting percentage increased from 9.9 ± 1.7 to 24.4 ± 1.1%, when the crosslinking agent was increased from 0 to 0.50% w of the comonomer. The addition of the crosslinking agent allows for interactions between the PAA-co-PAM molecular chains and can link the ungrafted polymers to the natural-rubber-based structure. This resulted in an increase in the grafting efficiency and grafting percentage of the crosslinked samples.

Morphology
Figure 2 shows the morphology of the uncrosslinked and crosslinked (PAA-co-PAM)-DPNR particles. From the TEM images, the particles exhibited a core-shell morphology. All particles showed dark DPNR cores covered with the (PAA-co-PAM) shell [31]. This confirmed the presence of PAA-co-PAM after modification. After adding the MBA crosslinking agent, it could be clearly seen that at 0.50% w of MBA the formation of linkages around the rubber surface occurred, suggesting that a network structure formed between rubber particles [32]. This can be explained by the reaction mechanism for preparing the grafted natural rubber using the CHP/TEPA system. According to D.J. Lamb et al. [33], free radicals are generated on the natural rubber chains. The graft copolymerization then proceeds on the rubber surface to form the polymer-grafted natural rubber. However, in this approach, ungrafted polymer also arises from the chain transfer reaction. Therefore, in the condition without the addition of a crosslinking agent, the ungrafted PAA-co-PAM can form and cover the natural rubber surface without being linked by chemical bonds, as seen in Figure 2g. When the MBA was added, it reacted with the formed PAA-co-PAM during chain propagation [34]. Therefore, the crosslinks between the grafted PAA-co-PAM chains could also involve ungrafted chains. Nevertheless, the addition of a high amount of crosslinking agent not only chemically bonded the PAA-co-PAM chains on individual natural rubber particles, but might have also crosslinked them to other rubber particles.

Gel Fraction
The gel fraction of the modified natural rubber was studied to investigate the ability of the samples to hold the (PAA-co-PAM) chains in their structure. From Figure 3, it was observed that the gel fraction of the crosslinked (PAA-co-PAM)-DPNR was higher than that of the uncrosslinked sample. The gel fraction of P30-DPNR was found to be 78.8 ± 1.3%. When the MBA crosslinking agent was introduced, the gel fraction increased to 80.6 ± 0.2 and 88.6 ± 0.8% for M0.25/P30-DPNR and M0.50/P30-DPNR, respectively. It can be seen that the addition of more crosslinking agent leads to an increase in the gel fraction due to a larger number of crosslinking sites in the sample. Indeed, natural rubber is a hydrophobic molecule composed of polyisoprene chains, while PAA-co-PAM is a hydrophilic polymer. When the samples are immersed in aqueous medium, the ungrafted PAA-co-PAM segments can dissolve in the aqueous medium, resulting in a reduction in the gel fraction. The crosslinking agent interacted with the PAA-co-PAM chains and linked them to the rubber-based structure via chemical bonds. Thus, it was suggested that the presence of the MBA crosslinking agent can improve structural stability and hold the hydrophilic units in the sample structure.
This would bring advantages for use in application in terms of water adsorption ability [35,36].

Preparation of Crosslinked (PAA-co-PAM)-DPNR/Silica Composites
The crosslinked (PAA-co-PAM)-grafted deproteinized natural rubber/silica composites were prepared to be used as coating materials for fertilizer. As described above, the crosslinked (PAA-co-PAM)-grafted DPNR was first prepared via emulsion graft copolymerization using the CHP/TEPA redox initiator system in the presence of MBA as a crosslinking agent. The crosslinked (PAA-co-PAM)-DPNR latex was then mixed with silica via a wet mixing process to produce natural rubber/silica composites, as shown in Figure 4. According to the results from Section 3.1, M0.50/P30-DPNR, which showed the highest grafting efficiency, grafting percentage and gel fraction, was mixed with different silica contents of 10, 20 and 30 phr for preparing the composites, which were named M0.50/P30-DPNR/Si10, M0.50/P30-DPNR/Si20 and M0.50/P30-DPNR/Si30, respectively.

FTIR Analysis
FTIR was used to characterize the chemical structure and functional groups of the prepared samples, as shown in Figure 5a. The FTIR spectra of the crosslinked (PAA-co-PAM)-DPNR/silica composites were compared to those of DPNR, M0.50/P30-DPNR and silica. The characteristic peaks of DPNR appeared at 1664, 1446, 1374 and 841 cm-1, which were assigned to the vibration bands of C=C, -CH2, -CH3 and =CH, respectively [37]. M0.50/P30-DPNR shows additional peaks compared to DPNR: peaks at 3362 cm-1 (OH stretching), 3206 cm-1 (NH stretching), 1663 cm-1 (C=O stretching), 1613 cm-1 (NH bending), 1563 cm-1 (-COO-) and 1240 cm-1 (C-O stretching) were observed [38]. These correspond to polyacrylic acid and polyacrylamide, indicating successful graft copolymerization. For the silica, its spectrum shows peaks at 1065 cm-1 (Si-O-Si asymmetric stretching), 963 cm-1 (Si-OH bending), 794 cm-1 (Si-O symmetric stretching) and 458 cm-1 (Si-O bending) [39]. In the case of the crosslinked (PAA-co-PAM)-DPNR/silica composites, the characteristic peaks of both M0.50/P30-DPNR and silica appeared. However, the peaks of OH stretching and NH stretching were shifted to 3350 cm-1 and 3204 cm-1, respectively, and the peaks of Si-OH and Si-O-Si were shifted from 963 to 974 cm-1 and from 1065 to 1080 cm-1, respectively. These shifts demonstrate the generation of H-bonding between the modified natural rubber and silica and confirm the incorporation of silica in the composites [40]. Moreover, the peak intensity ratios of the composites, calculated from the peak intensity at 1080 cm-1 of Si-O-Si stretching relative to that at 1374 cm-1 of -CH3 stretching, are shown in Figure 5b. It was found that the intensity ratios increased when the silica contents increased. The increase in the ratio corresponds to the increasing amount of silica added. Figure 6 presents the SEM images of the crosslinked (PAA-co-PAM)-DPNR/silica composites compared to the sample without the addition of silica.
As seen in Figure 6a, the M0.50/P30-DPNR had a smooth surface. When the silica was introduced at 10, 20 and 30 phr, the cross-sectional surfaces of the M0.50/P30-DPNR/silica composites (Figure 6c,e,g) exhibited a rough appearance owing to the presence of silica particles dispersed in the rubber matrix. The silica particles had a size of about 44.02 ± 5.06 nm, as observed in Figure S1 (see supplemental information). When the silica was added into the system at 10 phr, a good dispersion of silica particles was obtained. The strong interaction between the polar functional groups of modified DPNR and silica resulted in the formation of rubber-silica clusters with a size of about 1.62 ± 0.27 µm, which was larger than that of the silica particles. When the amount of silica was increased, more interaction was obtained, resulting in an uneven layer as observed from the SEM images. However, when the silica was increased up to 30 phr, large pits and the agglomeration of silica particles were found.

Swelling Degree
The swelling degree and characteristics of the composites after immersion in water were determined to estimate the water absorption capacity of the composites, as displayed in Figure 7. The M0.50/P30-DPNR film gradually expands in the medium as it can absorb water into its structure. The swelling degree of M0.50/P30-DPNR was found to be 10,905.5 ± 617.9%, while no dimensional change was observed for DPNR when immersed in water due to its hydrophobic character; the swelling degree of DPNR was only 4.3 ± 0.5%. Thus, the existence of crosslinked PAA-co-PAM in natural-rubber-based materials can promote the water adsorption capacity due to the formation of a crosslinked network structure, together with polar functional groups such as carboxylic acid and amide groups present in the chemical structure of PAA-co-PAM, which provide highly effective absorption. When the silica was added, the composites showed a lower swelling degree than that of M0.50/P30-DPNR. The swelling degrees of M0.50/P30-DPNR/Si10, M0.50/P30-DPNR/Si20 and M0.50/P30-DPNR/Si30 were 8132.3 ± 483.8, 5232.1 ± 435.6 and 2217.3 ± 182.0%, respectively. The swelling degree decreased when the silica content increased. This could be explained by the fact that the polar functional groups of PAA-co-PAM, which play an important role in the adsorption process, interact strongly with silica and give more compact structures. This might lead to an increase in the crosslink density in the composites. A higher crosslink density results in a lower swelling degree, as has been observed in other research works [41]. In addition, no disintegration of any of the samples in aqueous medium was found. They showed good physical stability and a water adsorption ability that would be useful in application.

Figure 8 presents the contact angle of the M0.50/P30-DPNR and M0.50/P30-DPNR/silica composites with various silica contents. The contact angle of DPNR was reported in the literature to be about 96.0° [42]. As can be seen from the result, the contact angle of M0.50/P30-DPNR was 20.2°, which was lower than that of DPNR. The decrease in the contact angle indicated that M0.50/P30-DPNR was more hydrophilic than DPNR. It is suggested that the presence of PAA-co-PAM after modification can enhance the hydrophilicity of the natural-rubber-based structure. For the M0.50/P30-DPNR/silica composites, the contact angle values were higher than those of M0.50/P30-DPNR and tended to increase with increasing silica content. The contact angle values of M0.50/P30-DPNR/Si10, M0.50/P30-DPNR/Si20 and M0.50/P30-DPNR/Si30 were found to be 45.8°, 65.2° and 69.1°, respectively. These results corresponded to the swelling experiment. The composites with more interaction between the polar groups of PAA-co-PAM and silica were more hydrophobic and showed a lower swelling degree. However, their contact angle values were still lower than that of DPNR, suggesting that they retained hydrophilic behavior in their structure.

Thermal Properties
The TGA and DTG thermograms of M0.50/P30-DPNR and the M0.50/P30-DPNR/silica composites with various silica contents are displayed in Figure 9. The decomposition of the samples in the temperature range of 50 to 600 °C was determined. The decomposition at 70-175 °C was attributed to the loss of water in the samples. The weight loss between 175 and 290 °C corresponded to the decomposition of the carboxylic acid and amide side groups of the PAA-co-PAM chains [43]. The DPNR and the polymer backbone of PAA-co-PAM decomposed at temperatures between 336 and 475 °C. From the thermograms, the temperature at 10% weight loss (T10), the temperature at the maximum process rate (Tmax) and the residue of the degradation process are reported in Table 2. It was observed that the presence of silica caused an increase in the decomposition temperature of the composites. Their T10 and Tmax values increased when the silica contents increased. Similar results have been reported in other research works [44]. The shift of T10 and Tmax to higher temperatures demonstrated that the composites were stable over wide temperature ranges. This indicates that the thermal stability of the composites was improved because the silica particles dispersed in the natural rubber matrix could absorb heat energy and retard the heat transfer to the natural rubber [45]. Moreover, the residues of the M0.50/P30-DPNR/silica composites with 10, 20 and 30 phr of silica were found at 10.47, 13.68 and 26.03%, respectively. The residues increased with increasing silica content and were higher than that of M0.50/P30-DPNR (4.52%) due to the higher content of silica in the composites.

Compressive Properties
The mechanical properties of the composites are also important characteristics for the application. Since the composites are used as coating materials for fertilizer in agricultural applications, they might be soaked in water and buried in the soil. Therefore, the compressive properties of the composites were determined on samples swollen with water.
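Before turning to the measured curves, the modulus extraction stated in the experimental section (slope of the stress-strain curve over 5-10% strain) can be scripted as in the minimal Python sketch below; the curve used here is synthetic and purely illustrative.

import numpy as np

def compressive_modulus(strain, stress, fit_range=(0.05, 0.10)):
    # slope of the stress-strain curve between 5 and 10 % strain, in the stress units used
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    mask = (strain >= fit_range[0]) & (strain <= fit_range[1])
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# synthetic curve (strain dimensionless, stress in MPa), for illustration only
strain = np.linspace(0.0, 0.8, 81)
stress = 0.9 * strain + 12.0 * strain ** 4
print(compressive_modulus(strain, stress))   # ~0.9 MPa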
Figure 10 displays the stress-strain curve of M0.50/P30-DPNR and M0.50/P30-DPNR/silica composites with different silica contents. The measurement was performed under compression mode and 0 to 80% strain. From the stress-strain curve, it was found that the stress increased with the increasing of strain. The compressive strength at 80% strain and compressive modulus are summarized in Table 3. For M0.50/P30-DPNR, the compressive strength and compressive modulus were 10.71 and 0.71 MPa, respectively. In the case of M0.50/P30-DPNR/silica composites, their compressive strength and compressive modulus were higher than those of M0.50/P30-DPNR and seemed to increase with an increase in silica contents. When the silica was varied from 0 to 20 phr, the compressive strength increased from 10.71 ± 0.94 to 17.29 ± 1.99 MPa and the compressive modulus increased from 0.71 ± 0.13 to 1.10 ± 0.13 MPa. The enhancement in the compressive strength and compressive modulus was obtained by the addition of silica particles. This was because of the replacement of the rubber matrix with the rigid silica particles. Furthermore, when the silica was increased, a stronger interaction between modified DPNR and silica resulted, leading to an improvement in mechanical properties. However, when the silica was added to 30 phr, the compressive strength and compressive modulus of the composites were decreased to 14.27 ± 1.64 and 0.98 ± 0.23 MPa, respectively. This was probably because the addition of large amounts of silica caused the agglomeration of silica, resulting in undesired mechanical properties. Therefore, M0.50/P30-DPNR/Si20 showed the highest compressive strength and compressive modulus, which increased by 1.61 and 1.55 times compared to M0.50/P30-DPNR, respectively. In addition, all samples can maintain their structures without the breakage of samples after increasing the compression strain up to 80%. Thus, these composites would be useful for application. The CS beads were prepared via the ionic gelation method. The different types of composites were used to coat onto CS beads. The physical appearance of crosslinked (PAA-co-PAM)-DPNR/silica-coated CS beads in the dry state is shown in Figure 11. All beads showed a spherical shape with a light-yellow color. Figure 12 shows the morphology of CS beads coated with different types of modified natural rubber/silica composites after immersion in water and freeze drying. The CS beads had a spherical shape with a size of about 1097.2 ± 149.3 µm. For the crosslinked (PAA-co-PAM)-DPNR/silica-coated CS beads with various silica contents, the size was higher than that of neat CS beads due to the coverage of high water-absorbing materials on CS beads. Their size was found to be in the range of 1246.2 ± 108.1 to 1398.9 ± 70.4 µm. Since the coating materials on CS beads exhibited high water absorption capacity, the porous morphology appeared on their surface. The pores on the surface of beads are formed via the sublimation of trapped water within the crosslinked network structure of the modified natural rubber/silica composite film after freeze drying [46]. The macro-pores formed on the surface of M0.50/P30-DPNR/CS beads with an average diameter of about 56.62 ± 8.12 µm were observed due to the access of water in their structure [47]. The interconnected porous structure was also found, and there were many small voids with a size of about 6.74 ± 2.96 µm on the wall of the samples. 
When the silica contents in the composites increased, it could be seen that the pore size tended to decrease. The average pore size was found to be 26.0 ± 5.6, 23.8 ± 5.1 and 18.2 ± 3.9 µm for the composites with 10, 20 and 30 phr of silica, respectively. The increase in the interaction of silica and modified natural rubber can reduce the spaces to adsorb water molecules [48]. These results corresponded to the decrease in the swelling degree when the silica was increased. Moreover, the pore wall thickness seemed to increase because the silica particles were buried in the wall of the composites. The porous framework became a closed-cell structure when the silica in the composites increased. Water Retention The water retention abilities of samples with different temperatures at 25 and 45 • C were examined, as displayed in Figure 13. To compare the water retention ability of the samples, the different types of beads were buried in sand. The amount of water remaining during the period time was collected. In this study, the sand without beads was used as a control experiment. From the results, it was observed that the water retention decreased with storage time because of the loss in water in the sample holder from evaporation. The lowest values of water retention were observed in the case of the control experiment for all of the studied time periods at the temperature of both 25 and 45 • C. At 25 • C, the water retention value was 21.55 ± 1.48% for a control sample after 96 h. However, at 45 • C, its water retention value was only 10.99 ± 3.34% after 36 h because the evaporation rate of water increased at a high temperature. After burying the beads in sand, it was found that the existence of the beads resulted in an increase in water retention capacity. As can be seen from the results, the coated CS beads had better water retention capacity compared to the uncoated CS beads. It was suggested that the prepared coating materials had the ability to absorb water and were also effective materials for retaining water in their structure. It was also observed that the CS coated with M0.50/P30-DPNR/silica composites showed higher water retention than that of M0.50/P30-DPNR. The water retention increased with an increase in silica contents. For example, at 25 • C, the water retention of M0.50/P30-DPNR/Si10/CS and M0.50/P30-DPNR/Si20/CS was 29.64 ± 1.09 and 29.98 ± 1.81%, respectively, while the water retention of M0.50/P30-DPNR/CS was 27.67 ± 1.27% after 96 h. The water retention of M0.50/P30-DPNR/Si20/CS increased by 8.35 and 39.12% when compared to M0.50/P30-DPNR/CS and the control experiment at a temperature of 25 • C. Moreover, its water retention increased by 20.48 and 61.15% when compared to M0.50/P30-DPNR/CS and the control experiment at 45 • C. The increase in water retention capacity suggests that the presence of silica can improve mechanical stability to the composites because of the strong H-bonding interactions between the silica and polar groups of modified natural rubber [49]. However, the water retention of M0.50/P30-DPNR/Si30/CS was not much different from M0.50/P30-DPNR/Si20/CS. This is because the excess of silica loading leads to forming a silica-silica interaction, and the agglomeration of silica is obtained. This resulted in the decrease in its properties, including water adsorption and mechanical properties, as described in the above section. 
Therefore, it is noted that the modification of natural rubber with PAA-co-PAM can improve water adsorption, and the incorporation of silica can enhance the mechanical properties of the composites for holding water in the structure. These would have advantages for reducing water consumption, as these materials can maintain their structure with high water-absorbing and -retaining abilities [50].

Loading Percentage and Release Behavior
The loading capacity of KNO3 in the CS beads was determined. The KNO3 could be loaded in the CS beads with a loading percentage of 40.55 ± 1.03%. This result was comparable to that reported by J.J. Perez [27]. Then, the KNO3-loaded CS beads were coated with the various types of natural rubber/silica composites. The release behavior of KNO3 from the different types of beads in water (pH ~6) was determined because this pH is in the range of the optimal pH (5.5 to 6.5) for the growth of most plants, such as ginger, cassava, maize, wheat, French bean and tomato [51]. The fertilizer release profiles are shown in Figure 14. For comparison, the uncoated sample (CS) was also investigated. From the release profile, the released amount of KNO3 from CS reached 47.40 ± 1.37% after 2 days and 79.41 ± 2.01% after 14 days. A delayed release of KNO3 was observed when the beads were coated with the prepared natural-rubber-based composites. The release percentages of KNO3 from the modified-natural-rubber/silica-coated CS beads were lower than those of the uncoated sample over all studied time periods. Thus, the coating with natural rubber/silica composites provided the controlled release of KNO3 in aqueous medium. The coating agent acted as a protective layer against the fast release of fertilizer [52]. After 2 days, the release percentages were found to be 45.68 ± 0.71, 44.61 ± 0.88, 22.32 ± 1.30 and 15.26 ± 0.56% for the CS beads coated with M0.50/P30-DPNR/silica composites containing 0, 10, 20 and 30 phr of silica, respectively. In order to study the release mechanism, different kinetic models, namely the zero-order kinetic model, the first-order kinetic model, the Higuchi model and the Korsmeyer-Peppas model, were applied to fit the release data. The plots of the various release kinetics are displayed in Figure S2 (see supplemental information) and the release kinetic parameters are presented in Table 4. The result shows that the release mechanism was best fitted by the Korsmeyer-Peppas kinetic model, with the highest value of the correlation coefficient (R2). According to this model, the equation is Qt = k t^n, where Qt is the fraction of fertilizer released at time t, k is the release rate constant and n is the release exponent indicating the characteristics of the release mechanism. For n < 0.5, the transport mechanism follows Fickian diffusion, whereby diffusion is the main release mechanism. In the case of n > 1, the transport mechanism is classified as Super Case II transport, in which the nutrient transport mechanism is associated with the relaxation process of hydrophilic polymers upon swelling in water. From this result, 0.5 < n < 1 was obtained, indicating a non-Fickian transport release mechanism. This demonstrated that the release occurred through both processes [53].
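A minimal sketch of the Korsmeyer-Peppas fit described above is given below in Python; the release data are hypothetical and only illustrate how k, n and R2 can be obtained, and the same routine can be repeated for the zero-order, first-order and Higuchi expressions.

import numpy as np
from scipy.optimize import curve_fit

# hypothetical cumulative-release data (fraction of KNO3 released vs. time in days)
t = np.array([0.25, 0.5, 1, 2, 4, 7, 10, 14])
Q = np.array([0.05, 0.07, 0.10, 0.15, 0.21, 0.29, 0.36, 0.43])

def korsmeyer_peppas(t, k, n):
    return k * t ** n

(k, n), _ = curve_fit(korsmeyer_peppas, t, Q, p0=(0.2, 0.5))
residuals = Q - korsmeyer_peppas(t, k, n)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((Q - Q.mean()) ** 2)
print(f"k = {k:.3f}, n = {n:.3f}, R^2 = {r2:.4f}")   # n between 0.5 and 1 indicates non-Fickian transport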
The coating materials influence the retardation of the release and the diffusion of the encapsulated fertilizer through the coating layer. In this case, a higher amount of silica in the composites gave a lower release percentage of KNO3. With stronger interactions between the crosslinked (PAA-co-PAM)-DPNR and silica the coating became more hydrophobic, a characteristic that provided a stronger barrier against the release of KNO3 [54]. This resulted in a reduction in the pore spaces in the network structure and a restriction of the access of water. From these characteristics, the KNO3 can be encapsulated in the CS beads while the crosslinked (PAA-co-PAM)-DPNR/silica composites, as the outer layers, allow water and dissolved substances to pass through their network structure at a low diffusion rate. Therefore, these materials can retard the release and enhance the water holding capacity, which would have potential for applications [55].

Conclusions
The crosslinked (PAA-co-PAM)-DPNR was successfully prepared via emulsion graft copolymerization in the presence of MBA as a crosslinking agent. The grafting efficiency and grafting percentage increased from 38.9 ± 2.1 to 81.5 ± 3.5% and from 9.9 ± 1.7 to 24.4 ± 1.1%, respectively, when the MBA was increased from 0 to 0.50% w of the comonomer. The addition of the crosslinking agent can hold the hydrophilic PAA-co-PAM chains on the natural-rubber-based material and form a network structure with the ability to absorb water. The crosslinked (PAA-co-PAM)-DPNR was mixed with silica to prepare natural rubber/silica composites. Due to the strong interaction between the polar groups of the crosslinked (PAA-co-PAM)-DPNR and silica, the composites became more hydrophobic, as determined via contact angle measurements. The swelling degree in water of the crosslinked (PAA-co-PAM)-DPNR/silica composites was found to be in the range of 2217.3 ± 182.0 to 8132.3 ± 483.8% when the silica was added at 10 to 30 phr. The presence of silica was found to improve the mechanical properties of the composites. For the crosslinked (PAA-co-PAM)-DPNR incorporating 20 phr of silica, the compressive strength and compressive modulus increased by 1.61 and 1.55 times relative to the silica-free sample, respectively. The composites exhibited good structural stability, without breakage of the samples, after immersion in water and compression. These composites were employed as coating materials for fertilizer. The crosslinked (PAA-co-PAM)-DPNR/silica composites were coated on potassium-nitrate-loaded chitosan beads. They had the ability to retain water molecules and exhibited slower potassium nitrate release. Therefore, these composites show water adsorption, water retention and controlled release behavior that would be useful for agricultural application.
2023-04-05T15:06:38.736Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "f8b4db63078eda2360cc38744b6291524e15cd8d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/15/7/1770/pdf?version=1680424088", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78aaa7268026ecdf659e5a6dac99f982eeb2752f", "s2fieldsofstudy": [ "Materials Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
165164137
pes2o/s2orc
v3-fos-license
Analytical treatment of the interaction quench dynamics of two bosons in a two-dimensional harmonic trap We investigate the quantum dynamics of two bosons, trapped in a two-dimensional harmonic trap, upon quenching arbitrarily their interaction strength thereby covering the entire energy spectrum. Utilizing the exact analytical solution of the stationary system we derive a closed analytical form of the expansion coefficients of the time-evolved two-body wavefunction, whose dynamics is determined by an expansion over the postquench eigenstates. The emergent dynamical response of the system is analyzed in detail by inspecting several observables such as the fidelity, the reduced one-body densities, the radial probability density of the relative wavefunction in both real and momentum space as well as the Tan contact unveiling the existence of short range two-body correlations. It is found that when the system is initialized in its bound state it is perturbed in the most efficient manner compared to any other initial configuration. Moreover, starting from an interacting ground state the two-boson response is enhanced for quenches towards the non-interacting limit. I. INTRODUCTION Ultracold quantum gases provide an excellent and highly controllable testbed for realizing a multitude of systems without the inherent complexity of their condensed matter counterparts [1]. Key features of ultracold atoms include the ability to manipulate their interparticle interactions by employing Feshbach resonances [2,3], to tune the dimensionality of the system [4,5], as well as to trap few-body ensembles possessing unique properties [6][7][8][9][10]. Two-dimensional (2D) systems are of particular interest due to their peculiar scattering properties, the emergent phase transitions, such as the Berezinskii-Kosterlitz-Thouless transition [11][12][13][14][15][16] and the existence of long-range thermal fluctuations in the homogeneous case. These thermal fluctuations in turn prohibit the development of a condensed phase, but can allow the occurence of a residual quasi-ordered state [17]. One among the few solvable quantum problems, is the system of two ultracold atoms confined in an isotropic harmonic oscillator. Here the two atoms interact via a contact pseudo-potential where only s-wave scattering is taken into account [18], an approximation which is valid at ultralow temperatures where two-body interactions dominate [19]. The stationary properties of this system have been extensively studied for various dimensionalities and for arbitrary values of the coupling strength [20][21][22][23]. Generalizations have also been reported including, for instance, the involvement of anisotropic traps [24], higher partial waves [25,26] and very recently long-range interactions [27] and hard-core interaction potentials [28]. Remarkably enough, exact solutions of few-body setups have also been obtained regarding the stationary properties of three harmonically trapped identical atoms in all dimensions [29][30][31][32][33][34]. A quench of one of the intrinsic system's parameters is the most simple way to drive it out-of-equilibrium [35]. Quenches of 87 Rb condensates confined in a 2D pancake geometry have been employed, for instance, by changing abruptly the trapping frequency to excite collective breathing modes [36,37] in line with the theoretical predictions [38,39]. 
On the contrary, the breathing frequency of two-dimensional Fermi gases has been recently measured experimentally [40,41] and found to deviate from theoretical predictions at strong interactions, a behavior called quantum anomaly. Also, oscillations of the density fluctuations being reminiscent of the Sakharov oscillations [42] have been observed by quenching the interparticle repulsion. Furthermore, it has been shown that the dynamics of an expanding Bose gas when switching off the external trap leads to the fast and slow equilibration of the atomic sample in one-and two-spatial dimensions respectively [43]. Moreover, the collisional dynamics of two 6 Li atoms has been experimentally monitored after quenching the frequencies of a three-dimensional harmonic trap [44]. Turning to two harmonically trapped bosons, the existing analytical solutions have been employed in order to track the interaction quench dynamics mainly in one- [45][46][47][48], but also in three-dimensional systems [49]. Focusing on a single dimension, an analytical expression regarding the eigenstate transition amplitudes after the quench has been derived [45]. Moreover, by utilizing the Bose-Fermi mapping theorem [50,51] a closed form of the time-evolved two-body wavefunction for quenches towards the infinite interaction strength has been obtained [47], observing also a dynamical crossover from bosonic to fermionic properties. Besides these investigations the interaction quench dynamics of the two-boson system in two spatial dimensions employing an analytical treatment has not been addressed. Here, the existence of a bound state for all interaction strengths might be crucial giving rise to a very different dynamics compared to its one-dimensional ana-logue. Also, regarding the strongly interacting regime the Bose-Fermi theorem does not hold. Therefore it is not clear whether signatures of fermionic properties can be unveiled although there are some suggestions for their existence [52][53][54]. Another interesting feature is the inherent analogy between three bosons interacting via a three-body force in one-dimension and two bosons interacting via a two-body force in two spatial dimensions [55][56][57][58][59]. Therefore, our work can provide additional hints on the largely unexplored three-body dynamics of three bosons in one spatial dimension [60]. The present investigation will enable us to unravel the role of the different eigenstates for the dynamical response of the system and might inspire future studies examining state transfer processes [61,62] which are currently mainly restricted to one-dimensional setups. In this work we study the interaction quench dynamics of two harmonically confined bosons in two spatial dimensions for arbitrary interaction strengths. To set the stage, we briefly review the analytical solution of the system for an arbitrary stationary eigenstate and discuss the corresponding two-body energy eigenspectrum [20]. Subsequently, the time-evolving two-body wavefunction is derived as an expansion over the postquench eigenstates of the system with the expansion coefficients acquiring a closed form. The quench-induced dynamical response of the system is showcased via inspecting the fidelity evolution. The underlying eigenstate transitions that predominantly participate in the dynamics are identified in the fidelity spectrum [63][64][65]. 
It is found that initializing the system in its ground state, characterized by finite interactions of either sign, it is driven more efficiently out-of-equilibrium when employing an interaction quench in the vicinity of the non-interacting limit. Due to the interaction quench the two bosons perform a breathing motion, visualized in the temporal evolution of the single-particle density and the radial probability density in both real and momentum space. These observables develop characteristic structures which signal the participation of the bound and energetically higher-lying excited states of the postquench system. The dynamics of the short-range correlations is captured by the two-body contact, which is found to perform an oscillatory motion possessing a multitude of frequencies. In all cases the predominantly involved frequency corresponds to the energy difference between the bound and the ground state. Additionally, the amplitude of these oscillations is enhanced when quenching the system from weak to infinite interactions. Moreover, it is shown that the system's dynamical response crucially depends on the initial state and in particular starting from an energetically higher excited state, the system is perturbed to a lesser extent, and a fewer amount of postquench eigenstates contribute in the dynamics [66][67][68][69][70]. However, if the quench is performed from the bound state the system is perturbed in the most efficient manner compared to any other initial state configuration. Finally, we observe that quenching the system from its ground state at zero interactions to-wards the infinitely strong ones the time-evolved wavefunction becomes almost orthogonal to the initial one at certain time intervals. This work is structured as follows. In Sec. II we introduce our setup, provide a brief summary of its energy spectrum and most importantly derive a closed form of the time-evolved wavefunction discussing also basic observables. Subsequently, we investigate the interaction quench dynamics from attractive to repulsive interactions in Sec. III and vice versa in Sec. IV as well as from zero to infinitely large coupling strengths in Sec. V. We summarize our results and provide an outlook in Sec. VI. A. Setup and its stationary solutions We consider two ultracold bosons trapped in a 2D isotropic harmonic trap. The interparticle interaction is modeled by a contact s-wave pseudo-potential, which is an adequate approximation within the ultracold regime. The Hamiltonian of the system, employing harmonic oscillator units ( = m = ω = 1), reads where r 1 and r 2 denote the spatial coordinates of each boson. Note that the prefactor 2 is used for later convenience in the calculations. The contact regularized pseudo-potential can be expressed as [71] V pp (r) = − πδ(r) with Λ being an arbitrary dimensionful parameter possessing the dimension of a wavevector and A = e γ /2 where γ = 0.577 . . . is the Euler-Mascheroni constant. We remark that the parameter Λ does not affect the value of any observable or the energies and eigenstates of the system as it has been shown in [16,71]. Furthermore, the 2D s-wave scattering length is given by a. To proceed, we perform a separation of variables in terms of the center-of-mass, R = 1 √ 2 (r 1 + r 2 ), and the relative coordinates ρ = 1 √ 2 (r 1 − r 2 ). 
Employing this separation, the Hamiltonian (1) acquires the form H = H_CM + H_rel, with H_CM = -(1/2)∇_R² + (1/2)R² (3) being the Hamiltonian of the center-of-mass and H_rel = -(1/2)∇_ρ² + (1/2)ρ² + 2V_pp(√2 ρ) (4) the Hamiltonian corresponding to the motion in the relative coordinate frame. As a result, the Schrödinger equation can be cast into the form HΨ(r_1, r_2) = EΨ(r_1, r_2). Here the total energy of the system has two contributions, namely E = E_CM + E_rel, and the system's wavefunction is a product of a center-of-mass and a relative coordinate part, i.e. Ψ(r_1, r_2) = Ψ_CM(R)Ψ_rel(ρ). Since the center-of-mass Hamiltonian H_CM is interaction independent [see Eq. (3)] its eigenstates correspond to the well-known non-interacting 2D harmonic oscillator states [72]. We assume that the center-of-mass wavefunction takes the form Ψ_CM(R) = e^{-R²/2}/√π, namely the non-interacting ground state of the 2D harmonic oscillator. Since we are interested in the interaction quench dynamics of the two interacting bosons we omit the center-of-mass wavefunction in what follows for simplicity. Following the above-mentioned separation of coordinates, the problem boils down to solving the relative part of the Hamiltonian, H_rel, which is interaction dependent. For this purpose, we assume an ansatz for the relative wavefunction, which involves an expansion over the non-interacting energy eigenstates of the 2D harmonic oscillator, φ_{n,m}(ρ, θ) = √(Γ(n+1)/(π Γ(n+|m|+1))) e^{imθ} ρ^{|m|} e^{-ρ²/2} L_n^{(|m|)}(ρ²). In this expression, Γ(n) is the gamma function while L_n^{(m)} denotes the generalized Laguerre polynomial of degree n and angular momentum m. Also, ρ = (ρ, θ), where ρ is the relative polar coordinate and θ is the relative angle. The energy of the non-interacting 2D harmonic oscillator eigenstates in harmonic oscillator units is E_rel,n,m = 2n + |m| + 1 [72]. Within our relative coordinate wavefunction ansatz [see Eq. (6) below] we will employ, however, only those states that are affected by the pseudo-potential and thus have a non-vanishing value at ρ = 0. These are the states with bosonic symmetry and m = 0, i.e. zero angular momentum. The states with odd m are fermionic, since under the exchange θ → θ - π they acquire an extra minus sign due to the term e^{imθ}. Therefore, the ansatz for the relative wavefunction reads Ψ_rel(ρ) = Σ_{n=0}^∞ c_n φ_{n,0}(ρ) (6), where the summation is performed over the principal quantum number n and we omit the angle θ since only the states with m = 0 are taken into account. Note that this ansatz has already been reported previously, e.g. in Refs. [20,45]. In order to determine the expansion coefficients c_n, we plug Eq. (6) into the Schrödinger equation that H_rel satisfies and project the resulting equation onto the state φ_n*(ρ). Following this procedure we arrive at Eq. (7), whose right hand side is related to a normalization factor of the relative wavefunction |Ψ_rel⟩. Indeed, it has been shown [20,45] that the coefficients take the form c_{n,i} = N_{ν_i}/(n - ν_i) (8), with N_{ν_i} = [ψ^{(1)}(-ν_i)]^{-1/2} being a normalization constant and ψ^{(1)}(z) the trigamma function. By inserting this expression of c_n into Eq. (6), we can determine the relative wavefunction. This can be achieved by making use of the generating function of the Laguerre polynomials, i.e. Σ_{n=0}^∞ t^n L_n(x) = (1/(1-t)) e^{-tx/(1-t)}. Thus, the relative wavefunction takes the form [33] Ψ_{ν_i}(ρ) ∝ e^{-ρ²/2} U(-ν_i, 1, ρ²) (9), where U(a, b, z) refers to the confluent hypergeometric function of the second kind (also known as Tricomi's function) and 2ν_i + 1 is the energy of the i = 0, 1, . . . interacting eigenstate [73]. In what follows we will drop the subscript rel and denote these relative coordinate states by |Ψ_νi⟩. It is important to note at this point that this relative wavefunction ansatz solves also the problem of three one-dimensional harmonically trapped bosons interacting via three-body forces, see e.g. Ref. [60] for more details.
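A short numerical check of the normalization entering the coefficients above can be done in Python; the sketch below assumes the reconstructed form c_{n,i} = N_{ν_i}/(n - ν_i) of Eq. (8), takes an arbitrary bound-state-like value of ν (so that -ν > 0), and verifies that the sum of c_n² approaches unity when N_{ν_i} = [ψ^{(1)}(-ν_i)]^{-1/2}.

import numpy as np
from scipy.special import polygamma

nu = -0.8                                   # arbitrary example value with -nu > 0
N = 1.0 / np.sqrt(polygamma(1, -nu))        # trigamma psi^(1)(-nu)
n = np.arange(0, 2_000_000)
c = N / (n - nu)
print(np.sum(c ** 2))                       # -> 1 up to the truncation error of the sum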
It is important to note at this point that this relative wavefunction ansatz solves also the problem of three one-dimensional harmonically trapped bosons interacting via three-body forces, see e.g. Ref. [60] for more details. To find the energy spectrum of H rel , we employ Eq. (7) along with the form of c n,i = . Note that in order to determine the right hand side of Eq. (7), we make use of the behavior of the relative wavefunction (9) close to ρ = 0. In this way, we obtain the following algebraic equation regarding the energy of the relative coordinates [20,21], 2ν i + 1, where ψ(x) is the digamma function. Note here that a different form of the algebraic Eq. (10) can be found in [20] and stems from a different definition of the scattering length a [21]. It is also important to emphasize that the energy spectrum given by Eq. (10) is independent of the form of the pseudo-potential, V pp (r), i.e. independent of Λ, A, or any short range potential, as long as its range is much smaller than the harmonic oscillator length [21]. Denoting a 0 ≡ a 2 e γ , the algebraic Eq. (10) can be casted into the simpler form ψ(−ν i ) = ln 1 . Also, we define the interparticle interaction strength [5,15,20,21,74,75] to be g = 1 The energy E rel of the two bosons as a function of the interparticle interaction strength is presented in Fig. 1. As it can be seen, for g = 0 E rel has the simple form E rel,n = 2n + 1, and thus we recover the non-interacting energy spectrum of a 2D harmonic oscillator with zero angular momentum [19,72]. In this case the energy spacing between two consecutive eigenenergies is independent of n, i.e. ∆E = E rel,n+1 − E rel,n = 2. For repulsive (attractive) interactions, the energy is increased (lowered) with respect to its value at g = 0. Also and in contrast to the one-dimensional case, there are bound states |Ψ ν0 , namely eigenstates characterized by negative energy, in both interaction regimes. Note that herein we shall refer to these eigenstates with negative energy as bound states (ν 0 ) whilst the corresponding eigenstates with positive energy in increasing energetic order will be denoted e.g. as the first (ν 1 ), second (ν 2 ) etc eigenstates and called ground, first excited state etc. The presence of these bound states can be attributed to the existence of the centripetal term − 1 4r 2 , in the 2D radial Schrödinger equation [72], which supports a bound state even for weakly attractive potentials, in contrast to the 3D case [14,76]. These energy states, ν 0 , correspond to the molecular branch of two cold atoms in two dimensions. This is clearly captured by the lowest energy branch of Fig. 1, as has been demonstrated in Ref. [33]. Note that due to a different definition of the coupling constant compared to Ref. [33], which possesses a bijective mapping to our definition of the coupling strength [75], the molecular branch maps to the bound states (ν 0 ) herein in both the repulsive and the attractive interaction regime. To further appreciate the influence of these bound states we also provide in the insets of Fig. 1 their radial probability densities 2πρ|Ψ| 2 [14] for various interaction strengths as well as the radial probability density of the ground state |Ψ ν1 at g = 0.3. In the repulsive regime of interactions (right panel) the fullwidth-at-half-maximum of 2πρ|Ψ| 2 is smaller than the one of the attractive regime (left panel). This behavior is caused by the much stronger energy of the bound state at g > 0 compared to the g < 0 case. 
For large interaction strengths, |g| > 8, the widths of 2πρ|Ψ| 2 tend to be the same. Another interesting feature of the 2D energy spectrum is the occurrence of a boundary signifying a crossover from the bound to the ground state (ν 0 → ν 1 ) at g = −0.51, see the corresponding vertical line in Fig. 1. This means that the negative eigenenergy of |Ψ ν0 crosses the zero energy axis and becomes the positive eigenenergy of |Ψ ν1 at g = −0.51. This crossover is captured, for instance, by 2πρ|Ψ| 2 which changes from a delocalized [e.g. at g = 0.3] to a localized [e.g. at g = −1] distribution. The existence of this boundary affects the labeling of all the states and therefore ν i becomes ν i+1 as it is crossed from the repulsive side of interactions. We note here that with |Ψ ν1 [|Ψ ν0 ] we label the ground [bound] state and with |Ψ νi , i > 1, the corresponding excited states. For repulsive interactions the energy of the bound state diverges at g = 0 as −1/a 2 0 [26,76] or as −2e 1/g in terms of the interparticle strength, while it approaches its asymptotic value for very strong interactions [see Fig. 1]. The two bound states share the same asymptotic value E rel = −1.923264 at g → ±∞. We remark that this behavior of the bound state in the vicinity of g = 0 is the same as the one of the so-called universal bound state of two cold atoms in two dimensions in the absence of a trap [26]. We also note that the states |Ψ νi with i = 0, approach their asymptotic values faster (being close to their asymptotic value already for g = 2) than the bound states. The asymptotic values are determined via the algebraic equation ψ(−ν i ) = 0. Moreover, it can be shown that approximately the positive energy in the infinite interaction limit is given by the formula E rel ≈ 2n + 1 − 2 ln(n) + O (ln n) −2 when n 1 [73]. B. Time-evolution of basic observables To study the dynamics of the two harmonically trapped bosons, we perform an interaction quench starting from a stationary state of the system, |Ψ in νi (0) , at g in to the value g f . Let us also remark in passing that the dynamics of two bosons in a 2D harmonic trap employing an analytical treatment has not yet been reported. The time-evolution of the system's initial wavefunction reads where |Ψ f νj denotes the j-th eigenstate of the postquench HamiltonianĤ with energy (2ν f j + 1). Note that the indices in and f indicate that the corresponding quantities of interest refer to the initial (prequench) and final (postquench) state of the system respectively. Moreover, the overlap coefficients, Ψ f νj |Ψ in νi (0) , between the initial wavefunction and a final eigenstate |Ψ f νj determine the degree of participation of this postquench eigenstate in the dynamics. Recall also here that the center-of-mass wavefunction, Ψ CM (R), is not included in Eq. (12) since the latter is not affected by the quench [see also Sec. II A] and therefore does not play any role in the description of the dynamics. It can be shown that initializing the system in the eigenstate |Ψ in νi at g in , the probability to occupy the eigenstate |Ψ f νj after the quench is given by with G p,q m,n z a 1 , . . . a p b 1 , . . . b q being the Meijer G-function [77]. Remarkably enough, the coefficients d ν f j ,ν in i can also be expressed in a much simpler form if we make use of the ansatz of Eq. (6). 
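Before the simpler closed form of the overlaps is written down below, they can be cross-checked by brute-force radial quadrature, d = 2π ∫ ρ Ψ_{ν_f}(ρ) Ψ_{ν_in}(ρ) dρ, since the center-of-mass factor is unaffected by the quench. The sketch reuses the assumed hyperu form of the eigenstates and the digamma root-finder from the previous sketches; the pre- and postquench right-hand sides are placeholders standing in for g_in and g_f. The cumulative sum of |d|² illustrates the convergence of the truncated expansion of Eq. (12).

```python
# Sketch: overlap coefficients by radial quadrature (cross-check of Eqs. (13)-(14)).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import digamma, hyperu

def spectrum(rhs, n_excited=8, eps=1e-9, nu_min=-60.0):
    f = lambda nu: digamma(-nu) - rhs
    return np.array([brentq(f, nu_min, -eps)] +
                    [brentq(f, n + eps, n + 1 - eps) for n in range(n_excited)])

def eigenstate(nu):
    raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
    n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
    return lambda r: raw(r)/np.sqrt(n2)

rhs_in, rhs_f = 2.0, -2.0           # placeholders standing in for the pre/postquench couplings
nu_in = spectrum(rhs_in)[1]         # prequench ground state (first positive-energy branch)
psi_in = eigenstate(nu_in)

d = []
for nu_f in spectrum(rhs_f):
    psi_f = eigenstate(nu_f)
    d.append(quad(lambda r: 2*np.pi*r*psi_in(r)*psi_f(r), 0, np.inf)[0])
d = np.array(d)
print("|d_j|^2        :", np.round(d**2, 4))
print("cumulative sum :", np.round(np.cumsum(d**2), 4))   # approaches 1 with more states
```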
Indeed, by employing the orthonormality properties of the non-interacting eigenstates ϕ n (ρ) and the explicit expression of the expansion coefficients appearing in the ansatz (6), the overlap coefficients between a final and the initial eigenstate reads . It should be emphasized here that this is a closed form of the overlap coefficients and the only parameters that need to be determined are the energies, which are determined from the algebraic equation (10). As a result in order to obtain the time-evolution of |Ψ in νi (0) we need to numerically evaluate Eq. (12) which is an infinite summation over the postquench eigenstates denoted by |Ψ f νj . In practice this infinite summation is truncated to a finite one with an upper limit which ensures that the values of all observables have been converged with respect to a further adding of eigenstates. Having determined the time-evolution of the system's wavefunction [Eq. (12)] enables to determine any observable of interest in the course of the dynamics. To inspect the dynamics of the system from a single-particle perspective we monitor its one-body density In this expression, the total wavefunction of the system is denoted byΨ(r 1 , [78]. To arrive at the second line of Eq. (15) we have expressed the relative, ρ 2 = 1 2 (r 2 1 + r 2 2 − 2r 1 · r 2 ), and the center-of-mass coordinates, R 2 = 1 2 (r 2 1 + r 2 2 + 2r 1 · r 2 ), in terms of the Cartesian coordinates (r 1 , r 2 ) and integrated out the ones pertaining to the other particle. In particular, we adopted the notation r 1 = (x, y) and r 2 = (z, w) for the coordinates that are being integrated out. Moreover, the integral I ν f j ,ν f k appearing in the last line of Eq. (15) can be further simplified by employing the replacements z = x − z ,w = y − w and then express the new variables in terms of polar coordinates. The emergent angle integration can be readily performed and the integral with respect to the radial coordinate becomes Here, I 0 (x) is the zeroth order modified Bessel function of the first kind [73,77]. Another interesting quantity which provides information about the state of the system on the two-body level is the radial probability density of the relative wavefunction It provides the probability density to detect two bosons for a fixed time instant t at a relative distance ρ. It can be directly determined by employing the overlap coefficients of Eq. (14). Moreover, the corresponding radial probability density in momentum space reads Here, the relative wavefunction in momentum space is obtained from the two dimensional Fourier transform where J 0 (x) is the zeroth order Bessel function. To estimate the system's dynamical response after the quench we resort to the fidelity evolution F (t). It is defined as the overlap between the time-evolved wavefunction at time t and the initial one [79], namely Evidently, F (t) is a measure of the deviation of the system from its initial state [45]. In what follows, we will make use of the modulus of the fidelity, |F (t)|. Most importantly, the frequency spectrum of the modulus of the fidelity |e iωt grants access to the quench-induced dynamical modes [63,64,[80][81][82]. Indeed, the emergent frequencies appearing in the spectrum correspond to the energy differences of particular postquench eigenstates of the system and therefore enable us to identify the states that participate in the dynamics (see also the discussion below). Another observable of interest is the two-body contact D. 
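The one-body density of Eq. (15) can also be evaluated by direct quadrature, without the Bessel-function reduction of Eq. (16): with R = (r₁ + r₂)/√2 and ρ = (r₁ − r₂)/√2 (unit Jacobian), ρ^(1)(r₁) is the integral of |Ψ_CM(R)|² |Ψ_rel(ρ)|² over r₂. The sketch below does this on a grid for a single stationary relative state with a placeholder ν; the density is normalized to unity here, whereas the prefactor convention of Eq. (15) in the paper may differ.

```python
# Sketch: one-body density of a stationary state by grid integration over the second particle.
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.special import hyperu

nu = 0.45                                                   # placeholder relative eigenvalue
raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
psi_rel = lambda r: raw(r)/np.sqrt(n2)
psi_cm  = lambda R: np.exp(-R**2/2.0)/np.sqrt(np.pi)        # non-interacting CM ground state

L, N = 6.0, 241                                             # grid for the integrated-out particle
z = np.linspace(-L, L, N)
Z, W = np.meshgrid(z, z, indexing="ij")

def one_body_density(x, y):
    R   = np.sqrt((x + Z)**2 + (y + W)**2) / np.sqrt(2.0)
    rho = np.clip(np.sqrt((x - Z)**2 + (y - W)**2) / np.sqrt(2.0), 1e-6, None)
    return trapezoid(trapezoid(psi_cm(R)**2 * psi_rel(rho)**2, z, axis=1), z)

xs = [0.0, 0.5, 1.0, 2.0]
print("rho^(1)(x, 0) =", np.round([one_body_density(x, 0.0) for x in xs], 4))
```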
The latter is defined from the momentum distribution in the limit of very large momenta i.e. C(k, t) and captures the ocurrence of short-range twobody correlations [83][84][85]. Moreover, this quantity can be experimentally monitored [86,87] and satisfies a variety of universal relations independently of the quantum statistics, the number of particles or the system's dimensionality [85,[88][89][90]. Having at hand the eigenstates of the system, we can expand the time evolved contact after a quench from |Ψ in νi at g in to an arbitrary g f in terms of the contacts of the postquench eigenstates [91]. Namely The contacts D j of the postquench eigenstates |Ψ f νj can be inferred by employing the behavior of the eigenstates [Eq. (9)] close to zero distance, ρ → 0, between the atoms By plugging Eq. (22) into Eq. (19) and restricting ourselves to small ρ values we obtain the contact from the leading order term (∼ 1/k 2 ) of the resulting expression. The contact for the postquench eigenstates |Ψ f νj reads Note that in order to capture the quench-induced dynamical modes that participate in the dynamics of the contact, we employ its corresponding frequency spectrum i.e. Having analyzed the exact solution of the two bosons trapped in a 2D harmonic trap both for the stationary and the time-dependent cases, we subsequently explore the corresponding interaction quench dynamics. In particular, we initialize the system into its ground state |Ψ in ν1 for attractive interactions and perform interaction quenches towards the repulsive regime (Sec. III) and vice versa (Sec. IV). III. QUENCH DYNAMICS OF TWO ATTRACTIVE BOSONS TO REPULSIVE INTERACTIONS We first study the interaction quench dynamics of two attractively interacting bosons confined in a 2D isotropic harmonic trap. More specifically, the system is initially prepared in its corresponding ground state |Ψ in ν1 at g in = −1. At t = 0 we perform an interaction quench towards the repulsive interactions letting the system evolve. Our main objective is to analyze the dynamical response of the system and identify the underlying dominant microscopic mechanisms. A. Dynamical response To examine the dynamical response of the system after the quench we employ the corresponding fidelity evolution |F (t)| [see Eq. (20)] [92]. Figure 2 (a) shows |F (t)| for various postquench interaction strengths g f . We observe the emergence of four distinct dynamical regions where the fidelity exhibits a different behavior. In region I, Fig. 2 (b)] and therefore the system remains essentially unperturbed. Note that the oscillation period is slightly smaller than π [see also the discussion below], e.g. see Fig. 2 (b) for g f = −0.5. Entering region II, −0.27 < g f < 0.8, the system departs significantly from its initial state since |F (t)| exhibits large amplitude oscillations in time (see the blue lobes in Fig. 2 (a) within region II) deviating appreciably from unity [see also Fig. 2 (b) at g f = 0.5]. A more careful inspection of |F (t)| reveals that it oscillates with at least two frequencies, namely a faster and a slower one. Indeed, |F (t)| oscillates rapidly (fast frequency) within a large amplitude envelope of period π (slow frequency). Within region III, 0.8 < g f < 2.7, the oscillation amplitude of |F (t)| becomes smaller when compared to region II. Most importantly, we observe the appearance of irregular minima and maxima in |F (t)| being shifted with time [ Fig. 2 (b) at g f = 1]. 
For strong interactions, 2.7 < g f < 10, we encounter region IV in which |F (t)| > 0.9 performs small amplitude oscillations that resemble the ones already observed within region I [ Fig. 2 ]. An important difference with respect to region I is that the oscillations of |F (t)| are faster and there is more than one frequency involved, compare |F (t)| at g f = −0.5 and g f = 7 in Fig. 2 To gain more insights onto the dynamics, we next resort to the frequency spectrum of the fidelity F (ω), shown in Fig. 3 (a) for a varying postquench interaction strength. This spectrum provides information about the contribution of the different postquench states that participate in the dynamics. Indeed, the square of the fidelity [see Eq. (20)] can be expressed as where d ν Note also that the amplitudes of the frequencies [encoded in the colorbar of Fig. 3 (a)] mainly depend on the product of their respective overlap coefficients, i.e. |d ν f Finally, the values of the frequencies ω νj ,ν k along with the coefficients |d ν f Fig. 3 (b)] determine the dominantly participating postquench eigenstates [45,63,64,80]. Focusing on region I we observe that in F (ω) there are two frequencies, hardly visible in Fig. 3 (a). The most dominant one corresponds to ω ν1,ν0 for −1 < g f < −0.51 and to ω ν2,ν1 for −0.51 < g f < −0.27. It is larger than 2 giving thus rise to a period of |F (t)| smaller than π. The fainter one corresponds to ω ν2,ν1 for −1 < g f < −0.51 and to ω ν3,ν2 for −0.51 < g f < −0.27. For reasons of clarity let us mention that each of these frequencies, of course, coincide with the corresponding energy difference between the respective eigenstates of the system's eigen-spectrum [ Fig. 1]. Recall that at g f = −0.51 indicated by the vertical line in Fig. 3 [see also Fig. 1], the labeling of the eigenstates changes and e.g. the frequency ω ν1,ν0 becomes ω ν2,ν1 . As it can be seen from Fig. 3 (a) ω ν1,ν0 decreases for increasing g f which is in accordance with the behavior of the energy gap ω ν1,ν0 = 2(ν f 1 − ν f 0 ) in the system's eigenspectrum [ Fig. 1]. Turning to region II, a multitude of almost equidistant frequencies appears. This behavior is clearly captured in the vicinity of g f = 0, where the energy difference between consecutive eigenenergies exhibits an almost equal spacing of the order of ∆E 2 [see also Fig. 1]. To characterize the observed frequency branches in terms of transitions between the system's eigenstates we determine the corresponding overlap coefficients d ν f j ,ν in 1 shown in Fig. 3 (b) and also the respective eigenstate energy differences known from the eigenspectrum of the system [ Fig. 1]. In this way, we identify the most prominent frequency ω ν2,ν1 appearing in F (ω) which is near ω ≈ 2. Additionally, a careful inspection of Fig. 3 (b) reveals that there is a significant decrease of |d ν f 2 ,ν in 1 | 2 for a larger g f and subsequently energetically higher excited states come into play, e.g. |Ψ f ν3 . These latter contributions give rise to the appearance of energetically higher frequencies in F (ω). Indeed the bound state, |Ψ f ν0 , possesses a non-negligible population already for g f > 0.27 [ Fig. 3 (b)] giving rise to the frequency branch ω ν1,ν0 that at g f ≈ 0.54 has a quite large value of approximately 14.9 and decreases rapidly as g f increases. Of course, this behavior stems directly from the energy gap between the bound, |Ψ f ν0 , and the ground, |Ψ f ν1 , states as it can be easily confirmed by inspecting the eigenspectrum [ Fig. 1]. 
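The relation between |F(t)| and the spectrum F(ω) discussed above can be mimicked with a few lines of code: build |F(t)| = |Σ_j |d_j|² e^{−iE_j t}| from the postquench populations and energies and read off the peak frequencies of its Fourier transform, which should match the energy differences E_j − E_k. The populations and energies below are placeholders rather than values taken from the paper; in practice they follow from Eqs. (10) and (14).

```python
# Sketch: frequencies in the fidelity spectrum correspond to postquench energy differences.
import numpy as np

d2 = np.array([0.10, 0.70, 0.15, 0.05])        # placeholder populations |d_j|^2 (sum to 1)
E  = np.array([-6.5, 1.3, 3.5, 5.6])           # placeholder postquench energies E_j = 2*nu_j + 1

t  = np.linspace(0.0, 400.0, 2**15, endpoint=False)
F  = np.abs(np.exp(-1j*np.outer(t, E)) @ d2)   # |F(t)|

spec  = np.abs(np.fft.rfft(F - F.mean()))
omega = 2*np.pi*np.fft.rfftfreq(t.size, d=t[1] - t[0])
top   = omega[np.argsort(spec)[-6:]]           # strongest bins (nearby bins may share a peak)
print("strongest frequencies :", np.sort(np.round(top, 2)))
print("energy differences    :",
      np.round(np.sort(np.abs(np.subtract.outer(E, E))[np.triu_indices(4, 1)]), 2))
```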
In the intersection between regions II and III, ω ν1,ν0 becomes degenerate with the other frequency branches [see the black circles in Fig. 3 (a)], e.g. ω ν4,ν1 in the vicinity of g f = 1 and ω ν3,ν1 close to g f = 3 [ Fig. 3 (a)]. The aforementioned frequency branches are much fainter when compared to ω ν1,ν0 , since the overlap coefficients between the relevant eigenstates are small, e.g. Fig. 3 (b)]. Finally in region IV, there are mainly two dominant frequencies, namely ω ν1,ν0 and ω ν2,ν1 , that acquire constant values as g f increases. Indeed, in this region |d ν f are the most significantly populated coefficients [ Fig. 3 (b)], which in turn yield these two frequencies. B. Role of the initial state To investigate the role of the initial eigenstate in the dynamical response of the two bosons, we consider an interaction quench from g in = −1 to g f = 1 but initializing the system at energetically different excited states i.e. |Ψ in ν k , k > 1, and the bound state |Ψ in ν0 . In particular, Fig. 4 (a) illustrates |F (t)| with a prequench eigenstate being the bound state, the first, the third, the fifth and the seventh excited state. In all cases, |F (t)| exhibits an irregular oscillatory motion as in the case of |Ψ in ν1 , see also Fig. 2 (b). Evidently, for an energetically higher initial eigenstate (but not the bound state) |F (t)| takes larger values and therefore the system is less perturbed. However, when the two bosons are prepared in the bound state, |Ψ in ν0 , of the system then |F (t)| drops to smaller values as compared to the case of energetically higher initial states and the system becomes more perturbed. The impact of the initial state on the oscillation amplitude of |F (t)| is reflected on the values of the corresponding overlap coefficients that appear in the expansion of the fidelity in Eq. (24). More precisely, when an overlap coefficient possesses a dominant population with respect to the others then |F (t)| exhibits a smaller oscillation amplitude than in the case where at least two overlap coefficients possess a non negligible population. For convenience and in order to identify the states that take part in the dynamics, we provide the relevant overlap coefficients, |d ν f j ,ν in k | 2 , for the quench g in = −1 → g f = 1 in Table I for various initial eigenstates |Ψ in ν k . Indeed, an initial energetically higher-lying excited state results in the dominant population of one postquench state while the other states exhibit a very small contribution, e.g. see the last column of Table I. For this reason an initially energetically higher excited state leads to a smaller oscillation amplitude of |F (t)|. Moreover, the large frequency oscillations appearing in |F (t)| are caused by the presence of several higher than first order eigenstate transitions as e.g. ω ν6,ν4 , ω ν7,ν4 , ω ν4,ν0 in the case of starting from |Ψ in ν4 [ Fig. 4 (b)]. The transition mainly responsible for these large frequency oscillations of |F (t)| involves the bound state |Ψ f ν0 . Indeed, by inspecting |F (t)| of different initial configurations shown in Fig. 4 (a) we observe that starting from energetically higher excited states such that ν j > ν 4 the respective contribution of |Ψ f ν0 diminishes [see also Table I] leading to a decay of the amplitude of these large frequency oscillations of |F (t)|. The aforementioned behavior becomes evident e.g. by comparing |F (t)| for ν in 2 and ν in 8 in Fig 4 (a). 
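The qualitative rule stated above, namely that the fidelity stays close to unity when a single overlap coefficient dominates and oscillates strongly when at least two are sizeable, is easy to illustrate with placeholder numbers mimicking the two situations of Table I (the values below are illustrative and are not taken from the table).

```python
# Sketch: fidelity oscillation amplitude versus dominance of one overlap coefficient.
import numpy as np

E = np.array([-6.3, 1.4, 3.5, 5.6, 7.7])                      # placeholder postquench energies
cases = {
    "several competing |d_j|^2": np.array([0.20, 0.45, 0.20, 0.10, 0.05]),
    "one dominant |d_j|^2     ": np.array([0.02, 0.02, 0.91, 0.03, 0.02]),
}
t = np.linspace(0.0, 4*np.pi, 4000)
for label, d2 in cases.items():
    F = np.abs(np.exp(-1j*np.outer(t, E)) @ d2)
    print(f"{label}: min|F| = {F.min():.3f}, max|F| = {F.max():.3f}")
```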
On the other hand, in order to unveil the participating frequencies in the dynamics of |F (t)| we calculate its spectrum |F (ω)|, shown in Figs. 4(b), (c). We observe that starting from an energetically higher excited state several frequencies, referring to different eigenstate transitions, are triggered. Most of these frequencies which refer to different initial states almost coincide e.g. ω ν5,ν4 with ω ν9,ν8 , since the energy gap of the underlying eigenstates is approximately the same [see also Fig. 1]. They possess however a distinct amplitude. Additionally, there are also distinct contributing frequencies e.g. compare ω ν4,ν0 with ω ν8,ν0 . The latter are in turn responsible for the dependence of the oscillation period of |F (t)| on the initial eigenstate of the system. Finally, let us note that if the system is quenched to other final interaction strengths (not shown here for brevity reasons), across the four dynamical regions identified in Fig. 2(a), then |F (t)| follows a similar pattern as discussed in Fig. 4 (a). 5. (a)-(f) Time-evolution of the one-body density following an interaction quench from g in = −1 to g f = 1. The system of two bosons is initialized in its ground state, |Ψ in ν 1 , trapped in a 2D harmonic oscillator. (g)-(j) The corresponding one-body densities for the pre-and postquench eigenstates (see legends) whose overlap coefficients are the dominant ones for the specific quench. C. One-body density evolution To monitor the dynamical spatial redistribution of the two atoms after the quench at the single-particle level, we next examine the evolution of the one-body density ρ (1) (x, y, t) [Eq. (15)]. Figures 5 (a)-(f) depict ρ (1) (x, y, t) following an interaction quench from g in = −1 to g f = 1 when the system is initialized in its ground state configuration |Ψ in ν1 . Note that the shown time-instants of the evolution lie in the vicinity of the local minima and maxima of the fidelity [see also Fig. 2 (b)], where the system deviates strongly and weakly from its initial state respectively. Overall, we observe that the atoms undergo a breathing motion manifested as a contraction and expansion dynamics of ρ (1) , and the densities of the three most significant, in terms of the overlap coefficients, final states namely |Ψ f ν1 , |Ψ f ν0 and |Ψ f ν2 . Comparing these ρ (1) (x, y, t = 0) with the ρ (1) (x, y, t) we can deduce that during evolution the one-body density of the system is mainly in a superposition of the |Ψ f ν1 and the |Ψ f ν0 . The excited state |Ψ f ν2 has a smaller contribution to the dynamics of ρ (1) (x, y, t) [e.g. see Fig. 5 (e)] compared to the other states. D. Evolution of the radial probability density In order to gain a better understanding of the nonequilibrium dynamics of the two bosons, we also employ the time-evolution of the radial probability density of the relative wavefunction B(ρ, t) [Eq. (17)]. Recall that this quantity provides the probability density of finding the two bosons at a distance ρ apart for a fixed time-instant. The dynamics of B(ρ, t) after a quench from g in = −1 to g f = 1, starting from |Ψ in ν1 , is illustrated at selected time-instants in Fig. 6 (a). We can infer that the emergent breathing motion of the two bosons is identified via the succession in time of a single [e.g. at t = 0.46, 1.31] and a double peak [e.g. at t = 0.84, 2.63] structure in the dynamics of B(ρ, t). Here, the one peak is located close to ρ = 0 and the other close to the harmonic oscillator length (unity in our choice of units). 
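The breathing structure of B(ρ, t) described above follows directly from the coherent superposition of postquench eigenstates, Ψ_rel(ρ, t) = Σ_j d_j Ψ_{ν_j}(ρ) e^{−iE_j t} with B(ρ, t) = 2πρ|Ψ_rel(ρ, t)|². A sketch with a three-state truncation is given below; the eigenvalues are genuine roots of the digamma equation at one placeholder coupling (so that the basis is orthonormal and the norm is conserved), while the overlap coefficients are placeholders and not the values behind Fig. 6.

```python
# Sketch: time evolution of the radial probability density B(rho, t), 3-state truncation.
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.optimize import brentq
from scipy.special import digamma, hyperu

def spectrum(rhs, n_excited=2, eps=1e-9):
    f = lambda nu: digamma(-nu) - rhs
    return np.array([brentq(f, -60.0, -eps)] +
                    [brentq(f, n + eps, n + 1 - eps) for n in range(n_excited)])

def eigenstate(nu):
    raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
    n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
    return lambda r: raw(r)/np.sqrt(n2)

nus  = spectrum(rhs=-2.0)                      # placeholder postquench coupling
E    = 2*nus + 1
psis = [eigenstate(nu) for nu in nus]
d    = np.array([0.30, 0.90, 0.30])            # placeholder overlap coefficients
d   /= np.linalg.norm(d)

rho = np.linspace(1e-3, 4.0, 400)
for t in (0.0, 0.8, 1.6, 2.4):
    psi_t = sum(dj*p(rho)*np.exp(-1j*Ej*t) for dj, p, Ej in zip(d, psis, E))
    B = 2*np.pi*rho*np.abs(psi_t)**2
    print(f"t = {t:.1f}: peak at rho = {rho[np.argmax(B)]:.2f}, norm = {trapezoid(B, rho):.3f}")
```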
Moreover, by comparing B(ρ, t) [ Fig. 6 (a)] with ρ (1) (x, y, t) [ Fig. 5] suggests that a double peak structure in B(ρ, t) refers to an expansion of ρ (1) (x, y, t) [e.g. at t = 6.09], while a single peaked B(ρ, t) corresponds to a contraction of ρ (1) (x, y, t) [e.g. at t = 1.31]. Indeed, for a double peak structure of B(ρ, t), its secondary maximum always occurs at slightly larger radii than the maximum of a single peak distribution of B(ρ, t), possessing also a more extended tail. This further testifies the expanding (contracting) tendency of the cloud in the former (latter) case. To reveal the microscopic origin of the structures building upon B(ρ, t) we also calculate this quantity [see the inset of Fig. 6 (a)] for the states |Ψ in ν1 , |Ψ f ν1 , |Ψ f ν0 and |Ψ f ν2 that primarily contribute to the dynamics in terms of the overlap coefficients [see also Fig. 3 (b)]. Indeed, comparing B(ρ, t) [ Fig. 6 Fig. 6 (a)], enables us to deduce that B(ρ, t) resides mainly in a superposition of the ground (|Ψ f ν1 ), the bound (|Ψ f ν0 ) and the first excited (|Ψ f ν2 ) eigenstates. Also, it can be clearly seen that the main contribution stems from the ground state, while the other two states possess a smaller contribution. In particular, the participation of the bound state can be inferred due to the existence of the peak close to ρ = 0, which e.g. for t = 0.84 becomes prominent, whereas the presence of the excited state |Ψ f ν2 is discernible from the spatial extent of the B(ρ, t) e.g. at t = 2.63 [ Fig. 6 (a)]. (a)] with B(ρ) of the stationary eigenstates [inset of To showcase the motion of the two atoms in momentum space we invoke the evolution of the radial probability density in momentum space C(k, t) [93] illustrated in Fig. 6 (b) for the quench g in = −1 → g f = 1 starting from |Ψ in ν1 . We observe that in the course of the dynamics a pronounced peak close to k = 0 and a secondary one located at values of larger k appear in C(k, t). Moreover, the breathing motion in momentum space is manifested by the lowering and raising of the zero momentum peak accompanied by a subsequent enhancement or reduction of the tail of C(k, t), as shown e.g. at t = 0.84, 6.09. Note also that the tail of C(k, t) decays in a much slower manner compared to the tail of B(ρ, t). Indeed, the latter decays asymptotically as ∼ e −ρ 2 [see also Eq. (9)] while by fitting the tail of C(k, t) we observe a decay law ∼ 1/k 3 (not shown here for brevity reasons) [83][84][85]94]. Additionally, in order to unveil the corresponding superposition of states that contribute to the momentum distribution, the inset of Fig. 6 (b)]. As it can be seen, the bound state (|Ψ f ν0 ) exhibits a broad momentum distribution with a tail that extends to large values of k, while C(k) of the ground state (|Ψ f ν1 ) contributes the most and has a main peak around k = 0. On the other hand, the excited state (|Ψ f ν2 ) contributes to a lesser extent, and its presence is mainly identified when the momentum distribution exhibits two nodes, e.g. at t = 2.63. E. Evolution of the contact Subsequently we examine the contact D(t)/D(0) in the course of the evolution after a quench from g in = −1 to g f = 1, see Fig. 7 (a). Recall that the contact reveals the existence of short-range two-body correlations. Evidently D(t)/D(0) exhibits an irrregular oscillatory behavior containing a variety of different frequencies. Indeed, by inspecting the corresponding frequency spectrum depicted in Fig. 7 (b), a multitude of frequencies appear. 
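The momentum-space density discussed above can be obtained, for a radially symmetric relative state, from a Hankel transform with the Bessel function J_0: Ψ̃(k) = ∫ ρ Ψ(ρ) J_0(kρ) dρ and C(k) = 2πk|Ψ̃(k)|², with the prefactor chosen here so that ∫ C(k) dk = 1 (the constants in Eq. (18) of the paper may differ). The sketch below evaluates C(k) for a single stationary state with a placeholder ν; its slowly decaying tail (∼1/k³, as noted above) means that this normalization is only approached as the k cutoff grows.

```python
# Sketch: radial momentum density C(k) of a relative eigenstate via a J0 Hankel transform.
import numpy as np
from scipy.integrate import quad, trapezoid
from scipy.special import hyperu, j0

nu = 0.45                                                   # placeholder eigenvalue
raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
psi = lambda r: raw(r)/np.sqrt(n2)

def psi_k(k):
    val, _ = quad(lambda r: r*psi(r)*j0(k*r), 0, np.inf, limit=200)
    return val

k = np.linspace(1e-3, 20.0, 200)
C = 2*np.pi*k*np.array([psi_k(kk) for kk in k])**2
print("peak of C(k) at k ~", round(k[np.argmax(C)], 2))
print("int_0^20 C(k) dk  ~", round(float(trapezoid(C, k)), 3))   # approaches 1 as the cutoff grows
```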
The most predominant frequencies possessing the largest amplitude originate from the energy difference between the bound state, |Ψ ν0 and energetically higher-lying states, such as ω ν1,ν0 , ω ν2,ν0 and ω ν3,ν0 . Also here ω ν2,ν1 has a comparable value to ω ν3,ν0 and thus contributes non-negligibly to the dynamics of D(t)/D(0). Moreover, there is a multitude of other contributing frequencies e.g. ω ν8,ν0 having an amplitude smaller than ω ν3,ν0 . These frequencies indicate the presence of higher-lying states in the dynamics of the contact. The above-described behavior of D(t)/D(0) is expected to occur since the contact is related to short-range two-body correlations, and as such its dynamics involves a large number of postquench eigenstates, giving rise to the frequencies observed in Fig. 7 (b). IV. QUENCH DYNAMICS OF TWO REPULSIVE BOSONS TO ATTRACTIVE INTERACTIONS As a next step, we shall investigate the interaction quench dynamics of two initially repulsive bosons towards the attractive side of interactions. In particular, throughout this section we initialize the system in its ground state configuration |Ψ in ν1 at g in = 1 (unless it is stated otherwise) and perform an interaction quench to the attractive side of the spectrum. A. Dynamical response In order to study the dynamical response of the system, we invoke the fidelity evolution [Eq. (20)] [92] shown in Fig. 8 (a) with respect to g f . We observe the appearance of three different dynamical regions, in a similar fashion with the response of the reverse quench scenario discussed in Section III A. Within region I, 0.35 < g f < 1, |F (t)| undergoes small amplitude oscillations [see also Fig. 8 (b)] and the system remains close to its initial state. However, in region II characterized by −2.36 < g f < 0.35 the system becomes significantly perturbed since overall |F (t)| oscillates between unity and zero. For instance, see |F (t)| in Fig. 8 (b) at g f = −0.2 where e.g. at t π/2, 3π/2 |F (t)| 0.07. Region III where −10 < g f < −2.36 incorporates the intermediate and strongly attractive regime of interactions. Here, |F (t)| oscillates with a small amplitude, while its main difference compared to region I is that the oscillation period is larger. Another interesting feature of |F (t)| is that as we enter deeper into region III the oscillation amplitude decreases and the corresponding period becomes smaller (see also the discussion below). To identify the postquench eigenstates that participate in the nonequilibrium dynamics of the two bosons, we next calculate the fidelity spectrum F (ω) [ Fig. 9 (a)] as well as the most notably populated overlap coefficients Fig. 9 (b)] for a varying postquench interaction strength. In region I we observe the occurrence of a predominant frequency, namely ω ν2,ν1 , in F (ω). This frequency is associated with the notable population of the coefficients |d ν f 1 ,ν in 1 | 2 and |d ν f Fig. 9 (b)]. Recall that the amplitude of the frequency peaks appearing in F (ω) depends on the participating overlap coefficients, as it is explicitly displayed in Eq. (24). Entering region II there is a multitude of contributing frequencies, the most prominent of them being ω ν2,ν1 . The appearance of the different frequencies is related to the fact that in this regime |d ν f 1 ,ν in 1 | 2 drops significantly for more attractive interactions accompanied by the population of other states such as |Ψ f ν2 and |Ψ f ν3 [see Fig. 9 (b)]. It is important to remember here that at the vertical line g f = −0.51 [see also Fig. 
1] there is a change in the labeling of the eigenstates, resulting in the alteration of the frequencies from ω νj ,ν k to ω νj−1,ν k−1 when crossing this line towards the attractive regime. In region III there are essentially two excited frequencies, namely ω ν1,ν0 and ω ν2,ν1 . The former is the most dominant since here the mainly contributing states are |Ψ f ν1 , |Ψ f ν0 as it can be seen from Fig. 9 (b). Note also that ω ν1,ν0 increases for decreasing g f , a behavior that reflects the increasing energy gap in the system's energy spectrum [ Fig. 1]. On the other hand, the amplitude of ω ν2,ν1 is weaker and essentially fades away for strong attractive interactions. This latter behavior can be attributed to the fact that the contribution of the |Ψ f ν2 state in this region decreases substantially. B. Role of the initial state In order to expose the role of the initial state for the two-boson dynamics, we explore interaction quenches from g in = 1 towards g f = −1 but initializing the system in various excited states |Ψ in ν k , k > 1, or the bound state |Ψ in ν0 . The emergent dynamical response of the system as captured via |F (t)| is depicted in Fig. 10 (a) starting from the bound, the first, the third, the fifth and the seventh excited state. Inspecting the behavior of |F (t)| we can infer that the system becomes more perturbed when it is prepared in an energetically lower excited state since the oscillation amplitude of |F (t)| increases accord- 0138 TABLE II. The most significantly populated overlap coefficients, |d ν f j ,ν in k | 2 , for the quench from g in = 1 to g f = −1 initializing the system at various initial states. Only the coefficients with a value larger than 0.9% are shown. ingly, compare for instance |F (t)| for ν in 2 and ν in 6 . Moreover, starting from the bound state the system is significantly perturbed compared to the previous cases and |F (t)| showcases an irregular oscillatory behavior. This pattern is maintained if the quench is performed to other values of g f which belong to the attractive regime (not shown here for brevity reasons). Recall that a similar behavior of |F (t)| occurs for the reverse quench process, see Sec. III B and also Fig. 4 (a). The above-mentioned behavior of the fidelity evolution can be understood via employing the corresponding overlap coefficients |d ν f j ,ν in k | 2 , see also Eq. (24). As already discussed in Sec. III B, the fidelity remains close to its initial value in the case that one overlap coefficient dominates with respect to the others and deviates significantly from unity when at least two overlap coefficients possess a notable population. The predominantly populated overlap coefficients, |d ν f j ,ν in k | 2 , are listed in Table II when starting from different initial eigenstates |Ψ in ν k . A close inspection of this Table reveals that starting from an energetically higher excited state leads to a lesser amount of contributing overlap coefficients with one among them becoming the dominant one. This behavior explains the decreasing tendency of the oscillation amplitude of |F (t)| for an initially energetically higher excited state, e.g. compare |F (t)| of |Ψ in ν2 and |Ψ in ν6 in Fig. 10 (a). Accordingly, an initially lower (higher) lying excited state results in a larger (smaller) amount of excitations and thus to more (less) contributing frequencies. The latter can be readily seen by resorting to the fidelity spectrum |F (ω)| show in Figs. 10 (b) and (c) when starting from |Ψ in ν4 and |Ψ in ν8 respectively. D. 
Evolution of the radial probability density As a next step, we examine the evolution of the radial probability density B(ρ, t) [Eq. (17)] presented in Fig. 12 (a) for a quench from |Ψ in ν1 and g in = 1 to g f = −0.2. Note that the snapshots of B(ρ, t) depicted in Fig. 12 (a) correspond again to time-instants at which the fidelity evolution exhibits local minima and maxima [see also Fig. 8 (b)]. We observe that when |F (t)| is minimized, e.g. at t = 1.50, 4.00, 7.74, B(ρ, t) shows a double peak structure around ρ ≈ 0.5 and ρ ≈ 2 respectively. However, for times that correspond to a maximum of the fidelity, e.g. at t = 3.1, 6.17, B(ρ, t) deforms to a single peak distribution around ρ ≈ 1.2. To relate this alternating behavior of B(ρ, t) with the breathing motion of the two bosons we can infer that when B(ρ, t) possesses a double peak distribution the cloud expands while in the case of a single peak structure it contracts, see also Fig. 11. It is also worth mentioning here that for the times at which B(ρ, t) exhibits a double peak structure there is a quite significant probability density tail for ρ > 1.5. This latter behavior is a signature of the participation of energetically higher-lying excited states as we shall discuss below. Indeed, the inset of Fig. 12 (a) depicts B(ρ) of the initial (|Ψ in ν1 ) and the postquench (|Ψ f ν1 and |Ψ f ν2 ) states that have the major contribution for this specific quench in terms of the overlap coefficients [see also Fig. 9 (b)]. Comparing B(ρ, t) with B(ρ) we can deduce that mainly the ground, |Ψ f ν1 , and the first excited, |Ψ f ν2 , states of the postquench system are imprinted in the dynamics of the relative density. More specifically, |Ψ f ν2 gives rise to the enhanced tail of B(ρ, t) [ Fig. 12 (a)], while the participation of |Ψ f ν1 (possessing also the major contribution) leads to the central peak of B(ρ, t) close to ρ = 0. The radial probability density in momentum space [93], C(k, t), is shown in Fig. 12 (b) for selected time instants of the evolution following the quench g in = 1 → g f = −0.2. We observe that C(k, t) exhibits always a two peak structure with the location and amplitude of the emergent peaks being changed in the course of the evolution. In particular, when the atomic cloud contracts e.g. at t = 3. 10, 9.19, see also Figs. 11 (b), (f), C(k, t) has a large amplitude peak around k ≈ 0.1 and a secondary one of small amplitude close to k ≈ 0.4. However, for an expansion of the two bosons e.g. at t = 1.50 [Figs. 11 (a)] the radial probability density in momentum space shows a small and a large amplitude peak around k ≈ 0.05 and k ≈ 0.3 respectively. Moreover, the momentum distribution during evolution is mainly in a superposition of the ground |Ψ f ν1 and the first excited state |Ψ f ν2 , see in particular the inset of Fig. 12 (b) which illustrates C(k) of these stationary states. As it can be readily seen, |Ψ f ν2 is responsible for the secondary peak of C(k, t) at higher momenta, while the ground state contributes mainly to the peak close to k = 0. E. Dynamics of the contact To unravel the emergence of short-range two-body correlations we next track the time-evolution of the rescaled contact D(t)/D(0) after an interaction quench from g in = 1 to g f = −1, see Fig. 13 (a). As it can be seen, the rescaled contact exhibits an irregular multifrequency oscillatory pattern in time. 
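The contact dynamics described here can be sketched numerically without the explicit closed form of Eq. (21): since every interacting eigenstate behaves logarithmically at short relative distances, the time-evolved short-range amplitude is the coherent sum of the eigenstate amplitudes weighted by the overlaps, and D(t) is then proportional to its squared modulus. This structural assumption about Eq. (21), as well as the ν and d values below, are not taken from the paper and serve only as an illustration.

```python
# Hedged sketch: D(t)/D(0) from the logarithmic short-range amplitudes A_j of the
# postquench eigenstates, assuming D(t) ~ |sum_j d_j A_j exp(-i E_j t)|^2.
import numpy as np
from scipy.integrate import quad
from scipy.special import hyperu

def eigenstate(nu):
    raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
    n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
    return lambda r: raw(r)/np.sqrt(n2)

def log_amplitude(psi, r1=1e-3, r2=1e-2):
    """Coefficient A in psi(rho) ~ A*ln(rho) + const as rho -> 0 (finite difference)."""
    return (psi(r2) - psi(r1)) / (np.log(r2) - np.log(r1))

nus = np.array([-1.2, 0.5, 1.5, 2.5])          # placeholder postquench eigenvalues
d   = np.array([0.30, 0.85, 0.35, 0.15])       # placeholder overlap coefficients
d  /= np.linalg.norm(d)
E   = 2*nus + 1
A   = np.array([log_amplitude(eigenstate(nu)) for nu in nus])

t  = np.linspace(0.0, 2*np.pi, 7)
Dt = np.abs(np.exp(-1j*np.outer(t, E)) @ (d*A))**2
print("D(t)/D(0) =", np.round(Dt/Dt[0], 3))
```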
It is also worth mentioning that here the involved frequencies in the dynamics of D(t)/D(0) are smaller when compared to the ones excited in the reverse quench scenario, see in particular Fig. 13 (b) and Fig. 7 (b). By inspecting the corresponding frequency spectrum presented in Fig. 13 (b), we can deduce that the most prominent frequency ω ν1,ν0 ≈ 2.5 corresponds to the energy difference between the bound and the ground state. Moreover this predominant frequency is smaller than the corresponding dominant frequency ω ν1,ν0 ≈ 7.5 occuring at the reverse quench process [ Fig. 7 (b)]. There is also a variety of other contributing frequencies which signal the participation of higher-lying states in the evolution of the contact, such as ω ν7,ν0 , ω ν2,ν1 , ω ν3,ν1 and ω ν2,ν0 , exhibiting however a much smaller amplitude as compared to ω ν1,ν0 . These frequencies are essentially responsible for the observed irregular motion of D(t)/D(0). V. QUENCH FROM ZERO TO INFINITE INTERACTIONS Up to now we have discussed in detail the interaction quench dynamics of two bosons trapped in a 2D harmonic trap for weak, intermediate and strong coupling in both the attractive and the repulsive regime. Next, we aim at briefly analyzing the corresponding interaction quench dynamics from g in = 0 to g f = ∞. We remark here that when the system is initialized at g in = 0 the formula of Eq. (14) is no longer valid and the overlap coefficients between the eigenstates |Ψ in νi and |Ψ f νj are given by . (25) The dynamical response of the system after such a quench [g in = 0 → g f = ∞] as captured by the fidelity evolution [Eq. (20)] is illustrated in Fig. 14 when considering different initial states |Ψ in ν k . Evidently, when the system is initialized in its ground state |Ψ in ν1 , |F (t)| performs large amplitude oscillations. The latter implies that the time-evolved wavefunction becomes almost orthogonal to the initial one at certain time intervals and as a consequence the system is significantly perturbed. Also, it can directly be deduced by the fidelity evolution that when the system is prepared in an energetically higher excited state it is less perturbed since the oscillation amplitude of |F (t)| is smaller, e.g. compare |F (t)| for |Ψ in ν1 and |Ψ in ν5 . This tendency which has already been discussed in Secs. III B and IV B can be explained in terms of the distribution of the amplitude of the overlap coefficients, see also Eq. (24). Indeed, if there is a single dominant overlap coefficient then |F (t)| ≈ 1, while if more than one overlap coefficients possess large values |F (t)| deviates appreciably from unity. Here, for instance, the first two most dominant overlap coefficients when starting from |Ψ in ν1 and |Ψ in ν5 are |d ν f be seen for the time intervals that |F (t)| is minimized [ Fig. 14], e.g. at t = 0.78, 2.42, 5.61, B(ρ, t) exhibits a pronounced peak close to ρ = 0 and a secondary one at a larger radii ρ ≈ 1.5. However, when |F (t)| ≈ 1 (t = 1.62, 3.13, 8.04) B(ρ, t) shows a more delocalized distribution. To explain this behavior of B(ρ, t) we next calculate B(ρ) of the initial state (i.e. |Ψ in ν1 ) and of the postquench eigenstates that possess the most dominant overlap coefficients, namely |Ψ f ν0 , |Ψ f ν1 and |Ψ f ν2 , following the above-described quench scenario [see the inset of Fig. 15 (a)]. Comparing B(ρ, t) with B(ρ) we observe that the bound state, |Ψ f ν0 , gives rise to the prominent peak close to ρ = 0 [see Fig. 15 (a)]. 
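For the g_in = 0 → g_f = ∞ quench of this section, the initial relative state is the analytic non-interacting ground state e^{−ρ²/2}/√π, and, following the asymptotic condition quoted earlier, the unitarity eigenvalues are taken here to satisfy ψ(−ν) = 0. With these two assumptions the overlaps of Eq. (25) can be estimated by radial quadrature and the minimum of |F(t)| inspected; the truncation to a handful of states makes the resulting numbers indicative only.

```python
# Sketch: g = 0 -> infinity quench, with unitarity eigenvalues assumed to obey digamma(-nu) = 0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import digamma, hyperu

def unitarity_nus(n_excited=8, eps=1e-9):
    f = lambda nu: digamma(-nu)
    return np.array([brentq(f, -60.0, -eps)] +
                    [brentq(f, n + eps, n + 1 - eps) for n in range(n_excited)])

def eigenstate(nu):
    raw = lambda r: np.exp(-r**2/2.0)*hyperu(-nu, 1.0, r**2)
    n2, _ = quad(lambda r: 2*np.pi*r*raw(r)**2, 0, np.inf)
    return lambda r: raw(r)/np.sqrt(n2)

phi0 = lambda r: np.exp(-r**2/2.0)/np.sqrt(np.pi)         # non-interacting relative ground state

nus    = unitarity_nus()
states = [eigenstate(nu) for nu in nus]
d = np.array([quad(lambda r, p=p: 2*np.pi*r*phi0(r)*p(r), 0, np.inf)[0] for p in states])

print("unitarity energies :", np.round(2*nus + 1, 3))
print("|d_j|^2            :", np.round(d**2, 3), " sum:", round(float(np.sum(d**2)), 3))

t = np.linspace(0.0, 2*np.pi, 2000)
F = np.abs(np.exp(-1j*np.outer(t, 2*nus + 1)) @ d**2)
print("min |F(t)| over one trap period:", round(float(F.min()), 3))
```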
Moreover, the states |Ψ f ν1 and |Ψ f ν2 are responsible for the emergent spatial delocalization of B(ρ, t). Of course, the ground state (|Ψ f ν1 ) plays a more important role here than the first excited state (|Ψ f ν2 ), since |d ν f Turning to the dynamics in momentum space, Fig. 15 (b) presents C(k, t) at specific time-instants for the quench g in = 0 → g f = ∞ starting from the ground state |Ψ in ν1 . We observe that when the system deviates notably from its initial state (i.e. t = 0.78, 2.42, 5.61) meaning also that |F (t)| 1, then C(k, t) shows a two peak structure with the first peak located close to k = 0 and the second one at k ≈ 0.4. Notice also here that the tail of C(k, t) has an oscillatory behavior. On the other hand, if |F (t)| is close to unity (e.g. at t = 1.62, 3.13, 8.04) where also B(ρ, t) is spread out [ Fig. 15 (a)], the corresponding C(k, t) has a narrow momentum peak close to zero and a fastly decaying tail at large k. The inset of Fig. 15 (b) illustrates C(k) of the initial eigenstate and some specific postquench ones which possess the largest contributions for the considered quench according to the overlap coefficients. It becomes evident that both the bound state, |Ψ f ν0 , and the ground state, |Ψ f ν1 , of the postquench system are mainly imprinted in C(k, t). Indeed, the bound state has a broad momentum distribution whereas the ground state possesses a main peak close to k = 0. On the other hand, the first excited state (|Ψ f ν2 ) has a smaller contribution compared to the previous ones and its presence can be discerned in Fig. 15 (b) from the oscillatory tails of C(k, t) at large momenta. Finally, we examine the dynamics of the rescaled contact D(t)/D(0) illustrated in Fig. 16 (a) following a quench from g in = 0.2 to g f = ∞. Note here that we choose g in = 0.2, and not exactly g in = 0, since the contact is well-defined only for interacting eigenstates [88]. Evidently D(t)/D(0) undergoes a large amplitude multifrequency oscillatory motion. The large amplitude of these oscillations stems from the fact that the system is quenched to unitarity and therefore the built up of short-range two-body correlations is substantial especially when compared to the correlations occuring for finite interactions as e.g. the ones displayed in Fig. 7 and Fig. 13 (a). We remark that similar large amplitude oscillations of the contact, at the frequency of the twobody bound state, have already been observed in Ref. [95] during the interaction quench dynamics of a three dimensional homogeneous BEC from zero to very large interactions. Regarding the participating frequencies identified in the spectrum of the contact shown in Fig. 16 (b), we can clearly infer that the dominant frequencies refer to the energy differences between the bound state, |Ψ ν0 and higher-lying states e.g. ω ν1,ν0 , ω ν2,ν0 . The existence of other contributing frequencies in the spectrum, such as ω ν2,ν1 and ω ν3,ν0 , has also an impact on the dynamics of the contact and signal the involvement of higher-lying states. VI. CONCLUSIONS We have explored the quantum dynamics of two bosons trapped in an isotropic two-dimensional harmonic trap, and interacting via a contact s-wave pseudo-potential. As a first step, we have presented the analytical solution of the interacting two-body wavefunction for an arbitrary stationary eigenstate. 
We have also briefly discussed the corresponding two-body energy eigenspectrum, covering both the attractive and the repulsive interaction regimes and showcasing the importance of the existing bound state. To trigger the dynamics we considered an interaction quench from repulsive to attractive interactions and vice versa, as well as a quench from zero to infinite interactions. Building on the stationary properties of the system, the form of the time-evolving two-body wavefunction has been provided. Most importantly, we have shown that the expansion coefficients can be derived in closed form, so that the dynamics of the two-body wavefunction can be obtained by numerically evaluating its expansion with respect to the eigenstates of the postquench system. In all cases, the dynamical response of the system has been analyzed in detail and the underlying eigenstate transitions that mainly contribute to the dynamics have been identified from the fidelity spectrum together with the system's eigenspectrum. We have shown that when the system is initialized in its ground state, characterized by either repulsive or attractive interactions, it is driven most efficiently out of equilibrium, as captured by the fidelity evolution, when the interaction quench is performed towards the vicinity of zero interactions. In contrast, for a quench towards the intermediate or strong coupling regimes of either sign, the system remains close to its initial state. As a consequence of the interaction quench the two bosons undergo a breathing motion, which has been visualized by monitoring the temporal evolution of the single-particle density and of the radial probability density in both real and momentum space. The characteristic structures building up in these quantities also allow us to infer the participation of energetically higher-lying excited states of the postquench system. To inspect the dependence of the dynamical response on the initial configuration, we have also examined quenches starting from a variety of different initial states, such as the bound state or an energetically higher excited state, in both the repulsive and the attractive interaction regimes. It has been found that, starting from energetically higher excited states, the system is perturbed to a lesser extent and fewer postquench eigenstates contribute to the emergent dynamics. A crucial role here is played by the bound state of the postquench system, in both the attractive and the repulsive regime, whose contribution is essentially diminished as the two bosons are initialized in higher excited states. On the other hand, when the quench is performed from the bound state, independently of the interaction strength, the system is driven out of equilibrium more efficiently than from any other initial-state configuration. Additionally, upon quenching the system from zero to infinite interactions starting from its ground state, the time-evolved wavefunction becomes almost orthogonal to the initial one at certain time intervals. Again, if the two bosons are prepared in an energetically higher excited state, the system is perturbed to a lesser degree. Inspecting the evolution of the radial probability density, we have identified that it mainly resides in a superposition of the bound and the ground state, alternating between a two-peaked structure and a more spread-out distribution. To unveil the emergence of short-range two-body correlations we have examined the dynamics of Tan's contact in all of the above-mentioned quench scenarios.
In particular, we have found that the contact performs a multifrequency oscillatory motion in time. The predominant frequency of these oscillations corresponds to the energy difference between the bound and the ground states. The participation of other frequencies, possessing comparatively smaller amplitudes, signals the contribution of higher-lying states to the dynamics of the contact. Moreover, upon quenching the system from weak to infinite interactions, the oscillation amplitude of the contact is substantially enhanced, indicating a significant development of short-range two-body correlations as compared to the correlations occurring at finite postquench interactions. There is a variety of fruitful directions to follow in future works. An interesting one would be to consider two bosons confined in an anisotropic two-dimensional harmonic trap and examine the stationary properties of this system in the dimensional crossover from two to one dimensions. Having such an analytical solution at hand would allow us to study the corresponding dynamics of the system upon changing its dimensionality, e.g. by considering a quench of the trap frequency in one of the spatial directions, which would enable us to excite modes beyond the monopole mode. Also, one could utilize the spectra at different anisotropies in order to achieve controllable state transfer processes [61,62]. Besides the dimensionality crossover, it would be interesting to study the effect of finite temperature on the interaction quench dynamics examined herein. Finally, the dynamics of three two-dimensional trapped bosons requires further investigation. Even though the Efimov effect is absent in that case [96], the energy spectrum is rich, possessing dimer and trimer states [33], and the corresponding dynamics might reveal intriguing dynamical features when quenching from one configuration to another.
Illustrative Application of the 2 nd -Order Adjoint Sensitivity Analysis Methodology to a Paradigm Linear Evolution/Transmission Model: Point-Detector Response This work illustrates the application of the “Second Order Comprehensive Adjoint Sensitivity Analysis Methodology” (2 nd -CASAM) to a mathematical model that can simulate the evolution and/or transmission of particles in a heterogeneous medium. The model response is the value of the model’s state function (particle concentration or particle flux) at a point in phase-space, which would simulate a pointwise measurement of the respective state function. This paradigm model admits exact closed-form expressions for all of the 1 st - and 2 nd -order response sensitivities to the model’s uncertain parameters and domain boundaries. These closed-form expressions can be used to verify the numerical results of production and/or commercial software, e.g., particle transport codes. Furthermore, this paradigm model comprises many uncertain parameters which have relative sensitivities of identical magnitudes. Therefore, this paradigm model could serve as a stringent benchmark for in-ter-comparing the performances of all deterministic and statistical sensitivity analysis methods, including the 2 nd -CASAM. Illustrative Application of the 2 nd -Order Adjoint Sensitivity Analysis Methodology to a Paradigm Linear Evolution/Transmission Model: Point-Detector Response. Introduction The application of the general second-order adjoint sensitivity analysis metho-dology presented in [1] is illustrated in this work by means of a simple mathematical model which expresses a conservation law of the model's state function. This paradigm model is representative of transmission of particles and/or radiation through materials [2] [3], chemical kinetics processes [4] [5], radioactive decay modeled by the Bateman equation, etc. Although the model is simple, it comprises a large number of model parameters, thereby involving a correspondingly large number of sensitivities (i.e., functional derivatives) of the model's responses to the model parameters. Furthermore, the model has been deliberately designed so that a large number of relative response sensitivities display identical values. The fact that the model has a large number of parameters and the fact that all but a few relative sensitivities have identical values would make it very difficult, if not impossible, to use statistical methods to compute the first-and second-order sensitivities of the responses to all of the parameters of this model, since the computational costs would be prohibitive. Of course, statistical methods would not be able to compute the exact values of these first-and second-order sensitivities. For such models, involving many parameters but relatively few responses, the Second-Order Comprehensive Adjoint Sensitivity Analysis Methodology (2 nd -CASAM) for Linear Systems, presented in Part I [1], is best suited for computing exactly and efficiently the first-and second-order response sensitivities. This work is organized as follows: Section 2 presents the paradigm evolution model. Section 3 presents the application of the 2 nd -CASAM [1] for efficiently computing the exact closed-form expressions of the first-and second-order sensitivities of a "point-type" response to both model and boundary parameters. 
The concluding remarks offered in Section 4 highlight the comprehensive verification mechanism which is inherently built into the 2 nd -CASAM [1] to ensure that the second-level adjoint functions are derived and computed correctly. All in all, the exact expressions of the 1 st -and 2 nd -order sensitivities presented in this work provide stringent benchmarks for the verification of the accuracies of any other methods, deterministic and/or statistical, for performing sensitivity analysis. Mathematical Modeling of a Paradigm Evolution/Transmission Benchmark Problem The general 2 nd -CASAM methodology presented in [1] is applied in this work to a simple paradigm model, admitting a closed-form analytic solution for convenient verification of all results to be obtained, which simulates a typical evolution or attenuation of a quantity that will be denoted as ( ) The simple evolution system represented by Equations (1) and (2) occurs in the mathematical modeling of many physical systems. For example, the dependent variable ( ) t ρ could represent [2] [3] the evolution of the concentration of a substance in a homogeneous mixture of N materials, from an imprecisely known initial quantity, denoted as in ρ , measured at an initial-time value t β =  towards an imprecisely known final-time value u t β = . The quantities i n and i σ would represent various imprecisely known material (e.g., chemical) properties of the i th -material ( ) Alternatively, ( ) The following functional, denoted as ( ) 1 ; , R ρ α β , can represent mathematically such a measurement: δ − denotes the well-known Dirac-delta (impulse) functional. In Equation (3), the vector α denotes the "vector of model parameters" and defined as follows: Similarly, the vector β denotes the "vector of boundary parameters" and is defined as follows: ( ) subject to uncertainties, the actual probability distributions of these parameters are not known in practice. Usually, only the "nominal" (or "mean") values and the respective variations from the nominal values (e.g., standard deviations) of the respective components are known. 
The nominal values will be denoted using the superscript "zero" so that the vector comprising the nominal values of the model parameters, denoted as 0 α , will be defined for the system under consideration as follows: † 0 0 0 0 0 0 0 0 0 1 1 1 , , , , , , , , , Similarly, the vector comprising the nominal values of the boundary parameters is denoted as 0 β and is defined for the system under consideration as follows: Altogether, the physical system modeled by Equations (1) through (7) For subsequent verification of the expressions that will be obtained for various response sensitivities, the closed-form solution of Equations (1) and (2) Using Equation (9) in Equation (3) Application of the 2 nd -CASAM for Computing Exactly and Efficiently the 1 st -and 2 nd -Order Response Sensitivities of a "Point Detector" Response to Uncertain Model and Boundary Parameters The variations between the true and the nominal values of the model and boundary parameters will be considered to constitute the components of the vectors δα and δβ , respectively, defined as follows: Since the state function is related to the model and boundary parameters α and β through Equations (1) and (2), it follows that the variations and δβ in the model and boundary parameters will cause a corresponding variation in The total first-order sensitivity of the response Equation (3) is provided [6] by the 1 st -order total sensitivity (G-differential) ( ) The variation Since the closed-form solution represented by Equation (9) is not available in practice, the direct effect term, ( ) . This sequence of steps yields the following relation: The following sequence of operations is performed next using Equation ( 4) Insert the boundary condition provided in Equation (17) into Equation (19). The result of the above sequence of operations is the following expression for where the first-level adjoint function ( ) ( ) In terms of the first-level adjoint function ( ) ( ) 1 t ψ , the partial sensitivities of ( ) 1 ; , R ρ α β with respect to the variations in the model parameters are the quantities in Equation (20) that multiply the respective parameter variations, namely: Recalling the expression of the direct effect term, ( ) 1 dir R δ , defined in Equation (15), yields the following additional first-order sensitivity: Since neither the direct-effect nor the indirect-effect terms depend on the variation It is evident from Equations (23) through (27) that the sensitivities of the response ( ) 1 ; , R ρ α β can be computed by fast quadrature methods applied to the integrals appearing in these expressions, after the 1 st -level adjoint function has been obtained by solving once the 1 st -LASS, which comprises Equa-tions (21) and (22). Notably, the 1 st -LASS needs to be solved once only since the 1 st -LASS does not depend on any variations in the model parameters or state functions. Particularly important is the response sensitivity to the "initial condition" in ρ since, as Equation (25) indicates, the value of the 1 st -level adjoint  is proportional to the response sensitivity to the "initial condition". Since the value of the 1 st -level adjoint  can be obtained only after computing the entire evolution of ( ) ( ) to the "initial-time" 0 t β =  , it becomes apparent that response sensitivities to initial conditions provide a stringent verification procedure for assessing the accuracy of the solution of the 1 st -LASS. Solving the 1 st -LASS, cf. 
Equations (21) and (22), yields the following expression for the 1 st -level adjoint function ( ) ( ) is the customary Heaviside unit-step functional, defined as Inserting the result from Equation (29) into Equations (23)-(26), respectively, yields the following expressions: The magnitudes of the 1 st -order relative sensitivities provide a quantitative measure for ranking the importance of the respective parameters in affecting the response (e.g., the importance of the respective parameter's uncertainty in contributing to the overall uncertainty in the response). For the paradigm illustrative evolution problem considered in this work, Equations (23) and (24) indicate the important fact that the relative sensitivities of the response to the parameters i σ , ( )( ) , and the relative sensitivities of the response to the parameters i n , ( )( ) , respectively, happen to be identical, for all of these 2N model parameters, since Therefore, statistical methods that use a priori screening techniques to reduce the number of model parameters that are actually considered in the respective statistical uncertainty/sensitivity analysis will very likely fail to achieve their goal for problems that have many parameters with identical relative sensitivities, as is the case shown in Equation (36). Hence, this illustrative paradigm problem, which has many model parameters that have identical relative sensitivities, would be a prime candidate for testing the various statistical methods for sensitivity and uncertainty analysis. In contrast, a single large-scale computation for obtaining the adjoint function ( ) ( ) The results for the 1 st -order response sensitivities obtained in Section 2.1 can also be verified by noting that the solution of the 1 st -LFSS, comprising Equations (16) and (17), has the following expression: The starting point for obtaining expressions of the 2 nd -order response sensitivities is provided by the G-differentials of the expressions shown in Equations (23)-(27). To keep the notation as simple as possible, the superscript "zero" will henceforth be omitted (except where stringently needed) when denoting "nominal values," since it will be clear from the derivations to follow that all 1 st -and 2 nd -order sensitivities are to be evaluated at the nominal values of parameters. Results for the 2nd-Order Response Sensitivities Corresponding to The first-order G-differential of Equation (23) yields: d , The direct-effect term defined by Equation (39) Therefore, the need for solving these equations (which depend on parameter variations) will be circumvented by expressing the indirect-effect term defined in Equation (40) in an alternative way so as to eliminate the appearance of . The inner product between two elements will be denoted as and is defined as follows: Writing Equations (16) and (41) in matrix form, as follows: and using the definition given in Equation (43), we now construct the inner product of Equation (44) with a square integrable two-component function H to obtain the following relation: Integrating by parts the left-side of Equation (45) so as to transfer the differential operations on The last two terms on the right-side of Equation (46) The boundary conditions for Equations (47) and (48) Using the conditions given in Equations (17) The 2 nd -order sensitivities shown in Equations (52)-(57) can be computed after having determined the 2 nd -level adjoint function , it follows that the right-sides of Equations (47) and (48) also depend on this index. 
Strictly speaking, therefore, the 2nd-level adjoint sensitivity function also depends on the index i. Hence, in the most unfavorable situation, the 2nd-LASS, comprising Equations (47)-(49), would need to be solved numerically for each distinct value $n_i$, for a total of N times. Even in such a "worst-case scenario," however, only the right-sides (i.e., the "sources") of Equations (47) and (48) would need to be modified, which is relatively easy to implement computationally. The left-sides of these equations remain unchanged, since they are independent of the index i. The components of the 2nd-level adjoint function are obtained in closed form, cf. Equation (58), and are given in Equations (64) and (65). Using Equations (64) and (65) in Equations (52)-(57) and performing the respective operations yields the results for the respective partial 2nd-order sensitivities given in Equations (66)-(71). As before, the right-sides of the expressions shown in Equations (66)-(71) are to be evaluated at the nominal values for the parameters, but the superscript "zero," which indicates "nominal values," has been omitted for notational simplicity.

Results for the 2nd-Order Response Sensitivities Corresponding to $\partial R_1 / \partial n_i$

Computing the first-order G-differential of Equation (24) proceeds along the same lines: the corresponding 2nd-level adjoint function satisfies the 2nd-LASS comprising Equations (76)-(78). The sources on the right-sides of the 2nd-LASS defined by Equations (76)-(78) are to be evaluated at the nominal values for the parameters, but the superscript "zero," which indicates "nominal values," has been omitted for notational simplicity. Comparing Equations (76)-(78) to Equations (47)-(49), and recalling Equations (59)-(61), indicates that the components of the 2nd-level adjoint function have the expressions given in Equations (79) and (80). Adding the direct-effect term defined in Equation (73) and inserting the expressions obtained in Equations (79) and (80) for the components of the 2nd-level adjoint function yields the corresponding partial 2nd-order sensitivities.

Results for the 2nd-Order Response Sensitivities Corresponding to $\partial R_1 / \partial \rho_{in}$

The 2nd-order response sensitivities corresponding to $\partial R_1(\rho; \boldsymbol{\alpha}, \boldsymbol{\beta}) / \partial \rho_{in}$ will be calculated in this Section by taking the G-differential of Equation (25). Since the model responses need to be written in the form of an inner product in order to apply the adjoint sensitivity analysis methodology, Equation (25) is re-written in the form given in Equation (93). Taking the G-differential of Equation (93) yields a direct-effect term, defined in Equation (95), and an indirect-effect term, defined in Equation (96); the latter involves the variation in the forward function, as in Sections 2.2.1 and 2.2.2. Therefore, the 2nd-level adjoint function that would be needed to recast the indirect-effect term defined in Equation (96), by following the same general procedure as used in Sections 2.2.1 and 2.2.2, would be a one-component (as opposed to a "two-component" vector) function. Thus, the 2nd-LASS needed to recast the indirect-effect term defined in Equation (96) is constructed by following a procedure similar to the one that was used in Section 2.1, by applying the definition provided in Equation (18) to construct the inner product of a square-integrable function with Equation (41) and integrating the left-side of the resulting equation by parts once, so as to transfer the differential operation onto that function. This sequence of steps yields the relation given in Equation (97). The last term on the right-side of Equation (97) is now required to represent the indirect-effect term defined in Equation (96).
This is accomplished by requiring the 2nd-level adjoint function to satisfy the 2nd-LASS provided in Equations (98) and (99). Adding the direct-effect term defined in Equation (95) then yields the corresponding partial 2nd-order sensitivities. The closed-form solution of the 2nd-LASS provided in Equations (98) and (99) is given in Equation (106). Replacing the result for the 2nd-level adjoint function obtained in Equation (106) into these sensitivities, and performing the sequence of operations based on Equation (114), leads to Equations (116) and (117); the solution of Equations (116) and (117) provides the remaining 2nd-level adjoint functions. In terms of these 2nd-level adjoint functions, the last two terms on the right-side of Equation (138) will represent the indirect-effect term defined in Equation (136), by requiring that the conditions given in Equations (139)-(141) be satisfied. Using Equations (137)-(141) and (17) in Equation (136) yields the expression for the indirect-effect term defined in Equation (136).
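Because the display equations of the paradigm model are not reproduced legibly above, the following minimal numerical sketch assumes a simple illustrative evolution model of the same general type, namely a single balance equation drho/dt = -(sum_i sigma_i n_i) rho with initial condition rho(0) = rho_in and a "point detector" response R = rho(t_d), rather than the exact Equations (1)-(7). For such a model the 1st-level adjoint function reduces to psi(t) = exp[-lambda (t_d - t)] on 0 <= t <= t_d, with lambda = sum_i sigma_i n_i, and the quadrature-based sensitivities can be verified against the closed-form solution; all parameter values below are arbitrary placeholders.

import numpy as np

# Assumed illustrative model (NOT the exact paradigm Equations (1)-(7)):
#   drho/dt = -(sum_i sigma_i * n_i) * rho,  rho(0) = rho_in,  response R = rho(t_d)
sigma = np.array([0.2, 0.5, 0.3])          # placeholder "cross sections"
n = np.array([1.0, 2.0, 0.5])              # placeholder "number densities"
rho_in, t_d = 1.0, 2.0
lam = float(np.dot(sigma, n))              # total removal rate

t = np.linspace(0.0, t_d, 2001)
rho = rho_in * np.exp(-lam * t)            # forward solution
R = rho[-1]                                # point-detector response

# 1st-level adjoint function for this model (Heaviside support on [0, t_d]):
psi = np.exp(-lam * (t_d - t))

# Adjoint-based 1st-order sensitivities via quadrature (trapezoidal rule):
f = psi * rho
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
dR_dsigma = -n * integral                  # one adjoint solve yields all N sensitivities
dR_dn = -sigma * integral
dR_drho_in = psi[0]                        # adjoint value at the initial time

# Verification against the closed-form solution R = rho_in * exp(-lam * t_d):
assert np.allclose(dR_dsigma, -n * t_d * R)
assert np.allclose(dR_dn, -sigma * t_d * R)
assert np.isclose(dR_drho_in, R / rho_in)

# Relative sensitivities of sigma_i and n_i coincide, mirroring Equation (36):
assert np.allclose(sigma * dR_dsigma, n * dR_dn)

The last check illustrates, for this toy model, why the relative sensitivities with respect to sigma_i and n_i coincide: the parameters enter the model only through the products sigma_i n_i.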
An insight in the surroundings of HR4796 HR4796 is a young, early A-type star harbouring a well structured debris disk, shaped as a ring with sharp inner edges. It forms with the M-type star HR4796B a binary system, with a proj. sep. ~560 AU. Our aim is to explore the surroundings of HR4796A and B, both in terms of extended or point-like structures. Adaptive optics images at L'-band were obtained with NaCo in Angular Differential Mode and with Sparse Aperture Masking (SAM). We analyse the data as well as the artefacts that can be produced by ADI reduction on an extended structure with a shape similar to that of HR4796A dust ring. We determine constraints on the presence of companions using SAM and ADI on HR4796A, and ADI on HR4796B. We also performed dynamical simulations of a disk of planetesimals and dust produced by collisions, perturbed by a planet located close to the disk outer edge. The disk ring around HR4796A is well resolved. We highlight the potential effects of ADI reduction of the observed disk shape and surface brightness distribution, and side-to-side asymmetries. No planet is detected around the star, with masses as low as 3.5 M_Jup at 0.5"(58 AU) and less than 3 M_Jup in the 0.8-1"range along the semi-major axis. We exclude massive brown dwarfs at separations as close as 60 mas (4.5 AU) from the star thanks to SAM data. The detection limits obtained allow us to exclude a possible close companion to HR4796A as the origin of the offset of the ring center with respect to the star; they also allow to put interesting constraints on the (mass, separation) of any planet possibly responsible for the inner disk steep edge. Using detailed dynamical simulations, we show that a giant planet orbiting outside the ring could sharpen the disk outer edge and reproduce the STIS images published by Schneider et al. (2009). Introduction Understanding planetary systems formation and evolution has become one of the biggest challenges of astronomy, since the imaging of a debris disk around β Pictoris in the 80's (Smith and Terrile 1984) and the discovery of the first exoplanet around the solar-like star 51 Pegasi during the 90's (Mayor and Queloz 1995). Today, about 25 debris disks have been imaged at optical, infrared, or submillimetric wavelengths (http://astro.berkeley.edu/kalas/disksite). Debris disks trace stages of system evolution where solid bodies with sizes significantly larger than the primordial dust size (larger than meters or km sized) are present to account for through collisions, the presence of short lived dust. They are thought to be privileged places to search for planets. This is particularly true for those showing peculiar structures (e.g. rings with sharp edges) or asymmetries, spirals even though other physical effects not involving planets could also lead to the formation of similar structures. Takeuchi and Artymowicz 2001 for instance showed that relatively small amounts (typ. 1-a few Earth masses) of gas can shape the dust disk through gas-dust interactions into rings (see below). It is remarkable however that all the stars around which relatively close (separations less than 120 AU) planets have been imaged are surrounded by debris disks: a ≤ 3-M Jup planetary companion was detected in the outskirts of Fomalhaut's debris disk (119 AU from the star; Kalas et al. 2008), four planetary companions of 7-10 M Jup were imaged at 15, 24, 38 and 68 AU (projected separations) from HR 8799 (Marois et al. 2008, Marois et al. 2010). 
Using VLT/NaCo L'-band saturated images, we detected a 9 + − 3 M Jup planet in the disk of β Pictoris (≃ 12 Myr) with an orbital radius of 8-12 AU from the star Chauvin et al. 2012). More recent studies at Ks show that β Pic b is located in the inclined part of the disk , conforting the link between disk morphology and the presence of a planet (Mouillet el al. 1997;Augereau et al. 2001). β Pic b also confirms that giant planets form in a timescale of 10 Myr or less. Interestingly, β Pic b and maybe also HR8799e could have formed in situ via core accretion, in contrast with the other, more remote, young companions detected with high-contrast imaging. If formed in situ, the latter probably formed through gravitational instabilities within a disk, or through the fragmentation and collapse of a molecular cloud. There are many exciting questions regarding disks and planets: could different planet formation processes be at work within a given disk? Disks and planets are known to exist in binary (2) systems; (how) do massive companions impact on the dynamical evolution of inner planets and disks? In a recent study, Rodriguez and Zuckerman 2011 showed that for a binary system to have a disk, it must either be a very wide binary system with disk particles orbiting a single star or a small separation binary with a circumbinary disk. Such results can help the search for planets if one relates debris disks with planet formation. However, another question, already mentioned, is to which extent and how can debris disks indicate the presence of already formed planets? A particularly interesting system in the present context is HR4796A, consisting of an early-type (A0), young, closeby star (see Table 1) surrounded by dust, identified in the early 90's (Jura 1991) and resolved by Koerner et al. 1998 andJayawardhana et al. 1998 at mid-IR from the ground, and at near-IR with NICMOS on the HST (Schneider et al. 1999) as well as from the ground, coupling coronagraphy with adaptive optics (Augereau et al. 1999). The resolved dust shapes as a narrow ring, with steep inner and outer edges. The steepness of the inner edge of the dust ring has been tentatively attributed to an unseen planet (Wyatt et al. 1999); however, none has been detected so far. The disk images + SED modeling required at least two populations of grains, one, narrow (a few tens AU) cold ring, located at ≃ 70 AU, and a second one, hotter and much closer to the star (Augereau et al. 1999), although the existence of an exozodiacal dust component is debated (Li and Lunine 2003). Wahhaj et al. 2005, argued that in addition to the dust responsible for the ring-like structure observed at optical/near IR wavelengths, a wider, low-density component should be present at similar separations to account for the thermal IR images. Recently, higher quality (SN/angular resolution) data were obtained with STIS (Schneider et al. 2009) and from the ground , the latter using performant AO system on a 8 m class telescope, as well as Angular Differential Imaging (ADI, see below). With the revised distance of HR4796 with respect to earlier results, the ring radius is at about 79 AU, and has a width of 13 AU (STIS 0.2-1 µm data). Furthermore, these authors show a 1.2-1.6 AU physical shift of the projected center of the disk wrt the star position along the major axis and Thalmann et al. 2011 moreover measures a 0.5 AU shift along the disk minor axis. 
Finally, and very interestingly in the context of planetary system formation, HR4796 has also a close-by (7.7 arcsec, i.e. about 560 AU projected separation), M-type companion (Jura et al. 1993), plus a tertiary one, located much further away (13500 AU in projection; Kastner et al. 2008). The closest companion may have played a role in the outer truncature of the disk, even though, according to Thebault et al. 2010, it alone cannot account for the sharp outer edge of HR4796 as observed by STIS. ADI (Marois et al. 2006) is a technics that has proved to be very efficient in reaching very high contrast from the ground on point-like objects. It has also been used to image disks around HD61005 (the Moth disk, Buenzli et al. 2010), HD32297 β Pic (Lagrange et al. 2011), but it should be used with care when observing extended structures, as the morphology of these structures may be strongly impacted by this method (Milli et al, 2012, in prep). We present here new high contrast images of HR4796 obtained with NaCo on the VLT at L' band, both in ADI and Sparse Aperture Mode (SAM, ) aiming at exploring the disk around HR4796A, and at searching for possible companions around HR4796A as well as around HR4796B. SAM and ADI are complementary as the first give access to regions in the 40-400 mas range from the star, and ADI further than typically 300 mas. Pedagogical examples of SAM performances and results are reported in Lacour et al. 2011. Log of observations VLT/NaCo (Lenzen et al. 2003;Rousset et al. 2003) L'-band data were obtained on April, 6 and 7th, 2010, in ADI mode and with SAM. We used the L27 camera which provides a pixel scale of ≃ 27 mas/pixel. The log of observations is given in Table 2. The precise detector plate scale as well as detector orientation were measured on an θ 1 Ori C field observed during the same run, and with HST (McCaughrean and Stauffer 1994 (with the same set of stars TCC058, 057, 054, 034 and 026). We found a platescale of 27.10 + − 0.04 mas per pixel and the North orientation offset was measured to be -0.26 + − 0.09 • if we do not consider a systematics in the North position, or -0.26 + − 0.3 • otherwise (see Lagrange et al. 2011 for a detailed discussion on the absolute uncertainty on the detector orientation). ADI data The principle of ADI imaging is given in Marois et al. 2006 (see also Lafrenière et al. 2007). Here, a typical ADI sequence consisted in getting sets of saturated images (datacubes of NDIT images) at different positions on the detector, followed and precedented by a series of un-saturated PSF images recorded with a neutral density filter (ND Long). These unsaturated images are used to get an estimate of the PSF shape for calibration purposes (photometry, shape), and fake planet simulation. On April 5/6, a few tests were made with different offsets patterns (star centered on either 2 or 4 positions on the detector) so as to test the impact of HR4796B (which rotates on the detector during the ADI observations) on the final image quality 1 . This is important as the field of view (FoV) rotation was fast. On April 6/7, the saturated images were recorded with two offsets corresponding to two opposite quadrants on the detector. Both nights, the atmospheric conditions were good on average, but variable, and the amplitude of field rotation was larger than 80 • (see Table 2). The comparison between the PSFs taken prior and after the saturated images does not reveal strong variations. 
SAM data Sparse aperture masking is obtained on NaCo by insertion of a mask in the cold pupil wheel of the camera (Lacour et al. 2011). The mask acts as a Fizeau interferometer. It forms in the focal Table 2. Log of observations. "Par. range" stands for the parallactic angles at the start and end of observations; "EC mean" for the average of the coherent energy and "t0 mean" for the average of the coherence time during the observations. plane of the camera interference fringes which are used to recover the complex visibilities of the astronomical object. Of the four available masks , we used the 7 holes mask which gives the highest throughput (16%). It is made of 1.2 meters wide circular apertures (scaled on M1) positioned in a non-redundant fashion over the pupil. Minimum and maximum baseline lengths are respectively 1.8 and 6.5 meters. Each mask offered by SAM can be used in addition to almost all the spectral filters offered by Conica. The principle of SAM is based on its ability to facilitate the deconvolution of phase aberrations. Phase errors are introduced by i) atmospheric residuals and ii) instrumental aberrations (also called non-common path errors). We used integration times of the order of the typical coherence time of the phase errors. It permits a partial deconvolution of the remaining atmospheric perturbation not corrected by the AO. But most importantly, it gives an excellent correction of the slowly changing instrumental aberrations. This later point is the important factor which makes aperture masking competitive with respect to full aperture AO. In practice, L' SAM data on HR4796 were obtained on June 2011. The adopted DIT was 100 ms, equivalent to a few τ 0 in the L' band. Each set of observation consists in 8 ditherings of the telescope to produce 8 datacubes of 500 frames on 8 different positions on the detector. Each dither moves the star by 6 arcsec in X or in Y on the windowed detector (512 by 512 pixels, equivalent to 14 arcsec on sky). After 8 dithers, the telescope is offset to the K giant star HD110036 for calibration, where the very same observation template is repeated. Four star-calibrator pairs were obtained totalizing 64 datacubes, requiring a total observation time of 2 hours (including overheads). Over this time, the object has rotated by 50 degrees (the variation of the parallactic angle). ADI data Each individual ADI image was bad pixel-corrected and flatfielded as usual. Background subtraction was made for each cube using the closest data cubes with the star at a different offset. Data selection was also made, within each data cube and also for each data cube. Recentering of the images was done using the offsets measured by Moffat fitting of the saturated PSF. The data cubes were then stacked (averaged) and then reduced with different procedures that are described in details in Lagrange et al. 2011 and reference there-in: cADI, sADI, rADI and LOCI. These procedures differ in the way the star halo is estimated and subtracted. We recall here the differences between these various procedures, as well as new ones developed to limit the disk self-subtraction in cADI and/or LOCI: -In cADI, the PSF is taken as the mean or median of all individual recentered ADI saturated images. -To remove as much as possible the contribution of the disk from the PSF in the cADI images, we tested two slightly modified cADI reductions. In the first one, we start as usual, i.e. 
build a PSF from the median of all data, subtract this PSF to all data and rotate back the obtained residual images to align their FoV. The data are then combined (median) to get a first image of the disk. Then, to remove the disk contribution to the PSF, we rotate the disk image back to the n different FoV orientations corresponding to those of the initial images and subtract the median of these rotated disk images to the PSF. We obtain thus a PSF corrected (to first order) from the disk contribution. This disk-corrected PSF is then subtracted to the individual initial images; the individual residuals are then rotated back to be aligned and stacked (median) to get a new disk image (corresponding to one iteration). This ADI procedure is referred to as cADI-disk. In the second one, we mask the disk region in each file when used to compute the reference 2 . We will call this method mcADI. This method will be described in details in a forthcoming paper (Milli et al, 2012, in prep). -The rADI procedure (identical to Marois et al. 2006 ADI) selects for each frame a given number of images that were recorded at parallactic angles separated by a given value in FWHM (the same value in FWHM for each separation), to build a PSF to be subtracted from the image considered. -In the LOCI approach, for each given image and at each given location in the image, we compute "parts" of "PSFs", using linear combinations of all available data, with coefficients that allow to minimize the residuals in a given portion of the image. -To limit the impact of the disk self-subtraction on the LOCI images, we also modified our LOCI approach, masking the disk in each file whenever the disk appears in the optimization zone (see Milli et al, 2012, in prep.). We will call this method mLOCI. The parameters used for the rADI and LOCI procedures are the following : -LOCI/mLOCI ∆r = 1.4 × FWHM below 1.6" and 5.6 × FWHM beyond (radial extent of the subtraction zones); g = 1 or 0.5 (radial to azimuthal width ratio), N A = 300; separation criteria 1. × FWHM. -rADI: separation criteria: 1.5 × FWHM; number of images used to compute each "PSF" : 20, ∆r = 1.4 × FWHM below 1.6 arcsec and 5.6 × FWHM beyond (radial extent of the psf reconstruction zones) For comparison purposes, we also performed a zero-order reduction (hereafter referred to as "nADI") which consists in, for each image, 1) computing an azimuthal average of the image (with the star position as the center of the image); we get then a 1-D profile, 2) circularizing this 1-D profile to get a 2-D image centered on the star position, 3) subtracting the obtained image to the initial image to get corrected image. We then derotate and stack all the "corrected" images. nADI clearly does not benefit from the pupil stabilization and is not to be considered as a real ADI reduction procedure, but can help in some cases disentangling artefacts produced by ADI reductions from real features. The data obtained on the 6th and 7th were reduced separately and then averaged. As they happened to have similar S/N ratio, a simple averaging was made. SAM data The first step to reduce the SAM data is to clean the frames. This can be done in the same way as any classical imaging method in the infrared. In practice, we flatfielded the data and subtracted the background. The background was estimated by taking the median value of the 8 datacubes of a single observation set. As any interferometric facility, the observable parameters of SAM are fringes. 
The information lays in the contrast (which, once normalized, is called visibility) and the phase. Contrasts and phases are obtained by least square fitting of the diffraction pattern. Since the fitting of sinusoidal curves is a linear least square problem, a downhill algorithm to find the maximum likelihood was not required. Instead, inversion was done by projection of the datacubes on a parameter space defined by each complex visibility fringes. The matrix used for projection is determined by singular value decomposition of a model of the fringes. In the end, we checked that it gives exactly the same result as a least square minimization algorithm of the kind of conjugate gradient (but much faster). The fringes are modeled by cosines of given frequency multiplied by the PSF of the Airy pattern of a single hole. Wavelength bandwith is accounted for by smearing the pattern over the filter bandpass. As a result, we get a single complex value for each baseline and each frame. They are used to compute the bispectrum, which is summed over the 8 datacubes which correspond to a single acquisition. Then, the closure phases are obtained by taking the argument of the bispectrum. One set of closure phase is obtained for an observation set which takes around 8 minutes. Over that time, the parrallactic angle changes less than 6 degrees, which effect is neglected (baselines rotation during an observation set is not accounted for). The final step consists in calibrating the closure phase of HR4796 by subtracting the corresponding values obtained for the red giant (HD110036). Data simulations Obviously, ADI affects the resulting disk shape because of disk self-subtraction. This effect is expected to be more important as the disk inclination with respect to line-of-sight decreases. Also, the different ADI reduction procedures will impact differently the disk shape. A general study of the impact of ADI on disk reductions will be presented in a forthcoming paper (Milli et al, 2012, in prep.). In this paper, we concentrate on the HR4796 case and we monitor this impact using fake disks, as done in Lagrange et al. 2011. Assumptions To simulate the HR4796 disk, we assumed, following Augereau et al. 1999 a radial midplane number density distribution of grains ∝ ((r/r 0 ) −2α in + (r/r 0 ) −2α out ) −0.5 . We chose r 0 = 77.5 AU, α in =35 to ensure a very sharp inner edge, and α out = -10, as assumed by Thalmann et al. 2011. The vertical distribution is given by: is 1 AU at 77.5 AU. The disk flaring coefficient is β=1 and the coefficient γ = 2 ensures a gaussian vertical profile. The disk is inclined by 76 degrees (a pole-on disk would have an inclination of 0 degree), and we assumed an isotropic scattering (g=0), as Hinkley et al. 2009 polarimetric measurements indicate a low value for g (0.1-0.27). The disk was simulated using the GRaTer code (Augereau et al. 1999;Lebreton et al. 2012). It will be referred to as HR4796SD. The ring FWHM thus obtained is 0.14" (before reduction) under such hypothesis. We also considered another disk, with all parameters identical to those of HR4796SD, but with α out = -4; this disk (referred to as HR4796blowoutSD) is representative of the outer density distribution that would be observed if the outer brightness distribution was dominated by grains expelled by radiation pressure as in the case of β Pic (Augereau et al. 2001). 
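To make the adopted radial law explicit, the short sketch below evaluates the midplane profile quoted above for the HR4796SD and HR4796blowoutSD cases. It reproduces only the radial dependence written in the text (the vertical structure, flaring and scattering treated by GRaTer are omitted), and the plotting is purely illustrative.

import numpy as np
import matplotlib.pyplot as plt

# Radial midplane number density of the fake disks (radial part only):
#   n(r) propto [ (r/r0)^(-2*alpha_in) + (r/r0)^(-2*alpha_out) ]^(-1/2)
def radial_density(r, r0=77.5, alpha_in=35.0, alpha_out=-10.0):
    return ((r / r0) ** (-2.0 * alpha_in) + (r / r0) ** (-2.0 * alpha_out)) ** -0.5

r = np.linspace(50.0, 120.0, 500)              # AU
n_sd = radial_density(r)                       # HR4796SD: sharp outer edge
n_blow = radial_density(r, alpha_out=-4.0)     # HR4796blowoutSD: blowout-like outer slope

plt.plot(r, n_sd / n_sd.max(), label="alpha_out = -10 (HR4796SD)")
plt.plot(r, n_blow / n_blow.max(), label="alpha_out = -4 (HR4796blowoutSD)")
plt.xlabel("r [AU]")
plt.ylabel("relative midplane density")
plt.legend()
plt.show()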
Simulated disk images The flux of the simulated disk is scaled so as to have the same number of ADU (at the NE ansae) as in the real disk, once both simulated and real data are reduced by cADI. When brighter disk are needed, a simple scaling factor is applied. The simulated projected disks are then injected in a datacube, at each parallactic angle, corresponding to each real data file, and are then convolved either by a theoretical PSF matching the telescope and instrument response, or the average of the real PSFs taken prior to and after the saturated images. Each image is added to each frame of the original data cube, with a 130 • or 90 • offset in PA with respect to the real disk, so as to minimize the overlap between both disks. The datacubes are then processed by nADI, CADI, mcADI, LOCI and mLOCI. The HR4796A disk 4.1. Disk images: qualitative view Figure 1 shows the images obtained when combining the data obtained on the 6th and 7th of April. Images resulting from the ADI reductions described above are showed: cADI, cADI-disk, mCADI, rADI, LOCI, and mLOCI. We also show for comparison the image resulting from nADI reduction. . The pixel scale is 27 mas/pixel. From top to bottom, the same data reduced with nADI, cADI, cADI-disk, mcADI, rADI, LOCI, and mLOCI (see text). Note that the color codes are identical for all cADI reductions reductions on the one hand, and LOCI reductions on the other hand, to enable comparisons within a given method, but different cuts are used for cADI, rADI and LOCI reductions. The disk is clearly detected at "large" separations from the star with the nADI reduction, and is, expectedly, lost in the Airy rings closer to the star. The parts closer to the star in projection are revealed only by the real ADI reductions, and actually, the disk is more completely detected than in previously published images, in particular the west side is almost continuously detected. The masking greatly improves the image quality of the LOCI image; the impact of mcADI with respect to cADI is, expectedly, less important, even though the flux restitution is increased. Nevertheless, the dynamical range of our images is lower than that of the recently AO published images. This is because the present data are obtained at L', with a higher Strehl ratio, whereas the previous ones were obtained at shorter wavelengths, with lower Strehl ratio, but with detectors that have much lower background levels. The ring appears very narrow in our L'-data, barely resolved: the FWHM measured on the PSF is 4.1 pixels (0.11"), while the ring FWHM is ≃ 5.7 pixels (0.15") (NE) and resp. 5.0 pixels (0.13") (SW) on the cADI data. We will however see below that the ADI reduction has an impact on the observed width. Thalmann et al. 2011 report a relative enhancement of the disk brightness in the outer part of the ring, along the semimajor axis, that they describe as streamers emerging from the ansae of the HR 4796 disk. Our images do not show such strong features. To test whether such features could be due to the ADI reduction and/or data characteristics, we built cADI and LOCI images of a bright 3 HR4796SD-like disk, inserted into our data cube, convolved by a theoretical PSF and, in the case of LOCI, reduced with Thalmann et al. LOCI parameters, assuming a 23 degrees FoV rotation. The input disk images are shown in Fig.2, as well as the reduced ones. The reduced images clearly show features similar to those observed by Thalmann et al. 2011 in their Figures 1 and 3. 
We also show in Fig.2 the same simulations of a similar disk inserted into a data cube matching the parallactic angle excursions, but assuming no noise. Note that for the LOCI reduction of noiseless data, we took the coefficients derived from the previous LOCI reduction (taking noise into account); this is necessary to avoid major artefacts when using LOCI on noiseless data. The later simulations (no noise) highlight the reduction artefacts. Other simulations are provided in Milli et al. (2012, in prep.). Hence, we conclude that the features indicated as "streamers" are in fact artefacts due to the data characteristics (FoV rotation amplitude, number of data, SN) and data reduction. The fact that we do not see them in the present data is due to our relatively lower dynamical range, and to the larger FoV rotation amplitude. To check this, we injected a fake disk (HR4796SD), with a flux similar to the observed one in the actual datacubes (with a 130 • offset in PA to avoid an overlap of the disks). We processed the new data cubes as described above with cADI, mcADI, LOCI and mLOCI. The resulting images are shown in Figure 3; the artefacts are not detectable. This is also true when considering a fake disk with blowout (HR4796blowoutSD). Finally, we note a small distorsion in the SW disk towards the inner region of the cADI and LOCI images, at (r,PA) between (19 pix, 235 • ) and (28 pix, 220 • ). The feature, indicated by a green arrow in Figure 1, is however close to the noise level. If we go back to the individual images taken on April, 6th and April, 7th, we see that this feature is barely detectable on the April 6th eventhough in both cases, there is a very faint signal inside the disk ellipse (see Figure 4 for the cADI images). Hence, in the present data, this feature could be due to noise. However, it seems to be at the same position as that pointed by Thalmann et al. 2011 in their data as well as Schneider et al. 2009 data as well as, in L'-data obtained at Keck by C. Marois and B. McIntosh (priv. com.). In Thalmann et al. 2011 data, it appears as a loss of flux in the annulus. In the L' data, we see rather a distorsion in the disk and a possible very faint additional signal at the inner edge of the disk. However, the ring does not appear as azimuthally smooth in the Thalmann et al. 2011 or our data, due to ADI reduction and limited SN, so it is not excluded at this stage that this feature might be an artefact. Clearly, new data are needed to confirm this structure, and if confirmed, to study its shape as a function of wavelength. If confirmed, its origin should be addressed. In the context of the HR4796 system, an interesting origin to be considered is the presence of a planet close to the inner edge of the disk. To test the impact of the ADI reduction procedure on a disk + close planet system at L', we inserted a fake point-like source close to our fake disk HR4796SD (rotated by 130 • with respect to the real disk, convolved by the observed PSF, and inserted in the datacube) inner edge, and processed the data as described previously. For these simulations, we assumed a disk about 10 times brighter than the real one. We run several simulations with Fig. 2. Left) From top to bottom 1) simulated HR4796SD-like disk (projected, no noise); 2) simulated cADI reduction of this disk once inserted into our data cube, and convolved by a theoretical PSF. We assumed a 23 degrees FoV rotation as Thalmann et al (2011). 3) idem with LOCI reduction. Right) Idem without noise. 
For some values of the planet position and flux, we were able to reproduce a disk appearent distorsion, especially in the LOCI images. A representative example is given in Figure 5; in this case, the planet flux would correspond to a 2 MJup mass for a disk brightness similar to the HR4796 one. Disk geometry We fitted the observed disk by an ellipse, using the maximum regional merit method, as in Buenzli et al. 2010 and. The resulting semi-major axis a, semiminor axis b, disk center position along the semi-major axis (xc), and the semi-minor axis (yc), and inclination (defined as arccos(b/a)) are provided in Table 3, and the fit is showed in Figure 6. These parameters are derived from the selection of the best fits, defined as those with parameters within 5% of the best fit (best merit coefficient). The uncertainties associated to these measurments take into account the dispersion within this 5% range, and the other sources of uncertainties that are described hereafter. To estimate the impact of the PSF convolution and ADI process on the ellipse parameters, we used our simulated disk HR4796SD (without noise) and fitted the disk with an ellipse before and after the PSF convolution and the ADI reduction. For (a,b), differences of (-0.03;0.12) pixels were found with cADI, and (-0.06;-0.3) pixels with LOCI. For (xc, yc), no significant differences were found with cADI and very small with LOCI. Fig. 5. cADI, mcADI, rADI, LOCI and mLOCI images of simulations of fake disk + fake point-like source (indicated by a green arrow). No significant difference was measured on the inclination with cADI while a difference of -0.4 • was found with LOCI. Finally, no significant differences were found on the PA. We corrected the measured values on the disk from these biases. We also inserted a model disk HR4796SD in the data cube (at 90 degrees), and processed the data. The differences found between the parameters of the injected disk and the ones of the recovered disk are compatible with the ones obtained in the case "witout noise". Besides, the imperfect knowledge of the PSF center may also affect the results. To estimate this impact quantitatively, we first estimated the error associated to the PSF center, as in Lagrange et al. 2011. The error was found to be [0.,0.27] pixel on the x-axis and [-0.06, +0.04] pixel on the y-axis of the detector. It appears that this imperfect knowledge on the PSF center does not significantly affect the values of (a,b). It impacts the uncertainty of the ellipse center by up to 0.2 pixel along the major and minor axis, and the disk PA (0.24 • ). The uncertainty on the PA measurement is found to be 0.15 • in cADI. Finally, the PA measurement is also impacted by the uncertainty on the true North Position (0.3 • ; see a discussion related to this last point in Lagrange et al. 2011). Our data show an offset from the star center of ≃ 22 mas on the cADI images (≃ 20 mas on the LOCI data) of the center of the fitted ellipse along the major axis, and to the South. Given the uncertainty associated to this measurement, 7 mas, we conclude that the observed offset is real. This offset along the major axis is in agreement with previous results of Schneider et al. 2009 (19 + − 6 mas) and Thalmann et al. 2011 on their LOCI images (16.9 + − 5.1 mas). The latter detected moreover an offset of 15.8 + − 3.6 mas along the minor axis, which was not detected by Schneider et al. 
2009; the measured offset on our cADI images is about 8 mas + − 6 mas; hence very close to the error bars, so the present data barely confirm the offset found by Thalmann et al. 2011. Figure 7 shows the observed radial surface brightness distributions (SBD) for the HR4796A disk at L' along the major axis after cADI, and LOCI reductions. The SBD extraction was made using a 5 pixel vertical binning. The dynamical range of our data is small (factor of 10); it is improved with masking technics, thanks to a lesser disk self-subtraction (see also Figure 7). Yet, the disk being only slightly resolved, we cannot perform meaningful slope measurements on our data. Indeed, the slope of the surface brightness distribution depends on several parameters: the PSF, the amplitude of the FoV rotation, the ADI procedure, the binning used for the extraction of the SBD, and the separation range on which the slopes are measured. In the present case, the separation range is too small to allow a proper measurement of the slope: we run simulations of the HR4796SD disk without noise and checked that indeed, measuring the slopes between the maximum flux and the threshold corresponding to the noise on the actual data gave slopes very different (much higher) from the slope measured on a larger separation range. Brightness distribution The ring shape seems nevertheless to indicate a sharp outer edge, but we need to check the impact of the ADI reduction procedure on the final shape of the disk. Figure 8 illustrates the evolution of the SBD along the semi-major axis, starting from the fake disk HR4796SD, then once the disk is inserted in the real data cubes at 130 • (see the corresponding images in Figure 3), convolved by the observed PSF, and finally when the datacube is reduced with cADI and LOCI. The SBD shape is clearly impacted. We note that the effect is stronger in the inner region that in the outer one. To test whether we can discriminate between a steep and a less steep outer profile, we consider the fake disks HR4796SD and HR4796blowoutSD inserted in the real data cubes at 130 • (see the corresponding images in Figure 3), and convolved with the real PSF. The SBD profiles after convolution and reduction along the semi-major axis are given in Figure 9. We note that the slight shift between the observed and simulated disks SBD is due to an unperfect assumption on the ring position, and is not relevent here. The observed SBD profile appears to be more similar in shape to the ones corresponding to the HR4796SD case rather than the HR4796blowoutSD one. We conclude then that even when taking into account the possible biases, the data indicate a very steep outer edge, compatible with α out = -10, as found by Thalmann et al. 2011 rather than a less steep one. We cannot provide precise values to the outer NE and SW slopes with the present data, but they are in any case different from the ones measured in the case of the other A-type stars such as β Pictoris (typ. between -4 and -5; Golimowski et al. 2006) and HD32997 , and that are expected from a disk which outer part is dominated by grains blown out by radiation pressure from an A-type star (see below). Disk width Our data indicate a ring width of about 0.154" after cADI reduction for the NE side and 0.136" for the SW side (data binned over 5 pixels). With mcADI, these values are only marginally changed: 0.147" and 0.135" respectively. 
However, as seen above, the SBD, especially inner to the ring is impacted by the PSF convolution, the amplitude of the FoV rotation, the ADI procedure, the binning used for the extraction of the SBD, the noise level as well as the zero flux level after reduction. We made several test with fake disks to estimate the impact of these steps on the FWHM. Also, we tested the impact of the evaluation of the zero level after reduction. It appears that the disk width is mainly affected in the present case by the PSF convolution and the zero level. Taking all these parameters into account, we cannot conclude that the disk is significantly narrower than the size found by Schneider et al. (2009) 0.197", which once corrected from the broadening by the STIS PSF became 0.18" (13 AU) at shorter wavelengths. SAM detection limits The detection limits from the SAM dataset are derived from a 3D χ2 map. This map has on each axis the three parameters used to model a binary system: the separation, the position angle, and the relative flux. This model, fitted on the closure phase, is detailed in Lacour et al. 2011. Visibilities are discarded. Fig. 12 is showing the detection limits as a function of right ascension and declination. It is obtained by plotting the 5 σ isocontour of the 3D map (the isocontour level is given by a reduced χ 2 of 25). Table 3. Ellipse parameters of the observed disk: semi-major axis (a, mas), semi-minor axis (b, mas), position of the center of the ellipse with respect to the star ((xc, yc), expressed in mas), disk PA (deg), and inclination (i, • ). Fig. 7. Observed SBD for the HR4796A disk at L' along the major axis (after a binning of 5 pixels perpendicular to the major axis): cADI and mcADI reductions (left); LOCI and mLOCI reductions (right). We note that the peaks of the SBD on the NE and SW do not coincide which is due to the shift of the ellipse center along the major axis. Fig. 9. Left: simulated radial brightness distributions for the HR4796ASD disk (green) and HR4796ASBD disk with blowout (red) at L' along the major axis (log scale) once inserted in the data cube and after cADI reduction has been applied. For comparison, observed SBD (black). Right: simulated radial brightness distributions for the HR4796ASD disk (green) at L' along the major axis (log scale) once inserted in the data cube and after LOCI reduction has been applied. For comparison, observed SBD (black). We did not account for the presence of the disk in the model fitted. We considered that it did not affect the visibilities (because very faint), and did not affect the closure phase (because quasi point-symmetric). Nevertheless, it is not impossible that some of the structures present in Fig. 12 may be caused by a second order effect of the disk on the closure phase. Thus, neglecting the influence of the disk means that we are conservative on the detection limit map. Given these values, and assuming V-L' = 0 for this A0-type star, and an age of 8 Myr for the system, we derive the 2D detection limits expressed in Jupiter masses, using DUSTY models ( Fig. 12; right). At a separation of about 80 mas (6 AU), we exclude the presence of companions with masses larger than 29 M Jup . At a separation of 150 mas, the limit becomes M = 40 M Jup (DUSTY). In both cases, COND03 models give similar limits within 1 MJup. At 60 mas, the detection limit is M = 50 M Jup (DUSTY) and would be 44 MJup with COND03. Such values fall in the mass range of brown dwarfs and represent unprecedented mass limits for this range of separations. 
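As an illustration of the principle behind these limits, the sketch below evaluates the closure phase of a simple star-plus-companion model on one closed triangle of baselines and forms the corresponding chi-squared against a measured closure phase. The baseline coordinates, measured values and uncertainties are placeholders, and the actual analysis described above uses all triangles of the 7-hole mask and a full 3D grid in (separation, position angle, flux ratio).

import numpy as np

# Complex visibility of a star + faint companion with flux ratio f,
# offset by (d_ra, d_dec) radians, for a baseline (u, v) in wavelengths:
def binary_visibility(u, v, d_ra, d_dec, f):
    return (1.0 + f * np.exp(-2j * np.pi * (u * d_ra + v * d_dec))) / (1.0 + f)

# Closure phase on a closed triangle of baselines (b1 + b2 + b3 = 0):
def closure_phase(triangle, d_ra, d_dec, f):
    (u1, v1), (u2, v2), (u3, v3) = triangle
    bispectrum = (binary_visibility(u1, v1, d_ra, d_dec, f)
                  * binary_visibility(u2, v2, d_ra, d_dec, f)
                  * binary_visibility(u3, v3, d_ra, d_dec, f))
    return np.angle(bispectrum)

# Placeholder triangle, in units of wavelength (a ~6 m baseline at L' is ~1.6e6):
b1, b2 = (1.5e6, 0.3e6), (-0.4e6, 1.2e6)
b3 = (-(b1[0] + b2[0]), -(b1[1] + b2[1]))
triangle = [b1, b2, b3]

MAS = np.pi / (180.0 * 3600.0 * 1000.0)        # one milliarcsecond in radians

def chi2(sep_mas, pa_deg, f, cp_obs, cp_err):
    d_ra = sep_mas * MAS * np.sin(np.radians(pa_deg))
    d_dec = sep_mas * MAS * np.cos(np.radians(pa_deg))
    cp_model = closure_phase(triangle, d_ra, d_dec, f)
    return np.sum(((cp_model - cp_obs) / cp_err) ** 2)

cp_obs, cp_err = np.array([0.002]), np.array([0.001])   # placeholder measurement (rad)
print(chi2(80.0, 45.0, 1e-3, cp_obs, cp_err))           # chi^2 for one trial binary

A point-symmetric source gives a zero closure phase, which is why the faint, quasi point-symmetric disk can be neglected in this model, as noted above.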
ADI detection limits We computed the detection limits using the data obtained on April 6th and 7th, with different reduction methods. To estimate them, we took into account the flux losses due to the ADI reductions, either injecting fake planets in cubes of empty frames Fig. 11. 5-σ detection limit of point-like structures around HR4796A, along the major axis in the NE and SW directions using SAM and ADI data. wide box along any given PA. We checked the obtained detection limits by inserting fake planets with fluxes corresponding to the 5σ limit at different separations, and processed again the data, and mesured the resulting S/N ratio on the planets. The S/N ratio were close (or sometimes slightly larger than 5) which shows that our limits are properly estimated. The 2D-detection limits are shown in Figure 12 for cADI, expressed either in contrasts or in masses, using the COND03 models (Baraffe et al. 2003) or BT-settl models (Allard et al. 2011) and assuming an age of 8 Myr. Similar (not better) limits were obtained with rADI and LOCI. To check the robustness of the detection limits obtained, we injected fake planets with fluxes corresponding to the 5σ level as well as the fake disks and processed the data cubes as before. The resulting images (see Fig. 13) revealed the planets with at least a 5σ level. Table 4. A few detection limits along the semi-major axis of HR4796A (NE side). First lines (top) values are derived from SAM data, using DUSTY models (note that COND models agree within 1-2 MJup in most cases). The other values (bottom) are derived from ADI data, using COND03 or BT-SETTLE models. The 1-D limits along the major axis, at a PA of 26 degrees (NE side of the disk) are showed in Figure 11. Similar values are obtained in the SW direction. A few values expressed in jovian masses are given in Table 4. The detection limits are better than the SAM ones further than ≃ 0.25-0.3", with a value of about 7.5 M Jup at 0.25-0.3"; they are below 3.5 M Jup for separations in the range 0.5-1", and well below further than 1.5". Alltogether, these are to our knowledge the best detection limits obtained in the close surroundings of HR4796A. Companions around HR4796B HR4796B has the following magnitudes: V = 13.3, H = 8.5, K = 8.3. Using the observed contrast between HR4796A and HR4796B (2.6 mag) on the present data, we find L'≃ 8.4 for HR4796B, hence L'abs ≃ 4.1. This value is in agreement with the Lyon's group model (Baraffe et al. 2003), which, given the near-IR colors, predicts an absolute L' magnitude of 3.8. We give in Figure14 the 2D map of the detection limits (5σ), both in terms of contrast magnitudes and Jupiter masses. At 0.3", masses as low as 2 M Jup could be detected, and at 0.5", the detection limit is below 1M Jup . The inner disk sharp edge One of the most remarkable features of the HR4796 disk is certainly the offset of the disk center with respect to the star (also observed in the case of HD141569, and Fomalhaut). Two explanations are 1) the presence of a close, fainter companion (in such a case, the disk would be a circumbinary disk and orbit around the binary center of mass), and 2) the presence of a companion close to the disk inner edge on an eccentric orbit that induces a forced eccentricity to the disk ring by secular gravitational interaction, an explanation which was proposed to explain the eccentricity of the Fomalhaut disk (Quillen et al. 2006, Kalas et al. 2005, Kalas et al. 2008, Chiang et al. 2009). 
We first investigate whether this offset could be due to the presence of a close companion. In such a case, the ellipse center would mark the center of mass of the binary system. Using the center-of-mass definition, it appears that the mass of a body necessary to shift the center of mass to the observed position of the ellipse center would be much larger than the detection limit obtained with SAM between 40 and 400 mas, or between 400 mas and 1 arcsec (ring position) with the ADI data. A companion located between 23 and 40 mas would have a mass larger than or comparable to that of HR4796A; such a scenario must be excluded as, under such conditions, the photometric center of the system would also be shifted. Then, the most plausible explanation for the offset is a light eccentric planet close to the inner edge of the disk. We now try to use the detection limits found in this paper to constrain the properties of an inner planet that could be responsible for the steep inner edge observed with HST/STIS data, which, conversely to ADI data, are not impacted by ADI reduction effects. Wisdom 1980 showed that in the case of a planet and particles on circular orbits, we have the relation δa/a = 1.3 (Mp/Ms)^(2/7), where Mp and Ms are the planet and star masses, a is the orbital radius of the planet and δa the distance between the planet and the disk inner edge. Hence if a planet sculpts the inner edge, its mass and distance from the inner edge must satisfy this relation. Assuming an inner edge located at 77 AU, we can derive the mass of the planet necessary to produce this sharp edge, as a function of its distance to the edge, and test whether such a planet would have been detected or not. This is done in Figure 15, where we show the region, inside the yellow ellipse, that, given the present detection limits, has to be excluded. Hence the only possible location of the planets responsible for the inner edge is between the yellow ellipse and the red one (which traces the inner edge of the disk). We see that along the major axis, only the planets closest to the inner edge (less than ≃ 10 AU) remain out of the present detection capabilities. Hence if a planet is responsible for the inner edge sculpting and is located along or close to the major axis, then it should be a low-mass planet, located further than 63 AU, i.e., within about 15 AU from the edge. Along the minor axis, due to the projection effects, the presence of planets is much less constrained: only planets at more than 26 AU from the edge would have been detected. The previous constraints were obtained assuming the planet and the perturbed bodies are both on circular orbits. The actual disk eccentricity being very small, about 0.02, this assumption is reasonable. As an exercise, we investigate the impact of a higher eccentricity, using the results of Mustill and Wyatt 2012, who revisited this scenario assuming the perturbed bodies were on eccentric orbits; the relation becomes: δa/a = 1.8 e^(1/5) (Mp/Ms)^(1/5). With the same reasoning, and assuming an eccentricity of 0.1, we provide in Figure 15 the possible locations of a planet responsible for the inner edge, with the same color conventions as in the circular case. Again, comparison with Figure 12 shows that if a planet was responsible for the inner edge sculpting, and located along or close to the semi-major axis, then it would have to be located less than ≃ 25 AU from the edge of the disk. The location of planets along or close to the minor axis would not be significantly constrained.
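As a rough numerical illustration of the circular-orbit relation above, the short sketch below inverts δa/a = 1.3 (Mp/Ms)^(2/7) to obtain the planet mass implied by a given planet-to-edge distance. The stellar mass adopted here (2.2 solar masses) is an assumed placeholder, since Table 1 is not reproduced in this excerpt.

import numpy as np

MSUN_IN_MJUP = 1047.6          # approximate conversion factor

def planet_mass_from_gap(delta_a, a_edge=77.0, m_star=2.2 * MSUN_IN_MJUP):
    """Invert Wisdom's relation delta_a/a = 1.3 (Mp/Ms)^(2/7).

    delta_a: distance between the planet and the disk inner edge [AU]
    a_edge:  location of the inner edge [AU]; the planet orbits at a = a_edge - delta_a
    m_star:  stellar mass in Jupiter masses (assumed value)
    """
    a = a_edge - delta_a
    return m_star * (delta_a / (1.3 * a)) ** 3.5

for delta_a in (5.0, 10.0, 15.0, 25.0):
    print(f"delta_a = {delta_a:4.1f} AU  ->  Mp ~ {planet_mass_from_gap(delta_a):6.2f} MJup")

Under the assumed stellar mass, a planet about 10 AU inside the edge needs only of order one Jupiter mass to carve the gap, which is consistent with such a planet escaping the detection limits quoted above.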
This case, even though not adapted to the present case as the disk eccentricity is very small, illustrates the impact of this parameter on the planet detection capabilities. The outer disk sharp edge Another striking feature of the system is the very steep disk outer edge. Radiation pressure from A-type stars induces surface brightness distributions in the outer part of disks with slopes of -3.5 to -5, depending on the assumptions related to the production laws of small grains (see for instance Lecavelier Des Etangs andVidal-Madjar, 1996, Thebault andWu 2008 andref. there-in). In particular, Thebault and Wu 2008 modeled the outer parts of collision rings with an initially steep outer edge, following the motion of the small grains produced through collisions and submitted to radiation pressure and showed that, as already proposed, the profile of the resulting SBD in the outer part of the disk followed a r −3.5 law. They showed that unless the disks are extremely and unrealistically dense, and prevent the small grains from escaping, the disks have to be extremely "cold" (with an average free eccentricity of ≤ 0.0035) to explain an outer power law of r −6 (which was the value adopted at this time for the HR4796 profile). AO data suggest that the situation could be even more radical with an even steeper outer edge than previously thought. Also, if confirmed, the fact that we possibly find at 3.8 µm a disk width different from that found in the optical (0.2-1µm) with STIS would argue against an extremely dense disk, as, in such a case, large bodies and small grains would be in the same regions. The cold disk scenario can nonetheless be also problematic as, in such a case small grains should be underabundant (Thebault and Wu 2008) and the optical/near-IR fluxes would be produced by large particules (typ. sizes 50 µm). We should expect then a color index different from that observed (see Debes et al. 2008). The latter rather predict that the scattered light flux is dominated by 1.4 µm dust, which seems difficult to explain within the dynamically-cold-disk scenario. Could gas be responsible for the observed steep outer edge? Takeuchi and Artymowicz 2001 investigated the impact of the gas in such debris disks, and showed that even small amount of gas, 1-a few Earth masses, could partly balance the effects of gravitation, radiation pressure and Poynting-Robertson drag and alter the grains dynamics differentially, and lead to grains spatial distributions, that, depending on the grain sizes, could be different from those expected in a disk-free gas. Under such processes, the gas could be responsible for ring-like structures at distances depending on the dust size considered. In their attempt to investigate disks roughly similar to HR4796 and HD141569, assuming 1 MEarth gas, they showed that, conversely to large grains which occupy the whole gas disk, grains with sizes (≃ 10-200 µm) tend to concentrate in the outer gas disk, where the gaseous density sharply decreases. Hence these grains would form a narrow ring which position traces the change in the radial distribution the gaseous disk. Under this a priori attractive scenario, grains with sizes 1-10 µm would still be blown away. The main problem with this hypothesis is that it requires a gas disk with a relatively sharp outer edge, and thus an explanation for such an edge. 
Another issue is that Takeuchi and Artymowicz 2001 did not explore the SBD profile beyond the main dust ring, so that it remains to be see whether slopes in the -10 range are possible. Finally, it is worth noting that so far no circumstellar gas has been detected either in atomic species through absorption spectroscopy (but the system being inclined, the non detection is not a strong constraint) or molecular species, either CO (Liseau, 1999) or H2 (N(H2)≤ 10 15 cm −2 Martin-Zaiedi et al. 2008). In any case, such a scenario can be tested in the forthcoming years with high angular resolution observations on a wide range of wavelengths. We finally study the possibility that the outer disk is sculpted by massive bodies. The first candidate we might think of is HR4796B. Thebault et al. 2010 investigated the possibility that the disk could be sculpted by HR4796B, if orbiting on a rather eccentric orbit (e≥ 0.45) but again showed that, even under such conditions the outer profile would not be so steep. This is mainly because the companion star is not able to dynamically remove small grains from the outer regions at a pace that can compensate for their steady collisional production in the parent body ring. An alternative explanation could be the presence of a close, unseen outer planet. We investigate this scenario using the new code developed by Thebault (2012) to study perturbed collisionally active debris discs. The code computes the motion of planetesimals submitted to the gravitational perturbation of a planet; and follows the evolution of small dust realeased through collisions among the planetesimals and submitted to radiation pressure and Poynting-Robertson effect (note that in the present case, radiation pressure largely drives the grains dynamics once produced). The configuration we consider is a narrow ring of large parent bodies, a birth ring of width ∼ 8AU centered on 71 AU. The collisional production and destruction rates of small grains (those which contribute to the luminosity beyond the main ring) in this parent body ring is parameterized by the average vertical optical depth within the main ring, taken to be τ = 5 × 10 −3 4 . The resulting SBD (case face-on) is derived assuming grey scattering. It can be directly compared to the curve presented in Figure 6 of Schneider et al (2009) paper (de-projected curve 5 ). For the perturbing planet's mass, we consider 3 different values: 8M Jup , 5M Jup and 3M Jup , which are consistent with the constraints imposed by our observational non-detection. Note that 8M Jup is only marginally possible in a very narrow region along the disc's semi-minor axis, but we have to keep in mind that the detection limits are derived from masses-brightness relationships that are debated at young ages. We assume a circular orbit for the planet (the most favourable case for cleaning out the region beyond the ring, see Thebault (2012)) and place it as close as possible to the parent body ring, i.e., so that the outer edge of the observed ring (around 75 AU) corresponds to the outer limit for orbital stability imposed by planetary perturbations. This places the planet at a distance to the central star comprised between 92 and ∼ 99 AU depending on its mass. In Figure 16, we show the SBD obtained for such a configuration. Note that, for each planet mass, we are not showing an azimutal average but the "best" radial cut, i.e. the one that gives the closest match to the deprojected NE side SBD obtained in Figure 6 of Schneider et al. 2009. 
As can be clearly seen, the 8M Jup case provides a good fit to the observed profile: the maximum of the SBD roughly corresponds to the outer edge of the parent body disk and is followed by a very sharp brightness decrease, with a slope ≤ -10 between 75 and 95 AU, i.e. between brightnesses of 1 and 0.1, a range corresponding approximately to the dynamical range accessible to the available images. This is significantly steeper than the one that would be expected if no planet was present (-3.5 according to Thebault and Wu 2008) and is fully compatible with the observed sharp luminosity decrease. Longwards 95 AU, the flux level is lower (≤ 0.1 ADU) and the SBD is flatter (slope ≃ -3.8). We also note a plateau inside the parent body ring at a level of ∼ 0.2, due to both the inward drift of small grains because of the Poynting-Robertson effect and to the dynamical injection of particles after close encounters with the planet. Of course, not too much significance should be given to the SBD obtained inwards of the disc since our simulations (focused on the outer regions) do not consider any inner planet shaping the inner edge of the disc. Nevertheless, they show that, should "something" have truncated the disc at around 67 AU in the past, then the effect of one external planet on such a truncated disc could lead to an SBD compatible with observations in the inner regions. For the 5M Jup case, the fit of the observed SBD is slightly degraded, but mostly in the region beyond 90 AU where flux levels are close to the 0.1 threshold. For the 3M Jup case, however, the fit gets very poor for almost the whole outer region (keeping in mind that we are here showing the best radial cut). We conclude that a 8M Jup planet located on a circular orbit at ∼ 25 AU from the main ring provides a satisfying fit (especially considering the non-negligeable uncertainties regarding flux values far from the main ring) to the observed SBD. According to the derived detection limits, such a massive planet would have been detected almost everywhere except in a very narrow region along the disc's semi-minor axis; however, we remind the uncertainties inherent to the models used to link planet masses and luminosities as a function of the system's age. In any case, even a less massive perturber of e.g. 5 MJup would still give an acceptable fit of the observed luminosity profile. The external planet scenario thus seems the most likely one for shaping the outer regions of the disc. Of course, these results are still preliminary and should be taken with caution. A more thorough numerical investigation should be carried out, exploring a much wider parameter space for planet masses and orbit, as well as deriving other outputs that can be compared to observations, such as 2-D synthetic images. Such a large scale numerical study exceeds the scope of the present work and will be the purpose of a forthcoming paper. Note also that the more general issue of how planets shape collisionally active debris disks will be thoroughly investigated in a forthcoming paper (Thebault, 2012b, in prep.). Summary and future prospects In this paper, we have provided the first high-resolution images of the HR4796A disk at L' band. They allow us to see a narrow disk at almost all PA. As the technics used, Angular Differential Imaging is expected to impact the final disk shape and appeareance, we have developped simulations to investigate quantitatively the impact of the reduction procedures on the disk parameters. 
We conclude that the information on the inner part of the disk is significantly impacted, and that the procedure may in some cases (depending on the amplitude of rotation of the field of view) produce important artefacts. This is especially true for the LOCI reduction, while classical ADI affects the data to a lesser extent. We showed in particular that the streamers detected by Thalmann et al. 2011 at the outer edge of the disk are probably due to such artefacts. Using both ADI and SAM data, we have derived unprecedented constraints on the presence of planets/companions down to 25 mas from the star. The present data then allowed us to place the first interesting constraints on the location of the possible planet that could produce the inner edge of the disk. We showed that the planet responsible for the inner edge must be closer than 15 AU from the ring if located along or close to the semi-major axis. The forthcoming high-dynamic-range instruments such as SPHERE on the VLT and GPI on Gemini will make it possible to test this hypothesis with much greater accuracy, and to actually detect this planet in most cases. We have discussed several hypotheses to explain the sharp outer edge of the disk: a gaseous disk, a dynamically cold disk, a planet at the outer edge. Using detailed simulations, we showed that a planet located outside the planetesimal ring could nicely reproduce the STIS data. Further simulations will help to better constrain the planet and parent body characteristics. In any case, this work shows how disk characteristics can help constrain possible planet properties. Important additional information can be brought by the dependence of the disk properties (ring width, SBD) on wavelength. Resolved images in the future will be crucial to further understand this system. Fig. 16. Synthetic surface brightness profiles obtained, using Thebault (2012)'s numerical model, for 3 different masses of a putative perturbing outer planet: 8 M_Jup, 5 M_Jup and 3 M_Jup. For each case, the planet has a circular orbit and is placed as close as possible to the main ring of large parent bodies in order to truncate it at about 75 AU without destroying it (see text for more details). The observed profile, derived from Schneider et al. 2009, is shown for comparison. For each planet mass, we show the radial cut (i.e., for one position angle along the disc) that provides the best fit to this observed profile. The horizontal line delineates approximately the part (above this line) of the SBD accessible to the observations assuming a dynamical range of 10.
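The slopes quoted in the text (about -10 just outside the ring, flattening to about -3.8 farther out, versus the -3.5 expected with no planet) are power-law indices of surface brightness versus radius. As a purely illustrative aside, the minimal sketch below shows how such an index can be measured from a radial profile by a linear fit in log-log space; the profile used here is synthetic, not the STIS or simulated data.

```python
import numpy as np

def sbd_slope(radius_au, brightness, r_min, r_max):
    """Least-squares power-law index of a surface-brightness profile:
    fit log10(brightness) = alpha * log10(radius) + const over [r_min, r_max]."""
    r = np.asarray(radius_au, dtype=float)
    b = np.asarray(brightness, dtype=float)
    mask = (r >= r_min) & (r <= r_max) & (b > 0)
    alpha, _ = np.polyfit(np.log10(r[mask]), np.log10(b[mask]), 1)
    return alpha

# Synthetic profile falling as r^-10 between 75 and 95 AU
r = np.linspace(75.0, 95.0, 50)
b = (r / 75.0) ** -10
print(f"recovered slope: {sbd_slope(r, b, 75.0, 95.0):.1f}")   # ~ -10.0
```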
Unscented Particle Filter Algorithm Based on Divide-and-Conquer Sampling for Target Tracking
The unscented particle filter (UPF) struggles to completely cover the target state space when handling the maneuvering target tracking problem, and the tracking performance can be affected by low sample diversity and algorithm redundancy. In order to solve this problem, the method of divide-and-conquer sampling is applied to the UPF tracking algorithm. By decomposing the state space, dimension reduction of the target maneuver is realized. When dealing with a maneuvering target, particles are sampled separately in each subspace, which directly mitigates particle degeneracy. Experiments and a comparative analysis were carried out to comprehensively analyze the performance of the divide-and-conquer sampling unscented particle filter (DCS-UPF). The simulation results demonstrate that the proposed algorithm can improve the diversity of particles and obtain higher tracking accuracy in less time than the particle swarm algorithm and the intelligent adaptive filtering algorithm. This algorithm can be used in complex maneuvering conditions.
Introduction
The problem of nonlinear filtering is a hot topic in signal processing and control theory [1,2]. It has wide applications in many fields, such as radar tracking [3][4][5], signal processing [6][7][8], mobile robots [9,10], and navigation [11,12]. The optimal nonlinear filter equation was developed in the mid-1960s, but the integration involved is still difficult to handle. In a linear Gaussian system, the Kalman filter (KF) [13] gives the best estimate. However, in practical applications, most systems are nonlinear/non-Gaussian, and for such systems the KF becomes invalid. In order to solve this problem, a method of approximating the nonlinear state-space model within the Kalman filter framework was proposed, namely the extended Kalman filter (EKF), which uses a Taylor series expansion of the state transition equation and measurement equation [14][15][16]; however, for strongly nonlinear systems, this method brings large truncation errors [17,18]. Meanwhile, the EKF also involves a complicated computation of the Jacobian matrix. Then, the unscented Kalman filter (UKF) was proposed, which uses several sigma points to recursively calculate the mean and covariance [19]. The problems of the EKF are thereby solved, but the UKF can still only use a Gaussian distribution to approximate the true posterior distribution [20,21]. In the late 1990s, based on sequential importance sampling (SIS) [22,23], Gordon proposed the particle filter (PF) algorithm [24,25] by combining the resampling technique with Monte Carlo importance sampling. This algorithm is an optimal recursive algorithm, combining Monte Carlo ideas [26] and recursive Bayesian filtering [27], and it performs well when dealing with nonlinear/non-Gaussian systems [28][29][30]. However, particle degeneracy and particle impoverishment occur during particle sampling, which seriously affect the accuracy of the PF. In order to solve the above-mentioned problems, the UKF and PF are merged, and the UPF method is introduced to implement state estimation. However, when the dynamic system is affected by abnormal observations and severe model noise, particle degeneracy will still occur [31,32]. Most studies are devoted to improving the particle resampling step to solve this problem.
The authors of [33][34][35] took the concept of adaptive robust filtering into the UPF, improving the degeneracy of particles. Then, Wei et al. [36] proposed a new filter combining adaptive filtering and square-root filtering. This not only has the advantages of both adaptive filtering and square-root filtering, it also achieves higher tracking accuracy. Then, [37] proposed a UPF that applies particle swarm optimization (PSO) to the UKF, which further improved the filter performance. Liu et al. [38] proposed an improved UPF based on a genetic algorithm (GA-UPF). The GA is used to optimize the particles, which eliminates the blind optimization of particles in the resampling process and solves the problem of particle impoverishment. Ramazan Havangi [39] developed an intelligent adaptive unscented particle filter (IAUPF), which uses an adaptive UKF filter to generate the proposal distribution and uses genetic operators to increase the diversity of particles. When the noise statistics are unknown, the IAUPF has good performance. Sample impoverishment was alleviated in the algorithms mentioned above. However, when dealing with maneuvering target tracking, due to the inconsistency of maneuvering modes and intensities in different target directions, the state space will have a sparse particle distribution in some regions, which is difficult to cover uniformly. With the increase in the complexity of the target motion model, the performance of the algorithm decreases more obviously. It is then often necessary to increase the number of particles to ensure coverage, which lengthens the operation time and does not guarantee real-time tracking with good tracking accuracy. Aiming to solve the above problems, the divide-and-conquer algorithm is introduced into the UPF in this paper. Applying optimization algorithms to the UPF can also solve the problems mentioned above; however, from an algorithmic perspective, the divide-and-conquer approach has more advantages than the optimization approach [40][41][42]. In [40], the particle swarm algorithm, a swarm intelligence method, is applied to find the optimal investment allocation of stocks. Based on this method, a more accurate estimation can be obtained, but the algorithm performs a global optimization, which leads to a long running time. The divide-and-conquer algorithm divides the problem into many sub-problems, which not only guarantees the accuracy of the information but also reduces the running time. In [41], a meta-heuristic algorithm is used to optimize the control parameters of given chaotic systems. That algorithm is improved in terms of running time, but it easily falls into a local optimum prematurely during the optimization process. The divide-and-conquer algorithm obtains the overall optimal solution by merging the optimal solutions of the sub-problems, so it does not fall into this local-optimum situation. In [42], an improved heuristic algorithm, the tabu search (TS) algorithm, is used to deal with disturbances and variations in nonlinear systems. The TS algorithm improves on the shortcomings of [41], but it depends strongly on the initial solution, and its iterative process is serial, so the algorithm runs longer; this is a common problem with most optimization algorithms.
In the divide-and-conquer algorithm, the sub-problems at the same level can be processed in parallel, which reduces the running time. Through the above comparison, we can see that the divide-and-conquer algorithm has a greater advantage in terms of accuracy and computational performance, and can better solve the problem of sparse particle distribution in certain regions in the UPF. This paper proposes a target tracking algorithm based on DCS-UPF. This algorithm solves the problem of sparse particle distribution in the state space by decomposing the state space, thus reducing the impact of particle degeneracy and particle impoverishment on the tracking performance. At the same time, it reduces the dimensionality of the motion space, which simplifies the algorithm processing and decreases the running time, thereby ensuring real-time tracking and good tracking accuracy. Experiments and a comparative analysis were carried out to comprehensively analyze the performance of DCS-UPF. The structure of this paper is organized as follows. Section 2 defines the tracking model applied in this paper. Section 3 gives an overview of the fundamentals of the UPF. Section 4 introduces the DCS-UPF algorithm. Section 5 presents the results of the simulation, which are used to demonstrate the effectiveness of DCS-UPF. In the end, Section 6 provides a conclusion.
General Tracking Models
Consider the following model of the target state: X_k = F_k(X_{k-1}) + w_k (1), where X_k is the state vector at time k, F_k is the system state transition function, and w_k is the input process noise, which is entirely unrelated to the past and current states. Meanwhile, w_k is supposed to be known, which means that its probability density function is given a priori. Equation (1) describes a first-order Markov process. The target state can include position, velocity, acceleration, etc. In this paper, the vector X_k is defined as X_k = [x_k, y_k, ẋ_k, ẏ_k]^T. When the state vector X_k is known, the measurement can be computed via the measurement model, defined as Z_k = H_k(X_k) + v_k (2), where Z_k is the measurement vector at time k, H_k is the measurement function, and v_k is the measurement noise vector, which is also entirely unrelated to the past and current states. Meanwhile, v_k is known, which means that its probability density function is given a priori. In the measurement space, the vector Z_k is defined as Z_k = [r_k, b_k]^T. In practical applications, the observation data are given in the polar or spherical coordinates obtained by the radar sensor, including radial distance r, azimuth angle b, and pitch angle e. When the system observations Z = [r, b]^T are known, the observations in Cartesian coordinates can be obtained. Supposing there is a coordinate transformation between the two coordinate systems, Φ = h^{-1} with h = [h_r, h_b], the real observation data in the Cartesian coordinate system after conversion can be expressed through this transformation, where v_x and v_y are the measurement noise components associated with the radial distance and azimuth angle. In the following sections, these assumptions are supposed to hold: (i) the system state transition function and the measurement function are known; (ii) the states form a Markov process and the measurements are conditionally independent given the states; (iii) the probability density functions of w_k and v_k are known.
Fundamentals of the Unscented Particle Filter
The UKF obtains a set of sigma sampling points via the unscented transformation (UT), which can be used to approximate the posterior probability distribution.
It is also a recursive Bayesian estimation method. Under the framework of the PF, the basic idea of the UPF algorithm is to use the UKF to generate the proposal distribution to guide PF sampling, and then to use the PF algorithm to predict the state and obtain the state estimate. The iterative calculation of the UPF makes full use of the measurement information at the latest time in every step, and the sampled particles can better approach the true posterior distribution. Meanwhile, the UPF inherits the flexibility of the PF, in that the estimation accuracy can be changed by adjusting the number of particles. The UPF algorithm is as follows:
Step 1: Initialization: when k = 0, based on the initial state variable X_0, the particle set {X^i_0, i = 1, 2, ..., N} is generated from the initial distribution P(X_0). The initial weight of each particle is 1/N, and N represents the total number of particles.
Step 2: When k > 0, the particle set generated at step (k − 1) is updated to obtain the particle set of step k via formulas (1) and (2), and then the posterior probability distribution of step k is approximately described by the updated particle set. This process includes: using the UKF to generate the importance density function, importance sampling, calculating the importance weights and normalizing them, judging whether resampling is needed, and outputting the estimation results. These sub-steps are described in detail as follows:
• Generate the importance density function for each particle by the UKF.
• Construct the sigma sampling point set and weight values for each particle; the UT transformation is realized by the symmetric sampling strategy, where χ^i_{k−1} stands for the ith sigma point, W_{j,m} stands for the weight of the jth sigma point, L is the dimension of the state variable, α is a proportional correction factor ranging from 10^-4 to 1, β is a parameter whose optimal value is 2 under a Gaussian distribution, κ is a secondary sampling factor which usually takes the value 0 or (3 − L), λ = α^2(L + κ) − L is the fine-tuning parameter, and P^i_{k−1} is the state covariance matrix for each particle.
• Calculate the predicted mean and covariance of each particle via a one-step prediction of the sigma sampling points.
• Reconstruct the sigma point set based on the predicted value and predicted covariance mentioned in Equations (9) and (10).
• Calculate the self-covariance and mutual covariance.
• Calculate the Kalman gain; the particles are updated with the latest measurement to produce the importance density function.
• Sample particles from the importance density function.
• Calculate the weight of each particle by Formula (19) and normalize the weights.
• Compare N_eff with N_th, with N_th set to N/3. If N_eff ≤ N_th, perform resampling; otherwise skip this step and continue. Resampling methods include random resampling, polynomial resampling, systematic resampling, residual resampling, etc. This paper uses random resampling.
• Output the estimation results.
• Go to Step 2 for the next iteration.
The Method of Divide-and-Conquer Sampling
For the PF algorithm, the higher the dimension of the state space, the larger the number of particles required to ensure that the spatial coverage of the particles is wide enough. With the increase in the state dimension, the complexity of the algorithm increases exponentially. These characteristics also hold for the UPF algorithm.
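As an illustrative aside before introducing the divide-and-conquer scheme, the following minimal Python sketch (not the authors' MATLAB implementation) shows two building blocks used in the UPF steps above: the symmetric sigma-point construction of the UT with weights defined by α, β, κ and λ = α^2(L + κ) − L, and the standard effective-sample-size measure N_eff = 1/Σ w_i^2 that triggers resampling when it drops below N_th = N/3.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Symmetric sigma-point set and weights for the unscented transform."""
    L = x.size
    lam = alpha**2 * (L + kappa) - L
    S = np.linalg.cholesky((L + lam) * P)            # matrix square root
    chi = np.empty((2 * L + 1, L))
    chi[0] = x
    for i in range(L):
        chi[1 + i] = x + S[:, i]
        chi[1 + L + i] = x - S[:, i]
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # mean weights
    Wc = Wm.copy()                                   # covariance weights
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    return chi, Wm, Wc

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2); resample when it falls below N/3 (as in the text)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w**2)

# Example: 4-dimensional state [x, y, vx, vy]
x = np.array([1.0, 1.0, 1.0, 1.0])
P = np.eye(4)
chi, Wm, Wc = sigma_points(x, P)
print(chi.shape, round(Wm.sum(), 6))                 # (9, 4), mean weights sum to 1
```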
The basic idea of the divide-and-conquer algorithm is to decompose a problem of scale N into K smaller-scale sub-problems, which are independent of each other and have the same properties as the original problem. Once the solutions to the sub-problems are found, the solution to the original problem can be obtained. It is noted that, in the coordinate system, the motion states of the target in each direction of the motion space are independent from each other and not affected by the motion in other directions. In other words, the motion space is orthogonal. According to the idea of motion decomposition and synthesis, the whole motion state of the target can be expressed as the superposition of the motion in each direction. The flowchart of the spatial divide-and-conquer sampling method is shown in Figure 1. The motion space is divided into two directions, X and Y. The state is estimated by sampling in each subspace. Then, the state of each subspace is predicted with the measurement information. Finally, the subspace information is merged. The idea of the divide-and-conquer method is thus introduced to segment the motion space. Assuming that particles are sampled in the one-dimensional subspaces, with N_x and N_y random samples in each direction, the total number of particles is (N_x + N_y), while the total particle diversity reaches (N_x * N_y) combinations. When sampling directly in the joint state space, there would be only (N_x + N_y) distinct samples. It is obvious that the spatial coverage of the former sampling scheme is wider, so it can reduce the particle degeneracy phenomenon to some extent and improve the prediction accuracy. The complexity of the algorithm is analyzed below. According to Section 2 of this paper, the dimension of the state vector is 4 and the dimension of the observation vector is 2. Let the total number of sampled particles be N, the state one-step prediction time in the filtering algorithm be T_f, and the measurement one-step prediction time be T_h. When the state space is decomposed into two independent subspaces in which the same number of samples are taken, the state one-step prediction time is reduced to T_f/4, and the measurement one-step prediction time is reduced to T_h/2. For the same model, the complexity of the traditional sampling filtering method is N*(T_f + T_h) and that of divide-and-conquer sampling filtering is N*(T_f + 2T_h)/4. The complexity of the two methods is of the same order, but the operation time of the divide-and-conquer sampling method is smaller than that of the UPF.
Unscented Particle Filter Tracking Algorithm Based on Divide-and-Conquer Sampling
The basic idea of the maneuvering target-tracking algorithm based on DCS-UPF is to decompose the motion space into two independent, one-dimensional state subspaces according to the Cartesian coordinate system. The final output state is obtained using a UPF algorithm according to the optimal sampling strategy. The steps of the UPF tracking algorithm based on divide-and-conquer sampling are described as follows, and the flowchart is shown in Figure 2.
(1) Initialization: when k = 0, the subspaces are decomposed according to the independence of the state space, and the one-dimensional subspace particle sets {X^i_x0, w^i_x0} are generated from the prior probability P(x_0);
(2) Prediction: when k = 1, 2, ..., T, the UPF algorithm is run for each particle set, and the state estimate in each subspace, X̂_xk and Ŷ_yk, is obtained;
(3) Output of the synthesized state: the total state estimate is obtained from the sub-state estimates.
Simulation Results and Analysis
In order to verify the effectiveness of the algorithm, simulation experiments were carried out. In the simulation experiments, the algorithm is first compared with the standard PF and UPF, and then compared with the unscented particle filter based on the particle swarm optimization algorithm (PSO-UPF) and the unscented particle filter based on an intelligent adaptive algorithm (IA-UPF). The goal of target tracking is to obtain the moving target's position via measurements. The state of the target at time k consists of the position and velocity. The state vector and measurement vector are defined as in Section 2. The sensor's sampling interval is set to 1 s, and the total sampling time is 60 s. Both the process noise and the measurement noise obey a Gaussian distribution N(0, 1). The initial state of the target is X_0 = [1 m, 1 m, 1 m/s, 1 m/s]. The motion model is a variable-speed motion model. This simulation study is performed in the MATLAB 2014b coding environment on a desktop computer with an Intel Core i5-4700, 3.6 GHz, running the 64-bit Windows 7 operating system. The simulation starts at time k = 0, when the first measurement of the target is obtained. This may be viewed as the time when the target is first detected. Figures 3 and 4 demonstrate the performance of PF, UPF, and DCS-UPF when tracking the target in each subspace. In Figure 3 (Figure 4), the red curve represents the tracking curve of DCS-UPF. The green (blue) curve represents the tracking curve of the UPF. The blue (green) curve represents the tracking curve of the PF. The true state value is shown by the black curve. DCS-UPF produced the best tracking performance. Figure 3 describes the tracking performance of the three filters in the X direction. For most states, it is clear that DCS-UPF tracks the target more accurately than PF and UPF. Meanwhile, the curve of DCS-UPF almost overlaps with the true state values, which means that DCS-UPF can be used to represent the true state value in some cases. The tracking performance of the three filters in the Y direction is shown in Figure 4. It can be seen that PF and UPF estimate the position state incorrectly in many cases. However, DCS-UPF provides much more accurate estimation results. The reason for this is that, firstly, particle degeneracy seriously affects the tracking accuracy of the PF. This is because the PF takes the importance density function (proposal distribution) to be equal to the prior distribution, so the latest measurement information is not considered and the sampled particles cannot effectively approach the true posterior probability distribution. In addition, full prior knowledge of the noise statistics is not available. The UPF weakens particle degeneracy by generating the importance function using the UT and by resampling. However, the UPF struggles to completely cover the target state space when handling the maneuvering target tracking problem, and the tracking performance can be affected by the low sample diversity.
The divide-and-conquer sampling algorithm is introduced into the UPF to solve the above problems. It successfully solves the problem of sparse particle coverage by sampling in each subspace alone. The simulation results show that this method is effective in improving the tracking performance. Taking the two figures together, DCS-UPF has the better tracking performance. The simulation results of Figures 3 and 4 verify that the divide-and-conquer sampling algorithm can improve the poor tracking performance caused by the uneven distribution of particles in each subspace. The tracking errors of the three filters are shown in Figures 5 and 6. In Figure 5, the black curve represents the error of DCS-UPF, the green curve represents the error of the UPF, and the blue curve represents the error of the PF. The tracking errors of the three filters in the X direction are described in Figure 5. It can be observed that the tracking error of DCS-UPF is smaller than that of PF and UPF. The error variance of DCS-UPF is also smaller than that of the others. The tracking error of the three filters in the Y direction is shown in Figure 6, and the conclusion is the same as that in the X direction. In Figures 5 and 6, the performance of DCS-UPF is measured from the perspective of tracking error, and the target tracking accuracy of DCS-UPF is higher. The results of the simulation show that the divide-and-conquer sampling algorithm can improve the tracking performance of the UPF. When the values of Q and R are adjusted (Figure 7 and Figure 9), it can be seen that the performance of the filter is improved. On the other hand, the tracking errors of the filters shown in Figures 8 and 10 also indicate an improvement in the tracking performance. The simulation results show that the tracking accuracy of the filters can be improved by changing the values of Q and R; therefore, suitable Q and R can be found by repeatedly trying candidate values during the simulation process, which makes the filter more accurate. For the UPF algorithm, the accuracy of the trajectory tracking depends on the number of particles; within a certain range, they are directly proportional. Figures 11 and 12 show the number of effective particles in PF, UPF, and DCS-UPF, represented by the blue curve, black curve and green curve, respectively. It can be seen that DCS-UPF has a larger number of effective particles. The reason for this is that particles are sampled separately in each subspace, so the particle distribution in each subspace becomes more uniform; meanwhile, this solves the problem of sparse particle distribution in the subspace. Therefore, DCS-UPF has much higher accuracy than both PF and UPF. For the comparative analysis, trials based on the above experimental design were conducted using UPF, PSO-UPF, IA-UPF, and DCS-UPF. Figures 13 and 14 show the simulation results. It can be seen that the UPF has poor tracking performance, as the UKF uses only second-order moments, which may not be sufficient for some nonlinear systems. Moreover, the number of sigma points is small and may not represent complicated distributions. In addition, the resampling step leads to a loss of diversity among the particles, reducing the estimation accuracy. Although PSO-UPF enhances the tracking accuracy of the UPF, the improvement is still limited.
This is because particle swarm optimization (PSO) easily falls into a local optimum, which leads to low convergence accuracy and difficult convergence. Because IA-UPF reduces the loss of particle diversity caused mainly by particle degeneracy in the resampling step and by incorrect a priori knowledge of the process and measurement noise, IA-UPF has much higher accuracy than both UPF and PSO-UPF. However, its filtering accuracy is significantly degraded when the distribution of particles in some areas is sparse. DCS-UPF solves this problem, and the simulation results show that DCS-UPF has much higher accuracy than UPF, PSO-UPF and IA-UPF. Table 1 lists the average RMSE of UPF, PSO-UPF, IA-UPF and DCS-UPF. It can be seen that the average RMSE of DCS-UPF is the smallest, which also illustrates that the tracking performance of DCS-UPF is more accurate. The one-step running times of PF, UPF, PSO-UPF, IA-UPF and DCS-UPF are shown in Figure 15. It can be seen that the PF has the minimum one-step running time. This is because the simulation process of the PF is simpler and does not require the UKF to generate the proposal distribution function. PSO-UPF has the longest single-step running time, because PSO takes a lot of time to update the velocity and position of each particle while the UKF generates the proposal distribution. The single-step running time of IA-UPF lies between those of PSO-UPF and UPF. IA-UPF uses an adaptive UKF to generate the proposal distribution and uses genetic operators to increase the diversity of particles, so its single-step running time is higher than that of the UPF; moreover, the execution process of IA-UPF is relatively simple and takes less time than PSO-UPF. The single-step running time of the proposed algorithm is second only to that of the PF, because, with the decomposition of the motion space, the single-step iteration time of the proposed algorithm, built on the UPF, is reduced. Table 2 shows the computational performance of PF, UPF, PSO-UPF, IA-UPF and DCS-UPF. T_f represents the state one-step prediction time in the filtering algorithm, T_h represents the measurement one-step prediction time, and A < C < B. As shown in Table 2, the computational times of UPF, PSO-UPF, IA-UPF and DCS-UPF are notably larger than that of the PF. This is because the computational processes of these four filters are more complex, involving the use of the UKF to generate the proposal distribution, etc. Thus, they require more computational time and CPU utilization. The total running time and CPU utilization of DCS-UPF are smaller than those of UPF, PSO-UPF and IA-UPF. The reason is that the divide-and-conquer algorithm divides the problem into many sub-problems, which not only guarantees the accuracy of the information but also reduces the running time. In addition, after the dimension reduction of the motion space, the program flow is not as complicated as that of UPF, PSO-UPF and IA-UPF. In sum, the computational complexity of DCS-UPF is reduced; combined with Figure 15, this conclusion can be confirmed further. As the complexity of the target state model increases, the algorithm performs increasingly better than UPF, PSO-UPF and IA-UPF.
Conclusions
This paper proposed a tracking algorithm based on DCS-UPF.
By decomposing the state space into independent subspaces, dimension reduction is realized, which solves the problem of sparse particle distribution and reduces the impact of particle degeneracy and particle impoverishment on the tracking performance. Compared with the standard UPF, PSO-UPF and IA-UPF, the simulation results verify that the algorithm proposed in this paper has significant advantages in tracking performance and computational performance. The reasons are as follows: firstly, it reduces the dimensionality of the motion space, which simplifies the algorithm processing and decreases the running time, thereby ensuring real-time tracking and good tracking accuracy; secondly, the particles are drawn from each subspace to estimate the state of that subspace, which solves the problem of particle shortage caused by uneven particle distribution in certain directions of the state space. In conclusion, the use of the divide-and-conquer sampling algorithm in the UPF greatly improves the tracking accuracy of the filter and reduces the complexity of the algorithm. Future work will consider improvements to the resampling strategy, real-time performance, and robustness, as well as applications to more fields.
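To make the decomposition and synthesis steps of DCS-UPF concrete, the following minimal sketch (an illustration of the structure only, not the authors' MATLAB implementation) splits the 4-dimensional state into independent X and Y subspaces, runs a per-subspace filter on each, and merges the sub-estimates; the per-subspace filter is abstracted as a callable standing in for a one-dimensional UPF update.

```python
import numpy as np

def split_state(x):
    """Decompose the 4-D state [x, y, vx, vy] into two independent
    2-D subspace states, [x, vx] and [y, vy]."""
    return np.array([x[0], x[2]]), np.array([x[1], x[3]])

def merge_states(sx, sy):
    """Synthesize the full state estimate from the two subspace estimates."""
    return np.array([sx[0], sy[0], sx[1], sy[1]])

def dcs_step(x_est, z, subspace_filter):
    """One DCS step: update each subspace with its own measurement component,
    then merge.  `subspace_filter(state, meas)` stands in for a 1-D UPF update."""
    sx, sy = split_state(x_est)
    sx_new = subspace_filter(sx, z[0])   # X-direction prediction/update
    sy_new = subspace_filter(sy, z[1])   # Y-direction prediction/update
    return merge_states(sx_new, sy_new)

# Toy subspace filter: average the predicted position with the measurement,
# keep the velocity unchanged (placeholder for the real UPF update).
toy_filter = lambda s, z: np.array([0.5 * (s[0] + z), s[1]])
x0 = np.array([1.0, 1.0, 1.0, 1.0])      # initial state [x, y, vx, vy]
z = np.array([2.0, 3.0])                  # Cartesian-converted measurement
print(dcs_step(x0, z, toy_filter))        # [1.5, 2.0, 1.0, 1.0]
```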
Dynamics of Health Agency Response and Public Engagement in Public Health Emergency: A Case Study of CDC Tweeting Patterns During the 2016 Zika Epidemic Background: Social media have been increasingly adopted by health agencies to disseminate information, interact with the public, and understand public opinion. Among them, the Centers for Disease Control and Prevention (CDC) is one of the first US government health agencies to adopt social media during health emergencies and crisis. It had been active on Twitter during the 2016 Zika epidemic that caused 5168 domestic noncongenital cases in the United States. Objective: The aim of this study was to quantify the temporal variabilities in CDC’s tweeting activities throughout the Zika epidemic, public engagement defined as retweeting and replying, and Zika case counts. It then compares the patterns of these 3 datasets to identify possible discrepancy among domestic Zika case counts, CDC’s response on Twitter, and public engagement in this topic. Methods: All of the CDC-initiated tweets published in 2016 with corresponding retweets and replies were collected from 67 CDC–associated Twitter accounts. Both univariate and multivariate time series analyses were performed in each quarter of 2016 for domestic Zika case counts, CDC tweeting activities, and public engagement in the CDC-initiated tweets. Results: CDC sent out >84.0% (5130/6104) of its Zika tweets in the first quarter of 2016 when Zika case counts were low in the 50 US states and territories (only 560/5168, 10.8% cases and 662/38,885, 1.70% cases, respectively). While Zika case counts increased dramatically in the second and third quarters, CDC efforts on Twitter substantially decreased. The time series of public engagement in the CDC-initiated tweets generally differed among quarters and from that of original CDC tweets based on autoregressive integrated moving average model results. Both original CDC tweets and public engagement had the highest mutual information with Zika case counts in the second quarter. Furthermore, public engagement in the original CDC tweets was substantially correlated with and preceded actual Zika case counts. Conclusions: Considerable discrepancies existed among CDC’s original tweets regarding Zika, public engagement in these tweets, and actual Zika epidemic. The patterns of these discrepancies also varied between different quarters in 2016. CDC was much more active in the early warning of Zika, especially in the first quarter of 2016. Public engagement in CDC’s original tweets served as a more prominent predictor of actual Zika epidemic than the number of CDC’s original tweets later in the year. 
(JMIR Public Health Surveill 2018;4(4):e10827) doi:10.2196/10827
Introduction
The World Health Organization (WHO) has stated that health is one of the most fundamental human rights [1]. Social media have increasingly become critical venues for the public to seek, share, and discuss information about health and diseases. Owing to their low cost, easy access, and broad reach, social media have also been increasingly adopted by health professionals and agencies to enhance public health communication [2]. For example, social media have been utilized to monitor food safety and food-borne pathogen outbreaks, such as Escherichia coli O157 [3,4]; to develop Web-based campaigns to quit smoking in different countries and regions (United States, Canada, and Hong Kong) with various social media platforms (Facebook, Twitter, and WhatsApp [5]); to promote exercise, fitness, and a healthy lifestyle (WeChat health campaign in China [6]; fitness campaign in New Orleans, LA [7]); to raise public awareness and engagement regarding air quality and pollution [8]; and to understand and monitor public discussion of controversial topics such as antimicrobial resistance [9]. Many government agencies and health officials (eg, WHO and US Centers for Disease Control and Prevention, CDC, as well as other local health departments) have also been adopting and utilizing social media to disseminate information, communicate with the public, and understand public opinions and concerns, especially during health emergencies and crises. Europe has developed a Web-based media and crisis communication framework for influenza [10]. The WHO and CDC utilized Twitter and Instagram during the Zika outbreak [11]. New York City monitored Zika, Hepatitis A, and Ebola discussion in social media and conducted risk communication with the general public [12]. Evidently, for many infectious disease epidemics, it has been demonstrated that Web-based discussion in social media can be an important indicator of the actual disease severity and can help health officials more accurately evaluate the time-sensitive epidemic situation when actual case counts are still being gathered and verified [13][14][15]. Time series analysis is a versatile and powerful modeling framework for linking Web-based discussion to the disease dynamics, as demonstrated by the extant research on various epidemics [16][17][18]. The 2016 Zika epidemic provides a great opportunity to investigate and evaluate the CDC's role and responsiveness on social media. Zika was a relatively new infectious disease, which affected men and women, fetuses, and infants, with multiple transmission routes. However, the general public usually had very little knowledge and understanding about it. In 2016, Zika caused 5168 confirmed noncongenital cases in the 50 states and Washington DC in the United States, and a much higher case number in US territories [19]. Twitter is the major social media outlet for the CDC, with a total of 67 official CDC-associated Twitter accounts covering a wide variety of health- and disease-related topics. Former CDC director Dr
Tom Frieden was active on Twitter and hosted live Twitter chats with the general public [20], including a 1-hour live chat regarding Zika in February 2016. Despite CDC's prominent Web-based presence and efforts, inaccurate information regarding Zika proliferated on social media and outperformed the CDC (and other legitimate sources such as the WHO) by a large margin [21]. Studies have shown a substantial topic discrepancy between public concern and the CDC's response to Zika on Twitter [22][23][24][25]. Another less addressed aspect is the low rate of public engagement (measured by the number of retweets and replies) on social media, where social media should be a Web-based platform for public engagement and interaction [26], not just a one-directional news outlet [8,27,28]. Furthermore, there is currently no study on the temporal variability in the CDC's response to different epidemic stages of Zika for the entire year of 2016, its potential impact on public engagement, and the quantification of information dissemination, as the CDC did not finalize and publish the complete 2016 Zika case counts in the entire United States until March 2018 [19]. Thus, there is a substantial knowledge gap in quantifying and understanding the interaction among the Zika epidemic, the CDC's dynamic response on social media (Twitter), and public engagement with the CDC's efforts, as well as the potential discrepancy among these during different stages of the Zika epidemic. More specifically, original CDC-initiated tweets regarding Zika represent the government agency's responsiveness to the Zika epidemic. Retweets and replies to CDC's original tweets quantify public engagement in the discourse about Zika on Twitter. Between the two, retweets enhance Zika-related news and information discourse by relaying information to other users, whereas replies imply more in-depth cognitive processing of this topic and contribute to direct interaction with the CDC [29]. To address these issues, this study aims to quantify the CDC's responsiveness on Twitter and the corresponding public engagement during different stages of the 2016 Zika epidemic. We then identify potential discrepancies among them using time series analysis and information theory measurements. The results and insights gained from this study will reveal the effectiveness of CDC's efforts in disseminating information on social media and help develop more effective Web-based communication strategies to inform the public and combat false information in health-related topics.
Data Collection and Preparation
We collected all English tweets with the keyword "Zika" published between January 1, 2016 and December 31, 2016, using the Gnip Twitter application programming interface. Corresponding retweets and replies received by these tweets were also collected. In addition, all tweets from 67 accounts affiliated with the CDC in 2016 were collected. Zika case counts in the 50 US states and territories during the entire year of 2016 were retrieved from the official CDC Zika case report website [29] and CDC's final report of the 2016 Zika epidemic in the United States [19].
Four time series were extracted from the original tweets (both Zika-related and all tweets initiated by the CDC), retweets, and replies (only those to Zika-related CDC-initiated tweets). In addition, 2 additional time series of US Zika case counts (both 50 states and 50 states plus territories) were obtained [19]. Given that the dates of tweets, retweets, replies, and case counts were not entirely consistent (eg, the CDC may not tweet about Zika every day and may not publish case counts on a regular basis), these time series were first standardized to a weekly basis. The data were aggregated into weekly periods to ensure that each time series had the same 52 data points for further analysis and comparison. Monthly resolution was not adequate for the subsequent time series analyses (because each quarter would only have 3 data points), while daily resolution required an extra step of data interpolation (because each day did not necessarily have Zika tweets and case reports); a weekly basis was well balanced and should provide the highest signal-to-noise ratio in this study. To establish a baseline scenario, we computed the weekly number of tweets on any topic from all CDC accounts and identified the top topics tweeted by the CDC in 2016. Using these data, we could calculate the ratio between weekly tweets with the keyword Zika and all tweets from the CDC, which demonstrated the relative importance of Zika on the CDC's social media agenda. This estimate also helped reveal and assess the CDC's responsiveness to Zika at different stages of the epidemic.
Univariate Time Series Analysis
Original Zika tweets from the CDC, corresponding retweets and replies, and the Zika case time series were plotted, visualized, and examined for stationarity. After this initial screening, we discovered substantial temporal variability in the number of original tweets, retweets, and replies, as well as in Zika cases. None of these time series was stationary. To characterize such large temporal heterogeneity, we divided the entire year of 2016 into 4 quarters and performed further analysis within each quarter. Furthermore, we calculated the ratio between Zika tweets and all tweets from the CDC as a measurement to quantify the relative importance of Zika among the various health-related topics from the CDC's perspective. These quarterly time series were first modeled as autoregressive integrated moving average (ARIMA) models to reveal any potential temporal characteristics such as linear trend, seasonality, or temporal autocorrelation [16]. An ARIMA model with variable X_t, lag (backshift) operator L, and parameters (p, d, q) takes the form (1 − φ_1 L − ... − φ_p L^p)(1 − L)^d X_t = (1 + θ_1 L + ... + θ_q L^q) ε_t (Equation 1). The 3 parameters p, d, and q correspond to the autoregressive, differencing/integrated (L), and moving average components of the ARIMA model, respectively. The optimal model was then chosen by minimizing the Akaike Information Criterion (AIC) value among all possible competing models with different parameters. The Zika case counts were excluded from this analysis because most of the domestic Zika cases in 2016 were travel-related and could not be well characterized by the ARIMA model, and modeling the temporal dynamics of Zika was not an aim of this study.
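As a minimal sketch of this AIC-based order selection (the study does not state which software was used; the statsmodels ARIMA implementation and the synthetic weekly series below are assumptions for illustration only):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def best_arima_order(series, max_p=3, max_d=1, max_q=3):
    """Fit ARIMA(p, d, q) over a small grid and keep the order with the lowest AIC."""
    best_order, best_aic = None, np.inf
    for p in range(max_p + 1):
        for d in range(max_d + 1):
            for q in range(max_q + 1):
                try:
                    aic = ARIMA(series, order=(p, d, q)).fit().aic
                except Exception:
                    continue                      # skip orders that fail to fit
                if aic < best_aic:
                    best_order, best_aic = (p, d, q), aic
    return best_order, best_aic

# Illustrative weekly tweet-count series (13 weeks, ie, one quarter)
rng = np.random.default_rng(0)
weekly_tweets = rng.poisson(lam=40, size=13).astype(float)
order, aic = best_arima_order(weekly_tweets)
print(f"selected order {order}, AIC = {aic:.1f}")
```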
Multivariate Time Series Analysis
We calculated the lagged correlation between 2 time series using the cross-correlation function (CCF) at different stages, represented by the 4 quarters of 2016, to identify and quantify the potential temporal discrepancy among Zika case counts, CDC's original tweets, and public engagement in these tweets (ie, retweets of and replies to CDC's tweets). Specifically, we compared the time series of Zika case counts with that of original CDC tweets to understand the CDC's responsiveness to the disease outbreak. In addition, the time series of Zika case counts and those of retweets and replies were compared to discover different levels of public engagement in reaction to the Zika epidemic. Their respective CCFs were computed for each of the 4 quarters of 2016. Given that the original CDC tweets were always highly correlated with retweets and replies, we also evaluated the dynamic change of public engagement by calculating the ratio between the number of CDC's original Zika tweets and the number of retweets or replies across different stages. In addition, we calculated the mutual information between 2 time series using a Dirichlet-multinomial (pseudo-count) Bayesian estimate of Shannon entropy, a metric complementary to the CCF, to quantify whether the number of original CDC tweets about Zika and the retweets and replies they received had adequate mutual information with the actual Zika case counts. We constructed the ARIMA with External Variable (ARIMAX) model for original CDC tweets, retweets, and replies in each quarter of 2016, respectively. The ARIMAX model is a multivariate extension of the ARIMA model and incorporates an external variable (ie, Y_t, representing the time series of Zika case counts in this study), adding a regression term in Y_t to the right-hand side of Equation 1. The univariate ARIMA model and multivariate ARIMAX model were then compared to see whether including the external variable actually increased the model performance by decreasing the AIC value. The ARIMAX model was constructed on the basis of the corresponding optimal ARIMA model from the univariate time series analysis. In other words, the ARIMAX and ARIMA models should have exactly the same p, d, and q parameter values to correctly assess the effect of the external variable. This revealed whether public engagement in CDC's original tweets significantly corresponded to the domestic Zika epidemic. We then tested whether the number of original CDC tweets, retweets, or replies could serve as an important indicator of actual Zika cases (or vice versa) in different stages by applying the Granger causality test. The terms that needed to be first-differenced in the Granger test were determined from the corresponding ARIMA models.
Descriptive and Univariate Time Series Analysis Results
Among all tweets sent by the CDC in 2016, Zika was the third most tweeted health topic, totaling >6000 tweets (including 4000 original tweets and another 2000 retweets by other CDC-associated Twitter accounts), just behind HIV/AIDS and sexually transmitted disease for the entire year of 2016 (Figure 1). As there might be overlap between topics (eg, Zika/sexually transmitted disease, Zika/vaccine, HIV AIDS/pre-exposure prophylaxis, HPV/vaccine, etc), a specific tweet could belong to multiple topics. Thus, Zika was a highly ranked and important health topic in 2016 according to the CDC. Among all 67 CDC-associated Twitter accounts, 21 tweeted about Zika in 2016. More than 60% (3663/6104) of Zika-related tweets were posted by @CDCgov, @CDCTravel, @CDCGlobal, and
@CDCEmergency; these 4 were also the most active Twitter accounts disseminating Zika-related information consistently throughout all 4 quarters of 2016. Although Zika was one of the hot topics tweeted by the CDC, there was substantial temporal heterogeneity in the CDC's tweeting pattern regarding Zika. More than 84.0% (5130/6104) of all Zika tweets were published in the first quarter of 2016, with 5.6% (342/6104), 7.5% (458/6104), and 2.4% (146/6104) in the subsequent quarters, respectively (Figure 2). The top left of Figure 2 shows the number of all tweets sent from all CDC-associated Twitter accounts during 2016 (solid black line) and Zika-related tweets (dashed blue line); the top right shows the number of Zika-related tweets (solid black line) and Zika case counts in the 50 states and DC (solid red line); the bottom left shows retweets of CDC's Zika tweets; and the bottom right shows replies to CDC's Zika tweets. As a comparison of the temporal dynamics, domestic Zika case percentages in the 50 states and DC were 10.8% (560/5168), 26.0% (1343/5168), 52.8% (2728/5168), and 10.4% (535/5168) in the 4 quarters, and case percentages in the 50 states, DC, and overseas territories were 1.70% (662/38,885), 5.91% (2298/38,885), 58.46% (22,732/38,885), and 33.92% (13,189/38,885) in the 4 quarters (Figure 3). Data were obtained from the CDC Morbidity and Mortality Weekly Report [19]. Thus, the Zika epidemic dynamics were substantially different from the CDC's tweeting dynamics in 2016, as Zika case counts were actually the lowest in the first quarter of 2016. Zika was unequivocally the most tweeted health topic of the CDC in the first quarter and was mentioned in almost 50.0% (3052/6104) of all tweets in that quarter, dwarfing both HIV/AIDS- and sexually transmitted disease-related tweets; this substantial temporal heterogeneity was also demonstrated by the distinct ARIMA models in each quarter (see Table 1, first column, for original tweets). The optimal ARIMA model in the first quarter had parameters p, d, q = 2, 0, 3, indicating that the optimal time series model with the minimized AIC value for original tweets did not need differencing (d=0, ie, the series was already stationary), with autoregressive and moving average terms p=2 (autoregressive time lag of 2) and q=3 (moving average order of 3), respectively. The parameters associated with the optimal ARIMA models in the next 3 quarters were p, d, q = 2, 1, 3 (second quarter), 1, 1, 1 (third quarter), and 2, 0, 3 (fourth quarter), respectively.
Retweets of and replies to the original Zika tweets from the CDC generally followed similar temporal characteristics, with the first quarter having the largest number of both retweets and replies (Figure 2, lower left and lower right, respectively). The optimal ARIMA models were again distinct across the 4 quarters of 2016, for both retweets (Table 1, second column) and replies (Table 1, third column). The only similarity was between retweets in the first and second quarters, both of which had the same parameterization (p, d, q = 2, 1, 3). Comparing the ARIMA models for original tweets, retweets, and replies, there were only 2 pairs with the same model parameterization: original tweets and retweets in the second quarter (both with p, d, q = 2, 1, 3), and retweets and replies in the third quarter (both with p, d, q = 2, 1, 2). These results revealed substantial temporal variability across the different quarters of 2016 and among original tweets, retweets, and replies.
Multivariate Time Series Analysis Results
As shown in Figure 4, strong temporal correlations were observed between original Zika tweets from the CDC and retweets, as well as between original Zika tweets from the CDC and replies, in all quarters of 2016. Most of these cross-correlations were centered at lag zero, indicating that the general public's interaction with original CDC tweets was usually synchronized with them. Figures 5-7 provide plots of the CCF between Zika cases and each of the following variables: original Zika tweets from the CDC, retweets, and replies, in each quarter of 2016. For original Zika tweets and Zika case counts, strong temporal correlations were observed in the first, second, and fourth quarters. In the first quarter, CDC's tweets regarding Zika preceded actual case counts by approximately 7-10 days, indicated by the substantial lags of 7, 8, 9, and 10 (Figure 5, top left). In the second quarter, CDC's tweets were ahead of the cases by approximately 2 weeks (Figure 5, top right). In the fourth quarter, CDC's tweets were behind Zika cases by approximately 1-3 days (Figure 5, bottom right). In the third quarter, there was no substantial correlation between the 2 time series. These results revealed that the CDC was very active on social media during the early stage of the Zika epidemic (especially February 2016), when the actual case number was low (Figure 2, top right). A similar pattern was also observed between retweets and Zika cases (Figure 6). The first quarter demonstrated a strong temporal correlation between the two, whereas there was no substantial correlation in the fourth quarter. In other words, the general public was more engaged in retweeting to help disseminate the information during the first half of 2016. The correlation between replies and Zika cases was also explored (Figure 7). Replies preceded case counts by about a week in the first quarter, indicating the general public's strong interest in discussing Zika and interacting with the CDC on Twitter; this active engagement decreased as time went by. By the fourth quarter of 2016, replies were about 10 days behind actual cases.
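The lagged cross-correlations in Figures 5-7 can be illustrated with a simple plain-NumPy computation (the synthetic weekly series below are for illustration only; the study's actual CCF plots come from its own analysis pipeline):

```python
import numpy as np

def lagged_correlation(x, y, max_lag=5):
    """Pearson correlation between x[t] and y[t + lag] for each lag in
    [-max_lag, max_lag]; positive lags mean x leads y (eg, tweets preceding cases)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Synthetic example: weekly cases follow weekly tweets with a 2-week delay
rng = np.random.default_rng(1)
tweets = rng.poisson(lam=30, size=13).astype(float)
cases = np.roll(tweets, 2) + rng.normal(0, 0.5, size=13)
ccf = lagged_correlation(tweets, cases, max_lag=3)
print(max(ccf, key=ccf.get))   # lag with the strongest correlation (expected ~2)
```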
In addition, we calculated the mutual information to explore the mutual dependence between Zika cases and each of these activities on Twitter (original Zika tweets from the CDC, retweets, and replies) from an information perspective (Table 1). In the first quarter, replies had the highest mutual information (0.09) with Zika cases, even higher than original Zika tweets from the CDC (0.04) and retweets (0.01). Nevertheless, all these mutual information values (ie, Shannon information entropy) were low, indicating a potential discrepancy between the discussion of Zika on Twitter and the actual epidemic. In the second quarter, replies, retweets, and original Zika tweets from the CDC had 0.29, 0.17, and 0.13 mutual information with Zika cases, respectively, the highest mutual information of all 4 quarters of 2016. In the third quarter, retweets had the highest mutual information with Zika cases (0.08), followed by original tweets and replies, tied at 0.02. In the fourth quarter, retweets had the highest mutual information again (0.07), followed by original tweets and replies with very low mutual information (0.01). In general, retweets and replies had more mutual information with Zika cases than CDC's original Zika tweets did. Thus, the CDC's tweeting pattern was a poorer indicator of the Zika epidemic than public engagement in its tweets, as illustrated by the patterns of retweets and replies. The mutual information does not consider potential temporal characteristics such as lag or trend. Therefore, we further quantified whether including an external variable of Zika case counts could increase the ARIMA model performance (Table 1). The analysis showed that in the first quarter, all ARIMAX models outperformed their ARIMA counterparts by a large margin (difference of AIC [dAIC] = -2.25, -1.88, and -1.21 for original Zika tweets, retweets, and replies, respectively; dAIC is the difference of AIC values between the ARIMAX and ARIMA models, and a negative dAIC value indicates better performance of the ARIMAX model, that is, including the external variable increased the model predictability). Although Zika case counts were the lowest in the first quarter, they still correlated highly with the temporal dynamics of the Web-based discussion of Zika. Including Zika case counts only improved the ARIMAX model for retweets (dAIC = -0.88) in the second quarter, for replies (dAIC = -0.62) in the third quarter, and for original Zika tweets from the CDC (dAIC = -0.59) in the fourth quarter. These findings provided further evidence confirming the large temporal variability and differences in the CDC's response to Zika and public engagement in their responses on Twitter.
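The mutual information values above were obtained with a Dirichlet-multinomial (pseudo-count) Bayesian entropy estimator; the sketch below uses a simpler plug-in histogram estimator just to illustrate the quantity being computed between two short weekly series (the binning is arbitrary, and the resulting values are not comparable to Table 1):

```python
import numpy as np

def mutual_information(x, y, bins=4):
    """Plug-in estimate of I(X; Y) in nats from a 2-D histogram of two series."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)            # marginal of X
    py = pxy.sum(axis=0, keepdims=True)            # marginal of Y
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic example: weekly case counts loosely tied to weekly tweet counts
rng = np.random.default_rng(2)
tweets = rng.poisson(lam=30, size=13).astype(float)
cases = 0.5 * tweets + rng.normal(0, 2, size=13)
print(f"I(tweets; cases) ~ {mutual_information(tweets, cases):.2f} nats")
```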
In addition, we evaluated whether Zika cases could be a Granger cause of original CDC tweets, retweets, and replies, or vice versa. The Granger causality test revealed that case counts were not a Granger cause of original Zika tweets from the CDC in any quarter, and vice versa. Thus, the correlation between CDC's Zika tweets and actual Zika cases was not strong. Retweets, however, could serve as a Granger cause of Zika cases for orders 1 to 5 (P=.05, .04, .02, .01, and .04, respectively) in the first quarter; this coincided with the previous finding that retweets had a very high correlation with Zika cases in the first quarter (Figure 6). Similarly, replies also served as a Granger cause in the first quarter for orders 3, 4, and 5 (P=.03, .01, and <.001, respectively). Furthermore, replies served as a Granger cause again in the fourth quarter for order 1 (P=.04). In contrast, Zika case counts in the third quarter could be a Granger cause of replies with orders 2 and 3 (P<.001 for both orders), but not vice versa. This was the only exception in which Zika cases served as a Granger cause of Twitter discussion. It is important to note that Granger causality only provides statistical evidence for potential causality and does not guarantee actual causality. For example, replies being a Granger cause in the first quarter does not mean that replies to CDC's tweets "caused" Zika cases in the United States. Therefore, we should interpret this as replies preceding Zika cases and having a strong association with Zika case counts at selected orders. Furthermore, the Granger test results showed variability across the different quarters of 2016.
Discussion
This study is the first of its kind that specifically investigates the temporal variability in CDC's tweeting activities regarding Zika. More importantly, it links the temporal variability of Zika cases in the United States to that of CDC's social media responses and public engagement in those social media messages. In general, we discovered substantial discrepancies among CDC's tweets regarding Zika, public engagement, and the actual Zika epidemic in different stages of the epidemic in 2016. As shown by our findings, there was a substantial discrepancy between CDC's response to Zika on Twitter and the Zika epidemic. When Zika case counts were low in the United States during the first quarter of 2016, the CDC was very active in disseminating information about Zika, sending out >84.0% (5130/6104) of all its 2016 Zika tweets. The CDC and its former director Dr Frieden even hosted a 1-hour Twitter chat on February 16, 2016. All these activities correlated with active public engagement, as retweets and replies were also the highest among all quarters. Thus, the CDC was effective in the early warning of the upcoming Zika epidemic and successfully gained public attention during the first quarter of 2016. However, when Zika case counts started to increase sharply in the second and third quarters of 2016, CDC's Zika-related tweets decreased substantially and did not catch up with the Zika case counts. Nevertheless, public engagement in the discussion of Zika on social media could be influenced by other factors, such as news sources, personal familiarity with the disease, and potential opinion leaders who may not necessarily be health-related. All these could be future directions to expand this study.
While public engagement with CDC's Zika tweets (ie, retweets and replies) also decreased dramatically in the second and third quarters of 2016, it was significantly associated with Zika cases, as revealed by the performance of the corresponding ARIMAX models (compared with the original ARIMA models). When more case counts (including both transmitted cases and travel-related cases) were reported in Florida from late July onward and in connection with the Summer Olympics in Brazil between August 5 and 21, 2016, retweets of and replies to CDC's Zika tweets increased again substantially, demonstrating the public's growing and recurrent awareness of this emerging health issue. Public engagement with CDC's Zika tweets differed across quarters, was substantially influenced by the Zika epidemic, and usually preceded it. Therefore, public engagement with CDC's Zika tweets was generally a more prominent predictor of the actual Zika epidemic than CDC's own tweets later in the year.

Unlike previous studies that have used social media discussion trends to predict and adjust the actual disease dynamics [13,16,18,30-33], this study used Zika case counts and the epidemic to infer the Twitter discussion dynamics and revealed dynamic changes throughout the year; we made this decision because the majority of domestic Zika cases in the United States were travel related and highly stochastic [19]. Therefore, they could not be accurately captured by statistical models such as ARIMA or ARIMAX. Using social media discussion to predict the actual disease dynamics is thus more useful for locally transmitted diseases, such as influenza, than for travel-related diseases.

This study has several limitations. First, we did not investigate the actual content and user identities of retweets and replies. One future direction is to investigate the content of these messages by using topic modeling [24] and natural language processing [34]. It will be especially valuable to examine the patterns of replies to understand the public's responses toward the original tweets, for example, whether public responses are neutral, synergistic, or antagonistic. Another potential route is to investigate the retweeting or replying network, identify potential opinion leaders, and assess their roles in disseminating health-related information from legitimate sources such as the CDC and WHO.

In this study, we focused on public engagement with CDC's tweets (ie, retweets and replies). Nevertheless, this represents a relatively small portion of public engagement with the general topic of Zika compared with all Zika-related tweets. An extension of this study could investigate the temporal dynamics of all Zika-related retweets and replies and compare them with public engagement with CDC's Zika tweets. Similarly, the number of original Zika tweets from the CDC was relatively low, especially after the first quarter of 2016, which might influence the time series analysis results (and was also the reason we chose a weekly rather than a daily resolution in this study). A potential remedy would be to include the temporal dynamics of all Zika-related tweets as a reference in a future study and contrast that with the CDC's tweeting pattern.

Figure 1. The top 15 most tweeted health topics by the Centers for Disease Control and Prevention (CDC) in 2016. STD: sexually transmitted disease; TB: tuberculosis; CVD: cardiovascular disease; PrEP: pre-exposure prophylaxis; HPV: human papillomavirus.
Figure 2. The time series of Zika tweets from the Centers for Disease Control and Prevention (CDC), corresponding retweets, replies, and all original tweets from the CDC in 2016.

Figure 3. Noncongenital Zika virus disease cases in 50 states/DC and in both 50 states/DC and territories in 2016. CDC: Centers for Disease Control and Prevention.

Figure 5. The cross-correlation function (CCF) between original Centers for Disease Control and Prevention (CDC) Zika tweets and domestic Zika cases in 4 quarters of 2016. ACF: autocorrelation function.

Figure 6. The cross-correlation function (CCF) between retweets of Centers for Disease Control and Prevention (CDC) Zika tweets and domestic Zika cases in 4 quarters of 2016. ACF: autocorrelation function.

Figure 7. The cross-correlation function (CCF) between replies to Centers for Disease Control and Prevention (CDC) Zika tweets and domestic Zika cases in 4 quarters of 2016. ACF: autocorrelation function.

Table 1. Mutual Shannon information entropy, Autoregressive Integrated Moving Average (ARIMA) or Autoregressive Integrated Moving Average with External Variable (ARIMAX) model parameters, and Akaike information criterion (AIC) values in different quarters of 2016. b dAIC: difference in Akaike information criterion. c A negative dAIC value indicates better performance of the ARIMAX model compared with its corresponding ARIMA model; hence, including Zika case counts improves the model performance.
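For readers who want to reproduce the kind of lagged cross-correlation analysis shown in Figures 5-7, a minimal sketch follows. It is not the authors' code, and the series names and maximum lag are assumptions for illustration.

```python
# Hypothetical sketch of a lagged cross-correlation between two weekly series,
# e.g. retweet counts vs. domestic Zika case counts within one quarter.
import numpy as np

def cross_correlation(x, y, max_lag=8):
    """Pearson correlation of x against y shifted by each lag (positive lag: y leads x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ccf = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[lag:], y[:len(y) - lag]
        else:
            xs, ys = x[:lag], y[-lag:]
        ccf[lag] = float(np.corrcoef(xs, ys)[0, 1])
    return ccf

# Example usage (series names are hypothetical):
# cross_correlation(retweet_counts_q1, case_counts_q1)
```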
2018-12-02T16:19:45.290Z
2018-11-22T00:00:00.000
{ "year": 2018, "sha1": "53c4187550ff46f721bde91e59340cc87e021171", "oa_license": "CCBY", "oa_url": "https://publichealth.jmir.org/2018/4/e10827/PDF", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b66f586d93dcc8599850aeaec5bd5536409c0b62", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
3744411
pes2o/s2orc
v3-fos-license
Are Registration of Disease Codes for Adult Anaphylaxis Accurate in the Emergency Department? Purpose There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. However, anaphylaxis codes tend to be underused. The aim of this study was to investigate the accuracy of anaphylaxis code registration and the clinical characteristics of accurate and inaccurate anaphylaxis registration in anaphylactic patients. Methods This retrospective study evaluated the medical records of adult patients who visited the university hospital emergency department between 2012 and 2016. The study subjects were divided into accurate and inaccurate coding groups, registered under anaphylaxis codes and under other allergy-related or symptom-related codes, respectively. Results Among 211,486 patients, 618 (0.29%) had anaphylaxis. Of these, 161 and 457 were assigned to the accurate and inaccurate coding groups, respectively. The average age, transportation to the emergency department, past anaphylaxis history, cancer history, and the cause of anaphylaxis differed between the 2 groups. Cutaneous symptoms manifested more frequently in the inaccurate coding group, while cardiovascular and neurologic symptoms were more frequently observed in the accurate group. Severe symptoms and non-alert consciousness were more common in the accurate group. Oxygen supply, intubation, and epinephrine were more commonly used as treatments for anaphylaxis in the accurate group. Anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use were more likely to be accurately registered with anaphylaxis disease codes. Conclusions In cases of anaphylaxis, more patients were registered inaccurately under other allergy-related codes and symptom-related codes than accurately under anaphylaxis disease codes. Cardiovascular symptoms, severe symptoms, and epinephrine treatment were factors associated with accurate registration with anaphylaxis disease codes in patients with anaphylaxis. INTRODUCTION Anaphylaxis is a serious, life-threatening generalized or systemic hypersensitivity reaction. 1-3 Most anaphylaxis symptoms present acutely and worsen in a short period of time. For this reason, most anaphylactic patients report to the emergency department. Therefore, it is important for the medical staff of the emergency department, who are the first to see anaphylactic patients, to make an accurate diagnosis and provide immediate and appropriate treatment. The incidence of anaphylaxis has been continuously rising worldwide over the past 20 years. 4,5 The prevalence of anaphylaxis in the general population is at least 1.6% in the United States, 6 and the incidence in Europe has been estimated at 1.5 to 7.9 per 100,000 person-years. 7 There has been active research on anaphylaxis, but many study subjects are limited to patients registered with anaphylaxis codes. As a result, patients not registered with anaphylaxis codes are excluded as study subjects. 8 To accurately determine the rate of anaphylaxis, it is necessary to evaluate whether the symptoms and signs of patients meet the diagnostic criteria of anaphylaxis and to accurately register an anaphylaxis code. However, anaphylaxis codes tend to be underused, 9,10 and anaphylactic patients are highly likely to be registered under anaphylaxis-associated codes, allergy-related disease codes, and symptom codes related to the symptoms and signs of anaphylaxis rather than under directly specified anaphylaxis codes.
Therefore, if a large number of anaphylactic patients are registered under other codes and are consequently excluded, accurate research on anaphylaxis incidence, etiology, and clinical characteristics may be affected. To our knowledge, no previous report has assessed the incidence of anaphylactic patients registered under codes other than anaphylaxis. Therefore, this study determined the frequency and clinical characteristics of anaphylactic patients who met diagnostic criteria but were not registered under anaphylaxis codes in the emergency department by comparing them with those of patients who were accurately diagnosed with anaphylaxis. Study population The subjects of this study included adult patients with anaphylaxis aged over 16 years who had presented to the emergency department of a tertiary hospital over the 5 years between January 2012 and December 2016. Anaphylactic patients were identified based on a review of anaphylaxis frequency and characteristics and allergy-related codes. 11,12 To identify omitted anaphylactic patients, disease codes related to the symptoms and signs suggested in the clinical diagnostic criteria of anaphylaxis were also collected (Table 1). 13 During the survey period, all medical records of the adult patients who were registered under these disease codes were reviewed retrospectively in order to re-evaluate whether they were actually diagnosed with anaphylaxis. Subjects were excluded if they did not meet the diagnostic criteria of anaphylaxis (as defined by the 2011 World Allergy Organization Guidelines for the Assessment and Management of Anaphylaxis) after review of all medical records for anaphylaxis, allergy-related, and symptom-related codes. The study subjects were divided into the accurate group, registered under the T78.0, T78.2, T78.2B, T78.2C, T80.5, and T88.6 codes that directly specify anaphylaxis, and the inaccurate coding group, registered under allergy-related codes and symptom- and sign-related codes. As the diagnostic criteria of anaphylaxis, the clinical criteria for diagnosing anaphylaxis suggested by the 2011 World Allergy Organization Guidelines for the Assessment and Management of Anaphylaxis were applied. 13 Anaphylaxis is highly likely when any one of the following 3 criteria is fulfilled:
1) Acute onset of an illness with involvement of the skin, mucosal tissue, or both and at least one of the following: A. Respiratory compromise (e.g., dyspnea, wheeze-bronchospasm, stridor, hypoxemia); B. Reduced blood pressure or associated symptoms of end-organ dysfunction (e.g., hypotonia [collapse], syncope, incontinence); or
2) Two or more of the following that occur rapidly after exposure to a likely allergen for that patient: A. Involvement of the skin-mucosal tissue (e.g., generalized urticaria, itch-flush, swollen lips-tongue-uvula); B. Respiratory compromise (e.g., dyspnea, wheeze-bronchospasm, stridor, hypoxemia); C. Reduced blood pressure or associated symptoms (e.g., hypotonia [collapse], syncope, incontinence); D. Persistent gastrointestinal symptoms (e.g., crampy abdominal pain, vomiting); or
3) Reduced blood pressure after exposure to a known allergen for that patient: A. Systolic blood pressure of less than 90 mmHg or greater than 30% decrease from that person's baseline.
Data collection Relevant materials were surveyed to evaluate the patients' general characteristics, causes of anaphylaxis, clinical characteristics, and treatments.
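The grouping step described above can be sketched in code as follows. This is an illustrative reconstruction rather than the study's actual workflow: the anaphylaxis-specific codes are those listed in the text, whereas the allergy- and symptom-related code sets shown here are placeholder examples standing in for the full lists in Table 1, and the rule of assigning "accurate" whenever any anaphylaxis-specific code is present is an assumption.

```python
# Hypothetical sketch of assigning chart-reviewed anaphylaxis cases to coding groups.
ANAPHYLAXIS_CODES = {"T78.0", "T78.2", "T78.2B", "T78.2C", "T80.5", "T88.6"}
ALLERGY_RELATED_CODES = {"L50.9", "T78.3"}       # e.g. urticaria, angioedema (placeholders)
SYMPTOM_RELATED_CODES = {"R06.0", "R21", "R11"}  # e.g. dyspnea, rash, vomiting (placeholders)

def coding_group(registered_codes, meets_diagnostic_criteria):
    """Return the coding group for one emergency department visit after chart review."""
    codes = set(registered_codes)
    if not meets_diagnostic_criteria:
        return "excluded"            # did not satisfy the WAO diagnostic criteria
    if codes & ANAPHYLAXIS_CODES:
        return "accurate"            # registered under an anaphylaxis-specific code
    if codes & (ALLERGY_RELATED_CODES | SYMPTOM_RELATED_CODES):
        return "inaccurate"          # anaphylaxis registered only under other codes
    return "not captured by the search codes"

# Example usage:
# coding_group({"L50.9", "R06.0"}, meets_diagnostic_criteria=True)  -> "inaccurate"
```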
We also collected demographic data including patient age, gender, transportation to the emergency department, elapsed time from exposure to symptom onset, elapsed time from symptom onset to emergency department arrival, history of allergic diseases, comorbidities, smoking status, and drinking status. Transportation to the emergency department was classified into public ambulance, transfer from another medical facility, and individual transportation. History of allergic diseases was classified into anaphylaxis, asthma, rhinitis, atopy, drugs, and foods. The causes of anaphylaxis were classified into drugs, radiocontrast media, insect stings, food, exercise, and idiopathic factors. For more detailed causes, drugs were categorized into nonsteroidal anti-inflammatory drugs, penicillin, cephalosporin, vaccines, and acetaminophen; insect stings were categorized into bee, ant, and other insects. Foods were classified into seafood, wheat, buckwheat, nuts, egg, and pork. Aside from those, exercise-induced causes, food-dependent exercise-induced causes, and idiopathic causes were also investigated. Regarding clinical manifestations, the patient symptoms were classified into skin and mucosal, respiratory, cardiovascular, gastrointestinal, and neurologic symptoms. In addition, the severity of hypersensitivity reactions, blood pressure at the time of emergency department arrival, and consciousness were surveyed. On the basis of the method reported by Brown, 14 the severity of the hypersensitivity reactions was classified into severe and non-severe grades depending on hypoxia (SpO2 ≤92%), hypotension (systolic blood pressure <90 mmHg), and neurologic symptoms. Regarding prehospital treatment, oxygen supply, fluid administration, and epinephrine administration were investigated. With regard to treatment in the emergency department, oxygen supply, endotracheal intubation, fluid administration, steroid administration, epinephrine administration, bronchodilator administration, and cardiopulmonary resuscitation were investigated. Figure. The numbers of accurately and inaccurately registered anaphylaxis patients. We excluded patients (a) without ICD-10 codes that are associated with anaphylaxis (anaphylaxis, anaphylaxis-related, and symptom-related codes). We further excluded (b) patients with allergy- and symptom-related codes who did not satisfy the diagnostic criteria of anaphylaxis among those with ICD-10 codes associated with anaphylaxis. Statistical analysis Frequency analyses of the registered codes were conducted in both the accurate and inaccurate coding groups. To compare the patients' general characteristics, causes of anaphylaxis, clinical manifestations, and treatments between the 2 groups, univariate comparison analyses were performed using the χ2 test, Fisher's exact test, and Mann-Whitney U test. To identify the factors associated with a high likelihood of registration in the accurate group, factors that reached statistical significance were included in a multivariate logistic regression analysis performed after adjusting for patient gender. The statistical analyses were performed using IBM SPSS Statistics for Windows, version 21.0 (IBM Corp., Armonk, NY, USA). Statistical significance was defined as a P value less than 0.05. Ethics statement This study was exempted from review by the Institutional Review Board because of its retrospective design.
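The multivariable step described above (significant univariate factors entered into a gender-adjusted logistic regression) can be sketched as follows. The study used IBM SPSS 21.0, so this Python version is only an illustrative equivalent; the variable names are assumptions, and the outcome is coded 1 for the accurate coding group and 0 for the inaccurate coding group.

```python
# Hypothetical sketch of the gender-adjusted logistic regression for accurate coding.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_accurate_coding_model(df: pd.DataFrame) -> pd.Series:
    """Fit a logistic regression for accurate anaphylaxis coding and return odds ratios."""
    model = smf.logit(
        "accurate_coding ~ cardiovascular_symptoms + severe_reaction"
        " + epinephrine_in_ed + C(gender)",
        data=df,
    ).fit(disp=False)
    return np.exp(model.params).round(2)  # exponentiated coefficients = odds ratios

# Example usage, assuming df has the binary columns named in the formula:
# odds_ratios = fit_accurate_coding_model(df)
```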
RESULTS During the 5-year study period, of 211,486 total adult patients who presented to the emergency department, we reviewed all medical records of the 63,826 patients with International Statistical Classification of Diseases 10th Revision (ICD-10) codes associated with anaphylaxis, including anaphylaxis, allergy-related, and symptom-related codes. After excluding cases that did not meet the diagnostic criteria of anaphylaxis in each group, of 618 anaphylaxis patients, 161 (26.1%) and 457 (73.9%) were assigned to the accurate and inaccurate coding groups, respectively; 365 patients had allergy-related codes and 92 had symptom codes (Figure). The average ages were 48.0±13.3 and 44.2±14.2 years in the accurate and inaccurate coding groups, respectively. The 2 groups had no difference in gender. Regarding transportation to the emergency department, 68.3% of the accurate group and 88.8% of the inaccurate coding group used individual transportation. The inaccurate coding group had longer elapsed times from exposure to symptom onset and from symptom onset to emergency department arrival. With regard to past history of allergy, 7.5% of the accurate group and 3.5% of the inaccurate coding group had a history of anaphylaxis. Regarding comorbid diseases, 9.9% of the accurate group and 3.7% of the inaccurate coding group had a history of cancer. The 2 groups had no differences in smoking history and alcohol consumption at the time of symptom onset (Table 2). Drugs were the cause of anaphylaxis in 47.8% and 33.9% of the accurate and inaccurate coding groups, respectively. Analysis of the detailed causes revealed differences between the 2 groups in cephalosporin (8.7% vs 4.4%), acetaminophen (5.0% vs 1.8%), and radiocontrast media (13.7% vs 2.0%). Insect stings accounted for 18.0% of the accurate group and 9.0% of the inaccurate coding group. Foods accounted for 26.1% and 42.5% of the accurate and inaccurate coding groups, respectively. The 2 groups had no difference in exercise-induced cases. Idiopathic cases accounted for 6.8% of the accurate group and 12.5% of the inaccurate coding group (Table 3). Among anaphylaxis symptoms, the accurate group had more cardiovascular (77.0% vs 34.8%) and neurologic (29.8% vs 9.8%) symptoms than the inaccurate coding group, whereas the inaccurate coding group had more cutaneous symptoms (92.3%) than the accurate group (74.5%). Severe symptoms occurred in 57.1% of the accurate group and 9.8% of the inaccurate coding group. Non-alert consciousness was present in 14.3% and 0.9% of the patients in the accurate and inaccurate coding groups, respectively. Regarding prehospital treatment, the accurate group more often received oxygen supply (4.3% vs 1.1%) and epinephrine (2.5% vs 0%) than the inaccurate coding group. Regarding emergency department treatment, the accurate group had more oxygen supply (34.8% vs 9.8%), endotracheal intubation (4.3% vs 0%), and epinephrine use (57.8% vs 14.7%) than the inaccurate coding group. Fluid administration, steroid use, and bronchodilator use did not differ between the 2 groups (Table 4). The factors with statistical significance in the univariate comparison analysis were included in the multivariate logistic regression analysis after adjusting for gender. The results indicated that anaphylactic patients with cardiovascular symptoms, severe symptoms, and epinephrine use in the emergency department were more likely to be registered with anaphylaxis codes (Table 5).
DISCUSSION Anaphylaxis is a hypersensitivity reaction ranging from urticaria to fatal systemic cardiovascular compromise. Its symptoms and signs vary, and its causal relation with allergens is not always clear. For this reason, affected patients may be registered under other codes related to their symptoms and signs rather than under anaphylaxis codes. Even if anaphylaxis patients are registered under urticaria or angioedema symptom-related codes rather than anaphylaxis codes, the treatment provided is not necessarily inappropriate. Nevertheless, registration of patients under codes other than anaphylaxis codes makes it difficult to accurately determine the incidence of anaphylaxis. To our knowledge, there is no research on anaphylactic patients registered under other related codes. Therefore, future research on anaphylaxis should also consider inaccurately registered anaphylactic patients, as shown in this study. In this study of patients who presented to the emergency department over 5 years, 618 patients met the diagnostic criteria for anaphylaxis; of these, 457 in the inaccurate coding group were registered under codes other than anaphylaxis codes, a number greater than that in the accurate group (161 patients). In the inaccurate coding group, the most common registered code was urticaria (173 patients), followed by angioedema (130 patients) (Figure). This finding reflects the fact that the skin features of urticaria and angioedema are easily observed with the naked eye. Additionally, compared with objective signs, subjective symptoms such as abdominal pain and shortness of breath can be unclear or mild; therefore, patients meeting the diagnostic criteria were likely to be registered under urticaria or angioedema, which present more clearly than anaphylaxis. In particular, if patients had clear skin features but otherwise mild symptoms, they were often registered under urticaria. Patients with clear mucosal edema accompanied by respiratory symptoms were often registered under angioedema. In the inaccurate coding group, 92 patients (14.9%) were registered under codes in which the symptoms and signs are directly specified. The largest group was registered under respiratory symptom codes (31 patients), followed by skin and mucosal symptom codes (29 patients). This is most likely because the medical staff did not accurately understand the diagnostic criteria of anaphylaxis and make the diagnosis; thus, the patients were registered under their chief complaint as a symptom code. Therefore, to accurately survey the anaphylaxis incidence rate, it is necessary to educate the medical staff of emergency departments to accurately understand the anaphylaxis diagnostic criteria. Previous studies reported the principal triggers of anaphylaxis to include foods, insect stings, and drugs; however, there were differences depending on the study population, study design, and geographic area. 4,10,15-19 In this study, the causes of anaphylaxis in the accurate group included drugs, foods, and insect stings in this order of prevalence, compared with foods, drugs, and idiopathic anaphylaxis in the inaccurate coding group. This result was similar to those of previous studies. In the accurate group, radiocontrast media accounted for a significantly larger proportion of causes. This was because the administration of radiocontrast media during examinations in the emergency department triggered anaphylaxis, and consequently the causal relation was clear.
In the inaccurate coding group, idiopathic anaphylaxis accounted for a significantly larger proportion. Skin signs are the most characteristic symptoms and signs of anaphylaxis, frequently accompanied by respiratory, gastrointestinal, and cardiovascular symptoms. 10,16,20-23 In this study, cardiovascular signs, such as hypotension, were most common in the accurate group, followed by skin signs; in the inaccurate coding group, skin signs were most common, followed by respiratory symptoms. The reason for these differences was that the medical staff recognized patients with severe reactions such as hypotension as having anaphylaxis and registered them using an anaphylaxis code; however, patients with relatively mild skin signs or mildly labored respiration were judged to meet the diagnostic criteria of anaphylaxis but were registered under other codes. This supports the finding that the accurate group had significantly higher frequencies of severe symptoms and non-alert consciousness. Anaphylaxis is a medical emergency, and prompt management is of vital importance. Epinephrine is an important drug for the initial management of anaphylaxis, and its delayed administration may lead to patient death. 13,24 This study also revealed that the accurate group had significantly higher use of oxygen supply and epinephrine administration. In particular, patients who were administered epinephrine were accurately registered with anaphylaxis codes 4.3 times more often than those who were not (Table 5). This difference indicates that patients who received epinephrine experienced severe reactions, such as hypotension or hypoxia. As described earlier, the medical staff clearly recognized these severe reactions as anaphylaxis and registered the patients with anaphylaxis codes. Medical practitioners in the emergency department tend to focus on patients with severe anaphylaxis who present with characteristic symptoms and require specific treatment, as shown in this study. However, anaphylaxis can present with a wide range of symptom severity, from mild to fatal. No case of anaphylaxis should be overlooked, as anaphylaxis has a high probability of worsening within a short period. Therefore, it is important to continuously educate the medical staff in the emergency department about the manifestations and management of anaphylaxis. To accurately diagnose patients with mild symptoms and signs as anaphylactic patients, the medical staff in the emergency department need to understand the diagnostic criteria of anaphylaxis and accurately register anaphylaxis codes. As shown in this study, there are cases in which patients who met the diagnostic criteria of anaphylaxis were registered under other codes. Therefore, to identify anaphylactic patients, it is necessary to search for study patients including those registered with anaphylaxis-related codes. The results of this study cannot be generalized, as this was a retrospective study conducted at a single university hospital. Further prospective multicenter studies will be needed to overcome this limitation. The study subjects were only those patients who presented to the hospital emergency department and did not include outpatients or patients who developed anaphylaxis while hospitalized. Given that anaphylaxis occurs acutely, the initial treatment is likely to be provided in the emergency department rather than in outpatient clinics, except for those patients who develop anaphylaxis while hospitalized.
To search for anaphylactic patients, this study collected the disease codes used in previous works and the symptom codes that satisfied the diagnostic criteria of anaphylaxis. Therefore, it may have excluded anaphylactic patients who were registered under other disease codes. This study also focused on the registered disease codes for anaphylactic patients in the emergency department of a single university hospital. Rates of anaphylaxis code registration may be higher or lower in the emergency departments of other hospitals, making it difficult to generalize the results of this study. Nevertheless, this study shows the potential for underestimation of the anaphylaxis frequency and incidence rates reported in previous studies on anaphylaxis. This study revealed that among adult anaphylactic patients who presented to the emergency department, those registered inaccurately outnumbered those registered accurately, and that they were registered not only under allergy-related codes but also under symptom-related codes. Patients with cardiovascular symptoms, severe symptoms, and epinephrine use in the emergency department were highly likely to be accurately registered with anaphylaxis codes.
2018-04-03T03:17:35.726Z
2018-01-15T00:00:00.000
{ "year": 2018, "sha1": "f01a02d3c642c28106ee0025d93bc55843866686", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5809762?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f01a02d3c642c28106ee0025d93bc55843866686", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
42619972
pes2o/s2orc
v3-fos-license
Time for evidence-based Ayurveda: A clarion call for action Several thought leaders in Ayurveda and health sciences infer that the sector is in crisis and facing formidable challenges. The inference is based on the unimpressive performance of the sector on all fronts, education, research, clinical practice, industry, and regulation. Reasons for the crisis are complex. Visible, right on the surface reasons, include proximate causes such as the conservative and short sighted attitudes of entrenched administrators, educators, scientists, practitioners, industry, and above all lack of strategic vision and political will at the government level. At a deeper more invisible level is the cultural and epistemological divide between globally dominant western science and still marginalized Indian knowledge systems. The colonial legacy of decrying Indian sciences has yet to be outgrown. More reasons for arrested progress Ayurveda derive from history, especially that of the last millennium when our cultural and intellectual freedom and traditions were trampled and state patronage was ceaselessly denied by foreign rulers. The renaissance, in Europe, and the subsequent developments in science, technology and medicine, almost bypassed the subjugated India. Even after independence ''mainstreaming Ayurveda in national health'' has been a loud political slogan and even from the 1 st to the most recent 12 th 5-Year Plan, it has been bereft of any substantive funding, innovative programs, smart strategy, or a clear roadmap. As a consequence, performance of the independent Department of AYUSH over the last 18 years, since its inception in 1995, has been dismal. While other departments and councils like Council for Scientific and Industrial Research (CSIR), Department of Science and Technology (DST), Department of Biotechnology (DBT), and Indian Council of Medical Research (ICMR), have been governed transparently and professionally by renowned scientists, the Department of AYUSH continues to be at the mercy of bureaucracy. The immense loss of opportunities is not even recognized. The last three years in particular were unusually damaging, despite a significant global demand and extra-AYUSH efforts for evidence-based Ayurveda. TIME TO ADD VALUE Despite these formidable obstacles many significant intellectual efforts have taken place as interpretations, reinterpretations, and critical scholarly commentaries duly recognized by scholars like Meulenbeld. Ayurveda draws its philosophies from Darshanas, which teach relentless and objective search for the truth. The Darshanas expect reproducible knowledge earned through rigorous pramanabased and ethical practices. Charaka and Sushruta laid foundations for logical analysis, sequential nidana and its experiential reversal methods with an emphasis on practical management of patients. Vagbhata reconstructed the texts according to contemporary needs. These Samhitas, in hundreds of verses, explain methods of studying causeeffect relations, evaluation of true associations, and unbiased meticulous observations. But these classics have to be rewritten incorporating the major medical discoveries of the last 2 centuries. The arrested growth of Ayurveda has to be compensated by incorporating the basics of biology, chemistry, and physics. Ayurvedic physicians should not be deprived of major disciplines like microbiology, immunology, biochemistry, genetics, pathology, imaging techniques, endoscopies, and minimal access surgery. 
DARE TO EXPERIMENT Ayurveda in the 21st century needs a fresh wave of new ideas, adventure, and liberation in order to play its required role in the newly emerging era of medical pluralism. We need frank and objective introspection to ask intrepid questions in the same spirit as the Upanishads, where students were encouraged to question their mentors. The Apta are revered because of their unbiased knowledge and minds open to an inquisitive approach. The inquisitive culture in Ayurveda has deteriorated over the centuries. We can no longer live on the glory of the past. The critical outlook of Ayurveda must be regained to build a progressive future. We need to challenge assumptions, try to re-interpret meanings in new contexts and, most importantly, dare to experiment to generate fresh evidence [1]. Today's evidence-based medicine (EBM) expects exactly the same. EVIDENCE-BASED AYURVEDA J-AIM wishes to reiterate its commitment to evidence-based Ayurveda and endorses the need to evolve epistemologically appropriate models to achieve this goal. [2] A recently published special research monograph vividly discusses evidence-base issues in the context of AYUSH. [3,2] Evidence-based practice comprises the best research evidence, clinical experience, and patients' preferences. Every healthcare system needs to be evidence based, and Ayurveda should be no exception. However, when pleading for evidence, the concept of evidence also needs to be defined appropriately in the right context. Issues related to the nature of evidence, whether primary or secondary, and whether it applies to the science of Ayurveda as a whole or is limited only to Ayurvedic drugs, should be thoroughly debated. New scientific evidence is genuinely important, though it is often limited to evidence for the safety and efficacy of AYUSH drugs. As rightly stated by senior thought leader R.H. Singh, we need more research on the development of appropriate research methods rather than aimless borrowing of outdated or ill-fitting conventional biomedical methods, which may lead to distortion of Ayurveda with no benefit to either side. We do not mean to abandon biomedical or therapeutic research, but we must invent methods appropriate to generating scientific evidence for Ayurveda. Sadly, little has happened in this direction. Now is the time for action. An innovative R & D path based on reverse pharmacology, as proposed by another thought leader, Ashok Vaidya, is receiving greater acceptance, especially now that the pharmaceutical industry is also facing an innovation deficit crisis [4]. NEW MODELS FOR EVIDENCE We seem to have better consensus on the urgent need for newer models and methods for evidence-based Ayurveda. [5] Arguably, evidence need not always be restricted to randomized clinical trials (RCTs). Simple and clearly defined research questions are better answered with hierarchical evidence models, in contrast to the assessment of complex interventions, which needs corroboration of observational research and RCT methods. [6] The rigid hierarchy of evidence, in which meta-analysis is considered topmost, may not be relevant to Ayurveda, simply because of the absence of sufficient clinical data. While the hierarchical evidence model can be challenged, the resolve to do so may not succeed unless we suggest other options for systematic studies. We need to take on the onus of developing and adopting appropriate models in practice. The objective of any research design should be to assess causality and minimize bias, chance effects, and confounders.
Evidence-based Ayurveda may need appropriate blends of modern rigorous trial methods and the strengths of observational studies. Ayurveda research can also benefit from the STROBE (Strengthening the Reporting of Observational studies in Epidemiology) initiative, which involves methodologists, epidemiologists, statisticians, researchers, and journal editors in strengthening the planning and reporting of observational studies. [7] The Ayurveda sector has to take cognizance of important initiatives in the methodological domain and develop appropriate methods. Instead, today's Ayurveda sector seems to be trapped in copying modern medicine protocols, many times without understanding the contrasting epistemologies and principles of the respective systems. Scientometrics of published scientific papers reveals that many researchers have applied existing models without confirming their relevance to Ayurveda. Obviously, the results of such ill-designed studies are unlikely to add any value either to science or to Ayurveda. Of the more than 3500 papers on Ayurveda in PubMed, only 15 are case series and observational studies. Among case reports published in reputed journals, 79 concern toxicity of Ayurvedic drugs, with practically none on safety and efficacy. Five lakh practitioners, 200 colleges, national institutes, and a legacy of hundreds of years have not resulted in any noteworthy paper discussing systematic clinical practice data in reputed peer-reviewed journals. Gurudev Tagore once remarked, "What is huge is not necessarily great and pride is not everlasting." Efforts to make Ayurveda more open, visible, and respectable in the scientific literature should not be further delayed. We must either create our own open access scientific repositories or publish our data in scientific databases like Cochrane, where currently Ayurveda is almost nonexistent. The efforts to compile Ayurveda research at postgraduate and doctorate levels by M.S. Baghel of Gujarat Ayurved University and A. K. Sharma of NIA Jaipur, the Digital Helpline for Ayurveda Research Articles (DHARA), and RUDRA by Arya Vaidya Pharmacy (AVP) Coimbatore have made a beginning. However, the quality and impact of such postgraduate dissertations remain to be evaluated. While the Traditional Knowledge Digital Library (TKDL) was a timely effort to protect intellectual property rights, we need to develop its knowledge base and build comprehensive libraries integrating other efforts like AyuSoft. Ayurveda scholars and practitioners must be given credit for protecting its knowledge during the dark periods; however, it is high time now to face the realities of today. Ayurveda practice needs to be dynamic, scientific, ethical, and integrative. It must be liberated from emotional, pride-based, blind-following practices and refrain from spurious advertisements, mysticism, and self-propagation. Charaka also condemns quackery among practitioners as "Rogabhisar Vaidya," which literally means "a doctor who spreads diseases rather than providing health." The expectations of EBM are in no way different from the qualities of a good doctor detailed in the Samhitas. The ability to evaluate the strengths and limitations of existing knowledge is necessary for rational decisions. EVIDENCE FROM AYURVEDA PRACTICE Discordance between the teaching, training, research, and clinical practice of Ayurveda may have led to its present stagnancy and complacency.
Diversity in styles of practice, schools of thoughts, and Gurukul training, are strengths of Ayurveda; however, they also pose challenges for research. The role of Vaidyas in knowledge generation is crucial as they carry principles and practice of Ayurveda and gain first-hand experience of clinical outcomes and patients' perceptions. J-AIM initiated an interesting discussion on observational therapeutics as suitable evidence model for Ayurveda research [8] as also its advocacy for Vaidya-Scientists. [9] Research on clinical practice is a challenge and we may face initial hurdles in documentation, data retrieval, and standardization and analysis. However, it is important to initiate the process and start moving in the right direction. The few exemplary efforts in this direction must be recognized. The science initiatives in Ayurveda and Ayurvedic biology led by M.S. Valiathan, efforts of Saravu Narahari of Institute of Applied Dermatology and Terence Ryan of Oxford in the field of integrative dermatology, whole system trials done by Ram Manohar of AVP in collaboration with Daniel Furst of UCLA, and another systematic drug development effort through robust RCTs in rheumatology by Arvind Chopra and colleagues from Centre for Rheumatic Diseases, Pune. Recent efforts to develop CONSORT like reporting standards for Ayurveda are also important. J-AIM wishes to recognize significant work undertaken by renowned biostatistician Ashwini Mathur with help of Prathap Tharyan, Director of the South Asian Cochrane Network. We commend the efforts of former AYUSH secretary Shailaja Chandra for publishing the status of Indian Medicine and Folk Healing in two volumes and efforts of Pratik Debnath to establish Gananath Sen Institute of Ayurvidya and Research in Kolkata, as well as Darshan Shankar and his colleagues at FRLHT in gaining its University status recently conferred by the Government of Karnataka. The new University will be known as ''Institute for Transdisciplinary Research in Health Science and Technology.'' J-AIM INITIATIVE We propose active contributions from the practitioner -Vaidya community in this process of evidence building. J-AIM invites perspective papers, thought leadership articles, case series, case reports, and data driven debates based on Ayurveda practice. We plan to appoint a team of independent experts to study such data and recognize original contributions publically by establishing national awards. J-AIM will also facilitate and prioritize publication of such selected data. Our reviewers and editorial team will provide methodological and data analysis support for such practice based evidence research. For this purpose, we suggest three phases and categories in the following order priority: first classical Ayurveda interventions for public health, primary care and difficult to treat diseases where modern medicine has limitations; second, Ayurvedic interventions in chronic, psychosomatic, degenerative conditions to be included as complementary and adjuvant therapies; third, studies on integrative approaches where Ayurveda and modern medicine can add value by offering maximum benefit to patients and the community. J-AIM will form a group of transdisciplinary experts to make indicative list for inclusion and exclusion of diseases, disorders, syndromes, or symptoms in each of these phases. We will also facilitate development of suitable formats for case reports, case series, cohort, case controlled, observational, or controlled clinical studies. 
TIME FOR ACTION We trust that all these encouraging and timely developments will move evidence-based Ayurveda towards being the future medicine for the world. For the present, we need strategy, efficiency, and real action. J-AIM will be happy to facilitate collaborations with existing efforts to systematically document clinical practice, experimental, and clinical data as required for an evidence base. We realize the intensity and magnitude of effort required for such ambitious initiatives. We sincerely hope that with the help of associated experts, mentors, and well-wishers this will be possible. We also hope that such a national-level, voluntary, self-motivated effort will finally help ailing patients, who have the right to receive effective, safe, accessible, and affordable healthcare. J-AIM welcomes views, critiques, and comments on this call for action. To mark the 150th birth anniversary of Swami Vivekananda and remembering his clarion call, I wish to end the editorial with words of wisdom from the Katha Upanishad: "Uttishthata, Jaagrata, Prapya Varan Nibodhata." ACKNOWLEDGMENTS I thank senior editorial colleagues R.H. Singh, Darshan
2017-03-31T20:55:00.346Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "719407e3bbc9f4b9e7630f063685ecdca300a7ca", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc3737448", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "3f4a1dfb6b8422fcf60d690f49812ad5972bd853", "s2fieldsofstudy": [ "Medicine", "Philosophy" ], "extfieldsofstudy": [ "Medicine" ] }
15606850
pes2o/s2orc
v3-fos-license
A nuclear-derived proteinaceous matrix embeds the microtubule spindle apparatus during mitosis A live-imaging approach is used to demonstrate that nuclear proteins reorganize during mitosis to form a highly dynamic, viscous spindle matrix that embeds the microtubule spindle apparatus, stretching from pole to pole. INTRODUCTION During cell division the entire nucleus undergoes a dramatic reorganization as the cell prepares to segregate its duplicated chromosomes. For many years the prevailing view on organisms possessing an open mitosis has held that the nucleus completely disassembled during early mitotic stages, thus enabling cytoplasmic microtubules emanating from the separated centrosomes to form a mitotic spindle. This cytocentric view largely discounted any nuclear contributions to the formation and/or function of the mitotic spindle Simon and Wilson, 2011;Sandquist et al., 2011). However, in Drosophila we recently identified two nuclear proteins, Chromator (Rath et al., 2004;Ding et al., 2009;Yao et al., 2012) and Megator Lince-Faria et al., 2009), from two different nuclear compartments that interact with each other and redistribute during prophase to form a molecular complex that persists in the absence of polymerized tubulin (Johansen et al., 2011). Chromator is localized to polytene chromosome interbands during interphase (Rath et al., 2004(Rath et al., , 2006Yao et al., 2012), whereas Megator occupies the nuclear rim and the intranuclear space surrounding the chromosomes (Zimowska et al., 1997;Qi et al., 2004). Chromator has no known orthologues in other species; however, Megator is the homologue of mammalian Tpr (Zimowska et al., 1997). The Megator/ Tpr family of proteins is highly conserved through evolution, and structural homologues are present from yeast to humans (De . Moreover, in addition to Megator, the Aspergillus Mlp1 and human Tpr spindle matrix proteins have a shared function as spatial regulators of spindle assembly checkpoint proteins during metaphase (Lee et al., 2008;Lince-Faria et al., 2009). Both Chromator and Megator are essential proteins required for normal mitosis to occur in apparatus from pole to pole. The findings further suggest that the spindle matrix may directly contribute to the viscoelastic micromechanical properties (Shimamoto et al., 2011) of the spindle. RESULTS The spindle matrix embeds the microtubule spindle apparatus Figure 1 shows time-lapse imaging of Chromator-green fluorescent protein (GFP) and tubulin-mCherry during mitosis in syncytial Drosophila Lince-Faria et al., 2009;Ding et al., 2009). These findings suggest that these proteins are molecular components of the hitherto-elusive spindle matrix that, based on theoretical considerations of the requirements for force production, has been proposed to help constrain and stabilize the microtubule-based spindle apparatus (Pickett-Heaps et al., 1982;Pickett-Heaps and Forer, 2009). Here we demonstrate that this nuclear-derived "internal" spindle matrix is a highly dynamic, self-contained structure that embeds the microtubule spindle FIGURE 1: Confocal time-lapse analysis of Chromator-GFP during mitosis in syncytial Drosophila embryos. (A) Relative dynamics of Chromator-GFP (green) and tubulin-mCherry (red) during a complete mitotic cycle. Scale bar, 10 μm. (B) Chromator-GFP at metaphase. Arrowheads indicate the gap between Chromator-GFP's spindle matrix and centrosomal localization. Scale bar, 10 μm. (C) Relative localization of Jupiter-GFP (green) and tubulin-mCherry (red) at metaphase. 
Scale bar, 5 μm. (D) Relative localization of Chromator-GFP (green) and tubulin-mCherry (red) at metaphase. Scale bar, 5 μm. (E , G) Line-scan plots of pixel intensity across the spindle along the white lines in C and D for Jupiter-GFP/ tubulin-mCherry and Chromator-GFP/tubulin-mCherry, respectively. The images in C and D are both from a single confocal optical plane. The asterisks indicate the likely position of microtubule K-fibers. (F, H) Plots of the correlation between pixel intensity between Jupiter-GFP/tubulin-mCherry and Chromator-GFP/tubulin-mCherry across the spindle along the white lines in C and D, respectively. The regression line and the value of Pearson's coefficient are indicated for each plot. strongly correlated (r = 0.73 ± 0.10, n = 17; Figure 1F), whereas pixel intensities in line scans of Chromator-GFP and tubulin-mCherry showed little correlation (r = 0.32 ± 0.07, n = 17; Figure 1H). Taken together, these observations are consistent with the hypothesis that the Chromator-defined spindle matrix is part of a viscous, gel-like structure that embeds the microtubule-based spindle apparatus. Furthermore, the findings suggest that although this matrix forms independently of microtubules, its morphology and dynamic behavior during mitosis are governed by microtubule spindle dynamics. To further test this hypothesis, we depolymerized tubulin by injecting colchicine into embryos expressing GFP-Chromator and tubulin-mCherry or histone H2Av-RFP before prophase (Figure 2; Supplemental Movies S4 and S5). Under these conditions Chromator still relocates from the chromosomes to the matrix (Figure 2, A and B); however, in the absence of microtubule spindle formation the Chromator-defined matrix did not undergo any dynamic changes but instead statically embedded the condensed chromosomes for extended periods (>20 min). The movement observed within the matrix is caused by Brownian motion of the chromosomes. Of interest, Chromator under these conditions still relocated to the centrosomes, suggesting that this is a microtubule-independent process. Control embryos injected with vehicle only underwent normal Drosophila embryos. The results show that Chromator has reorganized away from the chromosomes as they begin to condense and fills the entire nuclear space before microtubule invasion ( Figure 1A and Supplemental Movie S1; see also Supplemental Movie S5 for a clearer view of this transition). As spindle microtubules form, Chromator distribution attains a spindle-like morphology while also translocating to the centrosomes ( Figure 1A). At anaphase and telophase Chromator dynamics closely mirror that of the microtubules before relocating back to the chromosomes in the forming daughter nuclei. This dynamic behavior of Chromator during mitosis is very different from microtubule-associated proteins (MAPs) such as Jupiter (Karpova et al., 2006;Supplemental Movie S2). Although Chromator is present throughout the spindle, its poleward boundary does not extend all the way to the centrosome ( Figure 1B and Supplemental Movie S3), as also observed for the putative spindle pole matrix protein NuMA (Radulescu and Cleveland, 2010). Of interest, in line scans of pixel intensity across the spindle we found that peak intensities of the MAP Jupiter coincide with that of microtubules, indicating colocalization (Figure 1, C and E), whereas peak intensities of Chromator are notably distinct from those of microtubules and in many cases show an alternating pattern (Figure 1, D and G). 
Moreover, pixel intensities in line scans across the spindle for Jupiter-GFP and tubulin-mCherry were FIGURE 2: Spindle matrix dynamics after colchicine injection before nuclear envelope breakdown. (A) Two image panels from the beginning and end of a time-lapse sequence of Chromator-GFP (green) and tubulin-mCherry (red) after colchicine injection. (B) Two image panels from the beginning and end of a time-lapse sequence of Chromator-GFP (green) and histone H2Av-RFP (red). (C) Plot of the average pixel intensity in regions of interest (ROIs) outside the nucleus (red) and inside the nucleus (blue) as a function of time in a colchicine-injected embryo. The two image inserts correspond to the area outlined by a white boxes in A before and after NE breakdown, respectively. The ROIs are indicated by white squares. The difference in expression levels of Chromator-GFP in A and B is due to use of high-and low-expression driver lines, respectively. centrosomes, and NE breakdown and dispersal of nuclear lamins such as lamin B (lamin Dm0 in Drosophila) is not completed until just before the end of metaphase (Stafstrom and Staehelin, 1984;Paddy et al., 1996;Civelekoglu-Scholey et al., 2010). This raises the question of whether the NE or the nuclear lamina presents a diffusion barrier during the early stages of mitosis and thus may contribute to the confinement of spindle matrix proteins. To test whether this is the case, we injected fluorescein-labeled dextrans of molecular mass 70, 500, or 2000 kDa, which are up to 10 times the molecular mass of the spindle matrix proteins Chromator and Megator, into tubulin-mCherry-expressing embryos treated with colchicine. The results showed that all three molecular-mass dextrans entered the nuclear space after NE breakdown on approximately the same timescale as tubulin-mCherry (Figures 3 and 4), indicating the absence of any significant diffusion barriers to spindle matrix proteins. Furthermore, in colchicine-injected embryos lamin B disperses within 2 min, on a timescale similar to that of uninjected embryos ( Figure 5), and does not accumulate in the nuclear space. In contrast, the Chromator-defined matrix persists around the chromosomes for at least 10 times longer. Taken together, these findings suggest that the Chromator-defined "internal" spindle matrix is a distinct and independent structure from both the microtubule-based spindle apparatus and from the lamin B-containing spindle envelope previously described in Xenopus egg extracts (Zheng, 2010) and that the spindle matrix is held together by cohesive molecular interactions within the matrix. The 70-and 500-kDa dextrans incorporate into the spindle matrix Of interest, we noted that 70-and 500-kDa dextrans accumulated within the nuclear space in a way similar to tubulin in colchicine-injected embryos, as illustrated in Figure 3 for 500-kDa dextran. This suggested that branched macromolecular polysaccharides can be incorporated into the spindle matrix. To further explore this possibility, we injected fluorescein-conjugated 70-, 500-, and 2000-kDa dextrans into tubulin-mCherry-expressing embryos without colchicine treatment. 
As exemplified in Figure 4A for 70-kDa dextran, both 70-and 500-kDa dextrans accumulate in the nuclear space before microtubule spindle formation, and its dynamics during mitosis until the end of telophase, when it gets excluded from the forming daughter nuclei (Supplemental Movie S7), closely resembles that of the spindle matrix proteins Chromator and Megator (Supplemental Movies S1 and S8). In contrast, although the 2000-kDa dextran did enter and equilibrate within the nuclear space at the time of NE breakdown, it did not show any enrichment within the spindle region ( Figure 4B). We speculate that this difference between 70-and 2000-kDa dextrans is due to potential size exclusionary properties of the spindle matrix. These data provide additional support for the concept of a viscous matrix made up of macromolecules enriched in the spindle region by cohesive interactions. The amino-terminal region of Megator is required for its spindle matrix localization Megator is a large, 260-kDa protein (Mtor-FL) with an extended amino-terminal coiled-coil domain (Mtor-NTD) and an unstructured carboxy-terminal domain (Mtor-CTD). Coiled-coil domains are known protein interaction domains, as previously demonstrated for the spindle pole matrix protein NuMA (Radulescu and Cleveland, 2010). Therefore, to explore whether Megator's coiled-coil domain is required for Megator's spindle matrix localization, we conducted time-lapse imaging of full-length, yellow fluorescent protein (YFP)tagged Megator (Mtor-FL), green fluorescent protein (GFP)-tagged mitosis indistinguishable from wild-type preparations (Supplemental Movie S6). Moreover, as illustrated in Figure 2C, unpolymerized tubulin accumulates within the nuclear space, as measured by relative average pixel intensity, to 1.6 ± 0.2 (n = 12, from five different preparations) times the levels outside the nuclear space in the colchicine-injected embryos (see also Figure 2, A and C, and Supplemental Movie S4). This finding suggests the presence of one or more tubulin-binding proteins within the spindle matrix. The nuclear envelope and lamin B do not contribute to the internal spindle matrix Drosophila embryos have semiopen mitosis in which the nuclear envelope (NE) initially breaks down only in the region of the FIGURE 3: The 500-kDa dextran enters and accumulates in the nuclear space on the same timescale as tubulin in colchicine-injected embryos. (A) Image panels from a time-lapse sequence from a tubulin-mCherry (red)-expressing embryo coinjected with fluoresceinlabeled dextran of molecular mass 500 kDa (green) and colchicine. Time is in seconds. Scale bar, 10 μm. (B) Plot of the normalized average pixel intensity in ROIs outside the nucleus and inside the nucleus of tubulin (red) and 500-kDa dextran (green) as a function of time in a colchicine-injected embryo. The solid and stippled lines correspond to areas inside and outside a nucleus, respectively, as outlined by the white boxes in A. The approximate time of NE breakdown is indicated by an arrow. mosomal localization during interphase. Furthermore, if microtubules are prevented from forming by colchicine injection before prophase, both Mtor-FL and Mtor-NTD still relocate to the spindle matrix and, as with the Chromator-defined matrix, do not undergo any dynamic changes but statically embed the condensed chromosomes ( Figure 6E and Supplemental Movie S10). In contrast, under these conditions Mtor-CTD disperses on a rapid timescale in <2 min after NE breakdown ( Figure 6E and Supplemental Movie S11). 
These findings provide further evidence that the cohesiveness of the spindle matrix depends on specific molecular interactions among the spindle matrix proteins. Depolymerization of microtubules at metaphase collapses but does not disassemble the spindle matrix To test the dependence of the spindle matrix on microtubule dynamics, we injected colchicine into Chromator-GFP-and Mtor-CTD, and GFP-tagged Mtor-NTD, together with histone H2Av-RFP in syncytial embryos ( Figure 6). As illustrated in Figure 6A and Supplemental Movie S8, Mtor-FL localizes to the nuclear interior, as well as to the nuclear rim, at interphase and to the spindle matrix at metaphase. In contrast, Mtor-CTD, which contains the native nuclear localization signal (NLS), is diffusively present in the nucleoplasm without detectable nuclear rim localization at interphase and is absent from the spindle region at metaphase ( Figure 6B and Supplemental Movie S9). Mtor-NTD is present at the nuclear rim with no or very little interior nuclear localization but relocalizes to the spindle matrix at metaphase ( Figure 6C). The localization patterns of Mtor-FL, Mtor-NTD, and Mtor-CTD at interphase are illustrated at higher magnification in Figure 6D. These data suggest that the amino-terminal coiled-coil domain of Megator is required for localization to both nuclear pore complexes and to the spindle matrix, whereas Megator's carboxy-terminal domain facilitates Megator's interchro- matrix physically be linked to microtubules and that changes to the shape and form of the matrix in turn are governed by microtubule dynamics. One possible mechanism to accomplish this is exemplified by NuMA, which, together with dynein, functions as a spindle pole matrix that tethers and focuses the majority of spindle microtubules to the poles largely independently of centrosomes (Dumont and Mitchison, 2009;Radulescu and Cleveland, 2010). Thus we propose that a spindle pole matrix may be a constituent of a larger pole-to-pole matrix that couples this matrix to microtubule dynamics. In Xenopus egg extracts it was suggested that a membranous lamin B-containing envelope derived from the nuclear membrane could be part of the spindle matrix (Tsai et al., 2006;Zheng, 2010). However, our findings clearly demonstrate that the "internal" matrix as defined by the Chromator and Megator proteins is physically distinct from such a structure and that the internal matrix persists after dispersal of lamin B in nuclei arrested at metaphase. Nonetheless, the interplay between microtubules, the spindle matrix, and NE dynamics during mitosis is likely to be finely tuned and mutually dependent (Zheng, 2010). For example, evidence has been provided that the NE and lamin B in systems with semiopen mitosis may contribute to the robustness of spindle function and assembly during prometaphase and that the gradual disassembly of the lamin B envelope is coupled to proper spindle maturation during metaphase (Civelekoglu-Scholey et al., 2010). In this study we present evidence by injection of high-molecular weight dextrans that the disassembling NE and nuclear lamina after their initial breakdown are not likely to present a diffusion barrier to most known proteins. Of interest, even in the absence of such a diffusion barrier we show that free tubulin (possibly as α/β-tubulin dimers) accumulates coextensively with the spindle matrix protein Chromator in colchicine-treated embryos independently of tubulin polymerization. 
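The 1.6 ± 0.2-fold nuclear accumulation of unpolymerized tubulin quoted above is a ratio of average ROI intensities aggregated across nuclei and embryos. A minimal sketch of that bookkeeping is given below; the intensity values are made-up placeholders, not measured data.

```python
# Sketch: fold-enrichment of free tubulin inside vs. outside the nuclear space,
# reported as mean ± SD across pooled ROIs. Values below are illustrative only.
import numpy as np

# One (inside, outside) average-intensity pair per nucleus, pooled across embryos
inside = np.array([162, 170, 149, 181, 158, 165, 171, 150, 160, 175, 168, 155], dtype=float)
outside = np.array([100, 108, 96, 110, 99, 104, 107, 95, 102, 109, 105, 98], dtype=float)

ratios = inside / outside
print(f"fold enrichment = {ratios.mean():.2f} ± {ratios.std(ddof=1):.2f} (n = {ratios.size})")
```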
We propose that this enrichment is dependent on one or more proteins within the spindle matrix with tubulin-binding activity. A similar enrichment within the nuclear region of free tubulin after NE breakdown has recently been reported in Caenorhabditis elegans embryos (Hayashi et al., 2012). The enhanced accumulation of free tubulin within the nascent spindle region may serve as a general mechanism to promote the efficient assembly of the microtubule-based spindle apparatus (Hayashi et al., 2012) and be mediated by spindle matrix constituents. The accumulation of tubulin in the nucleus under microtubule depolymerization conditions is not a general property of cytoplasmic proteins, as exemplified by the dynactin complex component DNC-1 in the nematode (Hayashi et al., 2012). A surprising finding of the present study is that nonproteinaceous polysaccharide macromolecules such as dextrans have the ability to be incorporated into the spindle matrix. However, the results of previous studies showed that the spindle pole protein NuMA is highly poly(ADP-ribosyl)ated (Radulescu and Cleveland, 2010) and that poly(ADP-ribose) is required for spindle assembly and function in Xenopus (Chang et al., 2004). Thus it is possible that the size, tubulin-mCherry-expressing embryos during metaphase. As shown in the image sequence of Figure 7 and in Supplemental Movie S12, as the microtubules undergo depolymerization, the Chromator-defined matrix contracts and coalesces around the chromosomes. The reduction in the length of the spindle matrix was almost 60% from when the first image was obtained after colchicine injection to when microtubules were depolymerized ( Figure 7B). This suggests that the spindle matrix is stretched by the microtubules. A similar result was obtained in S2 cells expressing the spindle matrix protein Megator , suggesting that the properties of the spindle matrix described here are a general feature of mitosis and not confined to only syncytial nuclei. Furthermore, the expectation would be that if microtubules were stabilized at metaphase instead of depolymerized, then the shape and form of the spindle matrix would not change. To test this prediction, we injected the microtubule-stabilizing agent Taxol into Mtor-FL-and tubulin-mCherry-expressing embryos during metaphase. As shown in Supplemental Movie S13, under these conditions both the spindle matrix and the microtubules do not undergo any dynamic changes but maintain their metaphase fusiform spindle morphology for extended time periods of >14 min. DISCUSSION In this study we showed that at least two proteins from different nuclear compartments reorganize during mitosis to form a spindle matrix that embeds the microtubule spindle apparatus and that is likely to be part of a molecular complex stretching from pole to pole. As also indicated by previous experiments in S2 cells , the present observations are not compatible with a rigid matrix structure but instead with a highly dynamic viscous matrix made up of protein polymers forming a gel-like meshwork. For such a matrix to be stretched implies that components of the and histone H2Av-RFP (H2Av) during a complete mitotic cycle. The images show their distribution at interphase 1, metaphase, and interphase 2, respectively. Mtor-CTD is diagrammed below the images. Scale bar, 20 μm. (C) Relative dynamics of a truncated, GFP-tagged, amino-terminal construct of Megator (Mtor-NTD) and histone H2Av-RFP (H2Av) during interphase and metaphase. Mtor-NTD is diagrammed below the images. 
Scale bar, 10 μm. (D) The localization patterns of Mtor-FL, Mtor-NTD, and Mtor-CTD at interphase. Mtor-FL localizes to the nuclear interior, as well as to the nuclear rim, Mtor-NTD is present at the nuclear rim with no or very little interior nuclear localization, and Mtor-CTD is diffusively present in the nucleoplasm without detectable nuclear rim localization. (E) Top, three images from a timelapse sequence of Mtor-FL-YFP (green) and histone H2Av-RFP (red) after colchicine injection at interphase. Middle, three images from a time-lapse sequence of Mtor-CTD-GFP (green) and histone H2Av-RFP (red) after colchicine injection at interphase. Bottom, three images from a time-lapse sequence of Mtor-NTD-GFP (green) and histone H2Av-RFP (red) after colchicine injection at interphase. Time is in minutes and seconds. Scale bars, 10 μm. polymer meshwork with hydrogel-like properties within the nuclear pore (Frey et al., 2006). If, as suggested here, the spindle matrix is a similar gel-like assembly of weakly associated protein polymers, its exact stoichiometry and composition may not be critical and it likely would be able to accommodate the inclusion of a wide array of proteins. However, it is important to note that not all nuclear proteins relocate to the spindle matrix during mitosis. For example, both lamin B and C (Paddy et al., 1996;Katsani et al., 2008) disperse, as does the nucleoporin Nup58 (Katsani et al., 2008). Furthermore, in this study we demonstrate that the aminoterminal coiled-coil region of Megator is required for its spindle matrix localization during mitosis, whereas the carboxy-terminal region disperses. In future experiments it will be of interest to determine the nature of the specific molecular interactions that govern which proteins are incorporated into the matrix. Regardless of the exact composition and structure of the spindle matrix, the demonstration here of a self-contained macromolecular structure embedding the spindle apparatus during mitosis will have important implications for our understanding of microtubule dynamics (Dumont and Mitchison, 2009). Furthermore, in a recent study of the micromechanical properties of the metaphase spindle, the effective viscosity of the spindle region was measured to be ∼100 times higher than in the surrounding cytoplasm (Shimamoto et al., 2011). This difference was attributed largely to the actions of motor and nonmotor proteins cross-linking microtubules, with the assumption of negligible contributions from the spindle medium. However, the results of this study suggest that a gel-like spindle matrix is likely to directly contribute to the viscoelastic mechanical properties of the spindle. Drosophila melanogaster stocks and transgenic flies Fly stocks were maintained according to standard protocols (Roberts, 1998), and Canton S was used for wild-type preparations. Full-length, GFP-tagged Chromator constructs under native or GAL-4 promoter control have been previously characterized . Tubulin-mCherry, Jupiter-GFP, and lamin-GFP fly stocks (stocks 25774, 6836, and 7378, respectively) and a tubulin-GAL-4 driver line (stock 7062) were obtained from the Bloomington Drosophila Stock Center, Indiana University (Bloomington, IN). The Megator YFP-trap fly line (w[1118]; PBac{602.P. SVS-1}Mtor[CPTI001044]) was obtained from the Drosophila Genetic Resource Center, Kyoto Institute of Technology (Kyoto, Japan; stock 115129). The H2AvDmRFP1 transgenic line was the gift of S. Heidmann and has been previously described (Deng et al., 2005). 
For the Megator-CTD construct under native promoter control a genomic region of 949 nucleotides upstream and 9 nucleotides downstream of the ATG start codon was PCR amplified and fused with an in-frame GFP tag, as well as with Megator carboxy-terminal coding sequence corresponding to branching, and charge distribution of such polymeric carbohydrate modifications of spindle matrix proteins might play a role in regulating its assembly and function. Furthermore, these modifications might contribute directly to the viscoelastic properties of the spindle and contribute to the modulation of microtubule dynamics and spindle stabilization. An issue for the spindle matrix hypothesis has been to account for its molecular composition and structure, especially as the number and diversity of its possible constituents has grown (reviewed in Johansen et al., 2011). In Drosophila, in addition to Megator and Chromator, the nuclear proteins Skeletor, EAST, and Mad2 have been demonstrated to be associated with the spindle matrix (Walker et al., 2000;Qi et al., 2005;Katsani et al., 2008;Lince-Faria et al., 2009;Ding et al., 2009). Another candidate nuclear spindle matrix protein that relocates to the spindle region during mitosis in a microtubule-independent manner is the nucleoporin Nup107 (Katsani et al., 2008). Thus it is becoming clear that during mitosis many disassembled components of interphase nuclear structure do not simply disperse but rather reorganize, making important contributions to mitotic progression (De Johansen, 2007, 2009;Simon and Wilson, 2011). For example, many nuclear pore complex constituents in addition to Megator/Tpr and Nup107 have been demonstrated to relocate to the spindle region in both invertebrates and vertebrates (reviewed in Johansen et al., 2011). Of interest, certain nuclear pore proteins have been shown to form a three-dimensional residues 1758-2347, and inserted into the pUAST vector using standard techniques (Sambrook and Russell, 2001). For the Megator-NTD construct under native promoter control the same upstream region as for the Mtor-CTD construct was fused with an in-frame GFP tag, with Megator amino-terminal coding sequence corresponding to residues 1-1757, and with the NLS from the NLS-pECFP vector (Clontech, Mountain View, CA) and inserted into the pPFHW vector (Murphy, 2003) using standard techniques (Sambrook and Russell, 2001). Transgenic Mtor-CTD and Mtor-NTD fly lines were generated by P-element transformation by BestGene (Chino Hills, CA). Fly lines expressing combinations of transgenes were generated by standard genetic crosses. Time-lapse confocal microscopy and injections Time-lapse imaging of the fluorescently tagged constructs in live syncytial embryos were performed using a TCS SP5 tandem scanning microscope (Leica, Wetzlar, Germany) or an UltraView spinningdisk confocal system (PerkinElmer, Waltham, MA) as previously described . In brief, 0-to 1.5-h embryos were collected from apple juice plates and aged 1 h. The embryos were manually dechorinated, transferred onto a coverslip coated with a thin layer of heptane glue, and covered with a drop of halocarbon oil 700. Time-lapse image sequences of a single z-plane or of zstacks covering the depth of the mitotic apparatus were obtained using a Plan-Apochromat 63×/1.4 numerical aperture objective. For colchicine injections, colchicine (Sigma-Aldrich, St. Louis, MO) was dissolved in dimethyl sulfoxide (DMSO) to a concentration of 100 mg/ml as a stock solution. 
The final concentration of colchicine for injection was 1 mg/ml, obtained by diluting the stock solution with PEM buffer (80 mM Na 1,4-piperazinediethanesulfonic acid, pH 6.9, 1 mM MgCl2, 1 mM ethylene glycol tetraacetic acid, 5% glycerol). Injections of ∼100-200 pl of 1 mg/ml colchicine into each embryo were performed with an IM-300 programmable microinjector system (Narishige, Tokyo, Japan) connected to the Leica TCS SP5 confocal microscope system, as previously described (Brust-Mascher and Scholey, 2009). For Taxol injections, ∼100-200 pl of 20 mg/ml Taxol (Sigma-Aldrich) in DMSO was injected into each embryo. Control injections were performed with DMSO alone or with PEM buffer containing 1% DMSO. Fluorescein-labeled dextrans of molecular mass 70, 500, or 2000 kDa (Invitrogen, Carlsbad, CA) were injected into syncytial embryos using standard methods (Brust-Mascher and Scholey, 2009).
Image quantification and analysis
Image processing and quantification were carried out with ImageJ 1.45 software (National Institutes of Health, Bethesda, MD) or with Photoshop (Adobe, San Jose, CA). QuickTime movies were generated with QuickTime Pro 7.6.6 (Apple, Cupertino, CA). Scatter plots, average pixel intensities of regions of interest, and Pearson's correlation coefficients of the fluorescence intensities of line scans generated in ImageJ were calculated using Excel (Microsoft, Redmond, WA).
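To make the line-scan comparison concrete, the following is a minimal sketch (not the authors' Excel workflow) of Pearson's correlation between two intensity profiles of the kind exported from ImageJ line scans, such as the Jupiter-GFP and tubulin-mCherry profiles mentioned earlier. The Gaussian-plus-noise profiles below are synthetic stand-ins.

```python
# Sketch: Pearson's r between two line-scan intensity profiles.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
profile_gfp = np.exp(-((x - 0.5) ** 2) / 0.02) + 0.05 * rng.random(200)
profile_mcherry = np.exp(-((x - 0.5) ** 2) / 0.02) + 0.05 * rng.random(200)

r = np.corrcoef(profile_gfp, profile_mcherry)[0, 1]
print(f"Pearson's r = {r:.3f}")
```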
SYNTHESIS AND CHARACTERIZATIONS OF A VERSATILE SILICA INCORPORATED MAGNESIUM-ZINC OXIDE PHOTOCATALYST Magnesium incorporated silicon dioxide-zinc oxide photocatalyst was synthesized. Structural properties were investigated by x-ray diffraction studies. These studies revealed the amorphous nature of the catalyst. The ultraviolet-visible spectroscopic analysis gave the bandgap variation from 2.4 to 2.6 eV with doping concentration. Surface morphology study, elemental analysis and thermal analysis were also carried out. Photocatalytic activities were followed by employing an ultraviolet-visible spectrophotometer. Photocatalytic reduction of chromium (VI) was studied using the catalyst under sunlight and ultraviolet radiation. This catalyst showed enhanced catalytic activity for the reduction of chromium (VI) and the degradation of methylene blue. The sample, 10 % magnesiumenriched catalyst showed the highest photocatalytic activity. Under sunlight, photocatalytic reduction of chromium (VI) was very rapid when compared with that under ultraviolet radiation. INTRODUCTION Mother Nature inherits a unique encompassing style that has worked for the benefit of all living beings at dimensions that have now been replicated and webbed across this world under the common name "nano" and practically delivered in the form of "nanotechnology". "There is plenty of rooms at the bottom" are the famous words of Richard Feynman which has galvanized scientists across the globe to manipulate atoms and get desired molecules that exhibit multi-functionality. Nano-materials are different from micro materials, in that they follow quantum mechanics whereas the other follows Newtonian mechanics. This difference in working gives drastically different properties for nano-materials when compared with micro materials. Metallic and metal oxide nanoparticles have a plethora of applications in the service of mankind. Nanoparticles exhibit unique properties due to their very small size, shape and surface to volume ratio. [1][2][3][4][5][6][7][8] The industrial revolution and the globalization that followed have pushed the world to the brink of nonexistence. This appealing situation has forced even the most technologically advanced nations to rethink waste remediation, water purification, and clean air. India is the second-most populous country in the world and has a considerable presence in the textile, dyeing and bleaching industrial sectors of the world. Water contamination is one of the most common problems faced by industries and society. Most of the dyes used in the dyeing industry can create environmental issues and hence their degradation becomes important. It is assumed that the degradation processes are benign and are relatively safe for man and the environment. [6][7][8][9][10][11][12][13][14][15][16][17] The heterogeneous photocatalysis using semiconductor nanoparticles or thin films is a practical solution for chromium (VI) reduction and dye degradation. Many research and industrial applications are progressing for a strong remedy to this pollution. But many of the attempts failed due to high expenditure and other demerits. 12 Bacterial degradation is a popular method that became inefficient due to the emission of new toxic elements. Activated carbon faced problems for this application due to the huge price for waste disposal. 2 So, it is high time to find a solution that is highly efficient and practical for contaminated water resources. 
Heterogeneous photocatalysis, a productive and powerful technique for total mineralization of pollutants from the environment is followed in this work. 3 The photocatalyst is a material absorbing light, producing electron-hole pair and initiates a chemical reaction without being consumed. [17][18][19][20][21][22][23] Semiconductor metal oxide-based photocatalysts act as activators catalyzing the oxidation process. Degradation of organic dyes in polluted water can be performed at low cost and with other tunable properties by photocatalytic treatment without any notable loss in photocatalytic activity 24 . Comparing to bulk materials, semiconductor metal oxides in nano-scale possess tremendous functionalities due to quantum confinement, which gives them unique structural, electronic, antimicrobial, optical, anti-tumor, anti-inflammatory and wound healing properties. Among the trending research on metal oxide semiconductors, the search for photocatalyst material with high performance under solar illumination is a highlighted topic. This is because of its abundance and eco-friendly nature 1 . A photocatalyst provides a reaction path that is green, sustainable and non-toxic. Among the metal oxide semiconductors, TiO2, ZnO and WO3 are used as the best photocatalyst because of their high activity, low cost, insolubility in water, stability and non-toxicity. 4 There are a considerable number of methods for separation and accretion of pollutants but unable to degrade them. Photocatalytic activity is a perfect solution for such degradation. Several studies showed that TiO2 as a worthy photocatalyst. The potentiality of other metal oxides as photocatalysts is an ongoing research topic. In this area, ZnO is a promising photocatalyst for the degradation of organic pollutants. There are many reports during the last few years on the degradation of dye-based pollutants mediated by ZnO. 5 Zinc oxide possesses an energy gap, 3.3 eV and an exciton binding energy, 60 meV which enables it to be used extensively for solar cells, sensors, LEDs, photocatalysis, UV detectors and FETs. 23 Changing the properties into our desired form is simply achieved through bandgap engineering. This is achieved by doping ZnO by metals, non-metals and transition metal ions. Doping not only can cause an impact on optical, catalytic and electrical properties but also can develop new structural designs. 13,23 There are reports on the photo catalytically active Mg-doped zinc oxide nano-crystals 1 . In the current work, preparation and characterization of the photocatalyst, magnesium doped silica-zinc oxide nanocomposite, and photodegradation of Cr (VI) and methylene blue using this catalyst are exposed. The impact of reaction parameters like time, the concentration of the dopants and exposure to radiation are investigated. Preparation of Foxtail Millet Husk Ash Foxtail millet, obtained locally, Ettimadai, Coimbatore, India, was cleaned with distilled water for removing the adhered impurities and then dried in a hot air oven. The husk was removed from the millet by grinding it in a domestic blender and then by winnowing. The separated husk was washed thoroughly with a copious amount of de-ionized water and dried in a hot air oven for 6 hours. The husk thus obtained was treated with 1M HNO3 and continued agitation for one day to remove all the metallic particles originated from the soil and then treated with distilled water and dried in a hot air oven at 333 K for 12 h. 
The neat husk was calcined in a muffle furnace at 973 K for 6 h so that the organic contents were removed. The resulting white powder (ash) was used for further work. 6 Silica can undergo a structural transformation which depends on the time and temperature of combustion: between 550 °C and 800 °C amorphous ash is produced, while at higher temperatures crystalline ash forms. These two forms of silica have different properties.
Preparation of Sodium Silicate From the Husk Ash
A simple chemical method was followed to produce silica from the husk ash. The solubility of silica is very low at pH less than 10 and increases sharply at pH greater than 10. This solubility behavior enables silica to be extracted in a pure state from millet husk ash by dissolving it under alkaline conditions. This method based on alkaline solubility is more cost-effective than the current smelting method. 7 A 0.4 M, 200 mL solution of NaOH was prepared. This solution was heated to 70 °C in a microwave oven and the prepared millet husk ash was added; the solution was heated to improve dissolution of the ash in the NaOH solution. This alkaline solution was kept under continuous stirring on a magnetic stirrer at 800 rpm for 12 hours at room temperature. It was then filtered using Whatman No. 40 filter paper. The filtrate obtained is a solution of sodium silicate that also contains NaOH.
Synthesis of the Photocatalyst Nanocomposite
The method employed for the production of the nanocomposite is the mechanically assisted chemical coprecipitation technique. 13 To produce silica-incorporated zinc oxide, a solution of ZnSO4.7H2O was prepared at the required concentration and, after adding CTAB, stirred with a solution of the sodium silicate prepared from the husk on a magnetic stirrer. The stirring was continued for 5 hours. The precipitate thus obtained was thoroughly washed with de-ionized water and acetone and dried in a hot air oven. It was then heated in a muffle furnace at 150 °C for ten hours to form SiO2-ZnO. The sample was then cooled to room temperature and ground for size reduction. The Mg-doped samples were prepared following the same method, with 1, 5 or 10 mol% of magnesium sulphate heptahydrate added separately.
Characterization
The Mg-incorporated SiO2-ZnO nanocomposite was characterized by different techniques. To understand the bandgap variation with doping, optical studies were performed using a UV-visible spectrophotometer. To analyze the composition of the ash, XRF analysis was performed using a PANalytical Axios instrument. Weight loss of the white ash as a function of temperature was studied using TGA measurements on a Netzsch STA 449 F1 programmable thermobalance. FE-SEM images of the zinc oxide nanoparticles and the nanocomposite were obtained using a JEOL JSM-6390 FE-SEM instrument. Structural properties of the ash and the nanocomposite were obtained with a Philips PW1700 X-ray diffractometer.
Photocatalytic Applications
Instrumentation of the Photocatalytic Reactor
A self-assembled photoreactor, an inexpensive yet capable instrument, was used for this work 1 . A sketch of the reactor is shown in Fig.-1. It has an insulated cabin with a window, and the cabin can be maintained at the desired temperature using an air heat-exchanger attached to it. The inside of the cabin is lined with 0.2 W UV LED strips connected in series, providing an enclosed ultraviolet environment for the photocatalytic reaction. The arrangement has a magnetic stirrer at the bottom.
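As a worked example of the reagent quantities involved in the preparations above: the text specifies 200 mL of 0.4 M NaOH and Mg doping at 1, 5 and 10 mol%, but only "the required concentration" for the zinc salt. The sketch below converts these to masses using standard molar masses and an assumed 0.1 mol batch of ZnSO4·7H2O; the batch size is hypothetical.

```python
# Reagent-mass arithmetic for the sodium silicate and doped-composite preparations.
M_NAOH = 40.00          # g/mol
M_MGSO4_7H2O = 246.47   # g/mol
M_ZNSO4_7H2O = 287.56   # g/mol

# NaOH needed for 200 mL of a 0.4 M solution (values stated in the text)
naoh_g = 0.4 * 0.200 * M_NAOH
print(f"NaOH: {naoh_g:.2f} g")   # 3.20 g

# Dopant masses for 1, 5 and 10 mol% Mg relative to an assumed 0.1 mol of Zn salt
n_zn = 0.1   # mol ZnSO4·7H2O -- assumed batch size, not given in the paper
for pct in (1, 5, 10):
    n_mg = n_zn * pct / 100
    print(f"{pct:>2} mol% Mg: {n_mg * M_MGSO4_7H2O:.2f} g MgSO4·7H2O "
          f"alongside {n_zn * M_ZNSO4_7H2O:.1f} g ZnSO4·7H2O")
```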
Reduction of Chromium (VI)
The reduction was performed under UV light, using an RGB 300 UV LED strip with a 150 V-265 V AC input. A series of experiments was conducted using the three prepared Mg-SiO2-ZnO nanocomposite samples as the photocatalyst. A 100 mL, 50 ppm chromium (VI) solution was prepared, to which 0.01 M oxalic acid was added as the hole scavenger. 20 mg of the catalyst was added and the suspension shaken at 600 rpm. Initially, the solution was stirred in the dark for 15 min. Samples were collected every half hour during photoreduction for UV-Vis spectroscopic analysis. The indicator used to follow the reaction was biphenyl carbazide. Residual chromium (VI) concentration was estimated at 540 nm with a UV-Vis spectrophotometer from SAFIRE Scientific Company. The same experiment was repeated under sunlight with all three catalysts to study the photocatalytic reduction of Cr (VI).
Degradation of Methylene Blue
Degradation of methylene blue was performed in sunlight using the nanocomposite as the photocatalyst. 100 mL of a 0.01 % methylene blue solution was prepared, 20 mg of the photocatalyst was added, and the suspension was shaken at 600 rpm. Samples were collected every half hour and the concentration of residual methylene blue was estimated using the UV-Vis spectrophotometer.
RESULTS AND DISCUSSION
Optical studies were done at room temperature using a UV-visible spectrophotometer. Absorbance spectra of Mg-SiO2-ZnO as a function of wavelength are presented in Fig.-2. As the doping concentration increases, the absorption shifts to shorter wavelengths, indicating an increase in the band gap [14][15][16] . The plot shows that the material has visible-light absorbance above 350 nm. Elemental compositions of the prepared raw husk, acid-washed husk and husk ash were obtained by XRF analysis. TGA shows the weight loss of the sample with increasing temperature; the weight loss of the raw husk up to 700 °C was studied (Fig.-4). The weight loss observed up to 100 °C is due to the loss of water from the sample, and from 100 °C there is a steady decrease in weight up to 250 °C. After this, there is a steep decrease from 250 °C to 350 °C. This drastic weight loss is due to the burning of the organic components of the husk, with the weight falling from 80 % to 40 % over this temperature range. In total, 75.47 % of the sample is pyrolyzed by 700 °C; this corresponds to the organic content of the raw husk, which is removed by 700 °C. The particle size and shape of the sample were determined from FE-SEM images. ZnO (Fig.-5) shows an agglomerated nano-flake structure, with particle sizes of 201.8 nm and 266.4 nm measured at 1 μm spatial resolution. The FE-SEM image of 1 % Mg-SiO2-ZnO (Fig.-6) shows stronger agglomeration and no flake structure; a different particle shape is observed in this nanocomposite. The EDX spectrum (Fig.-7) shows the elements present in the sample: in this surface analysis the electron beam produces characteristic X-rays representative of the elements present.
Photocatalytic Study
Irradiation of the sample with UV light causes the excitation of electrons from the valence band to the conduction band, leaving holes in the valence band 25 . With respect to the SHE, the reduction potential of chromium (VI)/chromium (III) is 1.33 V; a catalyst with a bandgap above this can reduce Cr (VI) 25 . The Cr(VI) peak at ~540 nm is reduced at a faster rate with increasing doping concentration under UV (Fig.-10) 1 .
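A minimal sketch of how such timed 540 nm readings can be reduced to residual Cr(VI) concentration and percent reduction is given below. It assumes a linear (Beer-Lambert) calibration against Cr(VI) standards; the calibration slope and the absorbance readings are illustrative values, not the paper's data.

```python
# Converting 540 nm absorbance of the Cr(VI)-indicator complex to residual
# concentration and percent photoreduction; numbers are illustrative only.
import numpy as np

k = 0.016                                    # absorbance units per ppm (assumed calibration)
times_min = np.array([0, 30, 60, 90, 120, 150])
absorbance = np.array([0.80, 0.55, 0.36, 0.22, 0.10, 0.03])   # hypothetical readings

conc_ppm = absorbance / k                    # residual Cr(VI) concentration
reduction_pct = 100 * (1 - conc_ppm / conc_ppm[0])
for t, c, r in zip(times_min, conc_ppm, reduction_pct):
    print(f"t = {t:3d} min: Cr(VI) ≈ {c:5.1f} ppm, reduction = {r:5.1f} %")

# Apparent first-order rate constant from the slope of ln(C0/C) versus time
k_app = np.polyfit(times_min, np.log(conc_ppm[0] / conc_ppm), 1)[0]
print(f"apparent rate constant ≈ {k_app:.4f} per min")
```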
The 1 % nanocomposite completely reduced Cr (VI) in 4 h whereas 5 % catalyst completed the reduction within 180 min. It only took 150 min for the reduction using 10 % catalyst. This is because as doping concentration increases, the bandgap increases, and the blue shift is observed 1 . When the same experiment was carried out under sunlight, a faster reduction of Cr (VI) was observed, Fig.-11. The 1 % composite reduced the toxic element within 120 min, 5 % within 90 min and 10 % within 45 min. A faster reduction is observed under sunlight because the composite has a band gap in the visible region. Methylene blue has a structure as shown in the Fig.-14. A lone pair of electrons are present in the N-S heterocyclic group. This electron pair enters in a reaction with highly reactive OH • , which destroys the conjugated heterocycle. As a result of which the absorption peaks in the UV-vis spectrum decreased 11 . So, this nanocomposite acts as an efficient photocatalyst for the reduction of Cr (VI) and the degradation of methylene blue. CONCLUSION The acidity of zinc sulfate and the basicity of sodium silicate resulted in the synthesis of ZnO and SiO2, where the incorporation of Mg enhances the property in Mg-SiO2-ZnO tri-component nanocomposite. Advanced and cost-effective synthesis of ZnO and Mg-SiO2-ZnO by co-precipitation method revealed improved properties. ZnO is crystalline, whereas the nanocomposite is amorphous. The optical bandgap of the nanocomposite varies in the range of 2.4 -2.6 eV. The FE-SEM image shows a nano-flake structure for ZnO with agglomeration and grain size in the nanometer range. The EDX plot reveals the presence of each element of the nanocomposite. The use of foxtail millet husk ash as a source of silica is proved to be a good choice. From XRF data analysis, the ash contains 91.1 % of SiO2 and all other elements are at trace level. Acceleration in the photocatalytic degradation using the prepared nanocomposite may be due to the increase in defects with doping and increase in the surface area offered by silica which has a porous nature. Chromium (VI) is reduced to chromium (III) at a faster rate in the presence of sunlight than UV with the prepared nanocomposite as the photocatalyst. It also reduces methylene blue, one of the hazardous dyes in polluted water, in the presence of sunlight. So, this nanocomposite with enhanced properties enables it to be used as a versatile component for industrial applications.
Joint External Evaluation scores and communicable disease deaths: An ecological study on the difference between epidemics and pandemics The Joint External Evaluation (JEE) assesses national capacities to implement the International Health Regulations (IHR). Previous studies have found that higher JEE scores are associated with fewer communicable disease deaths. But given the impact of COVID-19 in many countries, including those believed to have developed IHR capacities, the validity of the JEE for pandemic preparedness has been questioned. We constructed univariable and multivariable linear regression models to investigate the relationship between JEE scores and i) deaths from communicable diseases before the pandemic and ii) deaths from COVID-19. We adjusted for country differences in age, health system access, national wealth, health expenditure, democratic governance, government restrictions, pre-pandemic tourist arrivals and testing capacity (estimated by test positivity rates). For COVID-19 deaths, we calculated cumulative deaths per 100,000 at 3, 6 and 12 months into the pandemic. A total of 91 countries were included, with a median JEE score of 50%. On multivariable linear regression the association between JEE scores and log COVID-19 deaths was significant and positive at 3 months (β 0.05, p = 0.02), becoming statistically non-significant, at 6 (β 0.02, p = 0.27) and 12 months (β -0.03, p = 0.19), while the association with log communicable disease deaths was significant and negative (β -0.03, p = 0.003). A higher Stringency Index was significantly associated with higher log COVID-19 deaths at 3 (β 0.04, p = 0.003) and 6 (β 0.04, p = 0.001) months, but not at 12 months (β 0.02, p = 0.08). Higher test positivity rates were associated with higher log COVID-19 deaths at all time points, at least partially attenuating the positive association between Stringency Index and log COVID-19 deaths. While universal health coverage indices (β -0.04 p<0.001) and international tourist arrivals were associated with log communicable disease deaths (β 0.02, p = 0.002), they were not associated with log COVID-19 deaths. Although the same tool is used to assess capacities for both epidemics and pandemics, the JEE may be better suited to small outbreaks of known diseases, compared to pandemics of unknown pathogens. Introduction The 2005 International Health Regulations (IHR) provide an overarching framework to assist countries to "prevent, protect against, control and provide a public health response to the international spread of disease" and defines countries' rights and obligations in handling emergencies that have the potential to cross borders. The Joint External Evaluation (JEE) is a voluntary, collaborative, multisectoral process led by national governments and supported by the World Health Organization (WHO) that aims to assess and monitor national capacities to implement IHR, with a view to identifying opportunities to strengthen preparedness and response. Disease outbreaks are often unpredictable and require a range of approaches for preparedness and response that may or may not be adequately addressed by IHR and JEE. The unprecedented scale of the COVID-19 pandemic contrasts greatly with the more typical and much more frequent local epidemics of known diseases, such as cholera and yellow fever [1]. 
Given the severe impact COVID-19 has had in nations largely believed to be operating in line with IHR, and thus prepared to confront outbreaks, the relevance of IHR competencies and the validity of the JEE have been questioned. At the request of Member States, WHO recently convened a Review Committee on the Functioning of IHR during COVID-19, concluding that much of what is contained within the current IHR is appropriate and well-considered, but was not sufficiently implemented prior to the pandemic [2]. The Global Health Security Index (GHSI) was developed in 2019 as an independent adjunct to the JEE. A 2020 regression analysis concluded that, for both the JEE and the GHSI, higher total scores were associated with fewer deaths from communicable diseases in general [3]. However, a separate analysis showed that these scores were not correlated with COVID-19 mortality across countries during the early phase of the pandemic [4], although the study failed to account for important confounders. A later regression analysis showed that higher JEE scores were correlated with fewer COVID-19 deaths [5], but adjusted for testing rates (likely to reflect the true numbers of cases) as a proxy for testing capacity, with only a very limited set of countries in multivariable analysis. To improve pandemic preparedness and risk assessment, it is necessary to understand the extent to which the relationship between national scores on the JEE and deaths from COVID-19 differs from that for other communicable diseases. This analysis will support policy makers in reviewing the utility of the JEE in assessing preparedness against both epidemics and pandemics.
Investigated variables and sources
In this ecological study, we reviewed all published JEE reports available in the public domain on the WHO website [6] and obtained data on a range of other factors that may affect the relationship between JEE scores and deaths from COVID-19 or other communicable diseases. The Oxford COVID-19 Government Response Tracker Stringency Index combines nine different indicators: school closures, workplace closures, cancellation of public events, restrictions on public gatherings, closures of public transport, stay-at-home requirements, public information campaigns, restrictions on internal movements, and international travel controls [7]. We recorded the Stringency Index on a scale of 0 to 100 for each country at 2, 5 and 11 months into the pandemic, reflecting a time lag between restrictive government policies and any potential impact on COVID-19 deaths (which were measured at 3, 6 and 12 months). This assumed a median lag of 10 days between a change in government mobility restrictions and community rates of infection [8], five days from infection to symptom onset, and a further 16 days from symptom onset to possible death [9]. We obtained 2019 data on national population size, the proportion of the population aged ≥65, Gross National Income per capita, the percentage of Gross Domestic Product (GDP) spent on healthcare, and the universal health coverage (UHC) service index (2017) from the World Bank public database [10]. We also collated data on pre-pandemic international tourist arrivals (as a proxy for international travel) from the UN World Tourism Organization Dashboard [11] and data on the strength of democratic governance through the Economist Intelligence Unit's (EIU) 2020 democracy index. The EIU democracy index covers 5 domains, combining 60 indicators, and is measured on a scale of 0 to 100 from the least to the most democratic [12].
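A small check of the lag arithmetic described above: 10 + 5 + 16 days is approximately 31 days, which is why the Stringency Index was taken roughly one month before each mortality time point. The sketch below applies that offset to the death-measurement dates used in the study.

```python
# Stringency Index timing implied by the assumed policy-to-death lag.
from datetime import date, timedelta

lag = timedelta(days=10 + 5 + 16)            # ≈ 31 days, i.e. roughly one month
death_timepoints = {
    "3 months":  date(2020, 6, 10),
    "6 months":  date(2020, 9, 10),
    "12 months": date(2021, 3, 10),
}
for label, d in death_timepoints.items():
    print(f"{label:>9}: deaths measured {d}, Stringency Index taken around {d - lag}")
```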
Higher scores have been associated with lower excess mortality rates due to the COVID-19 pandemic in high-income countries [13]. We included test positivity rate as a proxy for testing capacity and a potential confounder which could affect the relationship between JEE and deaths. The fraction of tests that return a positive result can provide an indication as to the adequacy of a COVID-19 testing programme and the reliability of death statistics; a low test positivity rate suggests low transmission and sufficient surveillance capacity, whereas a high test positivity rate suggests high transmission and inadequate testing, with many COVID-19 deaths undetected and unrecorded [14]. We combined routinely available data on COVID-19 tests [15] with data on cases [16] to calculate test positivity (cases per 100,000/tests per 100,000) across countries at 3, 6 and 12 months into the pandemic. From 2016-18 most JEE reports were published in the same format, while later assessments included a slightly different scoring system. We extracted data in the same way but adjusted to ensure comparability across countries, converting all raw JEE scores to percentages representing the proportion of the maximum possible score obtained by a country. Outcome measures The WHO declared the COVID-19 outbreak a pandemic on 11 th March 2020 [17]. We calculated cumulative COVID-19 deaths per 100,000 population for each country at 3 months (10 th June 2020), 6 months (10 th September 2020) and 12 months (10 th March 2021) into the pandemic, by dividing recorded COVID-19 deaths [18] by population size [19] and multiplying by 100,000. We used the Global Burden of Disease study [20] to similarly collate data on communicable disease deaths per 100,000 in 2019 for each country (i.e. prior to the COVID-19 pandemic), excluding deaths from maternal, neonatal and nutritional diseases. Data extraction All data from databases were extracted into Microsoft Excel initially and then copied directly into a master sheet, which contained the included set of countries in one column. The lead author (VJ) checked the accuracy of the extraction process using the 'IF' command in Microsoft Excel to identify any values in the master sheet which did not match the relevant data from the database. If inaccuracies were identified they were rectified by manually overriding the inputted value with that in the original database file. A second author (TB) double-checked the accuracy of key data through using scatter plots on STATA to identify outliers. Where outliers were identified this prompted a further check of the master sheet against the relevant databases, to ensure that these values were genuine. Statistical analysis We first made histograms of each variable. Due to high skew in communicable disease and COVID-19 deaths, we applied log-transformation and reported outcomes as log communicable disease deaths and log COVID-19 deaths per 100,000 population, respectively. We made scatter plots to visualise the relationship between variables and outcome measures. We fitted univariable and multivariable linear regression models to investigate the relationship between JEE score and i) log COVID-19 deaths per 100,000 population and ii) log communicable disease deaths per 100,000 population, with statistical significance at a P value <0.05. 
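To make the outcome construction and model specification concrete, here is a minimal sketch of the kind of analysis described: deaths per 100,000, log-transformation, test positivity, and a univariable ordinary least squares model of log deaths on JEE score. The study itself used Stata; this uses Python and statsmodels, and the toy data frame and column names are hypothetical.

```python
# Outcome construction and a univariable OLS model on a toy country-level dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "jee_pct":    [35, 48, 50, 62, 75, 81, 44, 58, 69, 53],
    "deaths":     [120, 800, 450, 2200, 5400, 9100, 300, 1500, 4000, 900],
    "population": [5e6, 2e7, 1.1e7, 6e7, 8e7, 3e8, 9e6, 4e7, 6.5e7, 2.5e7],
    "cases_100k": [40, 210, 150, 480, 900, 1200, 90, 350, 700, 260],
    "tests_100k": [800, 2500, 1900, 6000, 15000, 22000, 1200, 5000, 11000, 3800],
})

df["deaths_100k"] = df["deaths"] / df["population"] * 1e5
df["log_deaths_100k"] = np.log(df["deaths_100k"])
df["test_positivity"] = df["cases_100k"] / df["tests_100k"]

# Univariable model: log COVID-19 deaths per 100,000 on JEE score (%)
model = smf.ols("log_deaths_100k ~ jee_pct", data=df).fit()
print(model.params)
print(model.pvalues)
```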
We constructed three multivariable models to investigate the relationship between JEE scores and COVID-19 deaths (one each for deaths at 3, 6 and 12 months into the pandemic) and one to investigate the relationship between JEE scores and communicable disease deaths. We considered confounders based on their potential relationship with disease control and deaths and their statistical significance on univariable regression. For multivariable models investigating COVID-19 deaths at 12 months, we adjusted for 1) the proportion of the population aged ≥65, 2) health system access (measured by UHC service index), 3) health expenditure as a proportion of GDP, 4) COVID-19 testing capacity (measured by positivity rates), and 5) the strength of government restrictions one month prior to deaths (measured by the Stringency Index). For the multivariable models investigating COVID-19 deaths at 3 and 6 months we repeated this method but excluded test positivity rates, since data were missing for 43 countries, and instead included strength of democratic governance. National wealth (measured by Gross National Income per capita [10]) was excluded in the main analysis due to multicollinearity [21] with the UHC service index (r = 0.75). For the model investigating deaths from communicable diseases, potential confounders included 1) proportion of the population aged ≥65, 2) health system access, 3) national wealth, 4) strength of democratic governance, and 5) pre-pandemic international tourist arrivals. Sensitivity analysis was performed to test the robustness of our findings (S3-S6 Tables), by adding originally excluded variables to each respective multivariable model and excluding test positivity rates from the model investigating COVID-19 deaths at 12 months, due to missing data for 32 countries. We used Microsoft Excel and Stata version 16 in the analysis.
Ethics
Data used in this study were all open-access, obtained through routine data sources and collected at the population level. There were no human or animal participants involved directly in this study and no ethical approval was required.
Results
We identified and analyzed a total of 96 JEE reports. Data on COVID-19 deaths were unavailable for five countries (Cambodia, Eritrea, Federated States of Micronesia, Laos and Turkmenistan), leaving 91 for analysis (Table 1). High-income countries and the Americas and Europe regions had the highest JEE scores, and low-income countries and the African region had the lowest scores. The median JEE score (as a percentage of the maximum possible score) across all countries was 50% (IQR 37.9-66.0%). Scores varied across JEE domains, with the highest scores for 'detect' (median 58.5, IQR 51.1-71.0), followed by 'prevent' (median 54.3, IQR 37.6-65.5) and 'respond' (median 43.7, IQR 30.6-66.4). All of the investigated factors included in the multivariable model for COVID-19 deaths were significantly predictive for deaths at all time points on univariable regression. Similarly, all of the factors included in the multivariable model investigating communicable disease deaths (2019) were significantly predictive in univariable models. Pre-pandemic tourism was not significantly associated with COVID-19 deaths on univariable regression, and was therefore excluded from multivariable models. The proportion of GDP spent on health was not significantly associated with communicable disease deaths on univariable regression and was therefore excluded from the multivariable model.
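The collinearity-based exclusion of national wealth described above can be sketched as a pairwise correlation and variance inflation factor (VIF) check. The paper reports r = 0.75 between GNI per capita and the UHC service index; the toy values and column names below are hypothetical.

```python
# Collinearity screening among candidate confounders on a toy dataset.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.DataFrame({
    "gni_per_capita": [1500, 4200, 9800, 15000, 28000, 45000, 62000, 3100],
    "uhc_index":      [42, 55, 63, 70, 78, 83, 86, 48],
    "pct_over_65":    [3.1, 5.2, 7.8, 11.4, 16.0, 19.5, 21.2, 4.0],
})

# Pairwise correlations; strongly correlated pairs (|r| > ~0.7) are candidates for exclusion
print(df.corr().round(2))

# Variance inflation factors (values well above ~5-10 also flag collinearity)
X = sm.add_constant(df)
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)
```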
JEE score was positively associated with log COVID-19 deaths at 12 months into the pandemic but negatively associated with log communicable disease deaths (Fig 1). These relationships were statistically significant on univariable linear regression (S1 and S2 Tables). However, on multivariable linear regression ( Table 2) the positive association between JEE scores and log COVID-19 deaths became non-significant at 6 and 12 months, while the negative association between JEE scores and log communicable disease deaths remained statistically significant (β 0.03, p = 0.003), with a 0.03 decrease in log communicable disease deaths for every percentage point increase in JEE score. Higher test positivity rates, representing poor testing capacity or high levels of transmission, were associated with higher log COVID-19 deaths at all time points. Stringency Index was also associated with higher log COVID-19 deaths at all time points but adding test positivity rates into the model at least partially attenuated this positive association (S3-S5 Tables). While UHC indices (β -0.04 p<0.001) and international tourist arrivals were associated with communicable disease deaths (β 0.02, p = 0.002), they were not significantly associated with COVID-19 deaths. Review of scatter plots of residuals against fitted values showed no violations of heteroskedasticity and quantile plots of residuals showed no departures from normality in any of the models. Sensitivity analysis showed that these results remained valid even after altering the variables included in each multivariable model (S3-S6 Tables). The proportion of the variation in COVID-19 deaths at 12 months and communicable disease deaths explained by both models was high, with R 2 values of 0.60 and 0.75 respectively. Discussion Countries with higher JEE scores were associated with significantly fewer overall communicable disease deaths, but not COVID-19 deaths. This finding remained even after accounting for a range of other factors that could influence the distribution of COVID-19 deaths across countries and after considering multiple time points. This suggests that the JEE may be better suited to assessing epidemics of known diseases compared to a pandemic of a novel pathogen. Test positivity rates were the only factor strongly associated with more COVID-19 deaths at 12 months, further supporting the notion that the determinants of the impact of epidemics and pandemics may differ. Epidemics of known diseases and pandemics of novel pathogens pose greatly different challenges when it comes to testing. The JEE explicitly assesses testing for infectious diseases through the availability of diagnostics, laboratory systems and real-time indicator and eventbased surveillance systems [22]. It does not focus on scale, size and surge capacity of national infectious disease testing systems. Developing and scaling up the availability of tests, including RT-PCR and rapid tests such as the lateral flow assay, has been an unprecedented challenge for all countries during the pandemic [23]. Although the first diagnostic tests for COVID-19 were developed within 2 weeks of the reference genome being published [24], countries scrambled to compete for scarce global resources to expand testing programmes, leading to vast global inequities in access to testing. For example, in April 2020, while the USA was approaching 4 million tests conducted, Nigeria, Africa's most populous country with a population almost two-thirds that of the USA, was nearing only 7000 tests [25]. 
In our multivariable analysis, even after accounting for JEE scores, weaker testing capacity (indicated by high test positivity rates) was significantly associated with increased COVID-19 deaths at 12 months. Given the existing but limited assessment of infectious disease testing capacities and scalability in the current JEE, this may be an area for early reform, if the JEE is to be used as an effective pandemic preparedness risk assessment tool. Although the JEE is a self-assessment tool and was not designed to compare countries' performance, the indicators should reflect necessary components of epidemic preparedness; logically, countries with higher scores should be better prepared to respond to a pandemic and limit transmission and mortality. The fact that JEE scores were not associated with fewer COVID-19 deaths goes against this expectation, but is in line with previous findings [4,5]. Possible reasons for the discrepancy in associations between JEE scores and COVID-19 deaths compared to other communicable disease deaths include 1) For many common infections, the natural history of disease is well understood, clinical tests are available and surveillance systems are well-established. In the early stages of the COVID-19 pandemic, global testing capacity was inadequate, including in many high-income country global travel hubs, which made countries vulnerable to imported cases. Consequently, the true burden of COVID-19 cases in real-time was likely underestimated [26], limiting effective public health action; 2) The pandemic required outbreak response capacities of a scale not seen for smaller epidemics of known diseases. Existing infrastructure for testing, contact tracing, infection control and clinical management had to be rapidly scaled up and reorganized [27,28]; 3) In the absence of vaccines or therapeutics, few disease control measures for COVID-19 were initially available, as they might be for other common epidemic-prone diseases, such as tuberculosis or measles. In this context, individual behaviours across populations, including physical distancing and mask wearing, became an important determinant of spread [29], in turn affected by public risk communication as well as individual attitudes toward infectious diseases, personal risk and freedoms; 4) Unlike for small outbreaks of known diseases, decision-making on control measures took place largely outside of traditional public health agencies, with political leaders having to weigh trade-offs in a highly public time-pressured environment and with significant scientific uncertainty; and 5) In its current form, the JEE considers a range of factors but focuses on the existence of capacities, such as policies, procedures, and systems, rather than the quality of them, which is more difficult to demonstrate, and the ability to deliver these capacities at the speed and scale required in a pandemic. Other limitations of the JEE, such as potential bias created from self-assessment and limited scope [3] may further help explain the disparity in the relationship between communicable disease and COVID-19 deaths. 
Limitations of our study include: 1) Our study may be vulnerable to selection bias given that not all countries have volunteered for the JEE (analysable data were available only from 91 countries, less than half of the world's total), and those that have may be different to those that have not; 2) The JEE was not designed as a comparative tool across settings but rather to allow countries to identify areas of weakness and track progress. JEE implementation and interpretation may therefore vary across different countries and over time, independent of other factors; 3) We used COVID-19 deaths in our analysis, which may not be recorded in the same way across countries, although these data are likely less susceptible to biases related to testing and healthcare-seeking compared to some other indicators, such as numbers of cases; 4) Underreporting of deaths due to inadequate mortality surveillance, if associated with lower JEE scores, could have biased the results. We considered open-access data on excess mortality [30] as an alternative and more comprehensive way of measuring COVID-19 impact but, due to large amounts of missing and poor data quality, did not pursue this approach; and 5) Our study covered only a 12-month period during which vaccine distribution has been highly unequal across the world [31]. The associations between JEE scores and COVID-19 deaths may change as the pandemic progresses. Our findings have implications for global health policy. Many of the countries with the highest numbers of cases and deaths from COVID-19 are also considered to have the most robust IHR capacities and health systems as measured by the JEE. The IHR, and the JEE that is derived from it, may therefore require design change or supplementation through other mechanisms if the wide range of factors involved in preventing, detecting and responding to both future pandemics and epidemics of known diseases are to be comprehensively captured through country risk assessments. It may be necessary to develop and test more complex and demanding requirements within tools like the JEE, simulation exercises and after-action reviews, to improve future emergency preparedness by better measuring real-world capabilities. For example, approaches could better account for the altered response capacities, disease control measures, and political considerations that may be necessary in dealing with pandemics and novel pathogens. Our results also underscore the importance of testing capacity, which may prove a sensible early target for the pandemic-focussed reform of existing tools. Given that the existing public health response capacities outlined in Annex 1 of the IHR [32] are still not adequately implemented in many countries, including more requirements within the JEE may not translate into better emergency preparedness. Indeed, one critical implication from our findings is the need to recognize the limitations in the utility of JEE as a measure of pandemic preparedness, given the uncertainty of how any future pandemic may look and the potentially devastating consequences of complacency or overconfidence in response capabilities. Various factors may improve the implementation of IHR and deserve attention alongside the scrutiny of risk assessment tools. 
These include stronger public health systems, workforce and institutions; financial incentives; appropriate decentralisation of powers; community and private sector engagement; upskilling the public health workforce; improved data collection, transparency and sharing; supportive legal instruments; national leadership; and regional collaboration [33,34]. Much of the recent discourse on global health security has emphasised integration with the One Health, UHC and health system strengthening agendas [35][36][37], underscoring the links between humans, animals, ecosystems and health systems. The discussion on pandemic preparedness must expand to consider essential public health functions [38], providing an opportunity to consider pandemic risk and preparedness in the broader context of actions required to promote and protect health. High quality test, trace and isolate systems (often operating outside the traditional structures of health systems) have proved an essential part of early pandemic response in many countries [39]. Nor can the focus be entirely on the implicated pandemic pathogen; during the first peak in the United Kingdom (UK), 96% of COVID-19 deaths were in individuals with at least one pre-existing medical condition, most of which are chronic non-communicable diseases [40]. Those living in more densely populated and deprived areas, working in high-risk occupations with poor access to healthcare, are at a high risk of infection and severe illness [41,42]. Sociocultural and political factors have also been found to be important predictors of COVID-19 deaths across countries [12,43,44] compared to smaller-scale outbreaks of known infectious diseases. Capturing such vulnerabilities as part of assessments of preparedness will require more research and an expansion in ambition, but may help to improve the utility of risk assessments. Effective systems of public sector governance will be required to operationalize essential public health functions. Pandemic preparedness assessments must also build an understanding of how evidence is used in decisionmaking, and how well-informed, fair, and inclusive strategic decisions are likely to be in emergency scenarios, regardless of more specific public health vulnerabilities or capacities. Supporting information S1
2022-08-13T15:04:43.835Z
2022-08-11T00:00:00.000
{ "year": 2022, "sha1": "5be4d0e19acc7165b733ad744c862da607e6cffd", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000246&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29251ca5940ce25e02471c6d99af8960bacba56c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14835206
pes2o/s2orc
v3-fos-license
Genetic transformation of structural and functional circuitry rewires the Drosophila brain Acquisition of distinct neuronal identities during development is critical for the assembly of diverse functional neural circuits in the brain. In both vertebrates and invertebrates, intrinsic determinants are thought to act in neural progenitors to specify their identity and the identity of their neuronal progeny. However, the extent to which individual factors can contribute to this is poorly understood. We investigate the role of orthodenticle in the specification of an identified neuroblast (neuronal progenitor) lineage in the Drosophila brain. Loss of orthodenticle from this neuroblast affects molecular properties, neuroanatomical features, and functional inputs of progeny neurons, such that an entire central complex lineage transforms into a functional olfactory projection neuron lineage. This ability to change functional macrocircuitry of the brain through changes in gene expression in a single neuroblast reveals a surprising capacity for novel circuit formation in the brain and provides a paradigm for large-scale evolutionary modification of circuitry. DOI: http://dx.doi.org/10.7554/eLife.04407.001 Introduction Animals display a wide repertoire of complex and adaptable behaviours executed by equally complex nervous systems. Understanding how the vast number of diverse cell types is assembled into functional neural circuits in complex brains during development is a major challenge. Studies of lineage tracing and circuit mapping reveal that heterogeneous pools of neural progenitors sequentially generate series of neuronal progeny, and that such lineally related neurons with shared developmental histories often share functional connectivity in the brain. In consequence, neural lineages can be considered to form neuroanatomical units of projection that represent the developmental basis of the functional circuitry of the brain (Pearson and Doe, 2004;Cardona et al., 2010;Pereanu et al., 2010;Custo Greig et al., 2013;Franco and Muller, 2013;Gao et al., 2013;Kohwi and Doe, 2013). This is exemplified in Drosophila where the tens of thousands of neurons that comprise the adult brain are generated during development by a set of approximately 100 pairs of individually identifiable neural stem cells called neuroblasts (Truman and Bate, 1988;Urbach et al., 2003;Technau, 2003a, 2003b;Technau et al., 2006). Each neuroblast gives rise to a specific, invariant lineage of post-mitotic neural cells in a highly stereotyped manner and in many cases, lineally related neurons share functional connectivity and many neuroanatomical features such as innervation of common neuropiles in the brain and common axon tract projection patterns (Pereanu and Hartenstein, 2006;Ito et al., 2013;Lovick et al., 2013;Wong et al., 2013;Yu et al., 2013). Examples of this are the four neuroblast lineages that give rise to the intrinsic cells of the mushroom body, a neuropile compartment involved in learning and memory, or the five neuroblast lineages that innervate the antennal lobe, the primary olfactory processing centre in the fly brain (Ito and Hotta, 1992;Ito et al., 1997;Stocker et al., 1997;Jefferis et al., 2001;Das et al., 2008Das et al., , 2011Das et al., , 2013Lai et al., 2008;Chou et al., 2010). 
Thus, neuroblast lineages are considered to form neuroanatomical units of projection that represent the developmental basis of the functional macrocircuitry of the fly brain (Pereanu and Hartenstein, 2006;Cardona et al., 2010;Lovick et al., 2013;Wong et al., 2013). Comparable principles are manifested in the developing cerebral cortex of vertebrates, which consists of diverse neurons organized into six distinct layers, each of which is laid in place sequentially during development. Neural progenitors in the cortex are known to be multipotent, capable of generating neurons that populate each of the layers. Lineage tracing experiments in mice suggest that lineally related neurons occupy columns spanning across the layers of the cerebral cortex, as proposed in the 'radial unit hypothesis' (Rakic, 1988;Yu et al., 2009b;Li et al., 2012;Ohtsuki et al., 2012). Furthermore, lineally related neurons also show a propensity to interconnect and have functional similarity, for example, similar orientation preferences in the visual cortex (Li et al., 2012;Ohtsuki et al., 2012). Thus in vertebrates as in invertebrates, developmental history and lineage relationships govern the assembly of functional circuits. In order to understand how lineally specified circuits develop in the brain, it is critical to understand the molecular mechanisms that confer unique identities to neural progenitors and their lineages. Studies on the molecular genetics of brain development indicate that neural progenitors emerge from the embryonic neuroectoderm where unique spatial information represented by unique combinations of gene expression specifies unique identities to the progenitors. For example, in Drosophila, the embryonic neuroectoderm becomes spatially regionalized due to the action of embryonic patterning genes, which define the anterior-posterior and dorsal-ventral body axes. Their combined expression creates a Cartesian coordinate-like gene expression system in the neuroectoderm, resulting in unique domains of expression of developmental control genes along the neuroectoderm (Skeath and Thor, 2003;Urbach and Technau, 2004;Technau et al., 2006). Changes in the combinatorial expression pattern of these genes in specific domains can lead to changes in the identities of the eLife digest The cells in the brain-including the neurons that transmit information-work together in groups called neural circuits. These cells develop from precursor cells called neuroblasts. Each neuroblast can produce many cells, and it is likely that cells that develop from the same neuroblast work together in the adult brain in the same neural circuit. How the adult cells develop into their final form plays an important role in creating a neural circuit, but this process is not fully understood. In many animals, the complexity of their brain makes it difficult to follow how each individual neuroblast develops. However, all of the neuroblasts in the relatively simple brain of the fruit fly Drosophila have been identified. Furthermore, the genes responsible for establishing the initial identity of each neuroblast in the Drosophila brain are known. These genes may also determine which adult neurons develop from the neuroblast, and when each type of neuron is produced. However, the extent to which a single gene can influence the identity of neurons is unclear. Sen et al. focused on two types of neuroblasts, each of which, although found next to each other in the developing Drosophila brain, produces neurons for different neural circuits. 
One of the neuroblasts generates the olfactory neurons responsible for detecting smells; the other innervates the 'central complex' that has a number of roles, including controlling the fly's movements. A gene called orthodenticle is expressed by the central complex neuroblast, but not by the olfactory neuroblast, and helps to separate the two neural circuits into different regions of the fly brain. Sen et al. found that deleting the orthodenticle gene from the central complex neuroblast causes it to develop into olfactory neurons instead of central complex neurons. Tests showed that the modified neurons are completely transformed; they not only work like olfactory neurons, but they also have the same structure and molecular properties. Sen et al. have therefore demonstrated that it is possible to drastically alter the circuitry of the fruit fly brain by changing how one gene is expressed in one neuroblast. This suggests that new neural circuits can form relatively easily, and so could help us to understand how different brain structures and neural circuits evolved. neuroblasts that delaminate from the neuroectoderm during embryogenesis (e.g., Deshpande et al., 2001). The process of spatial regionalization of the embryonic neuroectoderm is very similar in vertebrates. Homologous embryonic patterning genes result in unique domains of combinatorial gene expression along the vertebrate neuroectoderm (Reichert and Simeone, 2001;Reichert, 2005, 2008;Reichert, 2009). Thus, in both vertebrates and invertebrates, the cells of the neuroectoderm acquire unique spatial information in the form of a combinatorial code of gene expression, which is conferred by embryonic patterning genes. It is noteworthy that neural progenitors also use temporal information (typically a series of sequentially expressed transcription factors) to generate neuronal diversity within lineages (Pearson and Doe, 2004;Lin and Lee, 2012;Kohwi and Doe, 2013). While spatial cues convert a homogenous pool of progenitors into heterogeneous populations, temporal cues result in the ordered production of different neural subtypes from each progenitor. Given that spatial information in the neuroectoderm, in the form of embryonic patterning genes, imparts heterogeneity to neural progenitor populations, it is likely that these genes might also act as intrinsic determinants in the progenitor to give lineages their unique identities and hence determine their place in neural circuitry. This implies that spatially encoded intrinsic factors determine the identity of the progenitor and, as a consequence, the unique circuit features of its neural lineage. According to this, removal or addition of one or more of these genes could lead to a change in neuroblast identity resulting in transformation of the neuronal lineage and the lineal circuitry that derives from it. However, the extent to which individual transcription factors can contribute to this specification of neuroblast identity is not well understood. In order to test this, it is important to be able to uniquely identify individual neural progenitors and their lineages in the brain. The complexity of the vertebrate brain makes it difficult to conduct such an analysis at the resolution of single progenitors and single lineages. However, each of the neuroblasts in the Drosophila brain has been identified and their lineages characterized in the larval and adult brains (Ito et al., 2013;Lovick et al., 2013;Wong et al., 2013;Yu et al., 2013). 
Furthermore, each of these neuroblasts has also been characterized by the expression of a specific combination of spatial genes, which could act as cell intrinsic determinants in the specification of unique neuroblast identity and hence control lineage-specific neuronal cell fate (Skeath and Thor, 2003;Urbach and Technau, 2004;Technau et al., 2006). This allows an investigation of the role of putative intrinsic determinants by changing their expression in identified stem cells and assessing its effect at the lineage level in an otherwise normal brain. Here, we focus on two identified neuroblast lineages in the Drosophila brain, LALv1 and ALad1, which develop in close spatial proximity to each other in the larval brain but become spatially segregated in the adult brain. While the ALad1 neuroblast generates olfactory projection interneurons that innervate the antennal lobe, the LALv1 neuroblast generates wide-field interneurons that innervate the central complex. We show that orthodenticle (otd), an embryonic patterning gene involved in specifying the anterior-most regions of the neuroectoderm and embryonic brain Reichert and Bello, 2010), is expressed during development in the LALv1 neuroblast lineage but not in the ALad1 neuroblast lineage. Remarkably, loss of otd from the LALv1 neuroblast results in a complete transformation in the identity of the neurons that derive from this lineage. The otd null LALv1 neurons transform into antennal lobe projection interneurons similar to the ALad1 lineage, and this transformation includes a complete change in the neuroanatomy of the neurons, a change in their molecular properties as well as in their functional connectivity. This remarkably complete respecification of a neuroblast lineage upon the mutation of a single gene in the brain demonstrates that intrinsic determinants acting in the neuroblast during development specify the identity of its neural progeny and the macrocircuitry that these progeny establish. This large-scale modification of functional circuits in the brain by a single transcription factor in a single stem cell is unprecedented and reveals a surprising capacity for novel neural circuit formation in the developing brain, which may provide a paradigm for large-scale evolutionary modification of brain connectivity. Results Development, morphogenesis, and differential expression of Otd in two identified central brain neuroblast lineages, LALv1 and ALad1 We focused our analysis on two identified neuroblast lineages referred to as LALv1 and ALad1 (Pereanu and Hartenstein, 2006;Lovick et al., 2013) (see 'Materials and methods' for lineage nomenclature). During postembryonic development in the larval brain, the adult-specific (postembryonically generated) neural progeny of these lineages have their cell bodies clustered close to each other, dorsal to the larval antennal lobe ( Figure 1A,B). Although their cell body clusters are closely apposed, the two lineages can be easily identified based on their distinct and unique axon tracts that project to different brain regions Das et al., 2013;Lovick et al., 2013). The anatomical features of these two wild-type lineages can be visualized by MARCM clonal labelling (randomly induced neuroblast clones; ubiquitous Tub-Gal4 driver). The ALad1 lineage initially projects its axon tract medially, dorsal to the larval antennal lobe, then turns posteriorly and projects towards the protocerebrum via the medial antennal lobe tract ( Figure 1B,C) (Das et al., 2013;Lovick et al., 2013). 
The LALv1 lineage initially projects its axon tract ventro-medially, posterior to the larval antennal lobe, then loops dorsally and splits into two secondary axon tracts ( Figure 1B,D) (Spindler and Hartenstein, 2011;Lovick et al., 2013). In addition to the differences in axonal trajectories, we found that these two lineages also differed in their expression of the transcription factor Otd. Co-immunolabelling for the homeodomain transcription factor Otd and for Neurotactin (to identify lineage-specific axon tracts) shows that the LALv1 neuroblast ( Figure 1F) and all of its lineal progeny (white dotted lines in Figure 1F and inset in Figure 1D) express Otd. In contrast, neither the ALad1 neuroblast nor its lineal progeny are found to express Otd ( Figure 1I-L and inset in Figure 1C). In the adult brain, the neural progeny of the ALad1 lineage are olfactory projection neurons, which innervate the glomeruli of the antennal lobe and the neural progeny of the LALv1 lineage are widefield interneurons that innervate the central complex, a sensorimotor integration centre in the fly brain (Ito et al., 2013;Wong et al., 2013;Yu et al., 2013). To study the neuroanatomical features of the two lineages in the mature brain, we took advantage of the fact that they are differentially labelled by four enhancer-Gal4 driver lines in the adult brain. Thus, the adult LALv1 lineage is labelled by the OK371-Gal4 (a glutamatergic neuron label) and Per-Gal4 driver lines ( Figure 1O, Table 1), while the ALad1 lineage is not. Conversely, the adult ALad1 lineage is labelled by the Cha7.4-Gal4 (a cholinergic neuron label) and GH146-Gal4 lines, while the adult LALv1 lineage is not ( Figure 1P, Table 1). MARCM clonal labelling of the neurons in the adult ALad1 lineage using GH146-Gal4 or Cha7.4-Gal4 drivers shows that their cell bodies are positioned dorsal to the adult antennal lobe, their dendrites innervate the antennal lobe glomeruli and their axons exit the lobe via the medial antennal lobe tract ( Figure 1M,N,P). The axons then project dorso-posteriorly to innervate the calyx of the mushroom body and the lateral horn (Video 1) (Ito et al., 2013;Wong et al., 2013;Yu et al., 2013). MARCM clonal labelling of the neurons in the adult LALv1 lineage using Per-Gal4 or OK371-Gal4 drivers shows that their cell bodies are positioned ventral to the adult antennal lobe and their axons project into the loVM tract, which courses posteriorly behind the adult antennal lobe, then loops dorsally creating the prominent LEp fascicle, which innervates the central complex neuropiles and the lateral accessory lobe (Video 2) ( Figure 1O) (Spindler and Hartenstein, 2011;Ito et al., 2013;Lovick et al., 2013;Wong et al., 2013;Yu et al., 2013). As in the corresponding larval lineages, the adult LALv1 neuroblast lineage expresses Otd while the adult ALad1 neuroblast lineage does not (insets in Figure 1O,P). It is noteworthy that in the adult brain the cell bodies of the LALv1 neurons are located ventral to the adult antennal lobe, whereas in the larval brain the position of the cell bodies is dorsal to the larval antennal lobe (compare Figure 1A,M). This change in cell body position of the LALv1 lineage occurs as a consequence of the morphogenetic changes associated with the de novo development of the adult antennal lobe (Jefferis et al., 2004;Spindler and Hartenstein, 2011;Lovick et al., 2013;Wong et al., 2013). 
In summary, the LALv1 neuroblast and its progeny, which innervate the central complex, express otd throughout brain development as well as in the adult brain, while the ALad1 neuroblast and its progeny neurons, which innervate the antennal lobe, do not. Loss of Otd from the LALv1 neuroblast results in lineage identity transformation As otd is expressed in all of the cells of the central complex lineage-the neuroblast and the postmitotic neurons-we tested its possible function in both these cell types. In order to do this, we used MARCM-based clonal mutational methods to generate GFP-labelled otd −/− clones in the LALv1 lineage in an otherwise heterozygous background (Lee and Luo, 2001). Using this technique, it is possible to genetically inactivate otd in the postmitotic neurons, the GMC, or the neuroblast, thus allowing us to assess its role in each of these cell types (see schematic in Figure 2A). We generated such otd −/− clones early in larval development and analyzed them in the adult brain. Using this technique, we first investigated a possible requirement of otd in the postmitotic neurons of the central complex lineage. In these experiments, in which we used the OK371-Gal4 and Per-Gal4 driver lines to label the MARCM clones, we obtained a total of seven wild-type single cell clones and 11 otd −/− single cell clones. Although we have not dated the birth of these clones precisely (matched the time of clone generation), the wild-type single cell clones that we obtained in our experiments were very similar to those described previously (Yu et al., 2009a). Six of these single cell wild-type clones consisted of neurons that innervated both the lateral accessory lobe as well as one of the noduli of the central complex ( Figure 2B) and one clone only innervated the lateral accessory lobe ( Figure 2C). All of the 11 otd −/− single cell clones we recovered also displayed a similar neuroanatomy. Their cell bodies were located ventral to the antennal lobe, their axons coursed through the loVM and LEp tracts and they all innervated the lateral accessory lobe and one of the noduli of the central complex ( Figure 2G,H). Thus, loss of otd function from the postmitotic neurons did not result in any gross defects in the neuroanatomy of these neurons. It is however possible, that there are fine-scale changes in the arborisation of these neurons within the lateral accessory lobe and the central complex that we were unable to identify. It is also possible that otd function is required in the GMC for the targeting of the postmitotic neurons (see schematic in Figure 2A). However, in our experiments, we never obtained two cell GMC clones in order to be able to address this possibility. We then asked if otd might be required in the neuroblast of the LALv1 lineage for its proper development. In order to test this, we inactivated otd in the neuroblast during early larval development and analyzed the neuroanatomy of the resultant labelled wild-type and otd −/− mutant neurons in the adult brain. In these experiments, in which we used the OK371-Gal4 and Per-Gal4 driver lines to label the MARCM clones, we recovered 6 and 14 wild-type clones, respectively. As expected, all the wild-type neuroblast clones displayed the neuroanatomy of the central complex lineage as described above (ventrally position cell bodies, axon projection via the loVM and LEp tracts and innervations in the lateral accessory lobe and central complex. Figure 3B-E,J-M). 
However, when we generated otd −/− neuroblast clones in the LALv1 lineage (identifiable by the loss of Otd staining in the corresponding cell cluster ventral to the antennal lobe; white arrowhead in Figure 3G,O), neither of these drivers labelled the mutant LALv1 lineage ( Figure 3F-I,N-Q). In order to investigate the neuroanatomy of the otd −/− LALv1 lineage further, we utilized the ubiquitously expressed Tub-Gal4 driver to label neuroblast clones and recovered 19 WT and 37 otd −/− neuroblast MARCM clones in the LALv1 lineage. While the wild-type neurons displayed all the features of the LALv1 lineage described above (Figure 4-figure supplement 1A), the otd −/− clones did not.
[Figure 1 legend, panels C-P: Otd is expressed by the LALv1 neuroblast but not the ALad1 neuroblast; anterior and lateral 3D reconstructions show the LALv1 (green) and ALad1 (magenta) lineages in the adult brain, separated by the antennal lobe; WT clones show the LALv1 lineage innervating the lateral accessory lobe and central complex and the ALad1 lineage innervating the mushroom body calyx and lateral horn; scale bars 20 µm and 50 µm.]
Taken together, these findings indicate that the mutant LALv1 neurons have acquired a transformed identity. Moreover, these data suggest that this transformed identity has features characteristic of antennal lobe projection neurons.
Determining the lineage identity of the transformed neurons
As the neuroanatomy of the otd −/− LALv1 lineage shows such a dramatic transformation, we wanted to confirm that the transformed neurons did indeed belong to the LALv1 lineage. We used three different approaches to determine that it was indeed the LALv1 lineage that transformed into an antennal lobe-like lineage upon the loss of otd from its neuroblast. First, we showed that the appearance of the transformed otd −/− LALv1 lineage corresponds to the appearance of an extra antennal lobe lineage. Second, we showed that the transformed otd −/− LALv1 lineage results in the corresponding loss of the LEp tract specific to the wild-type LALv1 lineage. Third, we used an independent molecular marker to unambiguously identify the wild-type and otd −/− LALv1 lineage.
An extra antennal lobe lineage
In the wild-type adult brain, the GH146-Gal4 driver specifically labels a subset of the antennal lobe projection interneurons that derive from three identified neuroblast lineages, ALad1, ALl1, and ALv1 (Stocker et al., 1997;Jefferis et al., 2001). If the transformed identity of the otd −/− LALv1 lineage does indeed correspond to that of an antennal lobe lineage, it should also express the antennal lobe-specific enhancer-Gal4 driver line, GH146.
To investigate this, we generated wild-type and otd −/− MARCM clones in the LALv1 lineage and used the GH146-Gal4 enhancer line to label the recovered clones. As expected, the only wild-type neuroblast clones recovered corresponded to the ALad1, ALl1, and ALv1 lineages; GH146-Gal4 labelled wild-type neuroblast clones of the LALv1 lineage were never recovered in these experiments. However, when otd −/− neuroblast clones were induced in the LALv1 lineage, we recovered 39 examples of a fourth type of GH146-Gal4-labelled neuroblast clone. As the expression of GH146-Gal4 is much more restricted that Tubulin-Gal4, this also provided us with the opportunity to describe the otd −/− LALv1 lineage in more detail. The neurons in this type of mutant clone all exhibited extensive dendritic innervation of the antennal lobe (39/39; Figure 4D-G). This innervation was largely multiglomerular ( Figure 4D-G), although not all glomeruli were always innervated (yellow arrowhead in Figure 4E) and innervation tended to be more intense in the posterior parts of antennal lobe ( Figure 4G). Furthermore, the axons of this clone projected via the medial antennal lobe tract towards the protocerebrum (37/39; white arrow in Figure 4B,G,H,K), where they innervated the calyx of the mushroom bodies (39/39; Figure 4I,L) and then turned laterally to innervate the lateral horn (39/39; Figure 4J,M). The cell body position of this type of otd −/− clone as well as the entry point of its tract into the antennal lobe (both ventro-medial) did not correspond to any of the known GH146-Gal4 labelled antennal lobe lineages (Ito et al., 2013;Yu et al., 2013). Importantly, this is also true for the ALv1 lineage, whose cell bodies are also located ventral to the antennal lobe; despite the ventral cell body position of the LALv1 and ALv1 lineages, their overall neuroanatomy is very different from each other. The neurites of the otd −/− LALv1 lineage enter the lobe medially (magenta asterisk in Figure 5A,D,H) while the neurites of the ALv1 (as well as the otd −/− ALv1) enter it laterally (yellow asterisk in Figure 5A,D,H). Moreover, while the otd −/− LALv1 lineage uses the medial antennal lobe tract (magenta arrow in Figure 5A,D,H), the ALv1 (as well as the otd −/− ALv1) uses the mediolateral antennal lobe tract (yellow arrow in Figure 5A,C). Finally, while the otd −/− LALv1 lineage first innervates the calyx of the mushroom body and then the lateral horn, the GH146-labelled neurons of ALv1 (as well as the otd −/− ALv1) largely innervate only the lateral horn. The otd −/− LALv1 lineage was also recovered along with each of the three antennal lobe lineages, and in one case all three known GH146-Gal4 labelled antennal lobe lineages (ALad1, ALl1 and ALv1) were recovered along with it, resulting in four distinct GH146-Gal4 labelled lineages in the brain ( Figure 5A-C). We further confirmed this using the GH146-QF driver, which, like the GH146-Gal4 driver labels the ALad1, ALl1, and the ALv1 lineages (Potter et al., 2010). In this background, we used the pan-neuroblast-specific Insc-Gal4 driver to down regulate Otd expression early in all neuroblasts. As expected, in control brains, the GH146-QF driver labelled a total of three antennal lobe lineages ALad1, ALl1, and ALv1 (magenta dotted lines in Figure 5D-G). In contrast, in brains where otd was efficiently down regulated in all neuroblasts, the GH146-QF driver labelled an additional fourth projection interneuron lineage in addition to the three clusters normally seen ( Figure 5H-K). 
This confirms that the transformed otd −/− LALv1 lineage is distinct from the other antennal lobe lineages and results in the addition of an extra projection interneuron lineage in the antennal lobe. Taken together, these data suggest that the loss of otd from the LALv1 neuroblast results in the addition of an extra antennal lobe lineage. Loss of the LEp tract If the loss of otd from the neuroblast of the LALv1 lineage does indeed result in its neuroanatomical transformation into a lineage of a different fate, then this should correspond to the loss of the LALv1specific axon tract (LEp) in the brain. To investigate this, we first characterized the axon tract of the wild-type LALv1 lineage, which is readily identifiable in the adult brain based on Neuroglian immunolabelling patterns . In wild-type brains, Neuroglian immunolabelling shows the loVM tract (cyan arrow, left hemisphere in Figure 6C,D) and the characteristic LEp tract of the LALv1 lineage, which projects around the antennal lobes and towards the central complex (cyan arrowhead, left hemisphere in Figure 6C,D). In brains in which one LALv1 neuroblast clone is mutant, the brain hemisphere that contains the otd −/− LALv1 neuroblast clone (identified by the loss of Otd immunolabelling; yellow dotted lines in Figure 6B′), still shows the loVM tract (cyan arrow, right hemisphere in Figure 6C,D), which is shared by other lineages. However, the LALv1-specific LEp tract, which projects towards the central complex is entirely missing (cyan arrowhead, right hemisphere in Figure 6C,D). This shows that loss of otd from the LALv1 neuroblast corresponds to the loss of the LEp tract of the wild-type LALv1 lineage, providing support that loss of otd from the LALv1 neuroblast transforms it into a lineage of different identity. An independent molecular marker To investigate the identity of the mutant LALv1 lineage further, we identified Acj6 as a molecular marker that could unambiguously identify the LALv1 lineage in wild-type and otd mutants. Acj6 is a POU transcription factor that is known to be expressed in the ALad1 lineage and in a subset of the ALl1-derived projection interneurons of the wild-type brain ( Figure 7B). In addition to these two cell clusters, we observed a third Acj6 positive cell cluster ventral to the antennal lobe of the wildtype brain (cyan arrowhead in Figure 7B). MARCM clonal labelling using the Per-Gal4 enhancer line together with anti-Acj6 and anti-Otd antibodies unambiguously identified this cluster as the LALv1 lineage (cyan arrowhead in Figure 7A-D). Importantly, this cell cluster continues to express Acj6 immunoreactivity following mutational inactivation of otd in the LALv1 lineage ( Figure 7F,G). Thus, Acj6 provides a molecular marker for the identification of the LALv1 lineage independent of Otd expression in wild-type and mutant clones. The analysis of otd −/− LALv1 MARCM clones identified by Acj6 immunolabelling and co-labelled by GH146-Gal4 shows that the fourth GH146-positive neuroblast clone described above does indeed correspond to the mutant LALv1 lineage ( Figure 7E,H). This confirms that upon the loss of otd from the neuroblast, the neural progeny of the LALv1 lineage transform into an antennal lobe fate. Interestingly, the cell body position of the transformed otd −/− LALv1 neurons varied somewhat in the 76 otd −/− neuroblast clones obtained (from both the Tub-Gal4 and GH146-Gal4 experiments). 
In most cases (54/76) the cell body position of the otd −/− LALv1 neuroblast clones remained ventral to the antennal lobe, similar to the cell body position of the wild-type central complex lineage. Thus, in most cases, in terms of cell body position the neuroanatomy did not transform towards the antennal lobe lineage position (antero-dorsal to the adult antennal lobe). Occasionally, however, the cell bodies were shifted closer to the midline (17/76), and in a few rare cases the otd −/− LALv1 neuronal cell bodies were located antero-dorsal to the antennal lobe (5/76), a position similar to the wild-type antennal lobe lineage (see Video 3). This suggests that loss of otd in LALv1 neurons not only consistently transforms their axonal and dendritic terminals towards the antennal lobe lineage neuroanatomy but, in some cases, also relocates their cell bodies to resemble the ALad1 antennal lobe projection neuron lineage. In summary, loss of otd from the LALv1 neuroblast results in a transformation of its progeny neurons, from a wild-type central complex identity to an antennal lobe projection neuron identity.
Overexpression of Otd in the antennal lobe lineage results in a partial reciprocal transformation
We next asked whether otd gain-of-function in the antennal lobe lineage, ALad1, might result in a reciprocal anatomical transformation of this lineage into one resembling the wild-type central complex lineage, LALv1. We used the MARCM system to misexpress the full-length otd coding sequence in the antennal lobe neuroblast clones using a Tub-Gal4 driver. In all Otd misexpression clones of the antennal lobe lineage (15/15), we found a partial transformation of this lineage towards the central complex identity ( Figure 8). All 15 clones comprised a few cells that retained neuroanatomical features of the wild-type antennal lobe lineage such as antero-dorsal cell body position, innervation of the antennal lobe, and axonal projections via the medial antennal lobe tract (yellow asterisk and arrowhead in Figure 8A-C). However, most of the cells in the clones displayed neuroanatomical features of the central complex lineage. These cell bodies were positioned ventral to the adult antennal lobe, they projected their axons via the loVM and LEp tracts and they innervated the lateral accessory lobe (magenta asterisk and white arrowheads in Figure 8A-C). Thus, otd gain-of-function was able to cause a partial, albeit incomplete, transformation of the antennal lobe lineage into a central complex-like lineage.
Specific molecular changes occur in the transformed otd −/− LALv1 lineage
The innervation pattern of the otd −/− LALv1 neuroblast lineage strikingly resembled that of the antennal lobe lineage, ALad1. Furthermore, the projection neuron-specific GH146 driver, which does not label the wild-type LALv1 lineage, labelled the transformed neurons (see above), suggesting that they may also have acquired the molecular properties of antennal lobe projection neurons.
[Figure 5 legend, panels A-K: GH146-labelled brains showing the three known antennal lobe lineages (ALad1, ALl1, ALv1) plus the additional, transformed LALv1 lineage; control versus pan-neuroblast Otd knock-down brains; the points of entry and axon tracts of the ALv1 and LALv1 lineages are distinct. Genotypes: FRT19A,otd YH13 /FRT19A,Tub-Gal80,hsFLP; GH146-Gal4,UAS-mCD8::GFP/+ and UAS-dicer/+;Insc-Gal4/UAS-miRNA-otd-1;GH146-QF,QUAS-mtdTomato-HA/+. Scale bars 50 µm.]
To test if this is indeed the case, we analyzed the activity of select molecular markers (the Gal4 driver lines described above) in the central complex lineage (summarized in Table 1). In order to test the activity of these enhancers in the otd −/− LALv1 lineage, we generated otd −/− MARCM clones and used these selected Gal4 lines to label the mutant lineage (see schematic in Figure 3A). Finally, we also tested the expression of the homeodomain transcription factor LIM1, which is expressed in the wild-type LALv1 lineage and is not expressed in the wild-type ALad1 lineage. As described above, when we generated wild-type MARCM clones and used either OK371-Gal4 or Per-Gal4 driver lines to label the clones, we found that both lines were able to drive reporter expression in the wild-type LALv1 lineage ( Figure 3B-E,J-M). In contrast, neither of these driver lines was able to drive reporter expression in the otd −/− LALv1 lineage ( Figure 3F-I,N-Q). This suggests that the OK371-Gal4 and Per-Gal4 driver lines are suppressed in the transformed otd −/− LALv1 neurons. However, in these experiments, the transformed neurons were not labelled at all because the activity of the drivers was suppressed in the otd −/− LALv1 lineage. In order to confirm this finding, we decided to positively label the otd −/− LALv1 neurons and in this background assay the activity of the Gal4 drivers. In order to do this, we utilized the dual MARCM method (Lai and Lee, 2006), which uses two independent binary expression systems (Gal4-UAS and LexA-LexA operator) to label the MARCM clones. In these experiments, we used the GH146-LexA driver to label the otd −/− LALv1 neurons positively and combined it with OK371-Gal4 and Per-Gal4 driver lines to assay their activities. We first tested if the GH146-LexA driver, like the GH146-Gal4 and the GH146-QF driver, was active in the otd −/− LALv1 neurons. Thus, in the first set of dual MARCM experiments, we combined the GH146-LexA with Tubulin-Gal4 ( Figure 9A-D). Under these conditions, when we generated otd −/− neuroblast clones in the LALv1 lineage, we found that the transformed neurons were labelled by both the Tubulin-Gal4 (magenta dotted lines and inset in Figure 9A) and GH146-LexA (magenta dotted lines and inset in Figure 9B) drivers, confirming that GH146-LexA, like GH146-Gal4 and the GH146-QF, is active in the otd −/− LALv1 neurons and thus able to label it. In the following dual MARCM experiments, we used the GH146-LexA to positively label the transformed otd −/− LALv1 neurons and combined it with either OK371-Gal4 or Per-Gal4 driver lines to assay for their activity in the transformed neurons.
In both cases, while the GH146-LexA positively labelled the transformed otd −/− LALv1 neurons (magenta dotted lines and insets in Figure 9E,I), neither of the Gal4 driver lines was able to drive reporter expression in these mutant neurons (magenta dotted lines and insets in Figure 9F,J). This indicates that enhancers that are normally active in the wild-type central complex lineage and inactive in the antennal lobe lineage become suppressed in the transformed otd −/− LALv1 lineage. Might the converse be true: do enhancers that are normally inactive in the wild-type central complex lineage and active in the antennal lobe lineage become activated in the transformed otd −/− LALv1 lineage? To test this, we used the Cha7.4-Gal4 driver in MARCM clonal experiments. As expected, we never recovered Cha7.4-Gal4-labelled LALv1 neuroblast clones in wild-type MARCM experiments (data not shown). In contrast, when we generated otd −/− neuroblast clones in the LALv1 lineage (identifiable by the loss of Otd staining ventral to the antennal lobe; magenta dotted lines in Figure 9N), the Cha7.4-Gal4 driver robustly drove reporter expression in the transformed otd −/− LALv1 lineage (magenta dotted lines and insets in Figure 9M-P). This suggests that the Cha7.4-Gal4 becomes activated in the transformed otd −/− LALv1 lineage. Furthermore, the concomitant loss of the OK371-Gal4 driver (a putative glutamatergic label) and ectopic activation of the Cha7.4-Gal4 driver (a putative cholinergic label) in the otd −/− LALv1 lineage ( Figure 10J-L) further support this transformation.
[Figure 6 legend: Neuroglian immunolabelling of the loVM and LEp tracts in a brain carrying a wild-type LALv1 lineage in one hemisphere and an otd −/− LALv1 clone in the other; the LALv1-specific LEp tract is missing on the mutant side, while a new tract from the mutant lineage innervates the antennal lobe. Genotype: FRT19A, otd YH13 /FRT19A,Tub-Gal80,hsFLP; GH146-Gal4,UAS-mCD8::GFP/+.]
Interestingly, the LN1-Gal4 driver, which is inactive in both the wild-type LALv1 and ALad1 lineages, remains inactive in the otd −/− LALv1 lineage (data not shown). Taken together, these findings indicate that the otd −/− LALv1 lineage acquires the molecular signature of a wild-type antennal lobe lineage (see Table 1), implying that otd loss-of-function in the LALv1 neuroblast lineage results in a molecular as well as an anatomical transformation of this lineage into one resembling the ALad1 lineage.
The otd −/− transformed LALv1 lineage establishes functional connectivity in the antennal lobe and can respond to odour stimulation
Given the extent of the anatomical and molecular transformations seen in the otd −/− LALv1 neuroblast lineage, might the neurons in the transformed lineage receive functional input from olfactory sensory neurons?
To address this question, we specifically expressed the calcium sensor G-CaMP3 in the transformed otd −/− LALv1 lineage by MARCM clonal labeling (Wang et al., 2003) and used twophoton microscopy to monitor calcium activity in the dendrites of these transformed neurons in the antennal lobe. We first tested if the transformed otd −/− LALv1 neurons established functional connectivity with other neurons in the antennal lobe. Typically, olfactory sensory neurons bring odour information to the antennal lobe via the antennal nerve. Here, they make synaptic connections with projection neurons and local interneurons; projection neurons, take the odour information to higher brain centres (mushroom body and lateral horn) and local interneurons process the odour information locally in the antennal lobe. We reasoned that if the transformed neurons did made functional connections within the antennal lobe they would be postsynaptic to the olfactory sensory neurons and would be activated upon the activation of the antennal nerve. We therefore electrically stimulated the antennal nerve while simultaneously monitoring calcium activity from the transformed neurons. We found that electrical stimulation of the antennal nerve, which contains the axons of the olfactory sensory neurons, resulted in an increase in calcium activity in the dendrites of the otd −/− LALv1 lineage. Moreover, a greater number of electrical stimulus pulses applied to the antennal nerve resulted in a corresponding increase in the amplitude of the calcium signal recorded in the mutant transformed neurons ( Figure 11A-C). These results demonstrate that the transformed otd −/− LALv1 neurons were able to make functional connections in the antennal lobe and were able to receive functional input from sensory afferents. We further investigated if the transformed otd −/− LALv1 neurons could respond to specific odour stimuli. To do this, we performed calcium imaging experiments similar to those described above but replaced the electrical stimulation of the antennal nerve with odour stimulation of the intact antenna (olfactory sensory neurons). Four different odorants (isoamyl acetate, ethyl butyrate, 3-octonal, 3-heptanol) were selected based on their ability to excite all or some of the VM2, DM2 and DM3 glomeruli (Dacks et al., 2009;Semmelhack and Wang, 2009) in the antennal lobe, which are innervated by the otd −/− LALv1 neurons. Imaging calcium activity from the dendrites of the otd −/− LALv1 neurons in these glomeruli in response to the selected odours show that each of the four odorants evoked a unique pattern of glomerular activity. Isoamyl acetate excited all three glomeruli, whereas ethyl butyrate excited only the VM2 and DM2 glomeruli. 3-octanol and 3-heptanol, however, excited just the DM2 and VM2 glomeruli, respectively ( Figure 11D,E). These patterns of glomerular activation in the otd −/− LALv1 neurons are strikingly similar to that of wildtype antennal lobe olfactory projection neurons (Wang et al., 2003;Dacks et al., 2009;Semmelhack and Wang, 2009). Taken together, these functional studies indicate that the otd −/− LALv1 neurons receive specific input from olfactory sensory neurons that results in glomerulus-specific activation patterns to different odorants. 
This in turn implies that otd loss-of-function in a single neuroblast leads to a remarkably extensive reconfiguration of the macrocircuitry in the brain, which includes anatomical, molecular as well as functional transformation of neurons in the central complex lineage into neurons with properties of olfactory projection neurons. Discussion During neuronal development in both vertebrates and invertebrates neural progenitors use spatial and temporal information to generate diverse neuronal subtypes. For example, in Drosophila, unique spatial information imparts heterogeneity to the neuroblast pool and then temporal cues acting in the Research article neuroblasts generate further diversity. In this way diverse neuronal subtypes are produced by the neuroblast lineages, which consequently create the diverse functional macrocircuitry of the brain. In addition, the neuroblasts in the central brain of Drosophila are characterized by the expression of a specific combination of cell intrinsic determinants (Urbach and Technau, 2003a) that are thought to act in the specification of unique neuroblast identity and hence control lineage-specific neuronal cell fate. In this study, we show that such intrinsic determinants present in the neuroblast are essential for the proper specification of the entire lineage that derives from the neuroblast. Our data demonstrate that a remarkable rewiring of the functional macrocircuitry of the brain occurs due to the manipulation of one intrinsic factor, otd, acting in an identified neuroblast during development. This transformation affects molecular properties, anatomical projection patterns (dendritic and axonal), and functional inputs in all of the neurons in the lineage (summarized in Figure 12) such that a central complex lineage is transformed into a functional olfactory projection neuron lineage. This otddependent, lineage-specific respecification of interneurons has implications for our understanding of the development and evolution of the circuitry in the brain. The ability of a neuroblast lineage to transform completely into another upon the loss of a single intrinsic determinant suggests that many of the other putative members of a potential neural identity code might be shared between these lineages. The observation that the neuroblasts of the central complex and antennal lobe lineages develop in such close spatial proximity to each other during early development suggests that these two neuroblasts may experience similar spatial cues as they develop on the neuroectoderm. If this is the case, then by manipulating a single differentially expressed factor, otd, we might have been able to uncover the underlying similarity in the intrinsic spatial code between the two neuroblast lineages. Importantly, this neuroblast-specific transformation of lineage identity resulted in an alteration of the brain's circuitry such that an entire neuroanatomical unit of projection to the central complex was lacking while a novel and functional ectopic unit of projection was added to the antennal lobe. Implicit in these findings are the notions of 'coded' and 'soft' properties of circuit assembly. On the one hand, the neuroanatomical and molecular transformation described above demonstrates that circuitry in the brain is 'hard-wired' or 'coded' by the spatially encoded intrinsic factors-the presence or absence of otd from the central complex neuroblast determines the identity of the resultant neurons. 
On the other hand, the resulting functional transformation suggests that circuit assembly involves substantial 'soft-wiring': the olfactory sensory neurons and interneurons indigenous to the antennal lobe are able to make functional connections with the extraneous transformed otd −/− LALv1 neurons, which they are normally not 'hard-wired' to connect with. Thus, while genetically encoded properties might 'lock' lineages into particular circuit states (central complex or antennal lobe), it is their 'soft' properties (developmental plasticity) that allow circuits to functionally incorporate changes as dramatic as extraneous neurons. As both these wiring strategies operate simultaneously, the brain retains a huge potential for the evolvability of functional circuits. Many interesting questions emerge as a result of our findings. How does the developmental plasticity of a functional circuit support these large-scale rearrangements? Do developing circuits acquire a propensity for exuberant connectivities, or do they try and maintain a homeostasis in their connections and therefore make compensatory changes in the number of synapses with their normal partners? It has been shown in some cases that neuronal activity can mediate such 'soft' properties of synaptic connections (Tripodi et al., 2008;Singh et al., 2010). It will be interesting to test if this is also the case for the transformed neurons and the olfactory circuit. Finally, do all parts of the brain display such striking developmental plasticity such that they can be remodelled to this extent and incorporate extraneous neurons into existing circuitry? The ability to change the functional macrocircuitry of the brain through changes in the expression of a single transcription factor in a single neuroblast lineage may provide a simple paradigm for large-scale modification of brain connectivity during evolution. The otd −/− transformed LALv1 lineage functionally integrates into the antennal lobe circuitry and participates in olfactory information processing. This suggests that a functional rewiring of the olfactory circuitry can occur due to the addition of an entire neuroblast lineage to the normal olfactory circuit. In more general terms, this type of lineage-specific rewiring might fuel the evolutionary modification of neural circuitry in the brain. It provides an elegant and simple solution to the evolution of complex circuitry in that a 'microevolutionary' molecular change (changing the expression of one gene in one cell) can have 'macroevolutionary' consequences on the brain's circuitry (changing an entire macrocircuit or an information processing module). This simple strategy suggests that large-scale changes in the brain's wiring do not need to come about through many minor, sequentially accumulating changes at the cellular level. Instead, large-scale wiring changes can occur in response to remarkably simple changes in gene expression in single cells.
Materials and methods
Fly strains and genetics
Fly stocks were obtained from the Bloomington Stock Centre (IN, USA) and, unless otherwise stated, were grown on cornmeal medium at 25°C. UAS-miRNA otd-1 was kindly provided by Henry Sun, Taiwan. The full-length otd cDNA clone RE-10280 (pBluescript backbone) was purchased from DGRC. Confirmed plasmid DNA (pJFRC-10xUAS-Otd FL with an attB integration site) was microinjected (2 μg/μl in 1× microinjection buffer) into Drosophila embryos that contain PhiC31 integrase and a selected attP docking site on the second chromosome.
Further crossing of G0 flies, screening of the transformants and balancing of insertions performed at the Transgenic Fly Facility at C-CAMP facility at NCBS campus, Bangalore, India. Immunohistochemistry and imaging Brains were dissected in 1× PBS and fixed in freshly prepared 4% PFA for 30 min at room temperature. The fixative was removed, and the brains were washed with blocking solution (1× PBS with 0.3% TritonX and 0.1% BSA). Primary antibodies were diluted in blocking solution. The samples were incubated in a moist chamber on horizontal shaker at 4°C for 24 hr. Samples were then washed with 0.3% PTX (1× PBS with 0.3% TritonX) and secondary antibody diluted in 0.3% PTX was added. The samples were incubated in this at 4°C in a moist chamber on horizontal shaker overnight, after which they were washed and mounted in vectashield on glass slides. All samples were imaged on Olympus Fluoview (FV1000) laser scanning confocal microscope. Optical sections were acquired at 1-µm intervals with a picture size of 512 × 512 pixels. Images were digitally processed using Adobe Photoshop CS3. 3-D reconstructions were made using Amira. Functional imaging Brains of clonal animals were dissected in Ca 2+ -free AHL saline, which contains 108 mM NaCl, 5 mM KCl, 4 mM NaHCO 3 , 1 mM NaH 2 PO 4 , 8.2 mM MgCl 2 , 5 mM HEPES, 5 mM trehalose, and 10 mM sucrose, with pH adjusted to 7.4. Live brains that contain only the otd null transformed neurons were selected for imaging experiments. Two-photon calcium imaging and antennal nerve stimulation (electrical) Two-photon calcium imaging was performed as described previously (Root et al., 2008). The antennal nerves were cut from the base of the antennae. The brain preparation was then pinned down on a Sylgard-coated petri dish with AHL saline containing 2 mM CaCl 2 . The antennal nerve was stimulated electrically with a glass suction electrode, at 1 ms in duration, 10 V in amplitude and 100 Hz in frequency (S48 Grass stimulator). The response of the transformed neurons was monitored by a custom-built two-photon microscope. Excitation wavelength was 930 nm. Images were captured at 4 frames/s. Two-photon calcium imaging and odour stimulation For odour stimulation, the fly brain was dissected leaving the antennae intact and embedded in agarose containing AHL saline with 2 mM CaCl 2 . The agarose gel was removed from the antennae and a piece of Kimwipes was used to dry the antennae. A glass coverslip was placed on top of the brain preparation for imaging. Odour delivery was controlled by solenoid valves described previously (Root et al., 2008). Odour vapour was obtained by placing a filter paper containing 10 μl of an odorant in a 100-ml bottle. Mixing an air stream with an odour stream at different flow ratios was used to deliver odorants at a specific concentration. Isoamyl acetate and ethyl butyrate were delivered at 1% (odorant at 10 ml/min and air at 990 ml/min), whereas 3-heptanol and 3-octanol were delivered at 2.5% saturated vapor pressure. Each odorant was applied for a duration of 2 s. Images were acquired for 20 s at a rate of 4 frames/s and a resolution of 128 × 128 pixels. At the end of each experiment, an image stack was collected at a resolution of 512 × 512 pixels for glomerulus identification. The data were analyzed and plotted using Igor Pro 6.2 (Wavemetrics). The peak response of stimulation (ΔF/F) was shown as mean ± S.E.M.
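To make the response quantification concrete, here is a minimal sketch of the ΔF/F and peak-response calculation described above (mean ± s.e.m. of peak ΔF/F across trials). The 4 frames/s rate and 20 s acquisition come from the methods; the trial count, odour onset time, and synthetic traces are illustrative assumptions, and the published analysis was performed in Igor Pro rather than Python.

```python
import numpy as np
from scipy.stats import sem

FRAME_RATE = 4.0      # frames per second, as in the two-photon acquisition above
N_FRAMES = 80         # 20 s acquisition at 4 frames/s
ODOUR_ONSET_S = 5.0   # assumed odour onset within the sweep (illustrative)

def delta_f_over_f(traces, onset_frame):
    """traces: (n_trials, n_frames) raw fluorescence from one glomerular ROI."""
    f0 = traces[:, :onset_frame].mean(axis=1, keepdims=True)  # pre-stimulus baseline F0
    return (traces - f0) / f0

# Synthetic example data standing in for real ROI traces.
rng = np.random.default_rng(0)
raw = 100.0 + rng.normal(0.0, 2.0, size=(6, N_FRAMES))
raw[:, 20:28] += 30.0                                   # simulated odour-evoked transient

onset = int(ODOUR_ONSET_S * FRAME_RATE)
dff = delta_f_over_f(raw, onset)
peaks = dff[:, onset:].max(axis=1)                      # peak response per trial

print(f"peak dF/F = {peaks.mean():.3f} +/- {sem(peaks):.3f} (mean +/- s.e.m., n = {len(peaks)})")
```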
2016-05-12T22:15:10.714Z
2014-12-03T00:00:00.000
{ "year": 2014, "sha1": "b3e743b5a098131a21252985bf7848cd1b3ccc98", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7554/elife.04407", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7d0f21aadd3125c6185f2ffcf6ceabac4de9a261", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
21855411
pes2o/s2orc
v3-fos-license
Chimpanzee adenoviral vectors as vaccines for outbreak pathogens ABSTRACT The 2014–15 Ebola outbreak in West Africa highlighted the potential for large disease outbreaks caused by emerging pathogens and has generated considerable focus on preparedness for future epidemics. Here we discuss drivers, strategies and practical considerations for developing vaccines against outbreak pathogens. Chimpanzee adenoviral (ChAd) vectors have been developed as vaccine candidates for multiple infectious diseases and prostate cancer. ChAd vectors are safe and induce antigen-specific cellular and humoral immunity in all age groups, as well as circumventing the problem of pre-existing immunity encountered with human Ad vectors. For these reasons, such viral vectors provide an attractive platform for stockpiling vaccines for emergency deployment in response to a threatened outbreak of an emerging pathogen. Work is already underway to develop vaccines against a number of other outbreak pathogens and we will also review progress on these approaches here, particularly for Lassa fever, Nipah and MERS. Since the first documented report of the use of an engineered virus to induce a protective immune response, 1 clinical testing of numerous potential vaccine vectors has been undertaken against a broad range of diseases. Over many years of preclinical development, a series of new vector or vaccination regimens have demonstrated improved immunogenicity: in particular, antigen-specific antibody and/or T cell responses have been increased through iterative rounds of vector vaccine development. This is well illustrated by the development of malaria vaccines against P. falciparum encoding the ME-TRAP antigen, where vaccine-induced T cell responses have increased from 44 IFN-g spot-forming cells per million peripheral blood mononuclear cells (SFC) after DNA vaccination, to 850 SFC after a single vaccination with a simian adenovirus-vectored vaccine (Table 1). Importantly, viral vectors have not shown age-limitations in their use, with comparable T cell responses observed following vaccination with a modified vaccinia Ankara (MVA) vector expressing the influenza A antigens NPCM1 in healthy older adults (aged 50-60, 60-70, 80C years) compared to a younger adult population (aged 18-55 years). 2 In addition, age de-escalation studies of chimpanzee adenovirus 63 (ChAd63) ME-TRAP in West-African children have demonstrated potent T cell and antibody responses in immunised children as young as 1 week of age. 3,4 The urgent need for a treatment or vaccine intervention during the West-African Ebola outbreak saw five vectored vaccines tested concurrently in Phase I trials; three non-replicating adenoviruses of different serotypes, MVA and Vesicular stomatitis virus (VSV), all encoding the ebolavirus glycoprotein (GP). All vaccines were primarily tested for their ability to induce high levels of antibodies against GP, as this correlated with protection observed in non-human primates, although cell-mediated immunity has also been shown to play a protective role with some vectors. 5,6 While it is not straightforward to directly compare antibody levels induced by the different vectors due to the range of assays employed by different groups, responses following a single vaccination with ChAd3, Ad26 and rVSV were detectable within 28 days, with a very significant enhancement in antibody responses observed when adenoviral prime vaccinations were followed by an MVA boost. 
7,8 Humoral immunogenicity induced by various viral vectors encoding Ebolavirus (EBOV) glycoprotein is summarised in Table 2. Although initially developed as a platform for inducing T cell responses, single vaccinations with ChAd63 have demonstrated good antibody induction against malaria antigens, which could be enhanced by boosting with an MVA. [9][10][11][12] Ad-MVA regimens induced IgG responses that were maintained for at least 180 days after immunisation. 7,13 Prime-boost vaccination with adenoviral and MVA vectored vaccines is now well-established as a safe and robust strategy for inducing both cellular and humoral immunity against malaria and ebolavirus, with the addition of an MVA boost increasing both the magnitude and the breadth of the T cell response (Fig. 1 and Table 1). 13,14 Either vaccine can act as prime or boost, as demonstrated in a novel Phase I Ebola vaccine trial with AdHu26 and the multivalent MVA BN-Filo vaccines. 7 Although the highest T cell and antibody responses to ChAd3 MVA were observed with a four to eight-week interval between prime and boost, reducing the interval to one week still induced T cell responses comparable to the eight-week interval. However, the shorter prime-boost interval did lead to a reduction in antibody responses, including neutralising antibodies. 13 MVA vectors have been successfully used to boost responses in adults induced by vaccination with BCG in infancy, demonstrating the potential of the MVA vector to boost any pre-existing T cell memory response. 15 The development of multivalent MVA vectors, such as MVA BN-Filo which encodes four proteins from three ebolavirus species and Marburg virus, is also a potentially important tool for reducing the number of vaccine products that might need to be manufactured, by encoding protective antigens from several strains of the same pathogen or from multiple pathogens into the same vaccine construct (reviewed in 16 ). The large genome of MVA allows insertion of a larger amount of foreign DNA compared with other viral vectors, including adenoviruses. Consistent with previous studies, strong T cell responses to the EBOV glycoprotein were observed after a single ChAd3 administration and significantly enhanced after an MVA boost, but were undetectable after rVSV vaccination. 17 Only the rVSV vaccine was assessed for efficacy during the outbreak in a ring-vaccination trial where volunteers were stratified into immediate or delayed vaccination groups following exposure. 18 The significant reduction in Ebola cases from 10 days after vaccination in this Guinea trial highlights the need to induce a rapid immune response in an outbreak scenario, with a single-dose vaccine remaining the most manageable option. For a rapid response in an outbreak setting, an early induction of protective immunity would be prioritised over durability. However, for immunisation of healthcare workers and other first responders in anticipation of a potential outbreak in the future, durability would be more important than rapid induction of immunity. For the former application, single-dose vaccines will be most desirable, whereas for durable immunity a multi-dose regimen would likely be acceptable and could be required. Table 1. Comparison of cellular immune responses with different delivery methods for the same malaria antigen (ME-TRAP) at seven days after the final vaccination. Immunogenicity as measured by ex vivo interferon-gamma ELISPOT using the same ELISPOT method and peptide pools in the same lab. 
Viral vector biology influences the choice of vaccine platform Several viral vectors currently have the potential to serve as single dose vaccine platforms for the purpose of outbreak preparedness, having shown robust immunogenicity in clinical trials (reviewed in 19 ). However, in order to achieve high vaccine effectiveness, it is equally important to consider parameters affected by vector biology, such as manufacturability, stability and safety of the vaccine. A key factor is the manner in which the vaccine antigen is encoded and expressed. In adenoviral vectored vaccines, the antigen is typically placed under the control of a heterologous, strong promoter, and encoded in an independent expression cassette which is inserted into a well-characterised location in the adenoviral genome. This is most commonly the E1 locus. Concurrent deletion of the adenoviral E1 genes at this locus renders the virus replication incompetent. Vector production can therefore only take place in complementing cell lines expressing the E1 genes, such as HEK-293 or PER.C6®. 20 Typical genetic engineering methods for antigen insertion into adenoviral vectors include plasmid-based homologous recombination in E. coli, 21 bacterial artificial chromosome (BAC)-based recombineering, 22 or in vitro Gateway® recombination. 23 The placement of such antigen expression cassettes within the viral genome leads to de novo expression of the antigen in the vaccine target cells, which in turn results in a strong humoral as well as cellular immune response against the antigen. More recently, the capsid-incorporation approach has shown promise for the induction of antigen-specific antibodies using modified adenoviral vectors. 24 Here, antigenic epitopes or entire antigens are engineered to be part of adenoviral capsid proteins and are thus displayed on the surface of the viral vector, for recognition by the immune system. However, as the capsid-display strategy has not yet been evaluated in clinical trials, this review will focus on traditionally engineered adenoviral vectors with antigen cassettes at the E1 locus. The first adenoviral vaccine vectors to be developed were based on human adenovirus serotype 5 (HAd5), a species C adenovirus which commonly infects humans. However, it was found that pre-existing anti-HAd5 antibodies, which are present in a large proportion of the human population, could significantly dampen the humoral and cellular immune response to the vaccine antigen. 25 Various strategies have since been explored to circumvent this problem: the use of alternative human serotypes, such as HAd26 or HAd35, 26 re-engineering the capsid of HAd5 to prevent antibody recognition, 27 and the use of simian adenoviral vectors against which there is no pre-existing immunity. 28 As discussed above, chimpanzee adenoviral vectors (ChAds) have successfully been used in clinical trials against a variety of diseases. ChAds are non-enveloped viruses, meaning that the antigen (e.g. a membrane glycoprotein) is not present on the surface of the vector, but is expressed at high levels once the vector enters the target cells of the vaccinated individual. This is in contrast to VSV-based vaccine vectors, which, as enveloped viruses, are designed to incorporate glycoprotein antigens into their viral lipid membrane and thus display the antigen on the virus surface, in addition to expressing it upon entry into the target cell. 
29 Crucially, VSV-based vectors carrying heterologous glycoprotein antigens are generally deleted for their endogenous glycoprotein (VSV-G), which implies that it falls to the vaccine antigen to fulfil the role of functional viral fusion protein as an essential component for vector propagation during manufacture as well as for target cell entry. This important requirement for functionality inherently affects the choice of antigen for VSV-based vectors, as some viral glycoprotein antigens are either not functional by themselves (e.g. Nipah virus glycoprotein G needs glycoprotein F 30 ) or are not incorporated into the VSV membrane without modification (e.g. HIV env 31 ). In addition, while adenoviral vectors can equally well encode antigens which are not membrane-bound glycoproteins (e.g. Ebolavirus nucleoprotein, HIV gag), VSV vectors carrying such antigens rely on the endogenous glycoprotein (VSV-G) for viral entry. Since the full-length VSV-G protein is implicated in neurotropism, 32 a genetically attenuated vector carrying a truncated VSV-G has been developed, 33 which has an acceptable safety profile in healthy adults. 34 Having thus weighed up some of the characteristics of the two most clinically advanced vectors for emergency preparedness platforms, it becomes apparent why vector biology can have significant implications for vaccine safety. Specifically, tissue tropism and replication competency of the viral vector have to be taken into consideration. Intuitively, a replication-deficient vector (such as ChAd) carries fewer safety risks than a replication-competent, albeit attenuated, vector (such as VSV), since the inability to replicate prevents dissemination of the vector throughout the body. Accordingly, transgene expression of replication-deficient adenoviral vectors was shown to be confined to the injection site and the draining lymph nodes, 35 whereas recent Phase I/II trials of rVSV-ZEBOV found evidence of viral vector replication in synovial fluid and skin lesions, presumed to be a result of Ebolavirus glycoprotein-specific tissue tropism of the vaccine. 8 These findings underline the difficulty in predicting the safety profile of VSV-based vaccines, since tissue tropism will be highly dependent on the chosen glycoprotein antigen. In contrast, adenoviral vectors have a well-characterised safety profile, across a range of age groups, which is largely independent of the nature of the antigen. 36 Lastly, vector biology may also significantly impact vaccine manufacture and delivery. For emergency preparedness stockpiling, each vaccine might need to be produced at a scale of 500,000 to 2 million doses, with the option to quickly increase manufacture to perhaps 4-6 million doses or more in case of an outbreak, depending on the specific pathogen. Of the two most clinically advanced platforms, VSV and adenoviruses, the latter can likely meet this requirement more easily: GMP-compliant large-scale adenoviral vector production facilities exist in many countries, related to the regular use of adenoviral vectors not only in prophylactic vaccines but also in some cancer and gene-therapy trials. One potential drawback of ChAd-based viral vectors compared to human Ad vectors is the need for vector optimisation or cell line engineering to ensure high viral yields during virus production. For example, ChAd vectors may need to contain certain E4 genes from HAd5 in order to grow to high titers in current HAd5-E1-transcomplementing cell lines, as was demonstrated with the ChAdOx1 vector. 
23 Alternatively, producer cell lines can be engineered to increase viral yield. 37 However, this need for optimisation has not been a hurdle to large-scale manufacturing so far. GMP-compliant VSV-vector production has also been developed in recent years, and scalable manufacture of rVSV is now possible. 38 Once a stockpile has been produced, vector stability during storage and deployment is critical. Most viral vectored vaccines are stable for >5 years at −70 °C, and a 2-8 °C cold chain is required for distribution and storage of adenoviral vectors. One study assessing the recently deployed rVSV-ZEBOV vaccine observed a significant loss of viral titres at a temperature of 4 °C after 2 weeks, 39 whereas adenoviral vectors were shown to be stable for 20 days at room temperature in a sucrose buffer. 40 In addition, sensitivity of any VSV-based vaccine to pH changes is presumably dependent on the specific envelope glycoprotein (i.e. the vaccine antigen). Overall, vaccine stability in terms of temperature and pH range would therefore likely be variable across a panel of putative VSV-based outbreak vaccines, since the glycoprotein will differ from vaccine to vaccine. In the case of an adenoviral vectored vaccine, on the other hand, variation in stability is expected to be minimal, since the vaccine antigen is not present in the viral capsid, and the composition of the virus particles would be very similar across different vaccines. Of note, new approaches for thermostabilisation have recently been developed for adenoviral vectors, such as immobilization of viral particles in a sugar glass on a filter 41 or the use of biocompatible additives to slow down the degradation of virus particles. 40 These improvements are expected to have a significant impact on the deployment of vectored vaccines in challenging climates such as sub-Saharan Africa. In human populations, pre-existing immunity to simian-derived adenoviral vectors is, unsurprisingly, less prevalent than immunity to human adenoviruses, and antibodies to some simian vectors, such as ChAdOx1, appear to be particularly rare. 23 Anti-vector immunity to the backbone of simian viruses increased after vaccination but is relatively short-lived. As a result, reuse of the same vectors has been successful for boosting after 6 or more months in clinical trials. 42 Limitations of the traditional approach The traditional approach to vectored vaccine design has been to identify an immunogenic antigen from the pathogen, construct the vector in the chosen platform and then assess immunogenicity and efficacy in murine models, prior to further testing in higher species and progression to the next stage of vaccine development. A significant obstacle in this approach is that the pathogen must be infectious in rodents if the efficacy of the vaccine is to be assessed preclinically, and therefore the data may rely on mouse-adapted or chimeric pathogens (Table 4 summarises common mouse models for evaluating candidate vaccines for outbreak pathogens). In the case of MERS CoV or SARS CoV, preclinical vaccine candidates could be tested in murine models with a mouse-adapted strain of virus, 43,44 but for newly emerging pathogens, establishing a mouse model could take significant time, in particular for evaluation of numerous viral isolates or serial passaging of a virus in mice. Alternatively, use of neonatal mice or knockout mice (e.g. 
of IFN-α/βR, STAT-1) have been required to mimic human disease for Ebola, Marburg, Lassa, Nipah, or Zika viruses in mice, and only through expression of human DPP4 (receptor for MERS CoV) in mouse lungs could infection of mice with MERS CoV be achieved. 45 While these mouse models may prove useful in drug discovery, if a significant component of the immune response is compromised, it is unlikely that protection observed in pre-clinical studies will be consistent with the protective immune response required in humans. A new strategy for developing vaccines against outbreak pathogens A more economical and achievable strategy than traditional approaches to vaccine development and deployment would be to focus on manufacturing small stockpiles of vaccine using a common platform technology. ChAd vectored vaccines provide a good example of a suitable vaccine platform, which has been identified as one of significant interest by the WHO R&D Blueprint process. The overall strategy would be to generate suitable stockpiles for emergency response use, having previously demonstrated safety and immunogenicity of each vaccine up to Phase II trials in the target geographical regions. These products could be stored in relevant locations for each disease and, in the event of an outbreak emerging, could be deployed in a ring vaccination program similar to that employed in a Phase III trial in Guinea of the rVSV ZEBOV vaccine during the West African Ebola outbreak. 18 Such a deployment would need to be made under the provisions of policies for use of unapproved medicinal products, such as the FDA Expanded Access program, also known as "compassionate use", or other emergency use legislation. This would require fulfilment of certain conditions, including that no comparable or satisfactory therapy is available, that the risk of harm from the vaccine is not greater than the risk of disease and that there is sufficient evidence of the safety and effectiveness of the product to support its use in the given circumstances. 46 In this context, a vaccine for an outbreak pathogen, based on a well-developed platform, such as ChAd vectors, with evidence of efficacy from a relevant animal model would be likely to gain approval for use in a limited setting. Based on research, manufacturing and clinical trial costs for the ChAd3 vectored vaccine developed for Ebola, vaccines might be stockpiled for just $50 million per disease, representing a fraction of the cost of bringing a vaccine through to licensure. Deployment would provide the efficacy data in humans required for approval by a national regulator, increasing the likelihood of the vaccine progressing through the later stages of development. Tackling future outbreak threats To improve responsiveness to epidemics, in 2015 the WHO published a list of nine diseases requiring urgent vaccine R&D to prevent public health emergencies in the future. This list was revised in 2017, and key characteristics of the diseases prioritised by the WHO are summarised in Table 3. The process of prioritising diseases took into account properties of the causative pathogen, e.g. transmissibility, host-based factors such as immunopathology, clinical aspects including ease of accurate diagnosis, availability of countermeasures and mortality, public health capacity and epidemiological factors. 
47 Research and development priorities for these diseases include development of suitable diagnostic tests, assessment of potential treatments, identification of key knowledge gaps, production platforms, behavioural interventions and acceleration of vaccine development. Preparation of sufficient quantities of safe and efficacious vaccines against potential outbreak pathogens is an extremely effective strategy. However, a lack of access to dedicated long-term funding has hampered vaccine development for outbreak pathogens in recent decades. 48 As well as limiting the number of new vaccines being developed, the number of facilities with the capacity to biomanufacture vaccines is also limited, which is a significant issue for outbreak preparedness. 49 In addition, the WHO recognised that generally applicable platform technologies for rapid vaccine development are required and has set out to identify and prioritise the leading platforms. To address these issues, the Coalition for Epidemic Preparedness Innovations (CEPI) was launched in January 2017, bringing together funders including the Wellcome Trust, the Bill and Melinda Gates Foundation, and the governments of Norway, Germany, Japan and others. 50 The initial fund is $460 million, with the European Commission also pledging co-funding of €250 million and further funding due to be confirmed from the Government of India by the end of 2017. The fund will initially focus on the Nipah, Lassa and MERS viruses, aiming to bring two candidate vaccines through development against each disease. CEPI also aims to promote technical and institutional platforms to improve responsiveness to future epidemics. The approach undertaken by CEPI will advance vaccine development for diseases where research to date has been limited. This is in large part due to the lack of market potential for such vaccines, in conjunction with the huge costs involved over a long period of time to provide a vaccine, from pre-clinical development through to licensure, estimated at upwards of $200 million to $500 million per vaccine. 51 Therefore, the funding required to license a vaccine for each of the priority diseases highlighted by the WHO blueprint would run into many billions of dollars, and opportunities to assess the efficacy of these vaccines in humans would be rare. Prioritising vaccine development for the greatest threats Although Ebola virus disease (EVD) has been described since 1976, the outbreak that began in 2014 was larger than all the previous episodes combined, potentially due to a mutation in the glycoprotein that occurred immediately prior to the rapid increase in the number of EVD cases. 52,53 Although not sufficiently advanced to be deployed immediately during the outbreak itself, several vaccines against ebolaviruses had already been manufactured to Good Manufacturing Practice (GMP) standards, providing a rare opportunity to undertake Phase I trials very rapidly and then assess efficacy against disease. The 2014 outbreak provided a much-needed impetus to improve pandemic preparedness for emerging pathogens. To this end, the three viruses identified by CEPI as targets for vaccine development all have known potential to cause outbreaks with high mortality: MERS-CoV, Nipah virus and Lassa virus. Nipah virus Nipah virus (NiV) is a recently recognised and highly pathogenic zoonotic paramyxovirus that can cause severe disease in man with high associated fatality rates (up to 100%). 
54 Outbreaks have occurred in Malaysia, Singapore and India, with almost annual occurrence in Bangladesh. Human-to-human transmission is common in Bangladesh and has also been documented in India. 55 Several species of pteropid fruit bats are known to be host reservoirs of NiV, with accumulating evidence that both NiV and other paramyxoviruses can circulate worldwide in bats. [54][55][56] The high fatality rate, direct infection from natural reservoirs, infection following amplification in susceptible domestic livestock such as pigs, documented human-to-human transmission, and the potential ability to traverse the globe all emphasise the pandemic potential of NiV. 56 There are no clinically approved vaccines against NiV; however, one therapeutic approach (monoclonal antibody therapy) has recently completed a phase I clinical trial with results still to be reported. 57 While monoclonal antibody treatment may be efficacious in a short window post-exposure, this treatment option is not suitable for large-scale use, and as such, vaccine development is a key research focus for the prevention of NiV-mediated disease. Advantageously, there are a number of animal models of NiV infection which are used in vaccine development programs and are considered to sufficiently mirror NiV-induced pathogenesis observed in humans, e.g. the hamster, ferret and African Green Monkey (AGM) models. [58][59][60] While vaccine-mediated cellular immunity has been demonstrated to play a role in protection in preclinical models of NiV infection, 61 the most advanced vaccine modalities demonstrating clear efficacy across multiple animal models have primarily induced humoral immunity. A soluble glycoprotein (sG) subunit vaccine from the related henipavirus Hendra virus (HeV) is an extensively studied vaccine that can protect ferrets and AGM from experimental challenge with NiV or HeV. Prime-boost regimens with adjuvanted HeV-sG subunit proteins are efficacious in stringent NiV challenge models, across a range of doses (4-100 µg), and with pre-challenge neutralising antibody titres as low as 1:28. 62,63 The HeV sG vaccine (Equivac® HeV) has been licensed to vaccinate horses in Australia against HeV. 64 A number of viral vectored vaccines have also been tested and show promising immunogenicity and/or efficacy against NiV-mediated disease. These include poxvirus (canarypoxvirus ALVAC strain), vesicular stomatitis virus (VSV), rabies virus (RABV), adeno-associated virus (AAV), Newcastle disease virus (NDV) and Venezuelan equine encephalitis virus (VEEV); this topic has recently been comprehensively reviewed. 56,65 Lassa virus Lassa virus (LASV) is a medically relevant arenavirus which produces conditions ranging from asymptomatic infection to a lethal haemorrhagic fever, Lassa fever (LF). Annually, LASV appears to infect between 300,000 and 500,000 individuals, with mortality rates ranging from 2% to in excess of 50% in outbreaks. 66,67 LF is an endemic zoonosis in parts of West Africa including Nigeria, Liberia, Sierra Leone and Guinea, with more recent studies highlighting the spread of LASV into surrounding areas, e.g. Mali, Benin and Ghana. This epidemiology suggests that efficacy trials of Lassa fever vaccines could be conducted successfully in countries such as Nigeria and Sierra Leone. The common African rat (Mastomys natalensis) is the zoonotic reservoir for LASV and is thought to facilitate the ease of LASV spread to humans. 
Despite the recurrent and high disease incidence with associated significant morbidity and mortality, there are no approved vaccines. Currently, LF treatment relies on supportive care and, where available, the administration of the antiviral drug ribavirin. 68 There continues to be an unmet need for medical interventions that can curb the spread of LASV and avert the morbidity and mortality associated with potential viral dissemination into a large geographical area due to the zoonotic reservoir. 69,70 The first clinically available vaccine for the prevention of an arenavirus haemorrhagic fever was Candid #1, a live-attenuated vaccine against Junin virus infection, available through the Argentine National Immunization Plan. 71 Unfortunately, the development of a LASV vaccine has not progressed as rapidly. Cellular immunity is thought to be critical for survival of LF infection, with early T cell activation associated with a better clinical outcome. 72,73 Recent studies focusing on the early stages of LF in non-human primates (NHP) have confirmed previous observations that early and strong T-cell responses are associated with effective control of virus replication and recovery, while fatal LASV infection of NHP has been associated with a lack of peripheral T-cell activation. 73,74 It has also been demonstrated that some vaccination strategies primarily aimed to elicit LASV-specific humoral immunity are not effective, e.g. gamma-irradiated LASV. 75 The development of LASV vaccines has involved a number of different platform technologies including non-replicating vaccine approaches, such as inactivated LASV virus, virus-like particles (VLPs), and DNA vaccines, as well as replicationcompetent vaccine strategies (both recombinant and re-assortment viral vectored vaccines). The four replication-competent LASV vaccine candidates that have been extensively studied are based on vaccinia virus, 76,77 vesicular stomatitis virus, 78 Mopeia virus (MOPV) 79 and yellow fever virus (YFV) 17D vectors 80 with all of these vaccine candidates tested in different animal models, including NHPs. Efficacy testing in animal models that mimic the major pathophysiological and immunological features of human LF are a prerequisite before licensure. Rodents are an obvious first species to establish immunogenicity, but as LASV has a rodent host reservoir and the response to LASV varies depending on mouse strain, age and inoculation route, rodents are not suitable as a valid LF disease model. Guinea pigs are the most sensitive model to study lung pathology, 81,82 while common marmosets (CM) are surrogates to study liver involvement. 83 However, LASV-infected rhesus and cynomolgus monkeys are considered the gold-standard models and are the only available and relevant challenge models for human LF. The YFV vaccine strain 17D has been genetically manipulated to express the LASV glycoprotein and was designed to control both diseases, YF and LF, in areas of overlapping incidence in West Africa. 84 While it can protect guinea pigs, 80 it has failed to protect marmosets and is genetically unstable. 86,87 In addition, while recombinant vesicular stomatitis virus (rVSV) expressing LASV glycoprotein was protective in nonhuman primate challenge, the protection was not sterile and LASV viremia could be measured post-infection. 85 LASV and MOPV are closely related Old World arenaviruses that can exchange genomic segments (reassort) during coinfection. Clone ML29, encodes the major antigens of LASV and also MOPV antigens. 
Preclinically, both marmosets and guinea pigs have survived an otherwise fatal LASV infection. 86,87 Recent studies have demonstrated that SIVinfected rhesus macaques respond well to ML29 vaccination, and survive when challenged with a heterologous lethal arenavirus strain (LCMV-WE) indicating that ML29 is both safe and immunogenic in immuno-compromised animals. 88 Another vaccine vector that proved effective in guinea pigs against LASV challenge is a Venezuelan equine encephalitis virus (rVEE) replicon particle expressing GP or NP. 89 Animals were fully protected against LASV challenge after prime/boost/boost immunization with this vector. One of the most promising vaccines is vaccinia virus encoding LASV glycoprotein; nonhuman primates vaccinated with this vaccine candidate were protected against challenge. 90,91 However, despite several promising vaccine candidates in pre-clinical evaluation, none has yet advanced to a clinical trial in humans. Novel coronaviruses: MERS CoV and SARS CoV Several novel coronaviruses have emerged over the last decade, causing outbreaks mainly in the Middle East region and Asia, in Saudi Arabia, Jordan, Qatar and China in particular. An epidemic of Severe Acute Respiratory Syndrome (SARS) was reported in 2003, which started in China and caused over 8000 cases with between 10 and 50% mortality depending on age. 92 The causative agent was identified as a novel coronavirus, SARS CoV, not previously identified as infectious to humans, 93 with bats and civets as natural reservoirs. 94,95 Middle Eastern Respiratory Syndrome (MERS) was first reported in 2012 in a man who became ill in Saudi Arabia. 96 The isolation of another novel coronavirus followed, known as MERS CoV, which has subsequently caused nearly 1900 cases and 670 deaths. 97 Dromedary camels are a reservoir, although transmission also occurs from human to human. 98 Strategies for producing effective coronavirus vaccines have focussed on expression of either the spike protein or nucleocapsid proteins or, in some cases a combination of both, in a range of vectors including rabies viruses, VSV and VEE (reviewed in 99,100 ). A report from a recent workshop in Riyadh on countermeasures for MERS CoV bringing together funders, public health experts and researchers concluded that progress with vaccine development is still hindered by the lack of animal models for evaluating efficacy. 100 Small animals do not naturally express a functional form of the dipeptidyl peptidase 4 (DPP4) receptor; however, transgenic mice expressing human DPP4 are susceptible to infection. 101,102 Despite this advance, mouse models are likely to be less useful for the assessment of immune correlates than larger animal models such as rhesus macaques and common marmosets, which exhibit the severe clinical syndromes observed in humans. 103,104 MVA and ChAd viral vectors for MERS have reached GMP manufacture, while a DNA vaccine is now being tested in clinical trials. 105,106 Progress with development of chimpanzee adenovirus vectors for outbreak pathogens In May 2017, the first cases in an outbreak of EVD were reported in the Bas Uele Province in the Democratic Republic of the Congo (DRC). 107 This area shares a border with the Central African Republic and is particularly remote and difficult to access. As the causative species has been identified as Zaire ebolavirus, the rVSV-ZEBOV vaccine is being considered at the time of writing, for deployment in a ring vaccination design to protect contacts and frontline healthcare workers (HCWs). 
108 This fresh outbreak is the 8th to occur in the DRC and highlights the potential utility of vaccination to protect HCWs, particularly where remote locations present significant logistical challenges for responding to and containing outbreaks. Maintaining the current momentum for developing vaccines against outbreak pathogens is crucial, and as such, simian adenoviruses are uniquely fit for purpose as an effective vaccine platform, in no small part due to their predictable safety profile, stability and manufacturability, but most importantly owing to their immunogenicity. Therefore, a single-antigen pathogen-specific ChAd vector vaccine could be suitable as a single-dose approach for rapid induction of protective immunity in an outbreak, but for durable protection of potential first responders a ChAd prime, MVA boost approach could be more effective. Novel vaccines against outbreak pathogens are under development in a range of simian adenovirus serotypes including ChAd3, ChAd63 and ChAdOx1 (reviewed in 109 ) and for the human vectors AdHu26 and AdHu5. Application of a pipeline approach to developing vaccines for outbreak pathogens can greatly accelerate the output of candidate vaccines, as the key processes, such as generation of constructs, production of virus stocks, defining preclinical immunogenicity, and GMP manufacture, can be substantially standardized. This approach is currently being adopted for at least twelve potential outbreak pathogens using standardized preclinical processes (Table 5), with several advancing to GMP manufacture and clinical testing. The latter include vaccines against MERS-CoV, Rift Valley fever virus, Zika virus and Chikungunya virus. The key bottlenecks for this approach are the identification of vaccine antigens and the availability of appropriate animal models of disease. For preparations to be made to counter future threats, some knowledge of emerging pathogens is required, and yet detailed epidemiological surveillance for many infectious diseases remains limited in regions where incidence is greatest. 110 Recent data suggests that around 60% of emerging infectious diseases are zoonotic, with the majority originating in wildlife, requiring surveillance among livestock animals and wildlife species, as well as in humans. 111 Although Ebola outbreaks have occurred sporadically since 1976, the pace of vaccine development for Ebola has been slow, with most vaccines undergoing preclinical evaluation for more than 5 years before the start of Phase I clinical trials. The 2014-15 outbreak provided much-needed momentum for public health experts and the research community to improve preparedness for future epidemics. 112 In order to continue to improve our preparedness for future outbreaks, epidemiological surveillance and vaccine development will need to accelerate substantially. Table 5. Status of chimpanzee adenovirus vector (ChAd) vaccine development for a range of outbreak pathogens at the Jenner Institute, University of Oxford (as of May 2017). The genetic background for all vectors is ChAdOx1 (a species E modified chimpanzee adenovirus based on isolate Y25). 23 Antigens are inserted at the E1 locus via Gateway® recombination. For preclinical immunogenicity testing, mice typically receive a single dose of 10^8 …
2018-04-03T01:57:04.548Z
2017-10-30T00:00:00.000
{ "year": 2017, "sha1": "9844f45ee76c0350e9dc365f6732838f10da7fc8", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21645515.2017.1383575?needAccess=true", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7d262832606b297d4d1cd2af7a45578d10823418", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
13720243
pes2o/s2orc
v3-fos-license
Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model Aim This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability as outcome. Material and methods The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement scale, and the one-item Work Ability Score in combination with a proprietary item. The data was analysed by Structural Equation Modelling. Results This study contributed to the literature by showing that: A) The scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated in the JD-R framework; B) Job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) Both the health impairment and motivational processes were associated with WA, and the results suggested that leadership may impact WA, in particular by securing task resources. Conclusion In conclusion, the nomological validity of COPSOQ was supported as the JD-R model can be operationalized by the instrument. This may be helpful for transferral of complex survey results and work life theories to practitioners in the field. Introduction The Copenhagen Psychosocial Questionnaire (COPSOQ) is one of a few research-based instruments that have been developed for use at workplaces as well as for research purposes [1]. The instrument is internationally recognized as a risk assessment tool by both the International Labour Organization and the World Health Organization [1,2] and is used in workplace surveys worldwide for work environment development and follow-up of organizational changes [3,4]. COPSOQ is a generic, theory-based questionnaire covering a broad range of aspects of the psychosocial working environment rather than being linked to a specific theoretical framework [5,6]. The instrument covers central dimensions of the seven theories of psychosocial factors at work [5], which were identified as most influential by Kompier [7]. COPSOQ validation studies have been conducted in a number of countries (see e.g. [6,[8][9][10][11][12][13][14][15][16]) and various aspects of the reliability and validity have been tested. Among these are test-retest reliability [14,16,17], minimally important score differences [18], differential item functioning and differential item effect [19], and criterion-related validity in relation to e.g. different measures of sickness absence [6,20]. 
Construct validity has been corroborated through analyses of inter-scale-correlations, and also in relation to other instruments measuring corresponding constructs (see. e.g. [14,16]). So far, however, no study has tested the concurrent validity of COPSOQ scales in one comprehensive, theoretically based model. In this study, we focus on aspects of WA as an outcome in relation to an extended JD-R model. Work ability (WA) is important for the individual employee, the workplaces and for society. WA is a multifaceted construct which in epidemiological research typically consists of the worker's self-assessed ability to work now and in the near future with respect to work demands, health and mental resources [21]. Associations between COPSOQ scales and WA have been demonstrated in a number of recent studies [10,[22][23][24][25][26][27][28]. Results from these studies demonstrate negative associations between work ability and Quantitative & Emotional Demands [22,24,25], Role Conflicts [22] Work Family Conflict [24] as well as Stress [22][23][24], Burnout [22] and Sleeping Troubles [22,23]. In contrast, positive associations with work ability have been reported in relation to Influence [24,25], Possibilities for Development [22,25], Meaning in Work and Role Clarity [22], Quality in Leadership [22,25], Social Support [23][24][25], Social Community at Work [22,23,25], Job Satisfaction [22,24,26], Justice and Respect [22] and General Health [23]. Our operationalization in the present study comprises a combination of self-rated general health [6], current global WA [21,29] and expected future health-related WA in the present occupation. In recent years, the Job Demands-Resources (JD-R) model has become one of the most influential stress and motivation models in work and organizational psychology [30,31]. It has been validated in many cross-sectional studies (e.g. [32,33]) and also longitudinally [34]. The model is comprehensive and drawing on classical job satisfaction theories in addition to previous work environment models [30,35]. This makes it especially suitable for use at workplaces, where a holistic approach is needed. The basic assumption of the JD-R model is that all psychosocial work characteristics can be categorized into demands and resources [32,36]. Job demands refer to physical, psychological, social, or organizational aspects of a job that require sustained physical and/or psychological effort (e.g. workload, role conflicts), whereas job resources (e.g. social support, job autonomy) refer to aspects of the job that may reduce job demands and the associated physiological and psychological costs, are functional in achieving work goals, and stimulate personal growth, learning, and development. The model posits that high job demands may trigger a health impairment process (leading to strain and burnout and further to ill-health), whereas high job resources are "energizers" initiating a motivational process leading to positive attitudes towards work and positive behaviours and may also reduce strain symptoms. However, not all resources and demands interact equally in relation to strain [35], and also it has become apparent that the complexity of the concepts is higher than assumed in the early days of the JD-R model [37]. The role of hindrance and challenge demands have become the subject of research and there might be a need of such distinctions even in relation to job resources [37,38]. 
Bakker and Demerouti have proposed a division between job resources arising from the organizational, interpersonal, and task levels [39]. While task resources were regarded as most important for motivational outcomes in job characteristics theory [40], a shift in work life research has been seen during recent years [4]. Today, work-life interpersonal relations are considered highly relevant, as for example argued by Grant and Parker [41]. This shift is reflected by the COPSOQ II scales, which comprise job factors, relational factors, leadership and climatic factors in addition to a number of health-related as well as motivational outcomes. Formal leaders have a crucial role for employee wellbeing and health, as they can affect working conditions such as the amount of assignments, role clarity and influence, as well as the social environment [42]. Accordingly, job resources provided by leaders may be perceived as antecedents for task and interpersonal resources. In relation to the JD-R model, Schaufeli has recently pointed to this specific role of engaging leadership [43]. His findings from analyses based on an extended JD-R model indicate that engaging leadership affects wellbeing of the employees indirectly via the impact on job demands, but in particular on job resources. Simultaneously testing associations between the entire COPSOQ instrument and aspects of WA in a nomological framework will go one step further than previous validity studies on the instrument. Besides, it will add to an overview, which in particular may be helpful for transferral of complex methods and theories from research to practice. In the present study, we applied an extended JD-R model based on domains suggested by results from previous validation studies of COPSOQ II [6,8,10]. We aimed at testing the concurrent validity of the entire COPSOQ instrument with aspects of WA as outcome using an extended JD-R model with leadership resources as an antecedent to job demands and two kinds of job resources: task resources and interpersonal resources. Questionnaire development and data collection In 2003 a team of Danish and Swedish researchers (Arvidsson, Johansson, Kolstrup & Pousette) made a first translation of the Danish version of COPSOQ into Swedish, and this work was updated by Ektor-Andersen until a final version was established in 2007 [44]. Even though scales from this version have been used in Sweden in a number of research projects since then, the Swedish version of the instrument has not previously been the subject of a validation study. The validation process included a back-translation of the existing Swedish version of COPSOQ II into English, a systematic evaluation process and five rounds of cognitive interviews using a think-aloud procedure with additional probing [45][46][47]. Interviews were conducted with 26 informants selected to achieve variation in gender, age, region of residence, and occupation. The overall purpose was to develop the formulations of the items by identifying potential problems in the questionnaire and clarifying how informants understood key concepts at an early stage of the process, as suggested by Willis [48,49]. Based on the findings from the back-translation and the interviews, the Swedish version of COPSOQ was revised and tested on new rounds of informants until well-functioning formulations, conceptually equivalent with the English version, were achieved. 
The initial steps of the validation process corroborated face and content validity of the items (further details have been published elsewhere [45][46][47]). The Utrecht Work Engagement scale [50,51], the one-item Work Ability Score [29,52] and other additional items were tested similarly. The data for the present study was collected from May 2014 to January 2015. All staff employed at the Public Dental Health Service in four regions of Sweden (N = 1782) received an email with a personal login and password to an online questionnaire and after two reminders 1345 respondents had replied, providing a response rate of 75% (ranging from 71%-81% among the regions). Employees on long-term sickness absence or parental leave were excluded from the sampling frame as presence at the workplace is required for the questionnaire to be relevant to fill in. This has probably led to an overestimation of the true level of work ability for the total work force and a risk of underestimation of the strength of the associations in the model tested as research show that e.g. self-reported sickness absence predicts future reduced work ability [53]. Respondents were on average almost three years older than the non-respondents (p≤0.001). Pearson chi-square tests revealed that the response rate was higher for managers than for other employees (91.8% vs. 73.8%, p≤0.001). The response rates also differed between occupational groups: dentists having the lowest (67.8%) and employees with educational backgrounds outside dentistry the highest (84.2% (p≤0.001)). Study population The study sample (Table 1) comprises primarily Swedish-born women, and the mean age of the total sample was 48.5 (SD 11.3) years. The respondents worked on average 36.9 (SD 6.0) hours per week, had worked 17.3 (SD 13.8) years in the same organization and almost all had a permanent position (98.1%). The Swedish public dental sector comprises large regional organizations including service facilities, administration in addition to general and specialized dental clinics. The sector has often been described as influenced by New Public Management, in particularly regarding management by objectives with an emphasis on quantitative measures of productivity (e.g. [54][55][56][57][58]). There is a widespread belief among senior management of the regional dental organizations that large clinics with a formal management structure is an advantage [59]. While the objectives in economic terms largely are given from the organization, the local leadership may differ within the organization [59]. Based on the context it therefore seems likely that first line managers have more opportunities for influencing interpersonal relations and task resources than job demands. Measures In general, COPSOQ items have five response options on Likert-type scales, for example from never to always or from a very low to a very high extent. For the analyses, items were scored 100, 75, 50, 25, 0, and scale scores calculated as the mean of the items for each scale, including only those respondents who had answered at least half of the questions included in the scale [6]. Work engagement was measured by the nine-item version of the Utrecht work engagement scale [50,51]. Each item had seven response options on a Likert-type scale ranging from 0 = never to 6 = always. The scale score was computed as mean of item scores. 
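The scoring convention described above is simple enough to express directly in code. The sketch below is a minimal illustration of that rule as stated here (items recoded to 0-100 in steps of 25, the scale score taken as the mean of answered items, and respondents who answered fewer than half of a scale's items set to missing). The column names, the pandas-based implementation and the direction of the recoding are our own assumptions for illustration; the original analyses were run in SPSS and AMOS.

```python
import numpy as np
import pandas as pd

def copsoq_scale_score(items: pd.DataFrame) -> pd.Series:
    """Scale score from item columns already recoded to the 0-100 metric.

    The score is the mean of the items a respondent answered; respondents who
    answered fewer than half of the scale's items receive a missing value.
    """
    n_items = items.shape[1]
    answered = items.notna().sum(axis=1)
    score = items.mean(axis=1, skipna=True)
    return score.where(answered >= n_items / 2)

# Hypothetical three-item scale with five response options coded 1-5
raw = pd.DataFrame({"q1": [1, 2, np.nan], "q2": [3, np.nan, np.nan], "q3": [5, 4, 2]})
recoded = (raw - 1) * 25   # maps options 1..5 to 0, 25, 50, 75, 100; direction depends on item wording
print(copsoq_scale_score(recoded))  # third respondent answered fewer than half the items -> NaN
```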
WA was measured by three items A) self-assessment of current global WA as compared with the lifetime best WA on a scale from 0-10 [52], B) general self-rated health from COP-SOQ II [6] and C) prospective health-related WA by asking: "Considering your health, do you believe that you can work in your current job even in two years?" with the response alternatives: No, hardly; maybe; yes, probably and scored 0, 50, 100. Multiple indicators for each latent variable were used in the tested models. Leadership Resources were indicated by five scales, Interpersonal Resources by three scales, Task Resources by three scales and one additional single item, Job Demands by five scales, Strain Indicators by three scales, Positive Work Attitudes by three scales and WA was indicated by three single items. We chose to exclude the COPSOQ scales for Meaning in Work and for Vertical Trust from the analyses due to a conceptual overlap and high shared variance with Work Engagement and Horizontal Trust, respectively. Analyses Data analyses were conducted using IBM SPSS and AMOS version 23 [60]. More than 15% of the respondents choosing the lowest or highest response options was considered evidence of a floor or ceiling effect, respectively [61]. A number of indices were used to examine the overall fit of the hypothesized and alternative models to the data: χ 2 test and Root Mean Square Error of Approximation (RMSEA) as absolute goodness-of-fit indices. RMSEA values below 0.05 indicate good fit, 0.06-0.08 reasonable fit, 0.08-0.10 mediocre fit, and >0.10 poor fit [62][63][64]. In addition, relative goodness-of-fit indices were investigated: the Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI). The classical criterion for these two indices suggests that values over 0.90 and even over 0.95 indicate a good fit [62]. The fit of nested models was compared by testing the significant changes in the χ 2 values and with nested models we used the Akaike Information Criterion (AIC) to compare the models; the smaller the value of AIC, the better fitting the model is [65]. The statistical significance of the indirect effects was tested using bootstrapping procedures (5000 bootstrap samples, 95% two-sided CI). Ethics The study was approved by the Regional Ethics Board in Southern Sweden (Dnr. 2013/256 & 2013/505) and informed consent was obtained from all individual participants included in the study. Scale characteristics Scale characteristics included in the Swedish COPSOQ II version and other variables included in the present study are presented in Table 2. The internal consistencies were above 0.70 for all scales except for Role Conflicts (0.65). The proportion of internally missing values for the scales was below 2% except for the two items asking whether the employees withhold information from the management, and vice versa, as well as one item asking whether the nearest superior is good at handling conflicts (2.9-3.3% missing values). The scale for Meaning in Work had a high ceiling effect (20.2%) and a high mean score (80.7 st.dev. 15.5). The scales for Role Clarity and Social Community at Work also showed some ceiling effect (15.2%-17.3%), while Sleeping Troubles and Work-Family Conflict had a corresponding floor effect (15.8-16.9%). Correlations between all study variables are presented in Table 3. 
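To make the model-evaluation logic above concrete, the fragment below sketches the two decision rules used in the analyses: the RMSEA bands for absolute fit and a χ2 difference test for nested models (with AIC noted for comparing models). It is a generic illustration with made-up fit values, not a re-analysis of the study data, and the small gap between the quoted 0.05 and 0.06 cut-offs is treated here as "reasonable" fit.

```python
from scipy.stats import chi2

def rmsea_band(rmsea: float) -> str:
    """Classify RMSEA using the bands quoted in the text."""
    if rmsea < 0.05:
        return "good"
    if rmsea <= 0.08:
        return "reasonable"
    if rmsea <= 0.10:
        return "mediocre"
    return "poor"

def chi2_difference_test(chi2_restricted, df_restricted, chi2_full, df_full):
    """Compare two nested models; a significant p-value favours the less restricted model."""
    delta_chi2 = chi2_restricted - chi2_full
    delta_df = df_restricted - df_full
    return delta_chi2, delta_df, chi2.sf(delta_chi2, delta_df)

# Hypothetical fit values for two nested models (the fuller model adds four paths)
print(rmsea_band(0.061))                               # -> 'reasonable'
print(chi2_difference_test(1650.0, 280, 1552.0, 276))  # -> (98.0, 4, p << .05)
# For the AIC comparison, the model with the smaller AIC value is preferred.
```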
Relationships between the COPSOQ scales and work ability First we tested the Confirmatory Factor Analytic (CFA) measurement model, which specifies the pattern by which each measure loads on a particular factor (p. 6 in [66]). The CFA model presented an acceptable fit to the data (χ2 (276) = 1552.00; CFI = .92; TLI = .91; RMSEA = .061). The initial model was respecified to allow error covariance between Social Support from Superior and Quality of Leadership, and also between Variation and Role Clarity, based on the inspection of the modification indices and their conceptual interrelatedness. Items belonging to Social Support from Superior and Quality of Leadership inquire about the personal relation to the nearest superior, in contrast to items on e.g. Organizational Justice, which are based on a shift of referent addressing the climate at work [47]. Role Clarity addresses issues such as the extent to which the employee knows exactly his/her areas of responsibility, which naturally is related to how much variation the job offers. The scales and items loaded on the factors as expected, and factor loadings ranged from .44 to .92 (Table 2). Next we tested the proposed fully mediated model A (Fig 1) against alternative models. Model A included paths from Leadership to Job Demands, and Task and Interpersonal Resources (first set of mediators), which in turn were related to Strain Symptoms and Positive Work Attitudes (second set of mediators), and finally these latent variables were related to WA. The partially mediated model B included the paths in model A plus the direct paths from Leadership to Strain Symptoms and Positive Work Attitudes and to WA. Another partially mediated model C was like model B and additionally included direct paths from Job Demands, and Task and Interpersonal Resources to WA. Finally, in model D, although not expected in the JD-R model, a path from Job Demands to Positive Work Attitudes was added, as previous studies indicate that job demands may also be related to positive work attitudes [34,58,67]. An overview of all models is presented in Fig 2. All the model fits are shown in Table 4. Comparing the different models, either by using the χ2 difference test or AIC measures, indicated that model D had the best fit. Removing the three non-significant paths from this partially mediated model gave the final model E (significant paths are presented in Fig 3). We found two unexpected associations: Leadership had a direct, weakly negative effect on WA (β = -.17, p < .01), and Job Demands had a weakly positive effect on WA (β = .24, p < .001). Theoretically, the signs of these relationships should have been reversed, and according to the correlation table, they are (Table 3). Because of the complexity of the model, we suspected that these results could be due to suppressor effects. Indeed, by removing the paths from Job Demands to Strain Symptoms and from Strain Symptoms to WA, the relationship between Job Demands and WA became non-significant, and by removing the paths from Leadership to Task and Interpersonal Resources, the relationship between Leadership and WA turned non-significant. To investigate the robustness of our final model (Fig 3), we investigated the indirect effects in the model. The results indicated that Leadership had indirect effects on Strain Symptoms, Positive Work Attitudes, and on WA, and similarly that Job Demands, Task Resources and Interpersonal Resources had indirect effects on WA (Table 5). 
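As a rough illustration of how such a bootstrap test of an indirect effect works, the sketch below resamples respondents, re-estimates the two regression paths of a simple X → M → Y mediation, and takes the 2.5th and 97.5th percentiles of the a*b products as the 95% confidence interval. This is a didactic simplification using simulated data and observed variables only; the study itself estimated the indirect effects within the full latent-variable model in AMOS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, standardised data for a simple mediation: X -> M -> Y
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)   # path a = 0.5
y = 0.4 * m + rng.normal(size=n)   # path b = 0.4

def path(u, v):
    """Slope of the simple regression of v on u."""
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

boot = []
for _ in range(5000):                  # 5000 bootstrap samples, as in the study
    idx = rng.integers(0, n, n)        # resample respondents with replacement
    boot.append(path(x[idx], m[idx]) * path(m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% two-sided percentile CI
print(f"indirect effect a*b: {path(x, m) * path(m, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```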
All in all, the results lend support to our extension of the theoretically based JD-R model and to the relationships between different scales included in the COPSOQ instrument. Discussion This study contributed to the literature by showing that: A) The scale characteristics were satisfactory and the COPSOQ instrument could be integrated in the JD-R framework; B) Job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) Both the health impairment and motivational processes were associated with WA, and the results suggested that leadership may impact WA, in particular by securing task resources. The internal reliability measured by Cronbach's alpha was at an adequate level (0.70-0.95 [61]) for all scales except for Role Conflicts. This corresponds with findings from Denmark [6], and it has previously been argued that some items in COPSOQ, including items in Role Conflicts, can be regarded as causal indicators rather than effect indicators and are thereby not necessarily inter-correlated [17]. Floor and ceiling effects were on an acceptable level for all scales. The highest ceiling effects were seen for Meaning in Work, Role Clarity and Social Community at Work, which corresponds to results from the Spanish validation study [9]. The overall pattern of associations is in line with results from other COPSOQ studies regarding the direction of association between WA and job demands, job resources, strain indicators and job satisfaction [22][23][24][25][26]. While the concurrent and convergent validity of COPSOQ has been explored previously, the present study adds a confirmatory approach based on the theoretical reasoning of the JD-R model. Investigating the scales in such an overall network of expectations supports the nomological validity of the instrument [68,69]. Previous research in relation to the JD-R model has shown that WA can be impacted negatively by the health impairment process [70], and positively by the motivational process [71]. Also, relationships between demands, resources and WA have been studied, but to the best of our knowledge, the present study is the first to investigate both mediating processes of the JD-R model simultaneously in relation to WA as an outcome. Our results corroborate the relevance of distinguishing different kinds of resources, as suggested by Demerouti and Bakker [39]. The way leaders exert their leadership affects work characteristics and thereby indirectly the wellbeing as well as the stress level among employees [43,72,73]. Schaufeli has previously found the effect of engaging leadership on commitment, employability, self-rated performance and performance behaviour to be mainly mediated through the two paths of the JD-R model [43]. In accordance with this, our results indicated that leadership resources can function as a trigger for both processes of the JD-R model with WA as outcome. Leadership was most strongly associated with resources in both studies, indicating that leadership acts more by motivating work than by decreasing demands on the employees. However, while Schaufeli [43] found a direct association between leadership and performance-related outcomes including employability, we did not find a corresponding direct effect of leadership on WA. This may indicate that different mechanisms operate depending on the nature of the outcome. We found a negative association between demands and positive work attitudes, which is not theoretically expected in the JD-R model. 
This can be understood on the basis that human service work is grounded in a moral commitment [74]. Therefore, high work pressure may limit the opportunities for delivering good quality of care, which is essential for achieving the intrinsic rewards of patient interaction [75]. Implications COPSOQ is a comprehensive instrument including a large number of scales. Understanding the interrelationship between the scales in terms of the JD-R model may facilitate communication with practitioners in their efforts to understand and translate survey results into workplace interventions. Despite much being known about the role of the work environment in health, motivational and organizational outcomes, it has proven difficult to implement this knowledge in organizational development. Understanding complex issues and theories is essential for a successful transfer from workplace surveys to concrete changes. The overall model is useful in that it can help in, for example, training managers and other stakeholders to understand the two processes and how they relate to what is actually mapped in a workplace survey. The results suggest that a relevant strategy could be to promote leadership that can improve WA through its effects on job resources and demands. Securing task resources such as opportunities for development, influence on the work situation and clarity about what is expected in the job seems especially relevant for obtaining positive work attitudes, understood as job satisfaction, commitment and engagement. Still, the relative importance of different kinds of resources might be contextually dependent. Therefore, further testing is needed to better understand the role of different kinds of resources and their internal relationships, as well as their respective importance for the motivational and the health impairment processes. In particular, further research is needed to establish the relationships in a longitudinal perspective and in different contexts. Also, the results support the applicability of COPSOQ for future research within the framework of the JD-R model. The JD-R model posits that the relevant types of demands and resources vary according to the setting and occupations under study. While this specificity is an advantage as regards the relevance of the operationalization, it also reduces the opportunities for investigating the relative importance of factors and their interplay across occupational groups or organizational forms. This kind of knowledge is needed for risk management and organizational development when following up on results. A way forward in addressing the trade-off between the need for generic and tailored instruments could be to include relevant scales from COPSOQ in future studies based on the JD-R framework. This could contribute to research on the roles of similar demands and resources in different settings/occupations and thus provide new knowledge concerning, for example, the situations in which a demand acts as challenging rather than hindering, or the relative importance of various kinds of resources [76]. Strengths and limitations Our study is innovative in that it is the first time most of the COPSOQ scales have been tested simultaneously in one comprehensive model in relation to WA. Nevertheless, the study has both strengths and limitations, in particular as regards the study population and the study design.
The response rate of our study was high and the internal non-response low compared to previous COPSOQ validation studies [6,8-10]. The findings concerning the psychometric characteristics and associations of the study model are in accordance with results from previous studies on the instrument. This supports the reliability and validity of the Swedish version of COPSOQ for use in a broader context than dentistry. In addition, the fact that the study is theoretically based and that parts of the model have been tested earlier provides some support for generalisation of the overall pattern of associations to other national versions of the COPSOQ instrument. However, the cross-sectional design and the exclusive use of self-reported data constitute a clear limitation of our study, as they increase the risk of confounding and reverse causality. The data collected from individuals were nested within workplaces and organizations, which raises a potential need for multilevel analyses. However, for the vast majority of scales a rather modest share of the variance was attributed to the workplace level (ICC(1) < 0.10) and the design effect was below 2, which is considered a relevant cut-off for when clustering in the data needs to be taken into account [77,78]. Still, the results should be interpreted with caution and be tested in other populations, preferably using a longitudinal design integrating register data and multilevel methods where applicable. Conclusion The overall findings of the present study supported the reliability and construct validity of the Swedish version of COPSOQ II, tested in a structural equation model based on an extended JD-R model and with work ability as the outcome.
Self-generated vortex flows in a tokamak magnetic island with a background flow We present a gyrokinetic theory of self-generated E × B vortex flows in a magnetic island in a collisionless tokamak plasma with a background vortex flow. We find that the long-term evolution of the self-generated vortex flows can be classified into two regimes by the background vortex flow potential Φ, with an asymptotic criterion given by eΦcr/T=ϵw/r , where T is temperature, ε is the inverse aspect ratio and r is the radial coordinate. We find that the background vortex flow above the criterion significantly weakens the toroidal precession-induced long-term damping and structure change of the self-generated vortex flows. That is, the finite background vortex flow is beneficial to maintain the self-generated vortex flows, favorable to an internal transport barrier formation. Our result indicates that the island boundary region is a prominent location for triggering the transition to an enhanced confinement state of the magnetic island. Introduction In magnetically confined fusion plasmas, it is widely accepted that the self-generation of the E × B shear flow (streaming on the magnetic surfaces) from the microturbulence [1][2][3][4] with equivalent instantaneous microturbulence reduction, is a trigger of the transition to an enhanced confinement regime accompanied by transport barrier formation [5][6][7][8]. Experiments [9][10][11] and reduced models [12,13] of the H-mode transition [14] have shown that after the triggering by the turbulence-induced E × B zonal flow, the contribution from the profile-induced E × B shear flow (via radial force balance) continues to increase with the profile gradient as a result of the E × B shear suppression of turbulent transport [15,16]. It finally replaces the role of the self-generated zonal Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. flow, terminating the transport barrier formation. This competition between the macroscale E × B shear flow and the mesoscale zonal flow in turbulence reduction has been interpreted as a feature of the two predators-one prey system [17]. The mechanisms of the E × B flow shear suppression of turbulence have been applied to interpret non-trivial transport behaviors in the vicinity of a magnetic island observed in experiments [18][19][20], such as internal transport barrier formation [21,22] and transition between the low and highaccessibility states [23]. Experimental measurements [24,25] have found that often the E × B flow in a magnetic island circulates on the island contours (helical magnetic surfaces), and is therefore called the vortex flow. Meanwhile, nonlinear fluid and gyrokinetic simulations [26,[27][28][29][30][31] have shown that the vortex flow can be self-generated from microturbulence, just like the zonal flows. At this point, we would like to mention that so far, the interpretations suggested from the experimental studies have considered only the profile-induced E × B flow in the candidate mechanisms, while the simulation works have focused on the turbulence-induced E × B flow only. In this paper, we present a gyrokinetic theory showing the effect of the background vortex flow on the evolution of the self-generated vortex flow, which thereby reveals the relationship between the two E × B vortex flows. 
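To give a feel for the magnitude of the criterion eΦ_cr/T = εw/r quoted above, the following sketch evaluates the critical background potential for a set of assumed, representative parameters (temperature, inverse aspect ratio, island half-width, minor-radius location). All numbers are illustrative and are not taken from this paper.

```python
# Illustrative evaluation of the asymptotic criterion e*Phi_cr/T = eps*w/r.
# All parameter values below are assumed for illustration only.

def critical_potential(T_eV: float, eps: float, w: float, r: float) -> float:
    """Critical background vortex potential Phi_cr (volts) from e*Phi_cr/T = eps*w/r.
    With T expressed in eV, T/e is numerically T_eV volts."""
    return T_eV * eps * (w / r)

T_eV = 2.0e3   # assumed 2 keV temperature
eps = 0.2      # assumed inverse aspect ratio at the island location
w = 0.03       # assumed island half-width, 3 cm
r = 0.5        # assumed minor-radius location of the rational surface, 50 cm

phi_cr = critical_potential(T_eV, eps, w, r)
print(f"e*Phi_cr/T = {eps * w / r:.3f}  ->  Phi_cr = {phi_cr:.0f} V")
# A background vortex potential well above this value would correspond to the regime
# in which the precession-induced damping of self-generated flows is weakened.
```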
Gyrokinetics has been extensively used to study selfgenerated flows in toroidal fusion devices, as a pioneering theory work by Rosenbluth and Hinton [32] using gyrokinetics has revealed that a proper treatment of the polarization density, which originates from the finite Larmor radius (FLR) effect [33], is essential to capture residual zonal flows after fast collisionless damping. Extending theories of the residual zonal flows in axisymmetric tokamaks [32][33][34] and 3D stellarators [35][36][37], we have developed gyrokinetic theories of self-generated E × B flows in magnetic configurations with broken symmetries [38,39]. In a previous work [39] on a self-generated vortex flow in a tokamak magnetic island, we have shown that the toroidal precession of the vortex flowcarrying particles breaks the quasi-helical symmetry of the self-generated vortex flow, resulting in a long-term flow damping and significant deviation of the streamlines from the island contours (we call this 'finite surface deviation (FSD)') forming a zonal-vortex flow mixture. This toroidicity-induced vortex flow damping makes turbulence self-regulation via vortex flow self-generation hard. In the present work, we generalize the previous work by including a background E × B vortex flow to the system. As a result, we show that a large enough background E × B vortex flow above a criterion eΦ /T > ϵw/r, where T is temperature, ϵ is the inverse aspect ratio and r is the radial position, could significantly reduce the toroidicityinduced FSD, long-term damping and structure change of the self-generated vortex flow. That is, a finite background E × B flow could lead to a favorable condition for the transition to an enhanced confinement state of a magnetic island triggered by the vortex flow self-generation. This paper is organized as follows. In section 2, we present a gyrokinetic description of the self-generated vortex flow in a stationary tokamak magnetic island in the presence of a background vortex flow. In section 3, we explicitly calculate the residual vortex flow after fast collisionless damping in a short term. In section 4, we present general properties of the motion of a bounce/transit orbit center and obtain approximate explicit expressions of the orbit center motion by defining three asymptotic regimes. In section 5, we obtain the general solution of the long-term flow level and explicitly calculate its expression with weak and strong background vortex flows to show that a large background flow gives a positive synergism with the self-generated flow. In section 6, we discuss indications of our results and future works and close the paper with a summary. Gyrokinetic description of self-generated vortex flow We use the nonlinear gyrokinetic equations [40][41][42] for a precise description of the response of a collisionless magnetized fusion plasma to a source much slower than the gyrofrequency (∂ t ∼ ω ≪ Ω c ). In our study, the total magnetic field is given by where is the tokamak magnetic field and B 1 is a magnetic field perturbation associated with the magnetic island. For simplicity, we consider a low-β circular-concentric high-aspect-ratio tokamak plasma neglecting Shafranov shift, shaping effects, and parallel magnetic compression. 
Then, the equilibrium magnetic field strength becomes and the island-perturbed magnetic field is where we consider a widely used simple model for the magnetic island perturbation [43], Here, the amplitude of the perturbed poloidal magnetic fluxψ is approximated to be a constant considering a slowly-timevarying magnetic island satisfying constant-ψ approximation [43]. In this study, we consider the magnetic island geometry in which we use the local Cartesian coordinates (x, y, z), where x = r − r s and y = r s (θ − ζ/q s ) denotes the radial distance from the mode rational surface and the distance in helical angle direction, respectively. Here, r s and q s are the radial position and safety factor at the mode rational surface, and θ and ζ are poloidal and toroidal angles, respectively. The third component z = q s Rθ represents the direction of the unperturbed magnetic field at the mode rational surface. Accordingly, the total magnetic field is decomposed as, after an expansion in x/r s ∼ w/r ≪ 1 with respect to the mode rational surface, where Here, B z is the unperturbed magnetic field strength at the mode rational surface, w is the magnetic island half-width defined as w 2 = 4L sψ /RB, L s = q s R/ŝ,ŝ = (r/q)dq/dr is the magnetic shear, and k = m/r s is the wavenumber of the island perturbation B 1 . Note that B z (z), B x ∝ψ, and B y ∝ŝx capture the toroidicity, the island perturbation, and the magnetic shear effects, respectively. Then, the expression of the normalized helical magnetic flux, an appropriate magnetic surface label for the magnetic island geometry [44], becomes In this study, we consider a two-species collisionless tokamak plasma consisting of bulk ions with charge number Z i = 1 (H, D, or T) and electrons. For every species, the gyrokinetic Vlasov equation for the vortex perturbation is, with the use of an eikonal representation and assuming a Maxwellian unperturbed particle distribution function F 0 [39], where g is the envelope of the gyroangle-independent part of the non-adiabatic response Here, R = x − ρ is the gyrocenter position, x is the particle position, ρ is the gyroradius vector, µ = Mv 2 ⊥ /2B is the magnetic moment, v ⊥ is the perpendicular particle velocity, M is the particle mass, and B = |B| is the total magnetic field strength. δg appears as a result of a decomposition of the perturbed particle distribution function δf(x, v, t) into the adiabatic and the non-adiabatic parts [40,42], Here, F 0 is the unperturbed distribution function and k = ∇S is the wavevector. Note that in our case of the vortex perturbation, we have S(R) = S(X), where X is the normalized helical magnetic flux of which expression is presented in equation (8). Similarly, for the potential perturbation, we have considered so that results in the first term in the right-hand side (RHS) of equation (9) with a Bessel function J 0 (k ⊥ ρ) which captures the FLR effect. Note that in equations (10) and (13), the eikonal factor S(X) captures characteristics of the vortex perturbation, and slower variation of the envelopes g and ϕ, initially set to be uniform in space, becomes important in the long term evolution. In equation (9), the parallel velocity is expressed as v ∥ = σ 2(E − eΦ(X) − µB)/M, where σ = ±1 denotes the direction of the parallel streaming, E is the particle energy. 
b = B/B is the unit vector parallel to the total magnetic field, u E0 = b × ∇Φ(X)/B is the background E × B flow velocity (note that k · u E0 = 0), v d is the magnetic drift velocity and ω D = k · v d is the magnetic drift frequency. The last term in the RHS of equation (9) is the E × B nonlinearity [40] which is the nonlinear generation source of the vortex flow from the microturbulence [32,35,36]. Here, the subscripts k ′ and k ′ ′ in ϕ k ′ and g k ′ ′ clarify that they are envelopes of turbulent perturbations with wavevectors k ′ and k ′ ′ , respectively. Hereafter, for the wavevector of the vortex perturbation, we use a notation k X = ∇S(X) instead of a general notation k to emphasize that we consider the vortex perturbation and not be confused with k = m/r s for the island perturbation. The other part of the self-consistent gyrokinetic system is the gyrokinetic Poisson equation [42], which is equivalent to the quasi-neutrality condition n i = n e . Using equation (11), its perturbed part can be written as where we have neglected the electron FLR effect. Here, n 0 = n i0 = n e0 is the equilibrium density. In the later part of this paper, we further neglect the electron finite-orbit-width (FOW) effect assuming Following previous works [38,39], we would like to obtain a bounce/transit averaged kinetic equation to study the longterm evolution of the vortex perturbation after fast collisionless damping of geodesic acoustic mode (GAM) oscillation by transit resonance [45]. In the magnetic island geometry, the particle streaming along the magnetic field line is mostly in the z-direction, and the projected streaming motions in the x-and y-directions are much slower. Based on this clear scale disparity, we define the bounce/transit average as [46,47] A ≡˛b where b/t represents the lowest order bounce/transit trajectory by parallel streaming. In addition, the magnetic drift gives finite width of the bounce/transit orbit, and this effect is captured in the magnetic drift frequency as [37] Here, Q ∼ k X ∆ b in the first term of the RHS represents the FOW effect and the second term represents the secular magnetic drift of the orbit center. Substituting equation (16) into equation (9), we obtain the lowest-order equation as we are interested in phenomena much slower than the bounce/transit motion. Multiplying the next order part of equation (9) by exp (iQ) and taking the bounce/transit average, we obtain the bounce/transit-kinetic equation as follows. where b ⊥ = (B x + B y )/B and NF 0 denotes the E × B nonlinearity [32,35,36]. In the long-wavelength limit, we simplify the left-hand-side (LHS) of equation (18) so that Here, note that we keep using the general expression of the RHS of equation (18) for a proper description of the polarization shielding. Residual vortex flow In the short term slower than ion bounce motion but faster than the secular drift motions, that is, neglecting spatial inhomogeneity of the envelope h guaranteed from the initial condition of the vortex flow perturbation. Note that equation (20) is the same with equation (6) of Rosenbluth and Hinton [32]. The solution of equation (20) is where P =´dtN. Substituting equation (21) into the quasineutrality condition, equation (14), we obtain where ⟨· · · ⟩ is the flux surface average [32,39]. The source term s is given by where P i,e =´dtN i,e represents the amount of the vortex flow production from ions and electrons. 
We consider an initial kick as a source following previous works [32,36,39], where f g is the envelope of the perturbed gyrocenter distribution function δf g and n pol is that of the polarization density δn pol . Here, n pol (0) and ϕ(0) denote the polarization density and the potential at the initial time t = 0, respectively. Substituting equation (24) into equation (22), we obtain where are long-wavelength expressions of the classical and neoclassical susceptibilities relevant to the polarization shielding [33,34], which originate from the FLR and FOW effects, respectively. In equation (26), ρ Ti is the ion thermal Larmor radius, F 0i is the unperturbed ion distribution function, and Q i is the ion FOW factor formally defined in equation (16). We then perform explicit calculations of the susceptibilities for the case of the circular concentric tokamak. Taking the bounce/transit average to the magnetic drift frequency ω D , we obtain the explicit expression of the secular magnetic drift frequency where S ′ = dS/dX, and R a and Ω ca are the major radius and the gyrofrequency at the magnetic axis. κ is the pitch angle parameter defined as and K(k) and E(k) are the complete elliptic integral of the first and the second kinds, respectively. Substituting equations (27) and (28) into equation (16), we obtain where Here, σ = sgn(v ∥ ), and ϑ is the bounce/transit angle given as [33] and F(x, k) and E(x, k) are the incomplete elliptic integral of the first and the second kinds, respectively. Note that Q 0 originates from the unperturbed tokamak magnetic field B 0 , and Q 1 from the magnetic island perturbation B 1 . Now, we explicitly calculate the classical and neoclassical susceptibilities χ cl and χ nc . Note that the flux surface average in the magnetic island geometry can be written as where In equation (38), the integration range is limited to [−y t , y t ], where X + cos (k y t ) = 0 at the turning points y t and −y t . The +(−) sign in the integrand denotes the outer(inner) half of the integration curve along the island contour (X = const). Usingˆd where κ 0 = sin (θ/2), one can obtain from equation (26) the trapped and passing particle contributions to the neoclassical susceptibility where k w ≡ 4S ′ /w is a characteristic vortex flow wavenumber. Note that from equation (26) the classical susceptibility is Using polar coordinates for the magnetic island geometry [20,39] where ρ 2 = (X + 1)/2, the weighted y-average becomes Consequently, equations (41)-(43) yield [39] where the geometrical factors G 0 and G 1 are given by In equation (47), small coefficient 0.24 (which corrects 0.12 in Choi and Hahm [39]) for G 1 is because it originates from the helical angle y component of the magnetic drift v dy ∝ ∂ r B ∝ cos θ having even parity in θ. Consequently, the helical angle component mainly gives mean precession and has a minor contribution to the orbit width. Substituting equations (46) and (47) into equation (25), we obtain an explicit expression of the residual vortex flow level in the long-wavelength limit [39], In the no-island limitψ → 0, we recover the famous Rosenbluth-Hinton residual level [32] as a result of vanishing G 1 . (Recall that G 1 ∼ w 2 ∼ψ in equation (49).) In the presence of a magnetic island, due to the small coefficient for G 1 in χ nc , the neoclassical enhancement of polarization shielding compared to the original classical one is weakened. Therefore, the residual vortex flow level presented in equation (50) is higher than Rosenbluth and Hinton. 
Note that G 1 ∝ w 2 , so it could be interpreted as a finite island width effect which makes the higher residual level. We would like to mention that in our study, we do not consider the contribution of the magnetic island perturbation B 1 to the magnetic drift velocity v d through ∇B because of its negligible magnitude ∼(w/r) 4 ϵ/q ≪ 1 compared to ∇B 0 . That is, the magnetic island effect in equations (47) and (50) comes from k X = S ′ ∇X in ω D = k X · v d . It is a geometrical effect. Bounce/transit center motion and drift surface In this section, we analyze the bounce/transit center orbit to find the drift surface and the orbit frequency. We consider a closed bounce/transit center trajectory and relevant drift action-angle pair (Ω, φ) [48]. That is, from equation (19) where ω φ is the drift orbit frequency. Then, On a drift surface Ω = const, we have Recall that E = const and µ = const for a single particle. Substituting equation (52) into equation (53), we obtain Here, the explicit expression of the precession velocity v d is and the bounce/transit-averaged parallel velocity is To the lowest order in w/r, one may neglect spatial variations of v d and v ∥ in equation (54) and obtain Note that equation (59) yields a general drift surface label consistent with previous drift-kinetic calculations [49,50]. Note that the ratio of the two terms in equation (59) are, from equations (56) and (58), where v ⊥ is the perpendicular particle velocity, and the critical magnetic island half-width is Next, we estimate ordering of the second term in equation (59) compared to the third, where the critical amplitude of the background vortex flow is Note that the critical background flow level is proportional to the inverse aspect ratio ϵ and the relative island width w/r. In this study, we define three asymptotic regimes depending on the magnetic island half-width w and the background vortex potential Φ as follows. Note that while the relative importance of the terms in equation (59) depends on the velocity pitch κ, we simplify our arguments by considering representative values only. Toroidal regime In the toroidal regime, the toroidal precession, the third term of the LHS of equation (59) is dominant, and we can simply approximate Ω ≈ ψ (↔ x) and φ ≈ ζ as in our previous study [39] by keeping only the toroidal precession, the last term in the LHS of equation (59). Here, we have Shifted vortex regime Meanwhile, in the shifted vortex regime, large background vortex flow dominates over the averaged streaming and the toroidal precession, that is, the second term in equation (59) is much larger than the first and the third terms. As a result, we have where the radial shift which satisfies |d| ≪ w, characterizes the relative importance of the magnetic precession along the tokamak magnetic surface compared to the background vortex flow along the island magnetic surface. It is obvious that equation (66) is described by the polar coordinates where φ is the drift angle. Note that the general expression of the drift orbit frequency ω φ can be obtained from equation (51) as follows. Shifted island regime In the shifted island regime, the averaged streaming is dominant. As we are interested in the deviation of the drift surface from the magnetic surface by toroidal precession [39], we neglect the background vortex flow while keeping the toroidal precession. Then, the approximate drift surface label becomes where is the radial shift of the drift surface from the magnetic surface X due to toroidal precession. 
Again, |d| ≪ w. The drift orbit frequency is approximated to, using equations (7), (68) and (69), Long-term evolution of residual vortex flow In the long term ω −1 D < t, the secular drift ω D participates in the vortex flow evolution. Extending our previous work [39], we quantify the effect of deviation of the drift surface from the magnetic surface by defining the general FSD factor Note that the secular drift frequency is Therefore, in the toroidal regime, we have [39] Λ ≃ −S ′ cos (k y), since ω φ ≃ kv d . In the shifted vortex regime and the shifted island regime, using equations (68), (70) and (73) and the ordering |∂ x d| ≪ 1, we obtain From equations (76) and (77), we realize that in the shifted vortex and the shifted island regimes. That is, the background vortex flow or the parallel streaming could significantly reduce the deviation of the drift surface from the magnetic surface. Using the drift action-angle pair and the FSD factor, equations (51) and (74), we rewrite equation (19), the bounce/transit-averaged kinetic equation, as follows, Multiplying equation (78) by exp (iΛ) and taking the drift average [48] [ we obtain the drift-averaged kinetic equation where h = He −iΛ , which yields a solution Substituting equation (81) into the quasi-neutrality condition, we obtain the general solution for the long-term potential, which can be written in the following compact form [36]. where the dielectricity is and the source is Note that ϕ is outside of the bounce average in equations (83) and (84), provided that it is independent of z in the long term as poloidal angle-dependent GAM sidebands were already damped by transit resonance [45]. Now, we obtain explicit expressions of the long-term potential by approximating equations (83) and (84) for narrow (w cr,e ≪ w ≪ w cr,i ) and thick (w cr,i ≪ w) magnetic islands. For a narrow magnetic island with a weak background vortex flow, ions and trapped electrons are in the toroidal regime while passing electrons are in the drift island regime. Then, we have a large FSD factor Λ ∼ O(1) for ions, and therefore the ion FLR and FOW effects and the electron FSD effect, much smaller than unity, become negligible. As a result, as emphasized in the previous work [39], the ion toroidal precession homogenizes the flow potential envelope along the tokamak magnetic surface ψ. Then, taking the flux surface average to equation (82) over the unperturbed magnetic surface, we finally obtain for the long-term flow envelope ϕ L . Equation (85) indicates that we have a zonal-vortex flow mixture [39] as a result of the long-term evolution of the self-generated vortex flow, which is a combination of the zonal-like envelope ϕ L (ψ) and the vortex-like eikonal part exp [iS(X)]. Note that equation (85) indicates that the final mixture flow level is small due to the factor k 2 w ρ 2 Ti ≪ 1 in our ordering for the long-wavelength vortex flow. (67) and (77). Since the maximal ordering for the background vortex flow is eΦ/T ∼ O(1), with an ordering eΦ cr /T ≫ w cr,e /w the passing electrons are in the shifted island regime having an even smaller FSD factor compared to ions, leading to only a minor correction to the plasma dielectricity, equation (83). Note that in the shifted vortex regime, that is, there is no averaged deviation of the trajectories of flow-carrying particles from the magnetic surface X. This is due to an exact cancellation of the contributions from the toroidal precession of trapped and passing particles. 
Therefore, the potential surface (streamline) is maintained to be the same as the magnetic surface X in a long term up to the linear order of O(d/w). In other words, the flow structure is maintained as the concentric vortex in a long term. Then, we readily obtain the long-term vortex flow level as follows, from equations (83) and (84) with the flux surface average. where we have defined the drift susceptibility which represents the long-term enhancement of the polarization shielding due to the magnetic precession, reduced by the background vortex flow. Note that minor electron contributions have been neglected for a simple estimation. Then, the explicit expression of the drift susceptibility is where captures the effect of the FSD of the orbit center trajectory from the magnetic surface to the long-term vortex flow level. Here, V D ≡ T/eBR and V E ≡ 4Φ ′ /Bw characterize the magnitude of the toroidal precession and the mean E × B flow, respectively. Note that D ∝ 1/Φ ′ from equation (91), so that the secular drift-induced enhancement of polarization shielding χ d ∝ D 2 decreases with the background vortex flow amplitude. It clearly shows that a background vortex flow is beneficial to maintain self-generated vortex flows. In the large-flow limit Φ ′ → ∞, we have a vanishing drift susceptibility χ d → 0 and accordingly recover ϕ L → ϕ R . That is, there is no further damping (shielding) of the residual vortex flow in the presence of a very large background vortex flow. Thick magnetic island: w ≫ w cr,i In a thick magnetic island, we still find similar aspects of the long-term evolution of the self-generated vortex flow as those in a narrow magnetic island in a collisionless plasma. We have a toroidal precession-induced deviation of the streamlines from the magnetic island contours, which can be significantly reduced by a strong background vortex flow. Notable differences are dominant trapped particle contribution over passing particles and the equal role of electrons with ions in the presence of weak or moderate background vortex flow. Negligible background flow: Φ ≪ Φcr. We have trapped particles in the toroidal regime, and passing particles in the shifted island regime in a thick magnetic island with a weak background vortex flow. From equations (72), (76) and (77), we then have much smaller FSD Λ ∼ d/w for the passing particles than that for the trapped particles Λ ∼ O(1). We thus have a dominant trapped particle contribution to the plasma dielectricity, resulting in That is, in the absence of a strong background vortex flow and a collisional relaxation, toroidal precession again leads to the formation of the zonal-vortex flow mixture in a thick magnetic island. Note that electrons also contribute to the dielectricity in addition to ions as shown in the factor 1 + T i /T e , which gives a lower mixture flow level compared to the case of a narrow magnetic island. Moderate background flow: With a moderate background vortex flow in a thick magnetic island, we have trapped particles in the shifted vortex regime by the strong flow, while passing particles are in the shifted island regime due to a stronger effect from the averaged streaming. Therefore, the FSD factor for the passing particles is smaller than that for the trapped particles overall. 
As a result, we have a dominant trapped particle contribution to the longterm evolution of a self-generated vortex flow, which leads to equation (88), but with a different expression of the drift susceptibility where contains only trapped particle contribution to the FSD. Discussions In the previous sections, we have studied the effects of the background vortex flow on the evolution of the self-generated vortex flow in the short-term ω −1 bi < t < ω −1 D and the long-term ω −1 D < t. In the short term, we have found that the residual vortex flow level in a magnetic island is unchanged by the background flow. It is largely different from the case of the residual zonal flow in tokamak geometry, which is enhanced by the equilibrium radial electric field [51,52] as a result of the reduction of the neoclassical polarization shielding due to orbit squeezing [53]. In tokamak geometry, it is the poloidal direction that determines the lowest-order bounce motion and thus we have a significant contribution from the equilibrium E × B flow to the projected bounce motion which can be comparable to the projected parallel streaming ∼ v ∥ B θ /B. Meanwhile, in magnetic island geometry, the lowest-order bounce motion is determined in the reference helical magnetic field direction, exactly orthogonal to the background vortex flow. This is the reason for the absence of the mean E × B flow effect on the residual vortex flow level. In the long term, our theory predicts that the toroidicityinduced breaking of the helical symmetry induces further collisionless flow damping toward a zonal-vortex flow mixture with ϕ L /ϕ(t = 0) ∼ χ cl . This symmetry breaking-induced damping makes it harder for the turbulence-induced modulational growth of the vortex flows [54] to overcome the flow damping, resulting in an increase of the bifurcation threshold. However, in the presence of a large enough background vortex flow with eΦ/T ≫ ϵw/r, the toroidicity-induced flow damping is significantly reduced so that ϕ L /ϕ(t = 0) ∼ χ cl /(χ cl + χ nc + χ d ). The suppression of the long-term damping of the self-generated vortex flow lowers the transition threshold. The finite background vortex flow also prevents the structure deformation of the self-generated vortex flow from the concentric vortex, which therefore prevents a parallel collisional relaxation. Then, the dominant collision effect would be a slower collisional flow damping by the neoclassical friction between trapped and passing particles [55]. The above findings indicate that the finite background vortex flow makes a positive synergism with the self-generated vortex flow leading to a more favorable condition for the transport barrier formation. An important point is that the critical background flow level for the prevention of the toroidal precession-induced long-term damping of the self-generated vortex flow is much lower than that required for the background E × B flow shear-induced turbulence suppression, ω E×B = ∆ω T . Here, ω E×B is the E × B shearing rate [20] and ∆ω T is the turbulence decorrelation rate [15,16]. For a simple estimation, let us consider orderings L E ∼ w, ∆ω T ∼ ω * e0 ∼ (k ⊥ /r)T/eB and k ⊥ ρ i ∼ 1, where L E is length scale of the background vortex flow potential, ω * 0 is an unperturbed diamagnetic frequency, and k ⊥ is the perpendicular wavenumber of the microturbulence. 
Then, the E × B shear suppression criterion yields, roughly, eΦ /T ∼ L 2 E k ⊥ /r ∼ (w/ρ i )w/r, which is much larger than eΦ cr /T = ϵw/r for the prevention of the long-term damping of the self-generated vortex flow. Therefore, the condition Φ > Φ cr addressed in this work could be considered as a preliminary condition for transition to an enhanced confinement state of a magnetic island. It is worth noting that experimental and simulation studies [22,25,29] indicate that this condition can be satisfied more easily near the island separatrix compared to the central region of the island due to a larger profile-induced background electric field. Mechanisms of confinement enhancement in the island region suggested in previous studies rely on turbulence suppression by the large background E × B flow shear around the island boundary [23,56]. Our work indicates that in addition to that, the magnetic island boundary region is also a prime location for triggering the transition (bifurcation) to an enhanced confinement state of the magnetic island accompanied by transport barrier formation. Considering the experimentally observed feature that only a thick enough magnetic island has a significant background electric field [24], a thick magnetic island is preferred over a narrow one for the transport barrier formation (either by the easier triggering through the vortex flow self-generation, or solely by the stronger background flow [13]). However, even with the internal transport barrier and local confinement enhancement, the global tokamak confinement is likely degraded with the thick magnetic island due to profile flattening [57] over a wide radial range in the island [18,22]. A possible strategy to overcome this demerit of a thick magnetic island could be mitigating or suppressing the magnetic island using electron cyclotron current drive [58][59][60] or externally imposed magnetic perturbation [61][62][63] after the internal transport barrier formation. Thanks to the hysteresis of transport barrier dynamics [64][65][66][67][68], we could minimize the demerit of a thick magnetic island while maintaining the benefit. We would like to mention several theory issues for a more thorough understanding of the vortex flow dynamics in the magnetic island region and its impact on confinement. First, the effect of the toroidal precession (or the magnetic drift), with and without the background vortex flow, should also be investigated in the generation part. Recall that theoretical studies have shown a negative effect of the equilibrium E × B flow shear to the modulation growth of the tokamak zonal flow [12,69], and that synergism of the toroidal precession and the E × B flow was analytically studied in the context of the shear suppression of turbulence [70,71]. Second, a theoretical extension of the present work in a tokamak to the selfgenerated vortex flows in a stellarator magnetic island would be interesting where the secular radial drift of the orbit centers enters to the long-term flow evolution. Recall that there have been extensive experimental E × B shear flow and transport studies in stellarators including LHD, TJ-II, and W7-X. Third, note that while we have used an eikonal representation, the scale of spatial inhomogeneity of the flow potential envelope ϕ becomes comparable to that of the eikonal factor S as an initially uniform vortex potential envelope evolves. That is, we are touching the validity limit of the eikonal representation. 
For a more precise description of the dehomogenization of the envelope, one may need to consider variations of the envelope ϕ and the eikonal factor S on the same footing. Fourth, we have assumed the Maxwellian unperturbed distribution function F 0 in this work, and therefore we have not captured the contribution from the equilibrium parallel current via F 0 . Since the equilibrium parallel current is an essential element of magnetic island physics, we will address its effect on the vortex flow evolution in the near future. Finally, an extension to the burning plasmas with abundant energetic particles should be pursued in the near future for a precise prediction of confinement and potential new operation scenarios in future fusion machines. The energetic particles are expected to amplify the toroidicity-induced helical symmetry breaking and long-term flow damping due to the proportionality of the toroidal precession to the particle energy. To be rigorous, we have to extend our calculation to a shorter wavelength vortex flows k X ρ θ > 1, where ρ θ is the poloidal gyroradius [33,34], to fully consider effects of the energetic particles having large gyroradii. In summary, we have shown by analytic gyrokinetic calculations that in the short term, the residual level of a self-generated vortex flow after fast collisionless damping in a stationary tokamak magnetic island, higher than the Rosenbluth-Hinton level due to a finite island width, is unaffected by a background vortex flow. In the long term, the residual vortex flow evolves to a zonal-vortex flow mixture with further damping by a toroidicity-induced breaking of the helical symmetry. However, in the presence of a finite background vortex flow with eΦ/T > ϵw/r, the long-term flow damping and the structure deformation are significantly reduced. Since the deviation of the streamlines of the self-generated vortex flow from the island magnetic surfaces is suppressed by the finite background vortex flow, its parallel collisional relaxation is also prevented. As the self-generated E × B flows play a crucial role in the triggering of the transport barrier formation, the positive synergism of the background and the selfgenerated vortex flows leads to a more favorable condition for the transition to an enhanced confinement state of a magnetic island. The synergism that we have found could cooperate with previously suggested mechanisms relying on the background shear flow, supporting the argument that the island boundary region is a prominent location for the transition.
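As a compact illustration of the three asymptotic regimes defined in section 4, the sketch below labels a case according to which of the three competing terms in the drift-surface relation, equation (59) (averaged parallel streaming, background E × B vortex flow, toroidal precession), dominates. The term magnitudes are supplied directly as illustrative inputs; the explicit expressions for w_cr and Φ_cr (equations (61) and (63)) are not reproduced here.

```python
# Minimal sketch of the regime classification of section 4: whichever of the three
# competing terms in the drift-surface relation dominates sets the asymptotic regime.
# The numerical magnitudes below are purely illustrative placeholders.

def classify_regime(streaming: float, background_flow: float, precession: float) -> str:
    terms = {
        "shifted island (averaged streaming dominant)": abs(streaming),
        "shifted vortex (background E x B flow dominant)": abs(background_flow),
        "toroidal (toroidal precession dominant)": abs(precession),
    }
    return max(terms, key=terms.get)

# Illustrative cases (arbitrary normalized magnitudes):
print(classify_regime(streaming=0.1, background_flow=0.2, precession=1.0))  # toroidal
print(classify_regime(streaming=0.1, background_flow=5.0, precession=1.0))  # shifted vortex
print(classify_regime(streaming=8.0, background_flow=0.5, precession=1.0))  # shifted island
```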
Use of Early-Onset Sepsis Risk Calculator for Neonates ≥ 34 Weeks in a Large Tertiary Neonatal Centre, Saudi Arabia Early-onset sepsis (EOS) refers to sepsis with onset before 72 hours of life. Kaiser Permanente Calculator (KPC) or EOS risk calculator is an advanced multivariate risk model for predicting EOS in infants. Objective To examine the EOS risk calculator effect for predicting neonatal EOS, the necessity for laboratory tests, antibiotic usage, and length of hospital stay among the term and late-preterm newborns. Method In this cross-sectional study, we evaluated 44 cases of neonates ≥34 weeks of gestation started on empiric antibiotics within 72 hours after birth due to suspected EOS at the neonatal intensive care unit (NICU). The study site is a 1,500-bed teaching hospital, with around 4,500 annual deliveries, 70 beds in the level II and level III tertiary care NICU. We calculated the risk of the incidence of EOS as one per 1000 live births. Then we retrospectively calculated the probability of neonatal early-onset infection at birth based on the EOS risk calculator and assigned each neonate to one of the recommended categories of the calculator. The primary outcome was to evaluate the infection risk calculator's effect for predicting neonatal EOS and antibiotic usage among the term and late-preterm newborns ≥34 weeks of gestation. Results In our data, EOS calculator showed unnecessary antibiotic usage for 12 (27.3%) neonates [relative risk reduction (RRR) 27.2%; 95% confidence interval (CI) 20.3% - 35.7%)]. EOS risk calculator implementation may decrease in the number of NICU admission (RRR 20.4%; 95% CI 14.3% - 28%), laboratory tests (RRR 20.4%; 95% CI 14.3% - 28%), and length of stay (RRR 25%; 95% CI 38% - 95%). Conclusion EOS calculator could be considered a strategic and objective implementation for managing EOS that can limit unnecessary laboratory tests, reduce antibiotic usage, and length of stay related to EOS. Our findings ensure a multicenter, randomized study evaluating the safety and general use of the calculator for EOS sepsis in Saudi Arabia's clinical practice. Introduction Early-onset sepsis (EOS) remains a severe problem for neonates associated with the significant cause of infant morbidity and mortality in both high and low-income countries [1]. According to World Health Organization (WHO) in 2015, the incidence of EOS overall worldwide was one to 5/1000 live births, with the mortality rate of late preterm babies ≥ 35 weeks around 2-3% [2]. Early-onset neonatal sepsis incidence in Arab countries is 0.5-1.4 per 1000 live births [3]. The EOS risk factors include chorioamnionitis, maternal group B streptococcus (GBS) colonization, inadequate intrapartum antibiotic prophylaxis for GBS, and prolonged rupture of membranes. About 60% of term babies with EOS need admission for respiratory and cardiovascular support, although EOS's clinical manifestation may appear later. Clinical presentations of sepsis may include acidosis, tachycardia or bradycardia, hypoglycemia, jaundice, feeding intolerance, systemic hypotension, lethargy or irritability, respiratory distress, and apnea. However, these nonspecific findings can also be related to non-infectious factors that cause physicians to determine who should receive antibiotics and lead to the overuse of empiric antibiotics among infants even with widely applied antibiotic stewardship programs [4]. Additionally, admission to the neonatal intensive care unit (NICU) may interrupt breastfeeding and parental bonding. 
Mukhopadhyay et al. demonstrated that EOS evaluation in asymptomatic infants results in delayed breastfeeding initiation almost four-fold and increased formula supplementation two-fold [5]. Finally, around 40% of all neonates were exposed to antibiotics before the delivery due to maternal surgical prophylaxis in cesarean deliveries, maternal GBS intrapartum antibiotic prophylaxis (IAP) proved, and suspected chorioamnionitis [6]. Therefore, neonatal health providers should consider the risk and benefit of initiating antibiotic therapy in newborns with suspected EOS and the duration of antibiotics course in the absence of culture-proved infection. For that, a combination of evidence-based antibiotics programs and clinical approaches can be beneficial in reducing antibiotics use [4]. To decrease unnecessary hospital admission and antibacterial treatment to well-appearing infants, researchers at Kaiser designed the EOS risk calculator, a robust logistic regression model, Kaiser Permanente Calculator (KPC) that provides individualized evaluations of early-onset sepsis risk in neonates ≥ 34 week's gestation [7]. The EOS risk calculator provides an early-onset sepsis risk estimate for each neonate based on the five objective maternal risk factors and four clinical neonatal risk factors. It categorized neonates into three levels of risk with a correlated recommendation, like laboratory tests, start or not to start antibiotic treatment. The EOS calculator is a freely available online validated tool; however, it lacks standard guidelines for its use, which provides some discomfort with the practice change [8]. Based on the available data, the EOS risk calculator's implementation can significantly decrease the unnecessary use of antibiotics in asymptomatic neonates in the first 72 hours [9]. Furthermore, decreased administration of antibiotics with the EOS risk calculator may reduce the rate of hospital admission and costs. The usage of neonatal EOS calculator is increasing in various countries and continents, including Australia, the USA, and Europe. Although this tool may decrease antibiotic administration to neonates at risk for EOS sepsis, related side effects, and shorten the duration of hospital stay, it has not yet been validated in Saudi Arabia. The calculator can be served as the tool of change away from the previously recommended practice, may decrease the need for diagnostic investigations and empirical therapy in neonates in Saudi Arabia. Evidence supporting the effectiveness and safety of the calculator is an essential issue before considering its implementation. Although the available evidence regarding the neonatal calculator's safety is not sufficient, it did not show its inferiority compared to used conventional treatment policies [9]. Another critical point is that the EOS risk calculator should be considered only as a supportive tool that providers can use after looking at the overall clinical picture to decide about the EOS evaluation and further management. Given the significant safe reduction in antibiotic usage and investigations for infection, we decided to investigate the possibility of applying the EOS risk calculator tool in our center. Therefore, our objective was to evaluate whether implementing the EOS risk calculator in neonates with suspected early-onset sepsis would decrease antibiotic administration within 24-72 hours compared to the institutional policy. 
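The calculator's underlying logic, converting a baseline EOS incidence into a risk estimate at birth and then updating it according to the infant's clinical presentation, can be illustrated with a simple likelihood-ratio calculation. This is a hedged sketch: the prior risk and the likelihood ratios below are illustrative placeholders, not the published Kaiser Permanente model coefficients.

```python
# Hedged sketch of the general logic behind an EOS risk estimate: a risk at birth is
# converted to odds, adjusted by the infant's clinical presentation via a likelihood
# ratio, and converted back to a risk per 1000 live births. The likelihood ratios and
# the prior used here are illustrative placeholders, NOT the published calculator values.

ILLUSTRATIVE_LR = {          # assumed values for illustration only
    "well-appearing": 0.4,
    "equivocal": 5.0,
    "clinical illness": 20.0,
}

def posterior_risk_per_1000(prior_per_1000: float, presentation: str) -> float:
    prior_p = prior_per_1000 / 1000.0
    prior_odds = prior_p / (1.0 - prior_p)
    post_odds = prior_odds * ILLUSTRATIVE_LR[presentation]
    post_p = post_odds / (1.0 + post_odds)
    return post_p * 1000.0

# Example: an assumed risk at birth of 0.5 per 1000 (from maternal factors), updated
# by the infant's clinical presentation
for exam in ("well-appearing", "equivocal", "clinical illness"):
    print(f"{exam:18s} -> {posterior_risk_per_1000(0.5, exam):6.2f} per 1000 live births")
```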
Materials And Methods From February 1, 2020, through June 30, 2020, there were 2063 neonates born at or after 34 weeks of gestation in a single tertiary teaching center. From those, we identified 44 cases started on empiric antibiotics within 72 hours after birth due to suspected EOS and placed them on the web-based neonatal EOS risk calculator [7]. We observed that the EOS calculator identified that 27.3% of the neonates should not be prescribed antibiotics, with a 10% margin of error, 90% power, 95% confidence level & 5% type I error; the minimum required sample size is 29. Information required for calculating EOS scores included gestational age (GA), highest maternal antepartum temperature, rupture of membranes, maternal GBS status, onset, and type of intrapartum antibiotics. The example of the use of the neonatal EOS risk calculator is demonstrated in Figure 1. FIGURE 1: EOS risk calculator (example of the calculation of the risk of the neonatal EOS) Neonatal clinical parameters used for assessing the risk for early-onset sepsis by EOS calculator include Apgar scores, oxygen saturation, heart rate, respiratory rate, temperature, signs of respiratory distress, mode of respiratory support, inotropic drugs, and presence of the hypoxic-ischemic encephalopathy (HIE). According to these clinical presentations, we categorized neonates into one of the infection risk calculator's recommended states: well-appearing, equivocal, or clinical illness groups [7]. The EOS risk score then incorporated the clinical finding of each case to determine the appropriate management plan. We used the incidence of EOS as one per 1000 live births, which was considered the likely risk at our institution based on the calculated prevalence rate of EOS for the previous year. Additional neonatal data from medical reports included gender, birth weight, antibiotic usage, length of hospital stay, and mortality. The infants with suspected EOS were managed according to the unit guidelines based on the centers for disease control (CDC) 2010 guidelines [10]. The unit guidelines recommend management for neonates with suspected EOS with intravenous ampicillin and gentamycin up to 48 hours if culture is negative, otherwise to continue according to the patient's clinical condition. For those neonates with hypoxic-ischemic encephalopathy, acute renal injury, and concern for infection, unit guidelines suggest the possible substitution of gentamycin with cefotaxime. Diagnosis of suspected or diagnosed chorioamnionitis was considered if it was documented in the maternal medical file. Prolonged rupture of membranes was considered if it lasted longer than 18 hours between the time of rupture and the delivery time. Neonates born less than 34 weeks gestation and neonates born at term not received antibiotics were excluded. Maternal microbiologic data, antibiotic exposure, intrapartum temperature, GBS status, duration rupture of the membrane, we obtained from maternal medical records. Early-onset sepsis is defined as blood or cerebrospinal fluid (CSF) culture-confirmed infection with a pathogenic bacterial species with onset before 72 hours of age [11]. The organisms most frequently involved in early-onset neonatal sepsis (EOS) are group B streptococcus (GBS), Escherichia coli, Listeria monocytogenes, Coagulase-negative Staphylococcus, and Haemophilus influenza. Empirical antibiotic exposure for EOS was defined as antibiotics treatment initiated before culture reports were known and within 72 hours of age. 
The recommendations for antibiotic therapy were retrospectively compared between the two methods, the infection risk calculator and the unit guidelines for suspected EOS; this comparison did not affect the clinical management. The primary outcome was to determine the proportion of neonates who would not need antibiotic administration within 24-72 hours according to the EOS risk calculator. The secondary outcome was to assess the advantage of implementing a neonatal EOS risk calculator in decreasing blood investigations, the duration of antibiotic treatment, and the length of hospital stay. The data were analyzed using SPSS 25.0 (IBM SPSS Statistics for Windows, Version 25.0; Armonk, NY: IBM Corp). We classified the 44 neonates into three categories as per clinical assessment. Continuous variables are presented as mean ± standard deviation (SD). The Z-test for proportions was used for the nominal variables, the non-parametric independent-samples Kruskal-Wallis test for non-normally distributed variables, and analysis of variance (ANOVA) for comparing the means of the three groups. The parametric Pearson correlation coefficient was estimated between the clinical assessment and the EOS calculator. A Bland-Altman plot of agreement was constructed and a linear regression model was fitted. All the above tests were assessed for statistical significance at the 5% level. Results The study included 44 infants born at 34 weeks' gestation or later. Of these 44 neonates, 20.4% did not require admission to the NICU based on their calculated clinical risk assessment using the online neonatal EOS calculator [7]. The median duration of antibacterial therapy was six days. We identified no positive blood culture results.
TABLE 1: Maternal variables for clinically assessed neonates
Suspected chorioamnionitis was present in two (4.5%) of 44 cases; one case was observed in the clinically well-appearing group and another in the clinically ill group. All GBS-positive mothers (100.0%) were in the well-appearing infant group. Neonatal variables presented in Table 2 showed that among the 44 infants, 36 (81.8%) were appropriate for gestational age. All patients received antibiotic therapy within 12 hours of age. An antibiotic duration exceeding three days was seen in seven (16.0%) cases; six (86.0%) of them were in the clinically ill group.
TABLE 2: Neonate variables in clinically ill, equivocal, and well-appearing groups
The clinically well-appearing group included six (16.7%) neonates, with mean ± SD Apgar scores of 7.8 ± 0.75 and 8.6 ± 0.35 at the 1st and 5th minutes, respectively. The vital signs and clinical appearance of the neonates did not reveal any significant abnormal findings. None of these patients required any respiratory support, and an average oxygen saturation of 95.6% on room air was recorded in the first two hours of life. The main causes of admission were maternal antibiotic use in four (66.6%) cases and maternal GBS colonization in three (50.0%). All pregnant women in these groups received an antibacterial treatment course more than two hours before delivery. Blood culture was requested in all six (100.0%) newborns. All infants in the well-appearing group received antibiotic therapy and laboratory investigations; however, the EOS calculator did not recommend antibacterial treatment. One-half of the well-appearing infants were admitted and received antibiotic therapy due to positive maternal GBS status.
The mean ± SD sepsis risk calculated by the EOS calculator for these cases was 0.23 ± 0.09 per 1000 live births, before adjustment for the clinical data. The equivocal group included nine (20.45%) of the total study cases. The patients in this group tended to have a lower mean gestational age (36.2 ± 1.3 weeks) compared to the clinically ill and well-appearing groups (37.2 ± 0.5 and 37.6 ± 0.3 weeks, respectively), as well as lower Apgar scores and a lower oxygen saturation in the first two hours of life (93.7% ± 0.6%) compared to neonates of the well-appearing group. The leading cause of admission was transient tachypnea of the newborn (TTN) (55.6%). All patients received support with a nasal cannula (2 L/min), which was ceased within 24 hours, and antibiotic therapy for more than three days. In the equivocal group, all patients received antibiotic treatment (100.0%) and laboratory tests (100.0%), while the EOS calculator suggested treatment for only three (33.3%) and laboratory investigations for 66.7%. For the remaining 33.3%, the calculator recommended neither empirical antibiotics nor laboratory investigations, but only qualified observations every four hours. The mean ± SD calculated sepsis risk for these cases was 0.44 ± 0.17 per 1000 live births, before adjustment for clinical status. In the clinically ill group, the reasons for admission mainly included TTN (31.0%), respiratory distress syndrome (RDS) (20.7%), hypoxic-ischemic encephalopathy (HIE) (20.7%), meconium aspiration syndrome (13.8%), and congenital pneumonia (6.9%). Mortality was 6.9% and was associated with causes other than EOS. All twenty-nine cases received antibiotics and underwent EOS-related laboratory tests, and the EOS calculator recommended empiric antibiotics in all 29 patients. The non-parametric independent-samples Kruskal-Wallis test on the distribution of the EOS calculator values across the categories of clinical assessment was not significant (P=0.345), whereas the distributions of antibiotics given to the baby (P=0.000) and of length of stay (P=0.001) across the categories of clinical assessment were statistically different. The magnitude of the correlation between the EOS risk calculator value and the clinical assessment value was estimated by the Pearson coefficient, r = +0.971 (P=0.000), and Figure 2 displays the scatter diagram of the positive relationship between the two EOS risk assessment values.
FIGURE 2: Scatter diagram of the two assessment values
The Bland-Altman plot (Figure 3) shows the agreement between the two EOS risk assessment values by plotting their mean on the x-axis against their difference on the y-axis, with the mean difference reference line (-11.445) in blue and the 95% limits of agreement (-82.16 to 59.27) in red.
FIGURE 3: Bland-Altman plot for the difference between local clinical practice and the applied EOS risk calculator
Further, the 'mean' and 'difference' of these two variables were used in a linear regression model, which yielded a regression coefficient β = -1.767 (P=0.000), implying that a proportional bias exists between the two measures within the limits of agreement. We found a statistically significant reduction of antibiotic use and laboratory investigations in the equivocal and well-appearing groups after implementing the EOS risk calculator (Table 3). The sepsis calculator did not recommend empiric antibiotics for 27.3% (P=0.0003) of the 44 studied cases.
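The Bland-Altman agreement analysis reported above reduces to a mean difference and ±1.96 SD limits of agreement. A minimal sketch of that computation follows; the paired values are simulated for illustration and are not study data.

```python
# Minimal Bland-Altman sketch for two paired risk assessments (local clinical practice
# vs. EOS calculator recommendation). The paired values are simulated, not study data.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return the mean difference (bias) and 95% limits of agreement for paired values."""
    diff = a - b
    bias = diff.mean()
    sd_diff = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)

rng = np.random.default_rng(1)
clinical = rng.normal(50, 20, size=44)                  # simulated first assessment
calculator = clinical + rng.normal(-10, 35, size=44)    # simulated second assessment
bias, (lo, hi) = bland_altman(clinical, calculator)
print(f"bias = {bias:.2f}, 95% limits of agreement: {lo:.2f} to {hi:.2f}")
```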
One of the most common admission diagnoses used by neonatal health care providers is "rule out sepsis," despite the low incidence of proven culture-positive sepsis. Implementing the EOS calculator for predicting neonatal sepsis, together with close clinical observation, might decrease NICU admissions in healthy-appearing infants [12,13]. Furthermore, a significant reduction in antibiotic therapy usage, from 5% to 2.6%, was achieved with EOS calculator usage in several studies [9,14,15]. Also, calculator usage has been associated with a significant reduction in healthcare utilization and associated costs [15]. The presented study is the first report from the Kingdom of Saudi Arabia on implementing the EOS calculator among newborns. Our study's results were comparable with other studies evaluating the infection calculator's validity [16,17]. By applying the EOS calculator in the well-appearing and equivocal groups, we found that antibiotic usage could significantly decrease from 100.0% to 0 (P=0.0009) and from 100.0% to 33.3% (P=0.0035), respectively. Neonates admitted for EOS evaluation did not have culture-confirmed sepsis. Another critical point is determining an adequate duration of the antibiotic course in the absence of any positive cultures. Simonsen et al. reported that around 10% of all neonates are investigated for presumed EOS; however, only approximately 5% have positive cultures [18]. However, in highly suspected clinical cases with a negative culture, antimicrobial therapy may continue for seven to 10 days [19]. We found that the average length of stay in our study was three days among the neonates in the well-appearing group, eight days in the equivocal group, and twelve days in the ill-appearing group. However, the clinical conditions of all cases in the well-appearing and equivocal groups improved within the first 24 hours; all of them maintained an appropriate oxygen saturation while on room air after two hours of age. We also found that the main possible reason for an extended NICU stay among neonates in the well-appearing and equivocal groups was the requested blood culture and C-reactive protein (CRP), which resulted in a further extension of antibiotic exposure for an average of 72 hours in both groups. Elimination of routine laboratory tests such as the complete blood count (CBC) and CRP is supported by their low sensitivity in predicting EOS in late-preterm and term newborns in several studies [19,20]. Furthermore, every needle prick for blood collection and every insertion of a peripheral catheter for antibiotic administration is painful. It is well known that repeated painful exposures can potentially cause adverse events such as physiologic instability, an altered stress-response system, and impaired brain development [21,22]. Dhudasia et al. reported a reduction of laboratory tests by almost 50% in infants admitted to the NICU and by around 80% among well-appearing infants with the use of multivariable EOS risk-prediction models [23]. When we applied the EOS calculator, we found that laboratory evaluations for the studied cases could be decreased from 100.0% to 66.7% (P=0.0035) in the equivocal group and from 100.0% to 0.0% (P=0.0009) in the well-appearing group, with a total reduction of 20.5% (P=0.0017) in these groups. Carola et al. reported that some cases with culture-proven EOS could be missed with an infection calculator [24]. 
At this point, the primary consideration was given to "missed cases," defined as cases in which the infection calculator did not recommend antibiotics but the National Institute for Health and Care Excellence (NICE) guidelines did [14]. However, the calculator's developers pointed out that such EOS cases might not have been truly missed, owing to the comprehensive, continuous, and extended ongoing evaluation of the neonates [25]. Guidelines by the Centers for Disease Control and Prevention (CDC) recommend laboratory tests (blood culture and CBC) and empiric antibiotic therapy for 48 hours for newborns born to mothers with suspected or confirmed chorioamnionitis; however, these recommendations are currently being re-evaluated [26]. What about antibiotic treatment in healthy-appearing infants born to mothers with chorioamnionitis? We identified one case born to a mother with chorioamnionitis in the clinically ill category and another in the well-appearing category. The management of newborn infants at risk for EOS continues to be controversial, especially when they are clinically well. Based on our hospital protocol, all newborns born to mothers with suspected or proven chorioamnionitis were admitted to the NICU for investigations and received treatment for suspected EOS for at least 48 hours, regardless of clinical presentation. We found that the leading cause of admission to the NICU in our study was primarily non-infectious, and treatment was started to "rule out sepsis." Based on the presented data, we consider that a neonatal practice focused on the empiric treatment of late-preterm and term neonates at risk for EOS cannot prevent EOS and should not be used. Currently, the only proven preventive approach to EOS is appropriate maternal intrapartum antibacterial treatment [27]. On the other hand, the safety of the tool's use in neonates can be a significant concern because of the risk of possibly missing cases of EOS and potential delays in the administration of antibiotic therapy. Kuzniewicz et al. concluded that the substantial reduction in antibiotic usage significantly outweighs the possibility of a delay in antibiotic therapy [9]. Furthermore, no significant morbidity or mortality related to culture-proven EOS, readmission, or delay in antibiotic treatment was reported with use of the infection calculator [28]. Our experience showed that the calculator's capacity for decreasing antibiotic use is impressive and may reduce antibacterial treatment from 100.0% to 72.8% (P=0.0003). In Saudi Arabia, significant over-treatment with antibiotics for suspected neonatal EOS represents a persistent clinical problem that puts infants at risk, as it leads to more nosocomial infections and antibiotic resistance [29]. Implementation of an EOS risk calculator would potentially improve clinical practice and limit the unnecessary use of antimicrobials in Saudi Arabia. This study has several limitations. It was a short-duration, retrospective study conducted in a single center. The sample size was small, which may have been insufficient to detect culture-positive sepsis because of the disease's low incidence rate. Furthermore, a larger prospective trial is needed to evaluate the effect of the neonatal EOS calculator on mortality and readmission rates. We could not follow proper serial physical examinations after birth because the infection calculator was applied in a theoretical manner, which may differ from real-time clinical scenarios. 
Conclusions Implementation of the neonatal sepsis calculator is associated with a reduction in laboratory tests, antibiotic use, and length of stay related to EOS evaluation. Our data showed that this simple clinical decision-support tool could be considered a strategic and objective approach to managing EOS that can reduce antibiotic usage by more than 27%. These findings warrant a multicenter, randomized study evaluating the safety and general use of the EOS risk calculator in clinical practice in Saudi Arabia.
2021-05-21T16:56:26.821Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "e32de1d5c5d4e4dc3faf9dc9264189167ba1a821", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/55577-use-of-early-onset-sepsis-risk-calculator-for-neonates--34-weeks-in-a-large-tertiary-neonatal-centre-saudi-arabia.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5da6fd419ff85fc20eef2123469faf253fe3df1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119444952
pes2o/s2orc
v3-fos-license
Mass-like gap creation by mixed singlet and triplet state in superconducting topological insulator We investigate proximity-induced mixed spin-singlet and spin-triplet superconducting state on the surface states of a topological insulator. Such hybrid structure features fundamentally distinct electron-hole excitations and resulting effective superconducting subgap. Studying the particle-hole and time-reversal symmetry properties of the mixed state Dirac-Bogoliubov-de Gennes effective Hamiltonian gives rise to manifesting possible topological phase exchange of surface states, since the mixed-spin channels leads to appearance of a band gap on the surface states. This is verified by determining topological invariant winding number for chiral eigenstates, which is achieved by introducing a chiral symmetry operator. We interestingly find the role of mixed superconducting state as creating a mass-like gap in topological insulator by means of introducing new mixed-spin channels $\Delta_1$ and $\Delta_2$. The interplay between superconducting spin-singlet and triplet correlations actually results in gaped surface states, where the size of gap can be controlled by tuning the relative $s$ and $p$-waves pairing potentials. We show that the system is in different topology classes by means of chiral and no-chiral spin-triplet symmetry. In addition, the resulting effective superconductor subgap manipulated at the Fermi surface presents a complicated dependency on mixed-spin channels. Furthermore, we investigate the resulting subgap tunneling conductance in N/S and Josephson current in S/I/S junctions to unveil the influence of effective symmetry of mixed superconducting gap. The results can pave the way to realize the effective superconducting gap in noncentrosymmetric superconductors with mixed-spin state. I INTRODUCTION Topological insulators (TIs), as an interesting topologically nontrivial phase of condensed matter represent distinct electronic properties comparing to the conventional band insulators. On the surface of a three-dimensional topological insulator (3DTI), topologically protected quantum channels are formed in a manner that the charge carriers obey from massless Dirac-like fermions. These gapless surface states are protected by time-reversal (TR) symmetry and are robust against disorder and perturbations. There exist odd number of Dirac cones in the Brillouin zone, resulted from inversion symmetry breaking owing to the Rashba-type spin-orbit interaction [1,2]. These peculiar features enable TIs to be potentially used to spintronics [3,4,5] and topological quantum information applications [6,7,8]. Moreover, superconductivity induction by proximity-effect on the surface states of a 3DTI has been of noticeable importance during the last decade. Several experimental probes [9,10,11,12,13,14,15] have evidenced existence of spin-singlet and spin-triplet pairing states in the hybrid structure of a 3DTI and a superconductor. One of key findings in this topic is the manipulation of Majorana fermions in the Andreev bound states (ABS) established at the 3DTI ferromagnet-superconductor (FS) interface [16,17,18]. Merging the spin-singlet Bogoliubov-de Gennes Hamiltonian with gapless TR symmetric surface states gives rise to appearance of triplet-like components of superconductor gap in the resulting Dirac-Bogoliubov-de Gennes (DBdG) Hamiltonian, originated from the requirement of states to be invariant under particle-hole (PH) symmetry. 
Consequently, the quasiparticle's energy excitation remains gapless, when the proximity-induced superconducting order parameter is taken to be spin-triplet Ô-wave symmetry. Actually, this leads to suppression of Andreev process for energy excitations lower than superconducting effective gap [19]. Unconventional superconductivity in 2D Dirac materials plays an important role [19,20,21,22]. However, regarding the inversion symmetry breaking in TIs, it will not be (at least from the dynamical symmetry point of view) ungraceful to take into mixed spin-singlet and spin-triplet superconducting state contribution to the quasiparticle excitations, since the (× · Ô)-wave state is also found to break the inversion symmetry. The symmetry of Cooper pair states in new systems with broken inversion symmetry, such as noncentrosymmetric (NCS) superconductors can not be classified based on orbital and spin parts. The Cooper pair in these systems is, therefore, a mixture of singlet and triplet spin states. A new type of NCS superconductor Ö ¿ ÁÖ has been very recently reported [23]. It is noted that, in cuprats, which have no inversion center in their crystal structure, the inversion symmetry is broken, leading to appearance of robust asymmetric spin-orbit interaction. Therefore, the superconducting pair potential mixes singlet and triplet states [24,25]. As a noticeable result, the spin-polarized current has been predicted to appear on the surface of a superconductor with mixed singlet and chiral triplet state [26,27]. In addition, in these materials, there is an even number of Majorana fermions in two-dimensional (2D) TR symmetric superconducting bound states [28,29]. Moreover, the exact pairing potentials describing many of superconductors with mixed singlet and chiral triplet states remain unknown. An updated list of recently discovered such superconductors can be found in Ref. [30]. As mentioned, 2D materials with spin-orbit coupling, such as 3DTI, is expected to host a triplet state using a conventional ×-wave superconductor, for example see, Ref. [31]. Interestingly, the surface state of superconducting 3DTI is strongly related to the 2D unconventional superconductors (such as Ô-wave symmetry) with broken inversion symmetry [28]. Hence, the outcomes of transport of charge carriers on the surface of a 3DTI hybrid (with mixed singlet and triplet superconducting states) contact can pave the way to unveil the effective dynamics of superconductor pairing state. Therefore, we proceed, in this paper, to investigate particularly a newly appeared distinct manifestation of presence of mixed × and Ô-wave symmetries on the surface states (see Fig. 1(a)). From this point of view, the interplay between inversion, PH and TR symmetries in the 3DTI mixed superconductor Hamiltonian is expected to introduce a distinct scenario for quasiparticle excitation, where we find it to present a mass-like gap opening in Dirac point by the varying comparative magnitudes of singlet ×-wave and triplet Ô-wave pairing potentials. We try to show the possible topological property of band structure via calculating the winding number, which is related to the Berry phase reflecting the topological structure of wavefunction [32]. 
Specifically, electro-hole conversion at the normal metal-superconductor (NS) interface, called Andreev reflection (AR), and appearance of chiral Majorana state at the ferromagnetsuperconductor (FS) interface can be considered as essential phenomena, which is directly influenced by the interplay between the spin-singlet and spin-triplet states on top of a 3DTI [33,34,35]. In light of the above attributes, further treatment to study mixed-state 3DTI can be more impressive to evaluate its dynamical and transport properties. We show that the effective superconductor gap at the interface has a more complicated dependency on the magnitude of ×and Ô-wave pair potentials (¡ × and ¡ Ô ). The Dirac-point gapless band structure with ×-wave symmetry is converted to the gaped states with the mixed (× · Ô)-wave symmetry via ¾ topological invariant. In Ref. [36], the authors have just pointed out the energy eigenvalue of hybrid 3DTI mixed superconductor Hamiltonian. Here, we have succeeded to present analytical expression of the corresponding electron(hole) wavefunction in order to capture its topological nature, the resulting Andreev subgap tunneling conductance ( Fig. 1(b)) and, of course, the (¼ )-current-phase relation in a Josephson junction ( Fig. 1(c)). These results have been achieved in a really cumbersome analytical procedure. This paper is organized as follows. Section II is devoted to describe the discrete symmetry properties of 3DTI Hamiltonian in the presence of mixed superconducting state. The chiral symmetry of the system is investigated. Next, the effective superconducting gap and electron-hole energy excitations are introduced. The winding number for electron-hole pairing in a closed Berry connection in Brillouin zone is studied by using the analytically obtained chiral eigenstates. In Sec. IIIA, we represent the explicit expressions of normal and Andreev reflection amplitudes in a corresponding NS junction. The numerical results of subgap tunneling conductance are presented along with a discussion of main characteristics of system. The Josephson junction is considered in Sec. IIIB in order to investigate the property of Andreev bound state (ABS) and resulting supercurrent-phase exhibition. Finally, a brief discussion is given in Sec. IV. II THEORETICAL FORMALISM A Discrete symmetries of mixed state 3DTI We begin by setting up a topological insulator-based model for the proximity effect, that the pairing potential contains both spin-singlet and spin-triplet states. The order parameter for a mixture of such state adopts the general form ¡ Ñ´k µ ³ ¡ ×´k µ ¼ · d´kµ ¡ ℄ ¾ , where the Pauli matrices acting on the spin space and ³ indicates the superconducting phase. The spin-singlet component is an even function of the wave vector, and we assume that the pairing potential ¡ ×´k µ ¡ × to be constant and real. The order parameter of spin-triplet pairing is described by an odd vector function d´kµ of the momentum. For the chiral spin-triplet pairing, d´kµ may then be written in the form d´kµ ¡ Ô Ó× · × Ò ℄ Þ, where ¡ Ô measures the amplitude of the triplet order parameter and labels the orientation of the angular momentum of the Cooper pair (featuring the chirality). The real and positive parameter ¡ Ñ is introduced to quantify the energy scale of the superconducting gap. Throughout present work, the singlet ¡ × and triplrt ¡ Ô pair potential parameters are normalized by ¡ Ñ . 
We employ the DBdG Hamiltonian À´kµ Ì Á´k µ ¡ Ñ´k µ ¤ ¡ Ñ´k µ ¤ ½ ¤ Ì Á´k µ ¤ ½ (1) in Nambu space for the surface states of a topological insulator to obtain the energy dispersion relation under the influence of superconducting proximity effect. The gapless surface states are described by the 2D linear Hamiltonian Ì Á´k µ Ú ´ ½ Ü · ¾ Ý µ × , ( ½), where Ú and × denote velocity of charge carriers and chemical potential, respectively. PH symmetry operator ¤ is involved by an antiunitary operator, which may act on Dirac Hamiltonian and superconductor order parameter. By acting the PH symmetry operator and defining two complex pair potentials as ¡ ½ ¾ ¡ × ¦ ¡ Ô , the ¢ Hamiltonian of mixed superconducting topological insulator hybrid yields: Spin-singlet and spin-triplet admixture gives rise to two new spin channels ¡ ½ and ¡ ¾ . The effective mixed pairing potential depends on the angle , where, only for ¡ ¾ channel, there exist the possibility to be zero. This case occurs when spin-triplet contribution is dominated, ¡ Ô ¡ × . Both spin channels ¡ ½ and ¡ ¾ have no zero value for every angle ¾ ¾, when spin-singlet potential is dominant. The effective two mixed-spin pair potentials is demonstrated, in detail, in Fig. 1(d). Let now unveil the topological symmetry properties of this given state. The resulting effective Hamiltonian (2) satisfies the PH symmetry relation, which is À £´ kµ ¤À´kµ ¤ ½ , when ¤ ´ ½ ª ¼ µ is the complex conjugation operator. The operator ½ is the Pauli matrix in particle-hole space. In this case, the needed PH symmetry of mixed superconductor gap is provided in the surface states of 3DTI. The square PH symmetry operator is found ¤ ¾ ·½. It is noted, that this symmetry may prove the spin degeneracy of the Fermi surface to be lifted, and consequently it allows for exotic chiral Majorana modes [37]. On the other hand, the TR symmetry operator can be given by ¢ ´ ¼ ª ¾ µ (with ¼ being in particle-hole space), under which the Hamiltonian À´kµ is related to À £´ kµ. Note that, the presence of chiral spin-triplet pairing causes TR symmetry breaking. By means of specific topological invariant, we remember that, in each spatial dimension, there exist five distinct classes of topological insulators, three of which are characterized by an integral topological number, while the remaining two possess a binary ¾ topological quantity. Regarding the particle-hole and chirality symmetries of matrices associated with the proposed Hamiltonian, one can determine the topology class. According given topological classification in Ref. [32], the Hamiltonian (2) is found to be placed in topologically nontrivial symmetry class D. Meanwhile, for two other no chiral case of spin-triplet Ô-wave symmetry (d´kµ ¡ Ô Ó× Þ and ¡ Ô × Ò Þ), we find Hamiltonian to commute with TR symmetry operator. Hence, the new topology class can be possible, and the system is classified in topology class DIII [32]. B Mass-like gap The energy dispersion relation for superconducting excitations can be obtained by diagonalizing the Eq. (2). It is instructive to diagonalize the Hamiltonian À upon a unitary transformation À ¼ ÍÀ Í Ý . We introduce a unitary matrix to do this goal The presence of mixed two spin channels ¡ ½ and ¡ ¾ in diagonal elements implies appearance of band gap on the surface states of 3DTI. Hence, the energy eigenvalue dependency on the mixed spin channels is easily given by where Ê ¡ ¾ ¾ ·´¾Ú × µ ¾ ·´Ú µ ¾ ¡ ½ ¡ ¾ ¾ denotes the renormalized excitation energy related to the mixed state. 
Simple inspection of the electron-hole excitation spectrum in NCS superconductors indicates, that there is an essential physical distinction in surface states of topological insulators with mixed pairing state. The Hamiltonian of two-dimensional NCS superconductors is decoupled into two spin channels ¡ ½ and ¡ ¾ with different energies. The exchange between two energies is provided by the sign of electron wave vector [25]. Hence, the pairing potential is only related to the direction of motion (i.e. ¦ ). In the presence of a topological insulator, we see that the energy dispersion is affected by two ¡ ½ and ¡ ¾ spin channels in a fundamentally distinct manner. With the alone singlet or triplet pairing state, what that makes energy spectrum electronically interesting is the fact that the conduction and valence bands touch each other at Dirac point. Whereas, no strikingly say, the band topology of mixed state 3DTI undergoes a change, and a sizeable energy gap is manipulated at Dirac point. This gap can be controlled by tuning the relative magnitude of singlet ¡ × and triplet ¡ Ô pair potentials. It seems, that the correlations between the spin-singlet and spin-triplet plays the role of effective Dirac mass in the surface states of topological insulator. This can be of an interesting feature of mixed-spin state superconductors in proximity with Dirac-like materials. It is noted that when two spin channels become equal ¡ ½ ¡ ¾ ½, the superconducting excitation reduces to ×-wave-like one. However, mass-like gap in Dirac point of surface states can be clearly presented by vanishing quasiparticle wavevector. Consequently, the energy in Dirac point is separated into two parts corresponding two mixed spin channels When we set ¡ ½ ¡ × · ¡ Ô ½, the size of mass-like gap dependency on the mixed pair potential can be clearly obtained, as shown in Fig. 2. For ¡ ¾ ¦½, we see the energy gap to be closed. This is in agreement with × and Ô-waves superconducting excitations in topological insulators. In the case of ½ ¡ ¾ ·½, the gap is immediately opened, and has a maximum when the singlet and triplet state contributions are equal. In the next, we proceed to investigate the dynamical property of such C Topological nature of system In this section, we proceed to investigate the possible topological properties of mixed superconductor state 3DTI in order to achieve an answer for question whether the creation of Dirac mass-like gap is accompanied by topological invariant of band phase exchange. Reaching this goal in our proposed system can be feasible by determining the Berry phase, including the non-trivial topological structure of the wavefunction. Using the wavefunction Ò´ ʵ of a system, where Ê is the space set of parameters the quantities so-called Berry connection A Ê and Berry curvature B Ê are given by When a system moves along a close path in the space, the resulting Berry phase acquired in the wavefunction is Here, Ë represents area in the parameter space, enclosed by the contour . In our system, the parameter set is specified by momentum , and the Berry phase is also called the Zak phase [38]. Therefore, the topological invariant related to the Zak phase is winding number The integration is performed over a closed path including wavevectors belonging to the first Brillouin zone. The spectral projection operator Õ´ µ defines a map from the reciprocal space in Brillouin zone to the space of unitary matrices belonging to the symmetry group. 
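Because the analytical expression for the spectral projector is unwieldy, the winding number can be evaluated numerically by accumulating the phase of det q(k) along a discretized closed loop in the Brillouin zone. The sketch below illustrates this generic procedure with a simple placeholder q(k) of known winding; it is not the projector derived in this work.

```python
# Minimal numerical sketch of a winding-number evaluation of the kind
# discussed above: the winding of det q(k) as k traverses a closed loop.
# The q(k) used here is a simple illustrative placeholder (winding = 2),
# not the spectral projector of the present model.
import numpy as np

def winding_number(q_of_k, n_points=2001):
    """Accumulate the phase of det q(k) around a closed loop k in [0, 2*pi]."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=True)
    dets = np.array([np.linalg.det(np.atleast_2d(q_of_k(k))) for k in ks])
    phases = np.angle(dets)
    # Sum phase increments, unwrapping jumps of 2*pi between neighboring points.
    dphi = np.diff(np.unwrap(phases))
    return dphi.sum() / (2.0 * np.pi)

# Placeholder q(k): a 1x1 "matrix" exp(2ik), whose determinant winds twice.
q_example = lambda k: np.array([[np.exp(2j * k)]])
print(round(winding_number(q_example)))  # -> 2
```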
The Õ´ µ matrix is determined via the several constraints concerning to the discrete symmetries imposed on Hamiltonian. The chiral symmetric Hamiltonian is a needed condition to calculate the winding number, so that it needs to be in block off-diagonal form. Therefore, we may construct formal chiral symmetry operator we are able to introduce an unitary transformation Í , using the eigenvectors of above chiral symmetry matrix The energy eigenvalue of filled states is given by The corresponding chiral eigenstates of Hamiltonian (6) are easily given by To facilitate the calculation, it helps to introduce the projection operator Ô´ µ For what follows, it is convenient to introduce the É matrix by É´ µ ¾ Ô´ µ ½. Corresponding to the block-off-diagonal chiral symmetric Hamiltonian (6), the É´ µ matrix is also block-off-diagonal: There can be a topological invariant, which is obtained only in the presence of a symmetry. Indeed, the chiral symmetry gives rise to result in winding number topological invariant. We are now set to calculate the topological invariant winding number via the Õ´ µ matrix. Having more complicated chiral eigenfunctions (7), we try to find a huge expression for Õ´ µ matrix, and neglect to write it here. Inevitably, the numerical method may be used to find the winding number. The analytical expression for the off-diagonal block of spectral projector matrix is unwieldy, and further treatment about evaluating the topological invariant of this system are processing now. D Effective subgap To more clarify the mixed superconducting state exhibition, we focus on superconductor effective gap originated from singlet and triplet correlations. Actually, magnitude of mixed effective gap depends on the relative amplitude between the singlet and triplet components, which can control the height of forming subgap at the interface, playing a crucial role in hole-reflection for incident electrons. In order to derive the exact form of effective gap, we need to refer energy spectra of topological insulator in proximity of a ×-wave and Ô-wave superconductor, separately [36] ¾´× We reconstruct the energy spectra of Eq. (4) as in order to exploit an exact expression for effective mixed gap as following: The normalized chemical potential ¼ × ¾ ¾ × · ½ ¾ ¡ ½ ¡ ¾ ¾ indicates mixed spin channels. The position of superconducting gap in point corresponds to the relation ½ ¾´ It should be noted that in the limit of alone singlet or triplet case, the point only depends on , while in the case of mixing potential, existence of two mixed components ¡ ½ and ¡ ¾ causes to shift the position of superconducting gap. We find ¡ to become zero in the absence of spin-singlet contribution achieving by ¡ ½ ¡ ¾ . Interestingly, in the lake of spin-triplet contribution, which is obtained by ¡ ½ ¡ ¾ , the effective gap is clearly reduced to the isotropic order parameter. This is completely in agreement with previously reported results, that the former corresponds to the gapless topological insulator superconductor state, and the latter means conventional ×-wave superconducting excitations one. The behavior of these cases is shown in Fig. 4. III TRANSPORT PROPERTIES A Andreev tunneling conductance In this section, we will focus on the transport properties of the simplest hybrid normal/superconductor structure deposited on top of a topological insulator in order to investigate how Andreev reflection and conductance spectroscopy are influenced by the superconducting mixed order parameter. 
The unconventional mixed superconductivity in TIs should manifest itself in the observable phenomena at the boundaries of a hybrid structure. We analyze Andreev reflection probability in the surface states by employing a scattering matrix formulation along the lines of Blonder-Tinkham-Klapwijk (BTK) theory. To this end, let us now proceed to introduce the eigenstates of Hamiltonian (2). The wave function in the topological insulator mixed superconducting is achieved from a set of ¢ coupled matrix equations. Here, there are four unknowns to derive the eigenfunction in the electron-hole basis (Nambu basis), Ñ Ü Ý Ý . The normalization condition, ¾ · ¾ · £ ¾ · £ ¾ ¾ is used to conserve the intensity of the edge states. From equation À Ñ Ü Ñ Ü Ñ Ü , we, after cumbersome analytical calculations, express eigenfunction of a electron(hole)-like quasiparticle states in terms of following equation: where is the normalization constant and Å ½ ¡ ½ ¡ ¾ ¾ · ¡ ¾ Ú ¾ ¾ · ¡ ½´ ¾ × ¾ Ñ Ü µ Å ¾ ¡ ½´ × Ñ Ü µ · ¡ ¾´ × · Ñ Ü µ Å ¿ ¡ ½ ¡ £ ¾ · Ú ¾ ¾ ´ × · Ñ Ü µ ¾ Å ´ × · Ñ Ü µ ¡ ¾ ¾ · ¾ × ¾ Ñ Ü · Ú ¾ ¾ ¾ × Ú ¾ ¾ Due to relativistic dynamics, two independent spin channels ¡ ½ and ¡ ¾ are simultaneously appeared in the wave function. Because the motion of quasiparticles is determined by incidence angle , the resulting wave function is related to the direction of motion. If we define angle for right movers, then left movers is described by . Accordingly, pairing potentials spatially depend on direction of motion. Therefore, two spin channels in Eq. (9) are defined only for right movers, and we can replace them for left movers by ¡ ½´¾ µ ¡ ¾´½ µ (see Fig. 1(a)). Also, the explicit wavevector of quasiparticles in terms of superconducting excitation energy and mixed-spin channels is given by To accommodate superconductivity by means of the proximity effect experimentally, it is necessary to realize the condition × ¡ ½ ¾ to have a sufficiently large density of states. In this way, a superconductor electrode deposited on top of the topological insulator would be suitable experimentally, as Fig. 1(b). The total wave function in the normal region of junction (Ü ¼) by regarding two possible fates upon scattering, normal and Andreev reflections of an incident electron, may then be written as: where « and « denote the electron and hole angles of incidence, while Ö and Ö are the normal and Andreev scattering coefficients, respectively. Due to the broken translational symmetry, the Ü-component of the momentum in normal region ( Ü AE ) is non-conserved, whereas Ý-component ( Ý AE ) is conserved, and can be acquired from normal region eigenstate. The Fermi momentum in the normal and superconducting part of the system can be controlled by means of chemical potential in each region. Setting up the scattering wavefunctions and utilizing appropriate boundary condition, © AE © Ë at Ü ¼, where © Ë Ø Ñ Ü · Ø Ñ Ü , one is able to extract the normal and Andreev reflection coefficients, which depend on the angle of incidence and the mixed state channels excitation energy. We find following solutions for normal and Andreev reflection coefficients: It follows, according to the BTK formalism [39], the normalized conductance ( ¼ ) can be calculated, and the normalization constant is chosen as ¼ AE´ µÛ ¾ ¾ , where AE´ µ ¾ ´ Ú ¾ µ is density of state with Û being width of the junction. As we show now, the effect of the two distinct spin channels can be nicely seen in the experimentally accessible electrical conductance. In Fig. 
5(a), we plot the subgap conductance spectra of the NS structure resulting from the Andreev process, calculated with superconductor and normal region chemical potentials × ½¼¡ Ñ and AE ½¡ Ñ , respectively. The maximum suppression of conductance happens for the case of opposite spin channels, ¡ ½ ¡ ¾ . In this case, however, it seems that the appearance of unconventional superconductivity is manifested by an enhancement of the zero-bias conductance peak. Importantly, in the mixed state range ¼ ¡ ¾ ½, the two coherence conductance peaks exist at Ñ Ü ¡ , and a transition of conductance peak into the zero-bias conductance can also be achieved by increasing the amplitude of ¡ ¾ . By focusing on the effective gap relation, Eq. (8) [40]. The presence of ×-wave pairing with subdominant Ô-wave admixture order parameter has been predicted on AE /topological insulator/ Ù devices, where the topological insulator is either alloyed ½ Ë ¼ Ì ½ Ë ½ ¿ or Ë Ì Ë ¾ . Indeed, the conductance dips at the induced-gap value and the increased conductance near zero energy in above both spectra of samples, can be explained by the dominant triplet superconducting components in 3DTI [40]. In analogy, in NCS superconductors with broken inversion symmetry, the transport signatures in N/S junction depend on the degree of mixing of singlet and triplet pair potentials. In Ref. [25], Burset et al have analyzed tunneling conductance of normal/noncentrosymmetric superconductor junction, and reported a zero-bias conductance peak for the case ¡ × ¡ Ô , analogous to our finding, here. In Fig. 5(b), we present the signature of doping level of N region in resulting normal conductance and formation of zero-bias conductance peak. A sharp conductance peak in zero-bias can be nicely seen in a low doping, whereas the zero dip of conductance is appeared by increasing the normal region doping. It is interesting to note that the conductance peaks can also be controlled by changing the pairing potential admixture. For comparison, we have included in Fig. 5(c) the conductance of junction with two possible Ô-wave symmetry functions. For d´kµ ¡ Ô Ó× Þ, the conductance peaks located at the effective mixed gap is smaller than that for d´kµ ¡ Ô × Ò Þ. This scenario becomes completely vice versa for resulting zero-bias conductance. To more clarify the signature of two mixed-state channels in conductance peak displacement, we present, in Fig. 5(d), subgap conductance curve in terms of ¡ ¾ and bias energy. This figure clearly demonstrates conductance peak displacement towards zero-bias with the increase of magnitude of triplet pair potential. B Andreev Bound States in Josephson junction We now consider the strictly one-dimensional superconductor/insulator/superconductor (S/I/S) Josephson junction in the Ü-direction on the surface of 3D topological insulator, as sketched in Fig. 1(c). The measurement of the supercurrent which is carried by Cooper pairs can be one of the useful tools to reveal effective symmetry manipulated by inducing an actual superconductivity. The mixed superconductivity in topological insulator particularly manifests itself in the Josephson effect. The pairing potential vanishes in the insulator middle region and is nonzero in the two superconductor terminals. The order parameter is assumed to have different phases and the same amplitude in the left and right superconductors. The insulator region length Ä (distance between two superconductor terminals) is assumed to be much smaller than the superconducting coherence length Ú ¡. 
For make contact with experimental parameters, the junction length should be smaller than ¼ Ñ. We introduce a gate potential Í ¼ potential is a combination of singlet and triplet states which adopts the following form for each left and right S regions The pairing potential is assumed to have different phases in the left and right regions, and the current flowing the Josephson junction depends on the phase difference ¡³ ³ Ö ³ Ð . It, then, remains to introduce the wave function for the left superconductor region (Ü ¼), which reads © Ð Ë Ø Ð Ñ Ü · Ø Ð Ñ Ü . To identify the energy spectrum for the Andreev bound state, we match the wave functions around Ü ¼, which yields where the barrier strength is defined as dimensionless parameter . By inserting the superconducting wave functions into Eq. (14), we arrive at four linear algebraic equations for the four constants Ø Ö , Ø Ö , Ø Ð and Ø Ð . For the case of ¡ Ô ¼ and ¡ × ½, which we have no longer mixed state, the ABS solutions arrive at the well known previously reported equation [16]. When the spin state is mixed, finding the analytical expression for ABS becomes impossible. The cumbersome and time-consuming analytical calculations has been done in this relation, and finally, from Eq. (14), we obtain an equation where ½ and ¾ are more complicated functions of bound energy, barrier parameter and incidence angle. We can numerically obtain ABS spectrum as a function of superconducting phase difference ¡³ and propagation angle . We show that the same outcomes similar to those previously obtained for Josephson effect in topological insulator with alone ×-wave symmetry are achieved [41]. The -periodic gapless bound energies in normal incidence ¼ are appeared, which are protected by the TR symmetry (see, Fig. 6(a)). Also, these states correspond to the chiral Majorana bound energy modes, so that the energy curves of electron and hole are continuously connected. The range of superconductor state admixture is controlled by the magnitude of spin channel ¡ ¾ , where we take the other mixed spin channel to be unit, ¡ ½ ½. Hence, when ¡ ¾ is varied from ½ to ½, the mixed state level is continuously changed from ×-wave symmetry to Ô-wave one. Independent of admixture level tuned by ¡ ¾ , the ABS spectra exhibits zero energy and maximum slope for superconductor phase difference ¡³ ´¾Ò · ½µ (Ò is integer number). Whereas, ¡³ ¾Ò results in flat energy curve. It is noticed, that the amplitude of ABS oscillations significantly diminishes in the mixed spin state. These features are presented in Fig. 6(a), where ABS plot are given as a function of phase difference for the superconductor chemical potential and middle region insulator strength parameter magnitudes × ½ and ¼ , respectively. For the critical case of mixed spin channel ¡ ¾ ½, which our Josephson junction will be in pure spin-triplet symmetric state, the ABS curvature goes to flattening. These behaviors of mixed superconducting ABS can be originated from Dirac band gap creation and strongly effective subgap decreasing in the system. Furthermore, in Fig. 6(b), we plot bound state energy for finite angle of incidence as a function of phase difference. As expected, the signature of nonzero incidences of quasiparticles to the superconductor/insulator interface is observed as vanishing chiral Majorana mode via the opening a large gap in ABS. Consequently, the period of ABS oscillations becomes ¾ in the presence of a momentum mismatch, which is due to finite backscattering. 
The decrease of the amplitude of ABS is determined by the mixing level and the angle of incidence. It should, however, be noted, that the change of amplitude of ABS curves with the incidence angle strongly depends on the magnitude of ¡ ¾ . We show increasing the angle of incidence in the range from ¼ to ¼ ¾ enhances the value of the bulk gap from ¼ to ¼ ¿ for the mixing state (¡ ¾ ¼ ¾), whereas for ¡ ¾ ½ (×-wave superconductivity dominant case), it takes place from ¼ to ¼ . (15) where Á ¼ Ï ¡Ñ is the normal current in a sheet of TI of width Ï , à and Ì are the Boltzmann constant and temperature, respectively. In Fig. 7(a), the Josephson current as a function of superconducting phase difference is demonstrated for several magnitudes of ¡ ¾ . As a usual result in similar systems, the ¾ -periodic current-phase curve is found for every admixture level, in spite of the presence of the spin-triplet component of the pair potential. The main difference between the mixed-spin channel and pure spin-singlet one (¡ ¾ ½), as shown in Fig. 7(a), is that the Josephson supercurrent is strongly suppressed as the amplitude of the spin-triplet contribution grows upto ¡ Ô ¼ . In Fig. 7(b), we repeat the previous calculation of Josephson current for different values of insulator barrier strengths . Here, it occurs an interesting scenario, where the exact sinusoidal curve of supercurrent is achieved in the case of large . According previous specific work [43], the abrupt crossover phase-current curve originated from the gapless ABS is observed in the low value of barrier strength. Finally, to clarify the signature of mixed superconductivity on the critical current, which is defined as the maximum of Josephson current, we analyze numerically and plot the barrier strength dependence of critical current. Figure 7(c) shows the normalized critical current Á Á ¼ for different magnitudes of mixed-spin characteristic parameter ¡ ¾ . We show that the critical current strongly decreases with the increase of spin-triplet contribution. The reason of this effect may be described by decreasing the effective superconducting gap in the mixed state. IV SUMMARY AND CONCLUSIONS In summary, from a more fundamental perspective, the distinction between the energy spectrum in the mixed-spin state superconductors and surface states of topological insulators teaches us something new about the interplay between mixed state of superconductivity and topologically protected by time-reversal symmetry Dirac-like fermions. In one hand, the inversion symmetry breaking in a noncentrosymmetric superconductor, and gapless surface state resulted from spin-orbit coupling on the other hand, can be strongly inter-correlated to capture the new effects in spin magnetization and spin transportation. Magnetoelectric effect caused by supercurrent in NCS superconductors has been reported, recently [44]. There is a delicate point, that the Hamiltonian of two-dimensional NCS superconductors is decoupled into two spin channels ¡ ½ and ¡ ¾ with different energies. Whereas, in the presence of topological insulator, two spin channels are strongly coupled with the same energy, and both right-moving and left-moving electron(hole) quasiparticles may experience the two spin channels. In this paper, we have analyzed the effect of proximity-induced mixed spin-singlet and spin-triplet symmetry on the surface states of a topological insulator. 
The particle-hole and chiral symmetric properties of Dirac-Bogoliubov-de Gennes Hamiltonian has been investigated to capture the topology class. We have introduced the new spin channels ¡ ½ and ¡ ¾ for mixed state in the presence of topological insulator. Particularly, we have found a transformation matrix, under which the Hamiltonian is diagonalized, and, interestingly, the new mixed-spin channels were located at the diagonal elements. Consequently, it is formally expected to appear a Dirac mass-like gap in the surface states. This can be considered as a key feature of the present structure. It is noticed that there exist similar situation, when a magnetization in Þ-direction Ñ Þ is induced to 3DTI [16]. Therefore, one can report that mixed-state superconductivity induction may play simultaneous role of magnetic field appearance in topological insulators. This is considered a significant feature, particularly in NCS superconductors [23,45,46,47]. Next, we have further tried to clarify possible phase transition from original gapless in conventional superconducting to gaped surface states in unconventional mixed one via the evaluating the topological invariant winding number for the chiral eigenstates. To this end, we have constructed a chiral unitary matrix, under which the Hamiltonian is transformed to its block-off-diagonal form. Because the spectral projection matrix Õ´ µ has been obtained in an unwieldy analytical expression, then the winding number will be presented in another work. Regarding the fact that superconducting electron-hole excitation in topological insulator is gapless with Ô-wave pairing symmetry, it was necessary to reveal the effective subgap in the mixed state case, which is identified to have a complicated dependency on mixed spin channels. However, we see a sizable subgap on the Fermi surface, and it diminishes when the Ô-wave symmetry contribution is dominated. We have thus systematically proceeded to investigate the characteristic transport properties for subgap tunneling in N/S and Josephson S/I/S junctions. Our proposal has clear advantages in experimental accessibility. The Josephson current on the surface of the 3DTI has been experimentally observed, where Josephson junction AE /epitaxial´ ¼ Ë ¼ µ ¾ Ì ¿ /AE in the two steps have been fabricated and good Á Î characteristics presented [11]. Also, tunneling conductance spectroscopy has been performed across hetero-AE /topological insulator/ Ù, recently [40]. spin channels ¡ ½ and ¡ ¾ with the incidence angle is shown for three different values of ¡ × and ¡ Ô . The solid lines denote the ¡ ½ , while the crossed lines correspond to the ¡ ¾ . Black curves represent pair potential for ¡ × ¼ ¡ Ô ¼ ¾, violet curves for ¡ × ¡ Ô ¼ and blue curves for ¡ × ¼ ¾ ¡ Ô ¼ . Figure 2 (color online) The excitation spectra in superconductor topological insulator, calculated from Eq. (4). The mixed superconducting state features a mass-like gap in topological insulator. Figure 3(a), (b), (c) (color online) Contour plot of superconducting excitation spectra on the surface state of topological insulator for (a) ¡ ¾ ¼, (b) ¡ ¾ ½ and (c) ¡ ¾ ½. It is seen that the superconducting zero energy only occurs in ¡ ¾ ½.
2019-03-09T09:21:50.000Z
2019-03-04T00:00:00.000
{ "year": 2019, "sha1": "a4e699f89da153fbfeb0448ba95c3e6499425858", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a4e699f89da153fbfeb0448ba95c3e6499425858", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267994807
pes2o/s2orc
v3-fos-license
Study the effectiveness of marijuana and rosemary alcoholic extracts in control of the red rusty flour beetle Tribolium castaneum (Herbst) (Coleoptera: Tenebrionidae) Laboratory experiments were conducted to determine the effectiveness of plant extracts of vitex and rosemary in the control of the red rust flour beetle Tribolium castaneum (Herbst). The results showed a significant effect of the extract concentrations on the percentage of larval mortality. The 5% concentration caused the highest mortality rates in the larval stage, reaching 71.11, 76.66 and 83.33% insects/plate after 24, 48 and 72 hours, respectively. The mortality rates were equal for the 2.5 and 5% concentrations, amounting to 90% after 72 hours. The vitex extract was significantly superior to the rosemary (Rosmarinus officinalis) extract in the mortality rates for all concentrations and periods post application in this study. Rosemary is widely used as a food additive with protective properties for the human body, owing to its antioxidant activity, as it contains large amounts of phenolic compounds, flavonoids and natural acids [6]. The use of chemicals against insects has become ineffective due to the development of resistance in different strains [7]. Many researchers have indicated that plant-derived materials do not cause insect resistance, have broad-spectrum activity, and are safe for natural enemies. This makes such plant compounds suitable as biological control agents in IPM [8]. The study therefore aimed to evaluate the effect of different concentrations of alcoholic extracts of Vitex agnus-castus L. and rosemary (Rosmarinus officinalis) on the biological performance of the rusty red flour beetle. Materials and methods Insect collection, rearing and identification Adults of T. castaneum were collected from infested flour stored in flour stores in Karbala. The insect was identified by the professors of the Department of Protection / College of Agriculture / University of Karbala, using the classification keys of the family Tenebrionidae [1]. To perpetuate the insect colony, 250 g of bran was placed inside a sterilized glass bottle 8 cm in diameter and 15 cm in height, and then 50 pairs of adult insects aged 24-48 hours were released into each bottle. Bottles were covered with a muslin cloth, sealed with an elastic band to prevent the insects from escaping, and incubated at 28 ± 2 °C and a relative humidity of 70 ± 5%. The feed material (bran) was renewed every two months to obtain young insects for subsequent tests. Processing, identification and storing of plant samples The vitex and rosemary plant parts were obtained from local markets in Karbala governorate in 2020. The plant parts were ground with a large electric mill until they became a fine powder, placed in sterile, labeled cloth bags and kept in a cool place until use. Preparation of alcoholic extracts Absolute ethyl alcohol was used for the extraction process according to a modified version of the method in [9]. Ten grams of plant powder was taken per 200 ml of alcohol and placed in an electric shaker for 2 hours. The sample was then dried in an electric oven at a temperature of 45-50 °C. The process was repeated several times to obtain a sufficient amount of the crude residue, which was kept in the refrigerator until use. 
For the purpose of estimating the efficacy of the alcoholic extracts of vitex and rosemary against the red rust flour beetle, 5 g of dry residue was taken from each extract separately, dissolved in 5 ml of ethyl alcohol, and the volume was made up to 100 ml with distilled water to obtain a 5% stock solution. Concentrations of 2.5 and 1.25% were prepared for the treatments, while the control treatment used 5 ml of solvent and 95 ml of distilled water. Effect of vitex and rosemary plant extracts on mortality rates of T. castaneum larvae and adults Ten T. castaneum larvae per replicate were used, with three replicates for each concentration (1.25%, 2.5% and 5%) of the prepared plant extracts. The larvae were placed in a disposable Petri dish and treated directly (topical spray) using a small sprayer with a capacity of 5 ml, sprayed from a height of approximately 25 cm. The dishes were incubated at 28 ± 5 °C and a relative humidity of 70 ± 5%. The mortality rates were recorded 24, 48 and 72 hours after treatment, and the results were corrected according to Abbott's equation [10]. Experimental design and statistical analysis The experimental treatments were arranged in a completely randomized design (CRD). The data were analyzed using the SAS statistical analysis program. The least significant difference (LSD) test was used at the 0.05 probability level for differences between the treatments [11]. The death percentages were corrected according to the Abbott formula [12]; corrected death percentages were calculated according to the following equation: corrected mortality (%) = [(mortality % in treatment - mortality % in control) / (100 - mortality % in control)] x 100. The corrected death percentages were converted to transformed values, and the untransformed percentages were not included in the statistical analysis. Effect of vitex and rosemary plant extracts on mortality rates of T. castaneum larvae 24 h post treatment The results (Table 2) showed a significant effect of the plant extract type (Vitex agnus-castus L. and Rosmarinus officinalis), the different concentrations (1.25, 2.5 and 5%) and their interaction on the mortality percentage of the flour beetle Tribolium castaneum larvae. The results showed that the vitex extract led to the highest larval mortality rate after all periods under study (24, 48 and 72 h post treatment), while the lowest mortality rate resulted from the rosemary extract treatments. The results also showed a significant effect of the concentrations of the extracts used compared with the control treatment. The highest larval mortality rate occurred at the 5% concentration, followed by the 2.5% concentration, while the 1.25% concentration resulted in the lowest mortality rate. The highest mortality rate after 24 hours was 86.66% in the vitex treatment at the highest concentration (5%), compared with a mortality rate of 43.33% in the rosemary treatment at the 2.5% concentration. Similarly, the highest mortality rate after 48 hours of application was in the vitex treatment at the highest concentration, reaching 90%, compared with the lowest mortality rate of 45% in the rosemary treatment at the same concentration. In general, the mortality percentage increased significantly after 72 hours of application for the rosemary treatment at all concentrations, while for the vitex extract the mortality rate after 72 hours did not differ significantly from that recorded after 48 hours post treatment. A previous study showed that Vitex agnus-castus L. 
leaf powder extract had an effect on the mortality percentage, led to a decrease in the population of the first generation (F1), and had a repellent effect on the red rusty flour beetle Tribolium castaneum [13]. Table (2): Effect of plant extracts at different concentrations and periods post application on the larval mortality rate (%) of the rusty flour beetle T. castaneum. [14] reported that the use of 34 mL/L of palm oil led to the death of 96.6% of red flour beetle adults within 24 hours of treatment. [15] found that the LC50 of rosemary oil extract used as a fumigant against Callosobruchus maculatus (F.) was 15.69 mL/L. The variation in the mortality percentages may be due to the toxic effect of the extract on contact with the body surface, with its chemical compounds penetrating the cuticle through its flexible areas or entering through the respiratory openings, causing paralysis and rapid death. Glycosides are active compounds that act as feeding inhibitors or repellents, leading to inhibition of the egg-laying process, hatchability and larval molting, and to the death of adults [16]. These results are in line with [17], who found that the use of plant extracts, including rue and garlic, to control the southern cowpea beetle led to an increase in the mortality rate of adults with increasing exposure time and higher concentrations of the extracts used; the mortality rate reached 60% for rue and 92% for garlic at the highest concentration (1800 parts per million), and this effect decreased at the lower concentrations until it reached 16% for rue and, at most, 18% for garlic at the lowest concentration of 5.112 ppm.
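For reference, the Abbott correction applied to the mortality percentages reported above can be written as a one-line function; the observed and control percentages below are made-up illustrative values, not the study's data.

```python
# Abbott's correction for natural (control) mortality, as used for the
# mortality percentages reported above. The numbers are illustrative only.
def abbott_corrected_mortality(treated_pct, control_pct):
    """Corrected % = (treated - control) / (100 - control) * 100."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Example: 86.66% observed mortality in a treatment, 10% in the control.
print(round(abbott_corrected_mortality(86.66, 10.0), 2))  # -> 85.18
```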
2023-06-04T15:18:14.060Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "b163bbaa96e6027bba39cc802dffb66b36ec5d85", "oa_license": "CCBYNC", "oa_url": "https://journals.uokerbala.edu.iq/index.php/Agriculture/article/download/923/416", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "19d8b5b9de63248b9058f0612c70903319d73916", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
255172979
pes2o/s2orc
v3-fos-license
On the estimation of hip joint loads through musculoskeletal modeling Noninvasive estimation of joint loads is still an open challenge in biomechanics. Although musculoskeletal modeling represents a solid resource, multiple improvements are still necessary to obtain accurate predictions of joint loads and to translate such potential into practical utility. The present study, focused on the hip joint, is aimed at reviewing the state-of-the-art literature on the estimation of hip joint reaction forces through musculoskeletal modeling. Our literature inspection, based on well-defined selection criteria, returned seventeen works, which were compared in terms of methods and results. Deviations between predicted and in vivo measured hip joint loads, taken from the OrthoLoad database, were assessed through quantitative deviation indices. Despite the numerous modeling and computational improvements made over the last two decades, predicted hip joint loads still deviate from their experimental counterparts and typically overestimate them. Several critical aspects have emerged that affect muscle force estimation, hence joint loads. Among them, the physical fidelity of the musculoskeletal model, with its parameters and geometry, plays a crucial role. Also, predicted joint loads are markedly affected by the selected muscle recruitment strategy, which reflects the underlying motor control policy. Practical guidelines for researchers interested in noninvasive estimation of hip joint loads are also provided. Introduction In a musculoskeletal (MSK) system, joint reaction forces (JRFs) are the internal loads due to the contact actions exchanged by the articulating surfaces during motion (Open-Sim 2021; Vigotsky et al. 2019). Their knowledge is an important aspect in the study of human movement. Indeed, JRFs can provide valuable insights into movement disorders such as cerebral palsy (Steele et al. 2012) or into degenerative articular diseases such as osteoarthritis (Kumar et al. 2013). They are also important to define the loading conditions for preclinical experimental tests on artificial joints (Affatato and Ruggiero 2019;Bergmann et al. 2016;Heller et al. 2005), or they could serve as an input to finite-element models aimed at evaluating wear processes in joint implants (Lin et al. 2018;Mattei et al. 2021;Ruggiero et al. 2018) or at predicting bone adaptations (Geraldes and Phillips 2010). The actual joint loads are hardly measurable in vivo, and to date, noninvasive estimation of JRFs remains one of the main challenges in biomechanics (Heller et al. 2001;Hug et al. 2015). Over the last two decades, instrumented joint prostheses (Bergmann et al. 1988) have emerged as powerful tools to measure JRFs during daily activities. These special prostheses have been implanted at the knee, hip, shoulder, and vertebral joints, and free public databases [e.g., (Fregly et al. 2012;OrthoLoad 2021)] have been created: they include loading, kinematics and kinetics data registered during the analyzed motor tasks. However, due to ethical and practical concerns, instrumented prostheses have been used in a limited number of individuals and motor tasks. Furthermore, since these measurements reflect only a small cohort of the population, namely pathological subjects who had their natural hips replaced by artificial ones, they may fail to represent typical hip joint loads of unimpaired subjects. Computational approaches based on MSK computer modeling (Bassani and Galbusera 2018;Delp et al. 
2007) represent a promising alternative for noninvasively investigating the relationship between body motion and biomechanical loads [e.g., (DeMers et al. 2014;Imani Nejad et al. 2020;Li 2021;Lin et al. 2018;Marra et al. 2015;Martelli et al. 2011;Stansfield et al. 2003;van Veen et al. 2019;Weinhandl and Bennett 2019;Zhang et al. 2015)]. As clearly outlined in Fregly (2021), the potential of using computational MSK modeling techniques in clinical and non-clinical practices is appealing, but progresses are still needed to gain practical utility: indeed, before adopting such models and procedures, their ability to obtain reliable, replicable, and accurate estimations shall be assessed through appropriate validation against experimental data. Apparently, the work by Moissenet et al. (Moissenet et al. 2017) is the only systematic review paper on the use of MSK modeling for the estimation of JRFs at the knee and hip joints. In particular, the authors investigated what alterations of a generic MSK model provided more accurate estimations of the JRFs at the two joints. However, only 5 of the included studies (24 in total) concerned the hip joint, while the others focused on the knee joint: this was probably motivated by the greater availability of experimental data from both motion capture and instrumented knee prostheses [e.g., (Fregly et al. 2012;Taylor et al. 2017)]. The present paper is an extensive review of the literature on the use of MSK models for computing JRFs at the hip joint. Our main goal was to assess how well MSK models developed for specific patients with artificial instrumented implants have been able to reproduce the measured hip joint loads, considered as target values. Additionally, we wanted to investigate the effects of specific modeling choices on the prediction of hip JRFs. This should also serve to identify the features that mostly improve JRF estimation and to provide guidelines for researchers. It should be highlighted that the present analysis was specifically focused on dynamic MSK multibody simulations, but also static models, like the recent one by Fischer et al. (2021), could represent a valuable alternative when slow dynamic activities (with limited inertial effects) are analyzed, such as standing and slow walking. This study consists of three steps: (i) select and classify state-of-the-art research papers based on well-defined criteria, (ii) compare them in terms of methods and results, and (iii) identify potential research gaps and suggest future directions for research in this area. Section 2 reviews the typical computational workflow for estimating JRFs at the hip through MSK modeling. Section 3 presents the criteria used to select papers from the current literature, and how they were classified. Section 4 describes how results from the selected studies have been quantitatively compared. Lastly, Sects. 5 and 6 are devoted to the discussion and conclusions, respectively. 2 Typical computational pipeline to estimate joint loads through MSK modeling Figure 1 shows the typical computational pipeline adopted to estimate JRFs through MSK modeling starting from experimental kinematic (i.e., marker trajectories) and kinetic (i.e., ground reaction forces, GRFs) data (Sylvester et al. 2021). The term forces in JRFs and GRFs refers to both forces and moments. 
The first preliminary step consists in scaling a generic MSK model to match the subject's anthropometry: geometric features (bone dimensions, joint distances, muscle attachment points) and inertial properties of body segments (segment mass, center of mass location, and inertia tensor) are typically adapted. The most common scaling procedure is based on surface markers placed on anatomical reference points. Errors due to an incorrect marker placement as well as to soft tissue artifacts may be introduced. Scaling is a very critical step that affects all the model's outputs, as is frequently discussed in the literature, where different scaling laws and procedures are investigated (e.g., Fischer et al. 2021; Kainz et al. 2017; Lund et al. 2015). After scaling, an inverse kinematics (IK) analysis is performed to estimate the joint angles that best fit the experimental marker trajectories. Similar to scaling, IK can be affected by marker movement (Fiorentino et al. 2020; Lamberto et al. 2017), the adopted marker sets (e.g., Helen-Hayes and its modifications (Davis et al. 1991; Kadaba et al. 1990), CAST (Cappozzo et al. 1995), LAMB (Rabuffetti and Crenna 2004), etc.) (Mantovani and Lamontagne 2017; Stief 2018), as well as by experimental data preprocessing (Rácz and Kiss 2021) and model choice (Falisse et al. 2018; Roelker et al. 2017; Wagner et al. 2013). Measured GRFs are then used in an inverse dynamics (ID) algorithm to calculate the net joint forces and torques (generalized forces) required to drive the model toward the desired kinematics. Since such forces also depend on the estimated inertial parameters of the body segments, affected by unavoidable uncertainties, a residual reduction algorithm ("How RRA Works-OpenSim Documentation" 2021) is frequently adopted to solve ID: it applies small modifications to the inertial parameters and joint angle trajectories to improve the model's consistency with the rigid-body dynamic equations. Although the effect of the variation of such inertial parameters is still controversial, it seems that joint torque estimation is more affected by uncertainty in marker placement than in inertial parameters (Camomilla et al. 2017), at least for activities of daily living. Scaling, IK and ID are based on rigid body mechanics and can be equally applied to humans and robots. Differences arise when mechanical joint actuators are replaced by "special cables" representing the actions of musculotendon actuators on bones. This peculiarity of MSK modeling has a double implication: the characteristics of musculotendon geometry and physiology should be considered, and the problem becomes statically indeterminate due to the intrinsic redundancy of the muscles. In most MSK approaches, a 3-element Hill-type musculotendon model (Millard et al. 2013; Thelen 2003; Zajac 1989) is implemented (Fig. 2).
The total force F^T exerted by the musculotendon unit includes both active and passive contributions from the muscle according to the force-length-velocity functions, and it is modulated by the neural activation a:

F^T = F^M_0 f_T(l^T / l^T_s) = F^M_0 [ a f_L(l^M / l^M_0) f_V(v^M / v^M_max) + f_PE(l^M / l^M_0) ] cos α    (1)

where F^M_0 is the muscle maximum isometric force, f_T is the tendon force-length function, f_L and f_V are the muscle active force-length and force-velocity functions, f_PE is the muscle passive force-length function, l^T and l^M are the tendon and muscle lengths, respectively (l^T_s and l^M_0 are their slack length and optimal fiber length), v^M is the muscle fiber velocity and v^M_max its maximum value, and α is the pennation angle (Fig. 2: Hill-type musculotendon model (Millard et al. 2013)). The values of such parameters are typically derived from generic cadaver studies and/or MRI image repositories [e.g., (Handsfield et al. 2014; Ward et al. 2009)]. In several cases, F^M_0 is adapted to the specific subject according to specific laws that, for example, take into account the subject's mass and height (Correa and Pandy 2011; Handsfield et al. 2014). In some cases, an elementary model is assumed that simply expresses the musculotendon force as F^T = a F^M_0. Obviously, musculotendon actuators require a well-defined geometry: origin and insertion points within the skeletal structure, as well as their specific paths. Such geometrical information is crucial to properly define muscle moment arms. Like musculotendon parameters, musculotendon geometry is often extracted from cadaver datasets and/or medical imaging databases used to define the reference model. Musculotendon geometry can also be adapted to the characteristics of a specific subject by using subject-specific (SS) medical imaging data (Martín-Sosa et al. 2019). To resolve muscle redundancy, an optimization approach, namely static optimization (SO), is typically used. Its cost function, minimized at each instant of time t_i, is expressed as the sum of muscle activations or of muscle stresses (muscle force divided by physiological cross-sectional area) raised to the pth power. A slightly different approach is the so-called computed muscle control (CMC), which uses SO along with feedforward and feedback control to drive the MSK model toward the experimental kinematics (Thelen and Anderson 2006). EMG-informed strategies were also devised to tackle the problem of muscle load sharing (Pizzolato et al. 2015): the recorded EMG signals are input into the MSK model with the goal of informing the model about the SS muscle recruitment strategy. In contrast to SO approaches, dynamic optimization (Anderson and Pandy 2001a) and optimal control (Falisse et al. 2019) can synthesize muscle forces while minimizing a time-dependent performance criterion over the whole activity duration and imposing the MSK system dynamics in the form of differential constraint equations. Regardless of the method used to solve the muscle recruitment problem, once muscle forces are known, JRFs can be derived from the equilibrium equations (the reader is referred to "Appendix" for a more detailed description, with a specific focus on the hip joint). The hip joint reaction forces (HJRFs) are usually represented by the resultant force F_h applied at the hip joint center H, and a torque T_h equal to the resultant moment about H. They represent a system of forces equivalent to the contact actions at the articular surfaces (normal, due to pressure, and tangential, due to the friction/viscous actions of the synovial fluid) and the ligament actions.
Typically, articular friction and ligament actions are neglected (compared to the other forces) so that T h ≈ 0 . However, if one seeks to discern contact and ligament contributions, such elements should be properly modeled. Selected studies and their classification As briefly commented in the previous section, numerical prediction of HJRFs, and more in general of joint loads, is affected by several uncertainties arising from experimental data, model choice and scaling, MSK characteristics, etc. In this study, we focused on those models whose accuracy could be quantitatively assessed by comparing their results with experimental data from instrumented prostheses. Thus, relevant literature studies on the estimation of HJRFs were selected according to the following inclusion criteria: I. Research journal publications (conference proceedings were excluded). II. Published in the last two decades (2000-today). III. MSK simulations based on dynamic models, IV.a. Simulation results validated against experimental data from instrumented prostheses, for each patient separately [works that used a "typical patient" were excluded, 1 e.g., Heller et al. (2005) and Wesseling et al. (2015)]. Our inspection of such literature returned a first set of works collected in Table 1 (non-colored rows, eleven works). Then, we enlarged such set by expanding the criterion IV.a as follows: IV.b. Simulation results compared with experimental data from instrumented prostheses but based on kinematic-kinetic data from other subjects (either pathological or healthy). This additional criterion provided a second set of works, also collected in Table 1 (colored rows, six works). It is worth highlighting that a proper model validation could only be performed for the studies included by the criterion IV.a. The results of the studies included according to the criterion IV.b were still compared with experimental data from instrumented prostheses, although it remains questionable whether this is appropriate for different subjects, particularly healthy ones. In the whole, the extended selection comprises seventeen papers (Table 1). Those research papers from the same author(s) and based on the same model and/or procedures were unified in the same row if they did not provide any additional information for the purpose of our analysis. Studies in Table 1 were then classified according to several criteria, which will be detailed in the following sections: The modeling aspects presented in Sects. 3.3.1 through 3.3.5 are then discussed in the corresponding Sects. 5.1 through 5.5. Experimental datasets for simulation and validation Estimated HJRFs can be validated against experimental data from instrumented hip prostheses. These prostheses are provided with strain gauges and transmit the acquired data to an external unit through a telemetric system (Bergmann et al. 1988). Thanks to a calibration process, strain measures can ultimately be used to obtain HJRFs. A first set of hip load measures was collected starting from 1997 (Bergmann et al. 2001a): a hip implant of type Hip I (Bergmann et al. 1988) or Hip II (Graichen et al. 1999) ( Table 2) was made with telemetric sensors that could wirelessly relay force data. Such prostheses were implanted in four subjects, and force data, along with kinematics and kinetics, were collected during activities of daily living. More recently, an advanced instrumented hip prosthesis, namely Hip III (Damm et al. 
2010), able to measure friction moments in addition to contact forces, was implanted in ten patients and load data were collected (Bergmann et al. 2016). While kinematics and kinetics were made available for all the investigated activities in Bergmann et al. (2001a), this was done only for walking and standing in Bergmann et al. (2016). Table 2 compares the two datasets in terms of the adopted hip implant, the analyzed subjects, the investigated motor tasks, and the collected motion capture data ("MOCAP"); its first row reports the types of instrumented hip prostheses (OrthoLoad 2021): Hip I and Hip II are provided with 3 strain gauges inside the neck to measure the 3 force components acting at the center of the ceramic ball, Hip II has a fourth strain gauge to detect the strain of the stem, and Hip III has 6 strain gauges inside the neck and measures the 3 components of the friction moment in addition to the contact forces. Both datasets are freely available on the OrthoLoad platform (OrthoLoad, 2021). All the collected studies used the in vivo measured loads from the OrthoLoad datasets (Bergmann et al. 2016, 2001a) as target values for their estimations (Table 1, "Experimental dataset" column). Simulation frameworks Most of the selected works performed MSK modeling on dedicated software (Table 1, "Software" column): seven are based on the open-source framework OpenSim (Delp et al. 2007), while six adopted the commercial software AnyBody (Damsgaard et al. 2006). Furthermore, the work in Li (2021) is based on FEBio (Maas et al. 2012), a software tool for nonlinear finite element analysis in biomechanics, while LaCour et al. (2020) is based on AUTOLEV (Levinson and Kane 1990), an interactive symbolic dynamics program. Finally, a few studies rely on in-house platforms for MSK simulations (Heller et al. 2001; Martelli et al. 2011; Stansfield et al. 2003). MSK models For the following sections, the reader is referred to the "MSK model" column in Table 1. Lower-body versus full-body models Models including the lower limbs and pelvis only (lower-body, LB) were the most used within the selected studies (Heller et al. 2001; Li 2021; Martelli et al. 2011; Modenese et al. 2013, 2011; Modenese and Phillips 2012; Stansfield et al. 2003; Zhang et al. 2015), while some recent works rely on models that include also the trunk and the head (De Pieri et al. 2019; Fischer 2018; Hoang et al. 2019; LaCour et al. 2020; Lunn et al. 2020; Wesseling et al. 2016). Many studies simplified LB models to have only one leg (LB unilateral). In that case, additional actuators acting on the pelvis were introduced to model the dynamic contributions associated with the missing torso and the contralateral leg. None of the studies included the arms in the model (no full-body, FB, model was used), and their inertial properties were lumped with those of the torso and the head (if any). Models of the hip joint In most cases, the hip joint was modeled as an ideal 3-dof spherical joint without explicitly modeling either articular contact, lubrication, or ligaments. However, three of the selected studies implemented a more complex hip joint architecture, as detailed below. Zhang et al. (2015) assumed a 6-dof hip joint, and HJRFs were estimated adopting a force-dependent-kinematic (FDK) approach (Skipper Andersen et al. 2017).
In particular, HJRFs were predicted using a linear force-volume penetration law (similar to the elastic foundation theory) to estimate the contact actions between the compliant surface of the polyethylene acetabular cup and the rigid surface of the femoral head. Moreover, a linear spring (with stiffness 5·10 4 N/m) connecting the centers of the acetabular cup and the femoral head was added to model the overall action of the capsule ligaments (Fig. 3A). A very recent study (Li 2021) incorporated an MSK model with rigid bones and a compliant hip joint with SS cartilage geometry into a finite-element software to simultaneously solve for rigid body dynamics, muscle forces and HJRFs. Lastly, in LaCour et al. (2020), the three translational dofs of the hip joint were controlled by a contact-detection algorithm based on a spring-damper representation of the acetabular surface. In particular, the contact surfaces of the acetabular cup and the femoral component were represented as a polynomial surface and a point cloud, respectively (Fig. 3B). Furthermore, the primary hip capsular ligaments were modeled as nonlinear springs, with parameters derived from published works. MSK geometry Most of the selected works (De Pieri et al. 2019Fischer 2018;Heller et al. 2001;Hoang et al. 2019Li 2021;Lunn et al. 2020;Martelli et al. 2011;Modenese et al. 2013Modenese et al. , 2011Modenese and Phillips 2012;Weinhandl and Bennett, 2019) are based on models with MSK geometry (i.e., bone dimensions and musculotendon geometry) derived from generic cadaver repositories or other available datasets based on medical images. Such model templates have been linearly scaled to the subject's anthropometry by using surface markers and/or osseous landmarks. The model in Heller et al. (2001) is based on the Visible Human (VH) cadaver dataset ("The National Library of Medicines Visible Human Project" 2022), while most of the collected studies derived musculotendon geometry from the recent cadaver dataset TLEM 2.0 (Carbone et al. 2015) and its precursor TLEM (Klein Horsman et al. 2007). TLEM-based models were implemented both in AnyBody (De Pieri et al. 2019Fischer 2018;Li 2021;Lunn et al. 2020;Zhang et al. 2015) and OpenSim (Modenese et al. 2013(Modenese et al. , 2011Modenese and Phillips 2012). TLEM 2.0 is based on medical imaging data from a single subject, and it provides muscular geometrical information with the highest level of detail currently available. Among the collected studies, SS CT scans were used in Fischer (2018) and Stansfield et al. (2003) to adjust the hip joint geometry and the location of the hip joint centers and in Zhang et al. (2015) and LaCour et al. (2020) to determine bone dimensions. Furthermore, both CT and MRI SS images were used in Wesseling et al. (2016) to define bone geometry and joint positions as well as muscle paths, origins and insertions. Musculotendon model In all studies, a 3-element Hill-type musculotendon model was implemented. However, muscle contraction dynamics and the contribution from the parallel passive element were neglected in AnyBody-based studies as well as in a few OpenSim-based studies (Modenese et al. 2013(Modenese et al. , 2011Modenese and Phillips 2012;Weinhandl and Bennett 2019). The influence on HJRFs of the choice of the muscle model, simplified vs. complex, is discussed in Fischer (2018). 
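To make the distinction concrete, the short sketch below evaluates a Hill-type force of the form of Eq. (1) against the elementary model for a single hypothetical muscle. The normalized force-length and force-velocity curves are crude, generic stand-ins (not the Millard or Thelen formulations used in the reviewed software), and all parameter values are invented purely for illustration.

```python
import numpy as np

# Illustrative comparison of a 3-element Hill-type force (cf. Eq. 1) and the
# simplified model F_T = a * F_M0. The curves below are generic textbook-style
# approximations, not the exact formulations used by any specific software.

def hill_force(a, l_m, v_m, F_M0=2500.0, l_M0=0.10, v_max=1.0, alpha=np.deg2rad(10)):
    """Active + passive fiber force projected along the tendon [N]."""
    ln = l_m / l_M0                                         # normalized fiber length
    vn = v_m / v_max                                        # normalized fiber velocity
    f_L  = np.exp(-((ln - 1.0) / 0.45) ** 2)                # active force-length (bell curve)
    f_V  = np.clip(1.0 - vn, 0.0, 1.8)                      # crude force-velocity relation
    f_PE = np.where(ln > 1.0, 0.5 * (ln - 1.0) ** 2, 0.0)   # passive force-length
    return (a * f_L * f_V + f_PE) * F_M0 * np.cos(alpha)

def simplified_force(a, F_M0=2500.0):
    """Elementary model: force scales only with activation."""
    return a * F_M0

a = 0.6
for l_m in (0.09, 0.10, 0.12):                              # fiber length [m]
    print(f"l_m = {l_m:.2f} m  Hill: {hill_force(a, l_m, v_m=0.2):7.1f} N"
          f"   simplified: {simplified_force(a):7.1f} N")
```

Near the optimal fiber length and at low contraction speeds the two models stay close; away from it, the Hill-type force deviates substantially, which is the kind of difference whose impact on HJRFs is examined in Fischer (2018).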
The generic l M 0 and l T s parameters were linearly scaled with segment lengths in all studies: in the OpenSim-based studies, the ratio l M 0 ∕l T s of the generic model is preserved in the scaled model, while in AnyBody it is adjusted to maintain the joint angle at which muscle force peaks. F M 0 was left to its generic value in all the studies except for (De Pieri et al. 2018), where it was linearly scaled accounting for the body mass and fat percentage. To assess inter-subject variability, Hoang et al. (Hoang et al. 2019 investigated the effect of SS, EMG-driven calibration (Pizzolato et al. 2015) of musculotendon parameters by simultaneously minimizing the error on joint torques and the magnitude of the resultant HJRF. Muscle recruitment strategies The collected works have adopted two different muscle recruitment strategies, namely SO and EMG-driven (Table 1, "Muscle recruitment" column). However, as evident from Table 1, SO was the most adopted. Many researchers minimized the sum of squared muscle activations (or muscle stresses) (Hoang et al. 2019Li 2021;Martelli et al. 2011;Modenese et al. 2013;Modenese and Phillips 2012;Weinhandl and Bennett 2019;Wesseling et al. 2016;Zhang et al. 2015), while others minimized the sum of cubed muscle activations (or muscle stresses) (De Pieri et al. 2019Fischer 2018;Lunn et al. 2020). In Heller et al. (2001) and Stansfield et al. (2003), minimization of the total muscle force was selected as recruitment strategy (to prevent excessive loading of individual muscles, Heller et al. (2001) included a limitation on the maximum muscle force). The minimax criterion was used in Fischer (2018;Stansfield et al. (2003); Zhang et al. (2015). A variation of the classical squared muscle activation cost function is described in Wesseling et al. (2016), where the magnitude of the resultant HJRF was weighted and concurrently minimized at each instant of time. Only the works by Hoang et al. (2019) compared the HJRFs estimated through SO with those obtained through EMG-informed approaches. Quantitative comparison of the results across studies As also pointed out in Moissenet et al. (2017), a direct comparison of the obtained HJRFs across the selected studies was, unfortunately, limited by the different ways in which results have been presented, as well as by the lack of a common error metric to assess the deviations between in vivo data and simulated results. When available, specific deviation indices were extrapolated from the studies included by criterion IV.a (for which a proper model validation was possible) in order to assess the goodness of fit of the predicted HJRFs with respect to their experimental counterparts. In most cases, the adopted deviation index was obtained, for each patient, as the average across the trials of the mean deviation obtained in each trial. Only in Zhang et al. (2015) the deviations were calculated on the mean trial for each subject. Deviations were computed on the resultant HJRF, except for Heller et al. (2001), where the force components were considered. Specifically, the following deviation indexes were considered: • RMSE (Root mean square error) it is an indicator of the global deviation over the activity cycle and is expressed as a percentage of the body weight (%BW). • RPPD (Relative peak-to-peak deviation) it is an indicator of a local deviation between the experimental and the corresponding simulated peak force values. When two peaks manifest themselves, as typically happens in walking tasks, the worse index was considered. 
RPPD is expressed as a percentage of the experimental peak value (%EP). • RDEP (Relative deviation at experimental peak) similar to RPPD, it is calculated at the instant of the experimental peak value and is expressed as a percentage of such experimental value (%EP) as well. As opposed to RPPD, any potential time shift between simulated and experimental curves is ignored. Figure 4 details the two local deviation indices considering a typical experimental resultant HJRF and the corresponding simulated one for a specific patient (and a specific trial); in that figure, yellow squares indicate the experimental peak value (EP) and the simulated one, while the blue circle indicates the value assumed by the simulated force at the instant of EP. Table 3 shows such deviation indices for the studies that made them available (i.e., nine out of the eleven studies in non-colored rows from Table 1), and for each analyzed motor task. LaCour et al. (2020) and Modenese et al. (2013) did not report any of the deviation indices. Depending on the results made available by the authors of the selected studies, both RPPD and RDEP in Table 3 were estimated by (i) averaging the (absolute) relative deviations across all subjects (mean), and by (ii) taking the maximum (absolute) relative deviation across all subjects (max), while the reported values for RMSE represent the average deviation across all trials and patients for a specific activity. However, caution should be used when comparing data in Table 3, for two reasons: (i) when averaging RPPD and RDEP across trials and subjects, cancellations due to opposite signs may result if signed deviations are considered (instead of their absolute values), as in Heller et al. (2001) and Zhang et al. (2015); (ii) if a significant time shift between experimental and simulated data is present, the RDEP index does not reflect the actual deviation, and the RPPD index should be considered instead. To further facilitate the comparison of the results from the studies in Table 3, we plotted together the resultant HJRF predicted by the authors that analyzed the same subjects and compared them with the corresponding in vivo measurements. This was done for patients HSR (sex = male, age = 55, height = 174 cm, weight = 860 N) and KWR (sex = male, age = 61, height = 165 cm, weight = 702 N) from Bergmann et al. (2001a). Both walking and stair climbing were considered. When multiple simulations were performed within a specific study, the scenario that produced the best fit was selected, i.e., p = 2 in Modenese et al. (2011) and the LLLM model in Weinhandl and Bennett (2019). According to the data made available by the authors, mean curves across trials are reported in Fig. 5, except those predicted by Heller et al. (2001) and Stansfield et al. (2003), which are obtained from one trial only (magenta and cyan curves). The deviation indices listed above were also calculated, together with the time shift at first peak and the Pearson's correlation coefficient (R) to assess similarity between curves (Table 4). In particular, RPPD and RDEP were estimated by using the absolute deviation between experimental and simulated force values. Discussion Some general considerations regarding the ability of MSK models to reproduce experimental HJRFs can be drawn by inspection of Fig. 5 and Table 4: different models and techniques produce significantly different predictions of HJRFs, both during walking and stair climbing.
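Before examining the individual modeling choices, it may help to make the three deviation indices operational. The sketch below computes RMSE, RPPD and RDEP on two synthetic resultant-HJRF curves; the curves and numbers are invented and do not correspond to OrthoLoad data or to any of the reviewed studies (for simplicity, RPPD is evaluated on the global peak only).

```python
import numpy as np

# Sketch of the deviation indices used in Tables 3-4, computed on two synthetic
# resultant-HJRF curves (in %BW) over one gait cycle. The curves are made up for
# illustration; they are not data from OrthoLoad or from any reviewed study.

t = np.linspace(0, 100, 201)                               # % of gait cycle
experimental = 250 * np.exp(-((t - 20) / 12) ** 2) + 230 * np.exp(-((t - 50) / 12) ** 2) + 40
simulated    = 300 * np.exp(-((t - 23) / 12) ** 2) + 260 * np.exp(-((t - 52) / 12) ** 2) + 45

# RMSE: global deviation over the cycle, in %BW.
rmse = np.sqrt(np.mean((simulated - experimental) ** 2))

# RPPD: relative difference between simulated and experimental peak values
# (here computed on the global peak only), in % of the experimental peak (EP).
ep, sp = experimental.max(), simulated.max()
rppd = 100 * abs(sp - ep) / ep

# RDEP: relative deviation evaluated at the instant of the experimental peak,
# ignoring any time shift between the two curves.
i_ep = experimental.argmax()
rdep = 100 * abs(simulated[i_ep] - experimental[i_ep]) / ep

print(f"RMSE = {rmse:5.1f} %BW   RPPD = {rppd:4.1f} %EP   RDEP = {rdep:4.1f} %EP")
```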
All studies included in this quantitative analysis are based on LB unilateral models, and, at least for Modenese et al. (2011), Weinhandl and Bennett (2019) and Zhang et al. (2015), the same muscle recruitment criterion was adopted. This suggests that differences in HJRF predictions are mainly related to the different MSK characteristics of the models, as will be discussed in the next sections. Deviations from experimental measurements emerge both in the shape and in the magnitude of the predicted forces, as also supported by the values in Table 4 (deviation indices, time shift at first peak, and correlation coefficient R calculated for patients HSR and KWR for walking and stair climbing; negative time-shift values indicate that the peak of the simulated force anticipates the experimental one). In general, MSK models tend to overestimate the in vivo measurements, especially during the weight acceptance phases of walking and stair climbing. Underestimation is more evident during the swing phase. Although obtained from one trial only, forces predicted by the model in Heller et al. (2001), based on the oldest cadaver template (among the selected studies), follow more closely the experimental reference with respect to the other models: the lowest RMSE, time shift and R are obtained for both patients in walking, while this is the case only for patient HSR in stair climbing (Table 4). As can be observed in Fig. 5, the HJRFs predicted for patient KWR during stair climbing are not well correlated to the in vivo ones, even when using more recent models. Regardless of the model, HJRF predictions are more accurate for the HSR patient compared to the KWR patient during stair climbing, while the opposite is true for walking. It is also worth noting that the HJRFs predicted by studies adopting the same muscle recruitment criterion (i.e., minimization of squared muscle activations) still differ from each other. In particular, Modenese et al. (2011) obtained better results: this might be due to other inaccuracies in the modeling procedures (e.g., scaling or operator-dependent errors). The quantitative analysis carried out above seems to suggest, rather surprisingly, that the model in Heller et al. (2001), although the oldest among the selected ones, produces more accurate predictions of HJRFs both in walking and in stair climbing. Apparently, developing more detailed models could help obtain better predictions, but introducing several parameters with uncertain values may have the opposite effect (Roelker et al. 2017), with larger deviations from the experimental reference. Identifying the modeling features that best reproduce experimental HJRFs is quite challenging. Indeed, models and approaches adopted in the collected studies differ in many respects, and a one-to-one determination of cause-effect relationships is not trivial at all. However, from a more general perspective, the influence of certain modeling choices can be investigated, as detailed in the sequel. Lower-body vs full-body models Investigating the influence of using LB or FB models on HJRF estimation was not straightforward. All studies included in the quantitative analysis of Table 3 are based on LB unilateral models, except for Fischer (2018) and De Pieri et al. (2018) which used models including also the trunk and the head. In the latter cases, a relatively low RMSE with respect to the other studies was observed, but it is not significant enough to draw definitive conclusions. Also, the studies in Fig.
5 adopted LB unilateral models, thus hindering conclusive statements about the effect of the model structure on HJRF prediction. LB unilateral models are certainly more efficient from a computational standpoint, but their advantages over LB bilateral models is not clear. However, their adoption could be justified in the analysis of symmetric tasks, such as unimpaired walking. When motor tasks demanding the cooperation of both limbs are considered, or when subjects with significant asymmetry in lower limbs are analyzed, LB bilateral models may be preferred. None of the selected studies used FB models, probably because upper limb kinematics was not registered in any of the experimental datasets. Current literature suggests that upper limb configuration does not have a relevant effect on HJRF prediction in walking at normal speed (Angelini et al. 2018), but additional studies are needed that compare HJRF predicted by LB and FB models. Indeed, the distribution of inertial properties may substantially change when upper limbs are modeled. Moreover, other activities of daily living where arm movement plays an important role should be investigated (e.g., fast walking, running, squat). Models of the hip joint More complex models including ligaments and elastic (but frictionless) interfaces at the hip joint (LaCour et al. 2020;Li 2021;Zhang et al. 2015) had limited success in improving the accuracy of the estimated HJRFs if compared to a simple 3-dof hip joint model with rigid contact and no ligaments. In both Zhang et al. (2015) and Li (2021), differences in the peak resultant HJRF predicted by the simple and complex hip models were less than 5% and 1%, respectively, both in walking and in stair climbing. These findings could be explained considering that (i) the resultant HJRF with elastic interfaces is basically the same as with rigid contact surfaces when elastic deformations are small and (ii) ligament forces are quite small compared to muscle forces when activities of daily living with small hip ranges of motion are performed. As a matter of fact, predictions obtained from these complex models are still affected by large deviations, both globally over the whole activity cycle and locally at specific time instants: as shown in Table 3, the RMSE in Zhang et al. (2015) is in the range 42-49% BW for walking, with a max RDEP of 9%, and in the range 38-55% BW for stair climbing, with a max RDEP of 23%, whereas the max RDEP for walking is 25% and 32% in Li (2021) and LaCour et al. (2020), respectively. Such findings are consistent with those from Fig. 5: the forces predicted by Zhang et al. (2015) did not significantly improve on the other models with simpler hip joints. MSK geometry MSK characteristics and tissue properties can markedly vary among individuals (Duda et al. 1996;Scheys et al. 2008), and for this reason, the accuracy of scaled generic models has been questioned, particularly when substantial geometric between-limb differences or MSK pathologies are present. SS models allow inclusion of individual MSK anatomy and properties (Akhundov et al. 2022). Model personalization can be performed fundamentally at two different levels: (i) MSK geometry (e.g., bone dimensions and musculotendon paths) and (ii) musculotendon parameters (e.g., the characteristic parameters of the Hill-type model), dealt with in Sect. 5.4. The prediction of HJRFs revealed high sensitivity to the MSK geometry (Carbone et al. 
2012), in particular to the hip joint geometry [hip joint center, neck length, neck shaft angle, femoral anteversion (Heller et al. 2001;Lenaerts et al. 2009Lenaerts et al. , 2008] and to the musculotendon geometry [paths and attachment points of the muscles spanning the hip (Carbone et al. 2012;Martín-Sosa et al. 2019)]. Also, bone dimensions have been shown to impact HJRF prediction (Koller et al. 2021). All these parameters are interconnected, as they affect the lines of action of muscles, hence their moment arms. MSK geometry also substantially varies between the different cadaver templates on which generic models are based and identifying which one is best suited for the prediction of HJRFs is not trivial. Quantitatively, using the LLLM model decreased the max RPPD from 101% (ALLM) to 44% in slow walking, from 113% (hip2372) to 59% in normal walking, and from 121% (hip2372) to 49% in fast walking. However, the RMSE range was 42-65% BW, while the mean RPPD was around 45% for normal walking even with such detailed model (Table 3). The authors attribute such differences to the significant variability in muscle moment arms across the cadaver templates. Interestingly, the performance of TLEM-based models seems to be exceeded by that of the model in Heller et al. (2001), which is based on the older VH cadaver template ("The National Library of Medicines Visible Human Project" 2022) (as evident from Fig. 5). The importance of properly modeling the musculotendon geometry is also highlighted in De Pieri et al. (2018): the refinement of muscle paths and the introduction of wrapping surfaces into the TLEM model reduced the RMSE from 36% BW to 30% BW over the whole gait cycle, with a particular effect at the second peak (where the RPPD decreased from 14 to 10%) and during swing. The recent study (Martín-Sosa et al. 2019) found that informing the model with muscle attachment points from SS MRI significantly affected the estimation of HJRFs: a difference of 35% BW between the scaled generic model and the SS model was observed. Such findings are consistent with those in Kainz et al. (2021), where the inclusion of personalized geometry (bones and muscle paths from MRI) of impaired subjects had a significant impact. In Wesseling et al. (2016), gradually increasing the level of SS detail, specifically through CT and MR images, improved HJRFs with respect to the scaled generic model, while accounting for the hip capsule geometry through wrapping surfaces further reduced discrepancies at the second peak of HJRFs in walking. Unfortunately, no quantitative analysis was possible since different datasets were used for simulation and comparison in Wesseling et al. (2016). Among the studies compared in Fig. 5, only Stansfield et al. (2003) and Zhang et al. (2015) used CT images to define hip joint centers and bone dimensions, respectively. Nonetheless, such model choices did not significantly improve the HJRF prediction when compared to the other generic models, probably because of the insufficient level of SS details. Musculotendon model Muscle force generation is also affected by the characteristic parameters of the Hill-type musculotendon model, especially l T S , l M 0 , and F M 0 (Carbone et al. 2016;De Groote et al. 2010;Scovil and Ronsky 2006), and the level of sensitivity depends on the role each muscle plays during the analyzed task (Carbone et al. 2016). 
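A toy sensitivity check illustrates why these parameters matter. Using a generic bell-shaped active force-length curve (a crude stand-in, not the curve of any specific musculotendon model), the sketch below perturbs the optimal fiber length l_M0 and the maximum isometric force F_M0 of a hypothetical muscle by 10% and reports the change in the estimated fiber force; all numbers are invented for illustration only.

```python
import numpy as np

# Toy sensitivity check: how a +/-10% error in l_M0 or F_M0 propagates to the
# active fiber force of one hypothetical muscle at a fixed fiber length and
# activation. The bell-shaped force-length curve is a generic stand-in, not the
# formulation of any specific musculotendon model.

def active_force(l_m, a=0.5, F_M0=2000.0, l_M0=0.11):
    f_L = np.exp(-((l_m / l_M0 - 1.0) / 0.45) ** 2)   # generic force-length bell
    return a * F_M0 * f_L

l_m = 0.12                                             # current fiber length [m]
nominal = active_force(l_m)

for label, kwargs in [("l_M0 -10%", dict(l_M0=0.11 * 0.9)),
                      ("l_M0 +10%", dict(l_M0=0.11 * 1.1)),
                      ("F_M0 -10%", dict(F_M0=2000.0 * 0.9)),
                      ("F_M0 +10%", dict(F_M0=2000.0 * 1.1))]:
    force = active_force(l_m, **kwargs)
    print(f"{label}: {force:7.1f} N  ({100 * (force - nominal) / nominal:+.1f}% vs nominal)")
```

In this toy case an error in F_M0 propagates proportionally to the force, whereas the same relative error in l_M0 changes the force nonlinearly and asymmetrically; the actual sensitivities in a full model depend on the muscle and on the analyzed task, as noted above.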
Currently, musculotendon parameters cannot be directly derived from MRI, and their generic values are typically obtained from repositories based on dissected cadavers (Klein Horsman et al. 2007). Only reference values for F M 0 have been estimated from MRI-segmented muscle volumes or regression equations (Handsfield et al. 2014). Given the considerable uncertainty in musculotendon parameter values, the use of a simplified muscle model, where muscle dynamics is neglected, seems a reasonable choice, at least for low-dynamic motor tasks. In Fischer (2018), the simplified muscle model improved the simulated HJRFs with respect to the 3-element model: at the local level, the mean RDEP decreased from 24 to 8% in walking and from 30 to 12% in standing, while at the global level the RMSE decreased from 54 to 40% BW in walking, and from 74 to 41% BW in standing. These results suggest that adopting erroneous musculotendon parameters may be worse than not estimating them at all. However, ignoring muscle dynamics is not advisable when analyzing more dynamic activities: in Modenese and Phillips (2012) and the deviation of predicted vs. experimental HJRFs increased with walking speed. Personalization of musculotendon parameters can be achieved through EMG-driven calibration, as done in Hoang et al. (2019. It can potentially help obtain more accurate HJRFs, but future studies are needed that quantitatively assess the benefits of such strategy. When musculotendon parameters were calibrated by also minimizing the peak of the simulated resultant HJRF , the estimated loads were more comparable to their experimental counterparts (as expected). Also, F M 0 can be adapted to the specific subject according to different scaling laws based on segmented muscle volumes from SS MRI (Modenese et al. 2018). Construction of SS models is quite expensive and timeconsuming, as a large collection of imaging data is necessary. This is probably the reason why generic models are mostly used. Currently, automated workflows for the creation of SS models are being developed, and they may increase the repeatability of SS simulations (Modenese and Kohout 2020;Modenese and Renault 2021). Although current literature findings steer researchers toward using SS models, the level of model personalization should be calibrated depending on the research objectives as well as on the characteristics of the analyzed subjects. The possibility of using "low-cost" generic MSK models should always be considered within the modeling workflow. Muscle recruitment strategies Understanding muscle recruitment is another crucial aspect in the estimation of accurate muscle forces. The actual criteria underlying human movement are still an open research topic. Assuming that the central nervous system identifies an optimal pattern of activations by optimizing a certain criterion, all the selected studies resolved muscle recruitment by minimizing pre-selected cost functions (Sect. 2). However, while such cost functions can be easily identified for simple motor tasks (e.g., jumping as high as possible), they become challenging in more complex activities such as walking, where the central nervous system controls the MSK system in a unique way. The studies (Fischer 2018;Modenese et al. 2011;Zhang et al. 2015) found that objective functions including muscle activations (or stresses) raised to a low exponent p provide more accurate HJRFs during walking, stair climbing and one-leg standing. In particular, Modenese et al. 
(2011) showed that switching from p = 1 to p = 15 resulted in an RMSE range increasing from 25-47% BW to 288-452% BW in walking, and from 22-66% BW to 335-445% BW in stair climbing. This is well depicted in Fig. 7, reproduced from Modenese et al. (2011), where HJRFs are plotted with different, increasing exponents of the cost function. Similar results were found in Fischer (2018) and Zhang et al. (2015), where better predictions were obtained with p = 2 and p = 3, respectively. Minimization of the total muscle force (i.e., p = 1) tends to maximize the contribution of one most effective muscle while minimizing the number of activated muscles. Increasing the exponent p favors a muscle force distribution in which all muscles contribute a little rather than having one (or a few) dominant muscle: in this sense, synergies are encouraged to keep the contributions of each single muscle as small as possible (van Bolhuis and Gielen 1999). Lastly, an infinite value for p is equivalent to the minimax criterion (i.e., minimization of maximum muscle activation) and is often associated with muscle fatigue (Ackermann and van den Bogert 2010;Damsgaard et al. 2006;Rasmussen et al. 2001). In this sense, very large p values allow more synergistic as well as antagonistic activity (i.e., coactivation) which, consequently, leads to the prediction of larger HJRFs, as discussed in Pedersen et al. (1987) and Wesseling et al. (2015). Attempts to reduce the overestimation of experimental HJRFs were made by augmenting the polynomial cost function with a term penalizing the magnitude of the resultant HJRF (Wesseling et al. 2016), but no significant improvement was obtained. Despite their ease of implementation and computational efficiency, SO-based procedures have been extensively criticized on several aspects (Anderson and Pandy 2001a): (i) the accuracy of the results strongly depends on the accuracy of the experimental data, (ii) muscle dynamics is not fully taken into account, and (iii) the goal of the motor task may not be properly characterized as the performance criterion is not an integral cost. An early comparison of static and dynamic optimization was made by Anderson and Pandy (Anderson and Pandy 2001b): squared muscle activations were minimized in SO, while metabolic energy per unit distance traveled was minimized in the dynamic optimization approach. They found that the muscle activations predicted by SO and dynamic optimization were rather similar for low-dynamics activities such as walking, but the obtained JRFs were not validated against experimental data. Although excluded by the present analysis because based on a typical patient, the work by Wesseling et al. (2015) also investigated the effect of using a dynamic optimization approach [the Physiological Inverse Approach, PIA, by De Groote et al. (2009)], with respect to SO and CMC, for estimating HJRFs. The authors found that both SO and PIA obtained HJRFs closest to the experimental ones, at least for the typical patient. Such findings were motivated considering that CMC (i) allows muscle-generated moments to deviate from the net joint moments obtained by inverse dynamics, and (ii) it accounts for both muscle dynamics and passive forces in a way that induces a larger co-contraction of muscles. However, all three formulations overestimated the experimental HJRFs, probably due to an excessive co-contraction favored by the selected muscle recruitment criterion. 
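The effect of the exponent p can be reproduced qualitatively with a toy static optimization problem: a single net joint torque shared between two redundant muscles whose moment arms and strengths are invented for illustration. The sketch below (using SciPy) shows how increasing p spreads the load across the muscles and raises the total muscle force, which is the mechanism behind the larger predicted joint loads discussed above; it is only an illustrative sketch, not the formulation used in any of the reviewed studies.

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-muscle redundancy problem: distribute a net joint torque by minimizing
# the sum of muscle activations raised to the power p. Moment arms, strengths and
# the torque value are invented for illustration. Exact numbers may vary slightly
# with the optimizer's tolerances.

r     = np.array([0.05, 0.02])      # moment arms [m]
F_max = np.array([2000., 2000.])    # maximum isometric forces [N]
tau   = 50.0                        # net joint torque to reproduce [N*m]

def solve_so(p):
    res = minimize(lambda a: np.sum(a ** p),
                   x0=np.full(2, 0.4),
                   bounds=[(0.0, 1.0)] * 2,
                   constraints=[{"type": "eq",
                                 "fun": lambda a: r @ (F_max * a) - tau}])
    return F_max * res.x

for p in (1, 2, 5, 10):
    f = solve_so(p)
    print(f"p = {p:2d}:  muscle forces = {np.round(f, 0)} N,  total = {f.sum():6.0f} N")
```

With p = 1 the optimizer loads almost exclusively the muscle with the largest moment arm, while large p values drive the two activations toward equality, increasing the total (and hence the joint-contact) force, consistent with the trend in Fig. 7.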
Another drawback of SO approaches is that they assume identical neuromuscular control strategies between individuals and tasks. This assumption, however, might not be appropriate for participants with neurological disorders. EMG-informed approaches make it possible to partly overcome this limitation as they take into account individual muscle co-contractions. Findings from Kainz et al. (2021), where children affected by cerebral palsy were analyzed, suggest that informing the model with SS neuromotor control has a minor effect on hip and knee JRF estimation when compared to the personalization of MSK geometry. It is rather the combination of both personalization strategies that significantly impacts JRF predictions. However, despite the use of EMG-informed formulations, large deviations from the experimental data are still (qualitatively) observable (Hoang et al. 2019). In case EMG-informed approaches are used, particular attention should be paid to the correct placement of the electrodes in order to minimize crosstalk phenomena. To shed light on the motor control policy encoded in the central nervous system, approaches based on optimal control techniques, which hold a strong predictive potential, have been implemented (Dembia et al. 2020; Falisse et al. 2019; Mombaur and Clever 2017; Nguyen et al. 2019; Tomasi and Artoni 2022). They rigorously treat model dynamics and integral cost functionals, favoring a more principled investigation of the motor control objectives underlying human movement, hence also a deeper understanding of muscle recruitment both in healthy and pathological subjects. However, the application of such strategies to the estimation of HJRFs has not been investigated yet. 5.6 Considerations on the use of experimental load measurements from instrumented implants Criterion IV.b was added to include the studies that can provide further indications on the use of experimental loads from instrumented implants as target trends, also when analyzing different subjects, either unimpaired or with pathologies. For example, the two papers that analyzed a large patient cohort compared the HJRFs predicted for that cohort with the in vivo measurements from OrthoLoad, as shown in Fig. 8 for the walking activity. The mean curves of the predicted and experimental HJRFs are quite similar, particularly in the stance phase of walking, despite their different ranges of variation (Fig. 8A). However, the two mean curves in Fig. 8A should be compared with some degree of skepticism: (i) the two datasets are very different in terms of number of subjects, i.e., only 10 for the OrthoLoad one and 132 for the other one; (ii) the MSK model, although previously validated in De Pieri et al. (2018), estimates HJRFs with an error of about 0.5 BW at the two peaks in walking; (iii) the results are presented in Newtons, thus potentially "masking" the discrepancies. Stratifications by age and functional ability were also reported (Fig. 8B and C), and lower loads were obtained for aged and limited-function subjects, suggesting they adopt a "cautious" locomotion strategy. Although the present review focused on those studies where a comparison, at least qualitative, with in vivo HJRFs was possible, it remains questionable if such a comparison is meaningful for healthy subjects. As highlighted by the authors of one of these studies, subjects with artificial joints generally have weaker muscles and altered neuromotor control with respect to healthy ones, and this may explain why the HJRFs they estimated for healthy subjects were generally higher than the available experimental measures. Also Wesseling et al.
(2016), where CT/MRI-based models of healthy subjects were used, stress the consequence of neuromuscular impairments on joint loads, suggesting that these may be higher in healthy subjects than in subjects with artificial joints. In both studies, such differences are much more evident at the characteristic second peak in walking (i.e., terminal stance of the gait cycle). Likewise, Martelli et al. (2011) motivate the discrepancies between predicted and experimental measurements, especially in the stance-to-swing phase, considering the different health conditions of the analyzed subject and of the implanted subjects from OrthoLoad. In the same study, the influence of the neuromotor control strategy on joint loads was investigated: deviating from a reference neuromotor control was found to drive the intensity of the internal body forces to higher levels. However, what control policy is physiologically and clinically plausible is still an open research question that should be addressed by the biomechanics community. (Fig. 8: predicted resultant HJRF across the LLJ patient cohort compared with those measured from the OrthoLoad dataset (Bergmann et al. 2016); solid lines represent mean curves while shaded areas their range of variation; panels B and C show the predicted resultant HJRF across the LLJ patient cohort stratified by age and by functional ability, respectively.) Conclusions The present analysis has revealed that the estimation of accurate hip joint loads is still an open issue in biomechanics. A one-to-one investigation of cause-effect relationships was hindered by the fact that models and procedures used in the selected studies differed in more than one respect. Although several important improvements in musculoskeletal modeling and computational resources have been made over the last two decades, simulated hip joint loads still overestimate their experimental counterparts measured during activities of daily living. While it is difficult to ascertain the reasons for such overestimation, all the following aspects deserve attention: • In vivo measures of joint loads may be affected by experimental errors, e.g., those deriving from the calibration process of the instrumented implants. • Quality of motion capture data is crucial, as accurate marker placement and measurement of GRFs affect all the modeling phases. • When analyzing low-dynamic activities, neglecting muscle dynamics can be a reasonable simplification, and models without upper limbs seem suitable. • More detailed musculoskeletal models do not necessarily yield a more accurate estimation of joint loads: modeling complex hip joints with contact and ligaments seems unnecessary for improving hip joint load estimates. • Musculoskeletal geometry significantly impacts hip joint load prediction. Its adaptation to a specific subject is more feasible than personalizing musculotendon parameters. However, effort and costs of model personalization should be justified by the purpose of the investigation. • Until the neuromotor control policy(ies) of the central nervous system have been clearly identified, static optimization approaches are a simple and efficient strategy for solving muscle redundancy. However, cost functions with lower exponents should be used. If reliable EMG signals are available, EMG-informed approaches should also be considered for solving muscle redundancy. It is worth pointing out that the above guidelines are general in scope, and they can be applied to the estimation of joint loads in other body districts.
Future research should be devoted to developing automated, standardized, and accessible tools for model personalization, specifically in terms of musculoskeletal geometry. Also, additional studies are needed for the identification of motor control objectives in human movement that allow to obtain more physiologically plausible muscle forces, hence joint loads. It is also advisable that researchers share a common error metric to facilitate comparison between various joint load predictions. In conclusion, we hope that the body of knowledge reviewed in this work can constitute a resource for biomechanists dealing with in silico estimation of joint loads. Appendix The present Appendix details the process to obtain the hip joint reaction forces (HJRFs). At first, Fig. 9A graphically clarifies the contributors to hip joint reaction forces, i.e., the contact actions at the articular surfaces and the ligament actions, which are equivalent to system in Fig. 9B represented by the resultant force F h applied at the hip joint center H, and a torque T h equal to the resultant moment about H. As mentioned in Sect. 2, it is commonly assumed that T h ≈ 0. The equivalent system is considered in the following equilibrium of the limb, where the symbols F and T denote forces and torques, respectively. Figure 10 shows the system made of thigh, shank and foot and its corresponding free-body diagram. External GRFs ( F g and T g ) are measured experimentally through a force plate; the total weight P and the inertial actions F i and T i (reduced to the center of mass G) are known given system's inertial properties (mass, center of mass position, inertia matrix), geometry, and kinematics of each body segment; the forces F m i exerted by the N muscles crossing the hip joint are to be estimated. The interest is in obtaining the HJRFs F h and T h ; hence, a method to obtain the unknown F m i is necessary. The typical approach to estimate F m i is based on a recursive process starting from the most distal body (i.e., the foot, for which F g and T g are known) and proceeding proximally toward the body of interest (i.e., the femur). It includes two stages: (i) inverse dynamics, which requires the model kinematics, inertial properties and applied external actions, and (ii) an optimization-based strategy to solve for muscle forces. This two-phase process is schematically shown in Fig. 11 for the femur body. At the inverse dynamics level (Fig. 11A), muscles are replaced by ideal joint torque actuators, and net joint forces and torques (denoted by a tilde) are obtained: these include contributions from the muscles crossing the joint and from all other unmodeled elements such as articular contact and ligaments. In the optimization phase (Fig. 11B), muscles are introduced, and their actions are estimated by optimizing a certain performance criterion (e.g., through static optimization) that redistributes T h across the muscles spanning the joint and acting on the femur body. It is worth highlighting that the two systems in Fig. 11A and B are dynamically equivalent. Once muscle forces are known, F h and T h can be obtained by solving the Newton-Euler equations where moments are calculated with respect to the center of the femoral head H: HP i , HG f , HK are the position vectors pointing from H to the points of application of muscle forces ( P i ), to the center of mass ( G f ), and to the knee joint center (K), respectively. 
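As a minimal numeric illustration of this last step, the sketch below balances forces and moments on the femur once the muscle forces are assumed known. It considers a quasi-static case (inertial terms omitted), uses invented vectors and points, and is not the exact formulation of any reviewed study; in an actual analysis the muscle forces returned by the recruitment step balance the net hip torque, so that T_h comes out close to zero for an ideal spherical joint.

```python
import numpy as np

# Quasi-static sketch of the final step in the Appendix: with the muscle forces
# crossing the hip known, the hip reaction force F_h and torque T_h follow from
# the force and moment balance of the femur about the hip joint centre H.
# All vectors and points are invented 3-D values purely for illustration.

g = np.array([0.0, 0.0, -9.81])

m_femur = 8.0                           # femur segment mass [kg]
H       = np.array([0.0, 0.0, 0.0])     # hip joint centre
G_f     = np.array([0.02, 0.0, -0.20])  # femur centre of mass
K       = np.array([0.04, 0.0, -0.42])  # knee joint centre
F_knee  = np.array([0.0, 0.0, 700.0])   # force transmitted by the shank at K [N]
T_knee  = np.array([0.0, 10.0, 0.0])    # torque transmitted by the shank [N*m]

# Muscle forces crossing the hip (from the recruitment step) and their
# application points P_i on the femur.
P_muscle = [np.array([0.05, 0.03, -0.05]), np.array([-0.04, 0.00, -0.08])]
F_muscle = [np.array([-100.0, -50.0, 900.0]), np.array([50.0, 0.0, 600.0])]

weight = m_femur * g

# Force balance of the femur: F_h + sum(F_m) + W + F_knee = 0 (quasi-static).
F_h = -(sum(F_muscle) + weight + F_knee)

# Moment balance about H: T_h + sum(HP_i x F_m_i) + HG_f x W + HK x F_knee + T_knee = 0.
T_h = -(sum(np.cross(P - H, F) for P, F in zip(P_muscle, F_muscle))
        + np.cross(G_f - H, weight) + np.cross(K - H, F_knee) + T_knee)

print("F_h [N]   =", np.round(F_h, 1))
print("T_h [N*m] =", np.round(T_h, 2))
```

Because the muscle forces here are arbitrary, the resulting T_h is nonzero; with forces obtained from an actual recruitment solution, the residual torque would essentially vanish, consistent with the usual assumption T_h ≈ 0.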
It is worth noting that the inverse approaches on which the estimation of joint reaction forces is based are intrinsically affected by cumulative errors that increase toward proximal joints. Thus, the estimation of joint loads at the hip is affected by a larger error than the estimation of joint loads at the ankle.
Fig. 9 A System of forces acting at the hip joint exerted by ligaments (green arrows) and by articular contact with the acetabular cup (orange arrows: normal contact forces, red arrows: tangential contact forces). B Corresponding equivalent system about H.
Fig. 10 A System made of thigh, shank, and foot; representative muscles are shown in red through their lines of action. B Free-body diagram of the system (G is its center of mass); muscles actuating the hip joint have been replaced by their corresponding forces F m i.
Fig. 11 Free-body diagrams of the femur (G f is its center of mass) for inverse dynamics (A) and optimization (B).
Pediatric intensive care unit admissions network—rationale, framework and method of operation of a nationwide collaborative pediatric intensive care research network in Germany The Pediatric Intensive Care Unit Admissions (PIA) network aims to establish a nationwide database in Germany to gather epidemiological, clinical, and outcome data on pediatric critical illness. The heterogeneity of pediatric patients in intensive care units (PICU) poses challenges in obtaining sufficient case numbers for reliable research. Multicentered approaches, such as patient registries, have proven effective in collecting large-scale data. However, Germany lacks a systematic registration system for pediatric intensive care admissions, hindering epidemiological and outcome assessments. The PIA network intends to address these gaps and provide a framework for clinical and epidemiological research in pediatric intensive care. The network will interconnect PICUs across Germany and collect structured data on diagnoses, treatment, clinical course, and short-term outcomes. It aims to identify areas for improvement in care, enable disease surveillance, and potentially serve as a quality control tool. The PIA network builds upon the existing infrastructure of the German Pediatric Surveillance Unit ESPED and utilizes digitalized data collection techniques. Participating units will complete surveys on their organizational structure and equipment. The study population includes patients aged ≥28 days admitted to participating PICUs, with a more detailed survey for cases meeting specific criteria. Data will be collected by local PIA investigators, anonymized, and entered into a central database. The data protection protocol complies with regulations and ensures patient privacy. Quarterly data checks and customized quality reports will be conducted to monitor data completeness and plausibility. The network will evaluate its performance, data collection feasibility, and data quality. Eligible investigators can submit proposals for data analyses, which will be reviewed and analyzed by trained statisticians or epidemiologists. The PIA network aims to improve pediatric intensive care medicine in Germany by providing a comprehensive understanding of critical illness, benchmarking treatment quality, and enabling disease surveillance. 
KEYWORDS: pediatric critical illness, pediatric intensive care unit, surveillance, epidemiology, PICU outcome, network, quality control, admission

1 Background and rationale

Childhood critical illness can entail life-long sequelae, necessitating optimal treatment based on high-level evidence. However, clinical research in pediatric intensive care suffers from a heterogeneous patient population that makes it difficult to achieve sufficient case numbers for reliable results (1). Even though pediatric critical illness itself is not a rare disease, many of the underlying conditions, or combinations of conditions, that cause admission to a pediatric intensive care unit are rare. This frequently requires treatment decisions based on expert opinion rather than evidence, despite international efforts to improve the exploitation of data sources available for pediatric intensive care research and to foster randomized controlled trials (2)(3)(4)(5).
To overcome this obstacle, multicentered approaches are indispensable.In this context, patient registries have proven to be a powerful instrument for collecting large scale data tailored to the particularities of a specific patient population.A few examples of successful registries that include critically ill children are the British PICAnet, Australian/New Zealandian ANZPIC registry, the US American Virtual Pediatrics System, and the NEAR4KIDS data base.In Germany, subgroups of critically ill children are reported to the German TraumaRegister DGU ® , German Resuscitation registry and the German Neonatal Network, and the German Burn Registry.Results retrieved from these registries have advanced the field and some have permanently impacted patient care (6)(7)(8)(9)(10)(11)(12)(13)(14). In Germany, intensive care research is clearly underdeveloped in both adult and pediatric care (15).No systematic registration of pediatric intensive care admissions exists, complicating the retrieval of information on the epidemiology, course, and outcomes of pediatric critical illness in Germany.Consistent with the structural underdevelopment of intensive care research, the DIVI (German Interdisciplinary Association of Intensive Care and Emergency Medicine) calls for the implementation of intensive care registries to improve clinical research in this field (16). Besides the structural requisites for high level clinical research, quality control and disease surveillance structures are insufficient in the field of pediatric intensive care in Germany.At present, it is impossible to assess the quality of care and adherence to treatment guidelines.Further, the pandemic and recent infectious waves of common viruses have revealed that existing disease surveillance structures to early detect rapid changes in diseasespecific incidence rates are not suitable to provide timely information to allow rapid response, e.g., by reallocation of resources to manage infectious waves.These shortcomings make it impossible to guarantee the delivery of optimum care for each critically ill child in Germany. The aim of the Pediatric Intensive Care Unit Admissions (PIA) network is to provide a nationwide database on the epidemiology, course, and short-term outcomes of pediatric critical illness.This protocol describes the framework and method of operation of the proposed PIA network.It is designed as a nationwide observational research network to provide clinical and epidemiological information along with potential indicators of treatment quality and guideline adherence in Germany. Functioning as an open collaborative research network, it grants all contributors the right to conduct research with the collected data.The overarching goal of this project is to create a permanent research network that enables large-scale high-quality clinical and epidemiological research in the field of pediatric intensive care and may possibly serve as the basis for quality control measures in German pediatric intensive care units in the future. 
Methods and analysis

2.1 Overarching goals and specific aims
The PIA network was initiated to interconnect pediatric intensive care units (PICUs) in Germany and form a research infrastructure that continuously captures all admissions to these units in a structured manner. It aims to improve the quality of pediatric intensive care in Germany by providing a comprehensive overview of the medical care provided in these units, to identify areas for improvement, to optimize care for pediatric patients, and to enable disease surveillance in pediatric intensive care.

Primary aim
The primary aim of the PIA network is to collect timely nationwide data on diagnoses, treatment, clinical course and short-term outcomes of pediatric critical illness to answer relevant clinical and epidemiological research questions.

Secondary (long-term) aims
After the implementation of quality indicators for pediatric intensive care units, the network may serve as a tool to measure and control treatment quality in PICUs. Suitable indicators are currently being developed by the Association of the Scientific Medical Societies in Germany (AWMF) in collaboration with the German Neonatal and Pediatric Intensive Care Society (Gesellschaft für Neonatologie und Pädiatrische Intensivmedizin, GNPI) and the German Interdisciplinary Association of Intensive Care and Emergency Medicine (Deutsche Interdisziplinäre Vereinigung für Intensiv- und Notfallmedizin, DIVI).

Network, IT-infrastructure, and participation of PICUs
PIA builds on the existing network of children's hospitals, IT-infrastructure, data collection techniques, and regulatory policies of the German Pediatric Surveillance Unit ESPED (www.unimedizinmainz.de/esped). Established in 1992 to support research activities in the field of rare diseases in the general pediatric population, ESPED is the official disease surveillance and research unit of the German Society of Pediatrics and Adolescent Medicine (DGKJ). ESPED provides the infrastructure to conduct nationwide surveillance studies including almost all children's hospitals in Germany and has advanced the field of pediatrics in various subspecialties by providing otherwise unobtainable data on rare pediatric diseases (17)(18)(19)(20)(21)(22).

PIA is coordinated by a steering committee consisting of six individuals from four medical faculties (Essen, Dresden, Mainz, Munich). A local representative of each participating PICU (PIA investigator) ensures data entry and serves as contact person for the network. All German PICUs will be invited to become part of the network, with stepwise PICU enrollment for data entry.

Structural survey of participating PICUs
Upon entrance to the network and annually, a survey on the organizational structure, personnel and equipment of each PICU must be completed by local PIA investigators. The survey is based on the defining characteristics and requirements for PICU levels which are currently being developed by the Association of the Scientific Medical Societies in Germany (AWMF) and will then be made publicly available.
Study population
All patients ≥28 days and >41 + 0 weeks corrected gestational age admitted to a participating PICU are eligible for a basic survey consisting of six items. If the criteria for a detailed survey are not fulfilled, data entry is closed after completion of the basic survey (Figure 1). For cases that fulfill the criteria, a more detailed survey will be performed. Criteria for the detailed survey include age <18 years and a duration of PICU stay ≥48 h or death within the first two days after PICU admission (Figure 1). Patients discharged from the PICU and readmitted during the same hospital stay are considered new cases. Unplanned PICU readmission of a patient within 24 h is assessed by the local investigators and entered as yes/no into a dedicated variable.

Data collection and data protection
Upon PICU discharge, local PIA investigators enter fully anonymized patient data (i.e., without name, detailed date of birth, home address or other identifiers) via eCRFs into a central database stored on a server at the Institute of Medical Biostatistics, Epidemiology, and Informatics (IMBEI) of the University Medical Centre Mainz (Germany), which serves as data custodian (23). Patient data are anonymized in a way that reported cases cannot be re-identified, neither by IMBEI personnel nor by scientists analyzing the data. Since only anonymized data from routine clinical care are collected, no informed consent is needed. The study and data protection protocol were approved by the Ethics Committee at the State Medical Association of Rhineland-Palatinate (study ID: 2022-16893) and the State Representative for Data Protection in Rhineland-Palatinate (study ID: 8223-0001#2023/0002-0104 LfDI). During the course of the project, the entry of the basic survey data will be shifted toward the timepoint of admission in order to comply with the demands of the surveillance purpose of the network.

Items
Demographic and clinical variables were drafted and refined by the authors, who are experts in the fields of pediatric intensive care and pediatric epidemiology, until consensus was achieved. After testing the practicability and feasibility of the proposed survey on real cases at the University children's hospitals of Dresden, Essen, and Dr. von Haunersches Kinderspital Munich, three more rounds of refinement including literature search and expert discussions were conducted. The definitions of variables (data dictionary) are deposited in English and German at the homepage of the PIA network and Mendeley Data (doi: 10.17632/nwh3krvz97.1). Updates of the variable list will be deposited for maximum transparency and to provide common data elements for use in pediatric intensive care research.

Planned refinement of variables during ongoing data collection
After the first year of data collection, data will be assessed for completeness and plausibility, and local investigators of the participating PICUs will be queried for potential improvement of variables. Selected variables will then be revised as appropriate, and local PIA investigators queried regularly to continuously improve the registry.
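A minimal sketch of the survey-allocation rules described under Study population may make the eligibility criteria easier to follow. It is not part of the PIA software; the class, function and field names are hypothetical, and only the criteria stated above (≥28 days and >41 + 0 weeks corrected gestational age for the basic survey; age <18 years plus a stay ≥48 h or death within the first two days for the detailed survey) are encoded.

```python
# Hypothetical illustration of the PIA survey-allocation rules; not the actual eCRF logic.
from dataclasses import dataclass

@dataclass
class Admission:
    age_days: float            # chronological age at PICU admission
    corrected_ga_weeks: float  # corrected gestational age in weeks
    picu_stay_hours: float     # duration of the PICU stay
    died_within_48h: bool      # death within the first two days after admission

def allocate_survey(case: Admission) -> str:
    """Return 'not eligible', 'basic' (six items) or 'detailed' for one admission."""
    # Eligibility for the basic survey: >=28 days old and >41+0 weeks corrected GA.
    if case.age_days < 28 or case.corrected_ga_weeks <= 41.0:
        return "not eligible"
    # Detailed survey: <18 years AND (stay >=48 h OR death within the first two days).
    is_child = case.age_days < 18 * 365.25
    if is_child and (case.picu_stay_hours >= 48 or case.died_within_48h):
        return "detailed"
    return "basic"

print(allocate_survey(Admission(age_days=400, corrected_ga_weeks=95, picu_stay_hours=72, died_within_48h=False)))  # detailed
print(allocate_survey(Admission(age_days=400, corrected_ga_weeks=95, picu_stay_hours=12, died_within_48h=False)))  # basic
```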
Data monitoring and quality assurance
Collected data are checked for completeness and plausibility on a quarterly basis (i.e., four times per year in April, July, October and January for the respective past quarter). Customized center-specific data quality reports are generated and sent to the PIA steering committee and local PIA investigators. Lack of completeness or plausibility (e.g., >2% missing/implausible values) is discussed and possible solutions are elaborated (e.g., improvement of survey items or of the e-CRFs).

Steps of the evaluation
Owing to the early stage of development of the PIA network (including its structures and processes), proof-of-concept aspects will be evaluated in a first step. This includes the performance of the network, the feasibility of data collection and the quality of the collected data. After this proof, data will be evaluated concerning their potential in the fields of clinical research, quality control and disease surveillance (see below).

Assessment of survey methodology
The quality of the survey methods is investigated on an annual basis (i.e., for the preceding calendar year). Quality indicators comprise (i) the nationwide coverage of the PIA network (based on eligible and participating centers), (ii) the completeness of reported cases (by comparing PICU admissions with the number of cases captured by the central database), (iii) the completeness of values per case and possible reasons for incompleteness, and (iv) the amount of, and reasons for, implausible values. For the latter two, multi-level regression analysis will be used with binary indicators for missingness and implausibility as dependent variables and PICU-related context factors and individual demographic and clinical characteristics of cases as independent variables.

Data sharing, accessibility and publication policy
Each participating center has the right to analyze the data entered by its own center at any time without restrictions. Proposals for data analyses of the complete data set can be submitted by eligible investigators specified in a publication guideline. Briefly, all local PIA investigators and members of the steering committee are entitled to submit a proposal. Each proposal will be evaluated for methodological feasibility and scientific relevance by an internal review board. After endorsement, the data will be analyzed by the designated statistician/epidemiologist in collaboration with the initiators of the study. A manuscript draft must be submitted to the internal review board for approval within one year after the initial endorsement. After approval by the internal review board, the manuscript can be submitted to a journal for publication.

Data analysis
Data analysis is carried out only by trained statisticians or epidemiologists using appropriate methods to answer the respective research question. In general, effect estimation is preferred over statistical hypothesis testing whenever appropriate. The original dataset will not be passed on to investigators or made publicly available.

Annual reports
Reports of the network will be published annually. An anonymized ranking of relevant PICU outcomes will be provided to each PICU to promote benchmarking and identify fields for potential improvement.

Affiliation with medical societies
The PIA network is officially endorsed by the GNPI and the Pediatric section of the DIVI.
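To make the quarterly data check described under Data monitoring and quality assurance concrete, the sketch below flags variables whose share of missing or implausible values exceeds the 2% threshold. It is a hypothetical illustration in Python (pandas) with made-up variable names and plausibility ranges; it is not the code that generates the PIA quality reports.

```python
# Hypothetical quarterly data-quality check; variable names and plausibility ranges are invented.
import pandas as pd

# Plausibility ranges per variable (assumed for illustration only).
PLAUSIBLE = {"age_days": (28, 18 * 365.25), "picu_stay_hours": (0, 24 * 365), "weight_kg": (2, 250)}
THRESHOLD = 0.02  # >2% missing or implausible values triggers a discussion

def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Return per-variable missingness/implausibility rates and a flag for the quarterly report."""
    rows = []
    for col, (lo, hi) in PLAUSIBLE.items():
        missing = df[col].isna().mean()
        implausible = (~df[col].between(lo, hi) & df[col].notna()).mean()
        rows.append({"variable": col,
                     "missing": round(missing, 4),
                     "implausible": round(implausible, 4),
                     "flagged": (missing > THRESHOLD) or (implausible > THRESHOLD)})
    return pd.DataFrame(rows)

cases = pd.DataFrame({"age_days": [400, None, 900, 15],        # one missing, one below 28 days
                      "picu_stay_hours": [72, 12, 50, 96],
                      "weight_kg": [12.5, 8.0, 300.0, 4.1]})   # 300 kg is implausible
print(quality_report(cases))
```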
Funding The conceptualization of the network was kindly supported by the Stiftung BINZ (Ulm, Germany).For the nationwide roll-out and permanent consolidation of the registry, additional funding will be sought. Discussion The presented PIA network aims to improve pediatric intensive care medicine in Germany on three levels: For the first time, it will create the possibility to conduct nationwide observational studies on critically ill children, measure and compare treatment quality between pediatric intensive care units, and improve disease surveillance.The initiators' network in the field of pediatric intensive care, experience with multicentered and nationwide surveillance studies, the existing ESPED infrastructure that requires only adaptation instead of a new setup, and broad acceptance of ESPED among pediatricians are prerequisites that promote a successful implementation of the network into the pediatric intensive care landscape in Germany.All criteria for successful PICU registries published by Wetzel (24) are fulfilled. Worldwide, networks and registries are acknowledged as powerful instruments to conduct research in specific patient populations.In pediatric medicine, several registries that include critically ill children have been established to closer monitor subpopulations, e.g., neonates or injured children: The German Neonatal Network (GNN) has strongly influenced neonatal practice by continuously providing evidence on benefits of treatment interventions and unveiling previously unknown associations between risk factors and diseases (10,(25)(26)(27).Likewise, the TraumaRegister DGU ® allows in-depth analyses of severely injured children on all aspects of pre-hospital care, shock room management and the subsequent intensive care unit stay (28-31).The impact of the obtained results has reached far beyond scientific purposes but lead to adjustments of emergency care structures and processes.Deep insights into the airway management of critically ill children have been obtained from the National Emergency Airway Registry for Children (NEAR4KIDS) database located in the United States (13,32,33).Nationwide projects on critically ill children in the actual sense of PICU registries are the Australian and New Zealand Paediatric Intensive Care Registry (ANZPICR) (33)(34)(35) and the British PICAnet, which frequently outputs high impact research (9,11,(36)(37)(38). With these influential networks and registries as role models, the PIA network aims to contribute a building block to advance pediatric intensive care research and ultimately improve the treatment of critically ill children.Nationwide data on all aspects of pediatric critical illness in Germany will be collected and made available for research purposes.This will unveil different treatment strategies between centers and enable comparisons regarding short-term outcomes and complications.With Germany being one of the European countries with the largest population, findings from the PIA network may bring along important insights on an international level and advance population-level pediatric intensive care research.Considering the recent call of the DIVI to establish intensive care registries for research purposes, the PIA network excels this claim with its goals of improving inter-PICU networking, disease surveillance, and quality control. 
In the future, the different levels of PICUs and their contribution to pediatric intensive care provision in Germany can be characterized with the help of PIA for the first time.Annual reports will benchmark PICUs to identify their own strengths and weaknesses to optimize patient care accordingly.The assessment of patient outcomes makes it possible to measure the quality of care and provides the basis for potential PICU certifications in the future.As a beneficial side effect, the fact that outcomes are measured may increase awareness for the need to follow-up PICU patients.Even though not directly recorded, this may also bring into consciousness the recently described post-PICU syndrome (39, 40), hopefully fostering the implementation of structured PICU aftercare programs. Despite thorough planning, the PIA network has some limitations that could not be avoided at the time of conceptualization.The largest limitation is the need for retrospective manual data entry, which further burdens the already-limited human resources in German hospitals.For that reason, the patient population that is monitored in detail is limited to severely ill children.Less severely ill children with PICU stays shorter than 48 h will only be registered with a short survey.Also due to limited resources, follow-up is only short-term, potentially missing out on important long-term sequelae among PICU survivors.In order to not interfere with neonatal registries (German Neonatal Network, Hypothermie Registry), only nonneonatal cases are eligible for the PIA network.This may cause certain subgroups, such as infants with congenital malformations, to remain uncaptured by either registry and require refinement in the future.Because participation in the network is voluntary, a comprehensive surveillance of all German PICUs and all admitted patients will likely remain unachievable.The local PIA investigators are responsible for data entry and no monitoring is available at the timepoint of the network's implementation to ensure the completeness and quality of entered data -close interaction with the participating centers will be maintained in order to encourage active participation in the network.Further, no government-or institutional funding is available at the timepoint of implementation, putting the long-term continuation at risk. 
To overcome these limitations, the PIA network will require constant refinement and advancement of methods.For example, as much data as possible should be automatically transferred to minimize documentation efforts of PICU staff.This requires the consequent pursuit of digitalization which should include the data integration centers located at German university hospitals.With ongoing digitalization of hospital documentation, automated data export may reduce the burden of manual data entry, for example by designing PIA-compatible digital admission forms, extracting routine electronical patient documentation or hospital billing information.These processes will also be fundamental cornerstones to achieve and maintain high up- to-dateness of the registry.To enable prompt reactions, e.g., to infectious waves, real-time or near real-time information is indispensable.The authors' vision is to further develop the registry towards a real-time monitoring tool that represents the current state of pediatric intensive care utilization along with important real-time information of public interest in a dashboard on the homepage of the PIA network.In summary, the PIA network is a well-planned nationwide pediatric intensive care network and registry that envisions to improve the care for critically ill children in Germany in terms of improved research opportunities, quality measurement, and enhanced surveillance of PICU resource utilization.The organizational embedding into the long-established and acknowledged structures of the national surveillance unit ESPED, which belongs to the German Society of Pediatrics and Adolescent Medicine, and the ideational support of the two major German medical societies involved in the care of critically ill children make a successful implementation likely.However, the future success and long-term continuation of the network will depend on its ability to motivate PICU practitioners to engage in the network and to realize technological advancements to facilitate data acquisition. FIGURE 1 Flow FIGURE 1Flow chart of survey allocation for eligible patients to the PIA surveys.PICU, pediatric intensive care unit, *The latest version of the detailed survey is available at https://data.mendeley.com/datasets/nwh3krvz97/1, doi: 10.17632/nwh3krvz97.1.
2024-01-12T16:10:52.099Z
2024-01-10T00:00:00.000
{ "year": 2024, "sha1": "3b3fdd04a4414c87db6d2663d64ceb079cb80cf6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2023.1254935/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ccd914f2fe04219f53a9b7d60f9698ee1d110b5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
39257486
pes2o/s2orc
v3-fos-license
Reduction,Induction and Ricci flat symplectic connections In this paper we present a construction of Ricci-flat connections through an induction procedure. Given a symplectic manifold $(M,\omega)$ of dimension $2n$, we define induction as a way to construct a symplectic manifold $(P,\mu)$ of dimension $2n+2$. Given any symplectic connection $\nabla$ on $(M,\omega)$, we define an induced connection $\nabla^P$ which is a Ricci-flat symplectic connection on $(P,\mu)$. Introduction A symplectic connection on a symplectic manifold (M, ω) is a torsionless linear connection ∇ on M for which the symplectic 2-form ω is parrallel. A symplectic connection exists on any symplectic manifold and the space of such connections is an affine space modelled on the space of symmetric 3-tensorfields on M. In all what follows, the dimension 2n of the manifold M is assumed to be ≥ 4 unless explicitely stated. The curvature tensor R ∇ of a symplectic connection ∇ decomposes [4] under the action of the symplectic group into 2 irreducible components, R ∇ = E ∇ +W ∇ . The E ∇ component is defined only in terms of the Ricci-tensor r ∇ of ∇. All traces of the W ∇ component vanish. Two particular types of symplectic connections thus arize: -symplectic connections for which W ∇ = 0; we call them Ricci-type symplectic connections; -symplectic connections for which E ∇ = 0; they are called Ricci-flat since E ∇ = 0 ⇔ r ∇ = 0. When studying [1] local and global models for Ricci-type symplectic connections, (or more generally [2] so called special symplectic connections) , Lorenz Schwachhöfer and the present authors were lead to consider examples of the following construction: • start with a symplectic manifold (M, ω) of dimension 2n; • build a (cooriented) contact manifold (N, α) of dimension 2n + 1 and a submersion π : N → M such that dα = π * ω; • define on the manifold P = N × R a natural symplectic structure µ. It was observed [1] that if (M, ω) admits a symplectic connection of Ricci type one could "lift" this connection to P and the lifted connection is symplectic (relative to µ) and flat. The aim of this paper is to generalize this result. More precisely we formalize the notion of induction for symplectic manifolds. Starting from a symplectic manifold (M, ω), we define a contact quadruple (M, N, α, π), where N, α and π are as above, and we build the corresponding 2n + 2 dimensional symplectic manifold (P, µ). We prove the following: Theorem 4.1 Let (M, ω) be a symplectic manifold which is the first element of a contact quadruple (M, N, α, π). Let ∇ be an artitrary symplectic connection on (M, ω). Then one can lift ∇ to a symplectic connection on (P, µ) which is Ricci-flat. This theorem has various applications. In particular one has Theorem 5.3 Let (P, µ) be a symplectic manifold admitting a conformal vector field S which is complete, a symplectic vector field E which commutes with S and assume that, for any x ∈ P, µ x (S, E) > 0. Assume the reduction of Σ = {x ∈ P | µ x (S, E) = 1} by the flow of E has a manifold structure M with π : Σ → M a surjective submersion. The paper is organized as follows. In section 1 we study sufficient conditions for a symplectic manifold (M, ω) to be the first element of a contact quadruple and we give examples of such quadruples. Section 2 is devoted to the lift of hamiltonian (resp conformal) vector fields from (M, ω) to the induced symplectic manifold (P, µ) constructed via a contact quadruple. We show that if (M, ω) is conformal homogeneous, so is (P, µ). 
Section 3 describes the structure of conformal homogeneous symplectic manifolds; this part is certainly known but as we had no immediate reference we decided to include it. Section 4 gives some constructions of lifts of symplectic connections of (M, ω) to symplectic connections on the induced symplectic manifold (P, µ) constructed via a contact quadruple. We also prove theorem 4.1. In section 5 we give conditions for a symplectic manifold (P, µ) to be obtained by induction from a contact quadruple (M, N, α, π). We give also a proof of theorem 5.3. 1 Induction and contact quadruples Definition 1.1 A contact quadruple is a quadruple (M, N, α, π) where M is a 2n dimensional smooth manifold, N is a smooth 2n + 1 dimensional manifold, α is a cooriented contact structure on N (i.e. α is a 1-form on N such that α ∧ (dα) n is nowhere vanishing), π : N → M is a smooth submersion and dα = π * ω where ω is a symplectic 2-form on M. Definition 1.2 Given a contact quadruple (M, N, α, π) the induced symplectic manifold is the 2n + 2 dimensional manifold P := N × R endowed with the (exact) symplectic structure µ := 2e 2s ds ∧ p * 1 α + e 2s dp * where s denotes the variable along R and p 1 : P → N the projection on the first factor. Induction in the sense of building a 2n + 2-dimensional symplectic manifold from a symplectic manifold of dimension 2n is also considered by Kostant in [3]. Remark 1.3 • The vector field S := ∂ s on P is such that i(S)µ = 2e 2s (p * 1 α); hence L S µ = 2µ and S is a conformal vector field. • The Reeb vector field Z on N (i.e. the vector field Z on N such that i(Z)dα = 0 and i(Z)α = 1) lifts to a vector field E on P such that: p 1 * E = Z and ds(E) = 0. Since i(E)µ = −d(e 2s ), E is a Hamiltonian vector field on (P, µ). Furthermore • Observe also that if Σ = { y ∈ P | s(y) = 0 }, the reduction of (P, µ) relative to the constraint manifold Σ (which is isomorphic to N) is precisely (M, ω). • For y ∈ P define H y (⊂ T y P ) => E, S < ⊥µ . Then H y is symplectic and (π • p 1 ) * y defines a linear isomorphism between H y and T πp 1 (y) M. Vector fields on M thus admit "horizontal" lifts to P . We shall now make some remarks on the existence of a contact quadruple the first term of which corresponds to a given symplectic manifold (M, ω). Lemma 1.4 Let (M, ω) be a smooth symplectic manifold of dimension 2n and let N be a smooth (2n + 1) dimensional manifold admitting a smooth surjective submersion π on M. Let H be a smooth 2n dimensional distribution on N such that π * x : H x → T π(x) M is a linear isomorphism (remark that such a distribution may always be constructed by choosing a smooth riemannian metric g on N and setting H x = (ker π * x ) ⊥ ). Then either there exists a smooth nowhere vanishing 1-form α and a smooth vector field Z such that ∀x ∈ N we have (i) ker α x = H x (ii) Z x ∈ ker π * x (iii) α x (Z x ) = 1 or the same is true for a double cover of N. Proof Choose an auxiliary riemannian metric g on M and consider N ′ = {Z ∈ T N | Z ∈ ker π * and g(Z, Z) = 1}. If N ′ has two components, one can choose a global vector field Z ∈ ker π * on N and define a smooth 1-form α with ker α = H and α(Z) = 1. If N ′ is connected, N ′ is a double cover of N (p : N ′ → N : Z x → x) and we can choose coherently Z ′ ∈ T Z N ′ by the rule that its projection on T x N is precisely Z. 
✷ This says that if we have a pair (M, N) with a surjective submersion π : N → M we can always assume (by passing eventually to a double cover of N) that there exists a nowhere vanishing vector field Z ∈ ker π * and a nowhere vanishing 1-form α such that α(Z) = 1 and ker α projects isomorphically on the tangent space to M. The vector field Z is determined up to non zero multiplicative factor by the submersion π; on the other hand, having chosen Z, the 1-form α can be modified by the addition of an arbitrary 1-form β vanishing on Z. Ifα = α + β is another choice, the 2-form dα is the pull back of a 2-form on M iff i(Z)dα = 0; i. e. iff: This can always be solved locally. We shall assume this can be solved globally. We shall now give examples of contact quadruples for given symplectic manifolds. The associated induced manifold is P = N × R = M × R 2 ; with coordinates (t, s) on R 2 and obvious identification Example 2 Let (M, ω) be a quantizable symplectic manifold; this means that there is a complex line bundle L p −→ M with hermitean structure h and a connection ∇ on L preserving h whose curvature is proportional to iω. is a contact quadruple. The associated induced manifold P is in bijection with L 0 = L\ zero section; indeed, consider Ψ : Clearly L 0 is a C * principal bundle on M; denote byα the C * -valued 1-form on L 0 representing ∇; if j 1 : N → L 0 is the natural injection and similarly j 2 : iR → C the obvious injection, we have On the other hand the 1-form e 2s p * 1 α = 1 ik e 2s p * 1 j * 2 α ′ ; this shows how the symplectic form µ = d(e 2s p * 1 α) on P is related to the connection form on . Such examples have been studied by Kostant [3]. Notice that Ω vanishes as soon as one of its arguments is in h (=Lie algebra of H). Let Let G 1 be the connected and simply connected group of algebra g, and let H ′ be the Then G 1 /H ′ admits a natural structure of smooth manifold; define N := G 1 /H ′ . Let p 1 : G 1 → G be the homomorphism whose differential is the projection g 1 → g on the first factor; clearly it is a surjective submersion. We shall now construct the contact form α on N: p * 1 • p * ω is a left invariant closed 2-form on G 1 vanishing on the fibers of p • p 1 : G 1 → M. Its value Ω 1 at the neutral element e 1 of G 1 is a Chevalley 2-cocycle of g 1 with values in R. Define the 1-cochain Then i. e. Ω 1 = δα 1 is a coboundary. Letα 1 be the left invariant 1-form on G 1 corresponding to α 1 . Let q : G 1 → G 1 /H ′ = N be the natural projection. We shall show that there exists a 1-form α on N so that q * α =α 1 . For any U ∈ g 1 denote byŨ the corresponding left invariant vector field on G 1 . For any so that indeedα 1 is the pullback by q of a 1-form α on N = G 1 /H ′ . Furthermore dα = π * ω because both are G 1 invariant 2-forms on N and: where we denote by U * N the fundamental vector field on N associated to U ∈ g 1 . Thus Lemma 1.7 Let (M = G/H, ω) be a homogeneous symplectic manifold; let Ω be the value at the neutral element of G of the pull back of ω to G. This is a Chevalley 2 cocycle of the Lie algebra g of G. If g 1 = g ⊕ R is the central extension of g defined by this 2 cocycle and G 1 is the corresponding connected and simply connected group let H ′ be the connected subgroup of Remark 1.8 The center of G 1 is connected and simply connected, hence the central subgroup expt(0, 1) is isomorphic to R. The subgroup p −1 1 (H) is a closed Lie subgroup of G 1 whose connected component is Let X be a hamiltonian vector field on M; i. e. 
Consider the horizontal liftX of X to N defined by α(X) = 0 π * (X) = X, and the liftX ofX to P defined by Let Z be the Reeb vector field on (N, α) and let E be its lift to P defined by Definition 2.1 Define the liftX of a hamiltonian vector field X on (M, ω) as the vector field on P defined by: The vector fieldX is a hamiltonian vector field on (P, µ). Furthermore if g is a Lie algebra of vector fields X on M having a strongly hamiltonian action, then the set of vector fieldsX on P form an algebra isomorphic to g and its action on (P, µ) is strongly hamiltonian. which shows thatX is hamiltonian and that the hamiltonian function is fX = e 2sf X . Also if X, Y ∈ g: ✷ If C is a conformal vector field on (M, ω) we may assume By analogy of what we just did, define the liftC 1 of C to (P, µ) by: Then ThusC 1 is a conformal vector field provided: Or equivalently The left hand side is a closed 1-form. If this form is exact we are able to lift C to a conformal vector fieldC 1 on P . Notice that the rate of variation of b along the flow of the Reeb vector field is prescribed: A variation of this construction reads as follows. Let Then: If we choose l = −1/2 ThusC 2 is a symplectic vector field on (P, µ) if the closed 1-form π * i(C)ω − α is exact. If this is the case the liftC 2 is hamiltonian and Let g be an algebra of conformal vector fields on (M, ω). Let X ∈ g be such that . Then g = RX ⊕ g 1 , where the vector fields associated to the elements of g 1 , are symplectic. We shall assume here that they Consider the lifts of these vector fields to (P, µ). Notice as before that L E µ = 0 and L ∂s µ = −2µ. (ii) If X is a conformal vector field on (M, ω) it admits a conformal (resp. symplectic) lift to (P, µ) if the closed 1-form π * (i(X)ω) − α is exact. The symplectic lift is in fact hamiltonian. (iii) The vector field E on P is hamiltonian and the vector field ∂ s is conformal. The stability of the class of conformally homogeneous spaces under this construction leads us to the study of these spaces. 3 Conformally homogeneous symplectic manifolds Definition 3.1 Let (M, ω) be a smooth connected 2n ≥ 4 dimensional symplectic manifold. A connected Lie group G is said to act conformally on (M, ω) if (i) ∀g ∈ G, g * ω = c(g)ω (ii) There exists at least one g ∈ G such that c(g) = 1. As ω is closed c(g) ∈ R; also c : G → R is a character of G. Let G 1 = ker c; it is a closed, normal, codimension 1 subgroup of G. The 1-parametric group exp tX is such that (exp tX) * ω = e t ω and this group exp tX is thus isomorphic to R. Hence the group G 1 is connected and if G is simply connected so is G 1 . If X * is the fundamental vector field on M associated to X, remark that L X * ω = −ω since X * x = d dt exp −tX · x| 0 . Case (i) By transitivity the dimension of all G 1 orbits is (2n − 1). If we write as above g = g 1 ⊕RX, the vector field X * is everywhere transversal to the G 1 orbits. In particular it is everywhere = 0. Since g 1 is an ideal in g, the group exp tX permutes the G 1 orbits. Notice that for any Y ∈ g 1 . Hence This says that the various orbits of G 1 have "conformally" equivalent contact structure; i. e. as [X * , Y * ]is tangent to the orbit. This says that [X * , Z] is proportional to Z; also Hence [Y * , Z] must be proportional to Z and thus [Y * , Z] = 0 which says that the Reeb vector is G 1 stable. Case (ii) G 1 admits an open orbit. We shall assume that this orbit coincides with M. Thus (M, ω) is a G 1 homogeneous sympletic manifold and ω is exact. 
Assume that the action of G 1 is strongly hamiltonian; i. e. ∀Y ∈ g 1 where U * denotes the fundamental vector field associated to U ∈ g 1 on θ 1 . Then We also haveL X * η = −η. It is no restriction to assume X * ξ = 0 (since one can replace X by X + Y for any Y ∈ g 1 and any tangent vector at ξ can be written in the form Y * ξ ). Assuming G (hence G 1 ) to be connected and simply connected the derivation D exponentiates to a 1-parametric automorphism group of g 1 given by e tD and these "exponentiate" to a 1-parametric automorphism group of G 1 which will be denoted a(t). The product law in G = G 1 · R reads: (g 1 , t 1 )(g 2 , t 2 ) = (g 1 a(t 1 )g 2 , t 1 + t 2 ). As X * ξ = 0 we have: In particular if The above relation at ξ reads: But on θ 1 , ω is the Kostant-Souriau symplectic form; hence That is ξ − ξD vanishes identically on the derived algebra g ′ 1 . Conversely suppose we are given an algebra g 1 , an element ξ ∈ g * 1 and a derivation D of Then, if, as above, H 1 denotes the stabilizer of ξ in G 1 and h 1 its Lie algebra, one observes that Y ∈ h 1 implies DY ∈ h 1 . On the orbit θ 1 = G 1 · ξ = G 1 /H 1 define the vector fieldX atξ = g 1 · ξ by: This can be expressed in a nicer way as: Observe that this expression has a meaning; indeed if we assume that g ∈ H 1 (= stabilizer of ξ) ThusX ξ = 0 and, if h ∈ H 1 : and similarly at any other point, so thatX is a conformal vector field (LX ω = −ω). We conclude by (ii) If the maximum dimension of the G 1 orbits is (2n − 1) M is a union of (2n − 1) dimensional G 1 orbits; each of these orbits is a contact manifold. (iii) If G 1 acts transitively on M in a strongly hamiltonian way, M is a covering of a G 1 orbit θ in g * 1 (= dual of the Lie algebra g 1 of G 1 ). Furthermore if ξ ∈ θ, there exists a derivation D of g 1 such that ξ − ξ • D vanishes on the derived algebra. Conversely if we are given an element ξ ∈ g * 1 and a derivation such that ξ − ξ • D vanishes on the derived algebra, the orbit θ has the structure of a conformal homogeneous symplectic manifold. Induced connections We consider the situation where we have a smooth symplectic manifold (M, ω) of dim 2n, a contact quadruple (M, N, α, π) and the corresponding induced symplectic manifold (P, µ). Recall that P = N × R and µ = 2e 2s ds ∧ p * 1 α + e 2s dp * 1 α where s is the variable along R and p 1 : P → N the projection on the first factor. Let ∇ be a smooth symplectic connection on (M, ω). We shall now define a connection ∇ P on P induced by ∇. Let us first recall some notations: Denote by p the projection p = π • p 1 : P → M. If X is a vector field on M,X is the vector field on P such that We denote by E the vector field on P such that Clearly the values at any point of P of the vector fieldsX, E, S = ∂ s span the tangent space to P at that point and we have The formulas for ∇ P are: where f is a function on M, U is a vector field on M,ŝ is a symmetric 2-tensor on M, and σ is the endomorphism of T M associated to s, henceŝ(X, Y ) = ω(X, σY ). Notice first that these formulas have the correct linearity properties and yield a torsion free linear connection on P . One checks readily that ∇ P µ = 0 so that ∇ P is a symplectic connection on (P, µ). We now compute the curvature R ∇ P of this connection ∇ P . We get The Ricci tensor r ∇ P of the connection ∇ P is given by Theorem 4.1 In the framework described above, ∇ P is a symplectic connection on (P, µ) for any choice ofŝ, U and f . 
The vector field E on P is affine ( LẼ∇ P = 0) and symplectic ( LẼµ = 0); the vector field ∂ s on P is affine and conformal (L ∂s µ = 2µ). Furthermore, choosinĝ we have: • the connection ∇ P on (P, µ) is Ricci flat (i.e. has zero Ricci tensor); • if the symplectic connection ∇ on (M, ω) is of Ricci type, then the connection ∇ P on (P, µ) is flat. • if the connection ∇ P is locally symmetric, the connection ∇ is of Ricci type, hence ∇ P is flat. Proof The first point is an immediate consequences of the formulas above for r ∇ P . The second point is a consequence of the differential identities satisfied by the Ricci The third point comes from the fact that (∇ P Z R ∇ P )(X,Ȳ )T contains only one term in E whose coefficient is 1 2 W ∇ P (X, Y, T, Z). ✷ A reduction construction We present here a procedure to construct symplectic connections on some reduced symplectic manifolds; this is a generalisation of the construction given by P. Baguis and M. Let (P, µ) be a symplectic manifold of dimension (2n + 2). Assume P admits a complete conformal vector field S: Assume also that P admits a symplectic vector field E commuting with S Then Assume P ′ := {x ∈ P |µ x (S, E) > 0} = ∅ and let: Thus Σ = ∅ and it is a closed hypersurface (called the constraint hypersurface). Remark The tangent space to the hypersurface Σ is given by The restriction of µ x to T x Σ has rank 2n − 2 and a radical spanned by E x . Remark thus that the restriction of α to Σ is a contact 1-form on Σ. Let ∼ be the equivalence relation defined on Σ by the flow of E. Assume that the quotient Σ/ ∼ has a 2n dimensional manifold M structure so that π : Σ → Σ/ ∼= M is a smooth submersion. Define as usual the reduced 2-form ω on M by The definition of ω x does not depend on the choice of y. Indeed Clearly ω is of maximal rank 2n as H is a symplectic subspace. Finally Hence ω is closed and thus symplectic. Clearly π * ω = µ |Σ = d(α |Σ ). Remark 5. 1 The symplectic manifold (M, ω) is the first element of a contact quadruple (M, Σ, 1 2 α | Σ , π) and the associated symplectic (2n+2)-dimensional manifold is (P ′ , µ | P ′ ). We shall now consider the reduction of a connection. Let (P, µ), E, S, Σ, M, ω be as above. Let ∇ P be a symplectic connection on P and assume that the vecor field E is affine (L E ∇ P = 0). Then define a connection ∇ Σ on Σ by i. e. ∇ Σ is a torsion free connection and E is an affine vector field for ∇ Σ . Define a connection ∇ M on M by: . If x ∈ M, this definition does not depend on the choice of y ∈ π −1 (x). Also i. e. the connection ∇ M is symplectic. Lemma 5.2 Let (P, µ) be a symplectic manifold admitting a symplectic connection ∇ P , a conformal vector field S which is complete, a symplectic vector field E which is affine and commutes with S. If the constraint manifold Σ = {x ∈ P |µ x (S, E) = 1} is not empty, and if the reduction of Σ is a manifold M, this manifold admits a symplectic structure ω and a natural reduced symplectic connection ∇ M . In particular Theorem 5.3 Let (P, µ) be a symplectic manifold admitting a conformal vector field S (L S µ = 2µ) which is complete, a symplectic vector field E which commutes with S and assume that, for any x ∈ P, µ x (S, E) > 0. If the reduction of Σ = {x ∈ P | µ x (S, E) = 1} by the flow of E has a manifold structure M with π : Σ → M a surjective submersion, then M admits a reduced symplectic structure ω and (P, µ) is obtained by induction from (M, ω) using the contact quadruple (M, Σ, 1 2 i(S)µ | Σ , π). In particular (P, µ) admits a Ricci-flat connection. 
Reducing (P, µ) as above and then inducing back, we see that theorem 4.1 immediately proves this.
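For the reader's convenience, the elementary computation that underlies Remark 1.3, and that is used repeatedly above, can be written out explicitly. The LaTeX display below only unwinds the stated definitions (µ = d(e^{2s} p_1^*α) on P = N × R, S = ∂_s, and E the lift of the Reeb field Z); it adds nothing beyond them.

```latex
% Explicit verification of the properties of S and E stated in Remark 1.3.
\[
  \mu \;=\; d\!\bigl(e^{2s}\,p_1^{*}\alpha\bigr)
      \;=\; 2e^{2s}\,ds\wedge p_1^{*}\alpha \;+\; e^{2s}\,p_1^{*}d\alpha ,
\]
so that $\mu$ is exact, hence closed. For $S=\partial_s$,
\[
  i(S)\mu = 2e^{2s}\,p_1^{*}\alpha ,
  \qquad
  L_S\mu = d\,i(S)\mu + i(S)\,d\mu = d\bigl(2e^{2s}p_1^{*}\alpha\bigr) = 2\mu ,
\]
so $S$ is a conformal vector field. For the lift $E$ of the Reeb field $Z$
(so that $p_{1*}E = Z$ and $ds(E)=0$, while $i(Z)d\alpha = 0$ and $\alpha(Z)=1$),
\[
  i(E)\mu
  = 2e^{2s}\bigl(ds(E)\,p_1^{*}\alpha - (p_1^{*}\alpha)(E)\,ds\bigr)
    + e^{2s}\,p_1^{*}\bigl(i(Z)d\alpha\bigr)
  = -2e^{2s}\,ds
  = -\,d\bigl(e^{2s}\bigr),
\]
so $E$ is a Hamiltonian vector field on $(P,\mu)$, as asserted.
```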
2017-09-07T05:45:31.810Z
2005-09-01T00:00:00.000
{ "year": 2005, "sha1": "a5653074dc134a80eed1aeb628851567621c1f4c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0509014v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a5653074dc134a80eed1aeb628851567621c1f4c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
12080666
pes2o/s2orc
v3-fos-license
Combining independent de novo assemblies optimizes the coding transcriptome for nonconventional model eukaryotic organisms Background Next-generation sequencing (NGS) technologies are arguably the most revolutionary technical development to join the list of tools available to molecular biologists since PCR. For researchers working with nonconventional model organisms one major problem with the currently dominant NGS platform (Illumina) stems from the obligatory fragmentation of nucleic acid material that occurs prior to sequencing during library preparation. This step creates a significant bioinformatic challenge for accurate de novo assembly of novel transcriptome data. This challenge becomes apparent when a variety of modern assembly tools (of which there is no shortage) are applied to the same raw NGS dataset. With the same assembly parameters these tools can generate markedly different assembly outputs. Results In this study we present an approach that generates an optimized consensus de novo assembly of eukaryotic coding transcriptomes. This approach does not represent a new assembler, rather it combines the outputs of a variety of established assembly packages, and removes redundancy via a series of clustering steps. We test and validate our approach using Illumina datasets from six phylogenetically diverse eukaryotes (three metazoans, two plants and a yeast) and two simulated datasets derived from metazoan reference genome annotations. All of these datasets were assembled using three currently popular assembly packages (CLC, Trinity and IDBA-tran). In addition, we experimentally demonstrate that transcripts unique to one particular assembly package are likely to be bioinformatic artefacts. For all eight datasets our pipeline generates more concise transcriptomes that in fact possess more unique annotatable protein domains than any of the three individual assemblers we employed. Another measure of assembly completeness (using the purpose built BUSCO databases) also confirmed that our approach yields more information. Conclusions Our approach yields coding transcriptome assemblies that are more likely to be closer to biological reality than any of the three individual assembly packages we investigated. This approach (freely available as a simple perl script) will be of use to researchers working with species for which there is little or no reference data against which the assembly of a transcriptome can be performed. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1406-x) contains supplementary material, which is available to authorized users. Background RNA-Seq is a flavour of NGS that can generate extremely powerful datasets for a variety of research themes. Gene discovery, digital gene expression profiling of entire tissues or developmental stages and population genetics [1,2] are some of the broad applications to which this technology can be applied. For researchers working with nonconventional model organisms RNA-Seq is alluring because such analyses are often touted as being possible in the absence of an assembled genome to which such transcriptome data is ideally mapped. In these cases the researcher faces the significant bioinformatic challenge of accurately assembling an RNA-Seq dataset "de novo" [3]. 
This is a challenge because the currently dominant NGS platform (Illumina) requires nucleic acid samples to be fragmented prior to sequencing, a process that needs to be accurately bioinformatically reversed in order to reconstruct the original transcripts. Additionally, typical Illumina read lengths are much less than 500 bp long [4]. These features result in both genome guided and de novo transcriptome assembly approaches displaying a large number of bioinformatically derived artefacts, a phemonenon that is well known [3]. The challenge of generating an accurate assembly of a transcriptome has generated many responses from the scientific community [5][6][7][8][9], with each assembly package having its own strengths and weaknesses. One de novo assembly strategy has been to generate multiple assemblies with different k-mer values, to combine these and then remove the redundancy of the resulting merged assembly [10]. However this approach first requires the user to identify an appropriate range of k-mer values (not a trivial exercise), and may ultimately require the production of up to 62 transcriptomes for a single dataset [10]. Related to this issue is assessment of assembly quality. This issue is highlighted when one considers that different assembly packages applied to the same raw dataset usually generate markedly different outputs, even with the same assembly parameters, [10][11][12]. Any critical user would ask "which assembly is appropriate for my project?". For datasets with high proportions of "novel" genes (often the case for nonconventional model organisms), this problem has few solutions that can be generally applied to all datasets. Statistics such as the N50, average transcript size or coverage are not usually informative nor relevant when assessing the quality of an RNA-Seq assembly [13]. Another approach is to focus on the annotatability of a given assembly. In combination with standard sequence similarity searches against public databases, the recently released BUSCO (Benchmarking Universal Single-Copy Orthologs) package falls under this umbrella, and can be used to assess the completeness of a given transcriptome or genome assembly [14]. Having been through the process of de novo transcriptome assembly optimization with our nonconventional model Lymnaea stagnalis (a freshwater pulmonate mollusc), we have developed a simple strategy that takes the consensus coding features of a set of three (or more) independent assembly packages, and discards redundancy. This is not a new assembly method, but a way to survey the outputs of different assembly packages in order to generate a transcritpome that aims to be closer to the biological truth. We test our approach on simulated reads derived from the reference genomes of a fly and a nematode, and also on previously analyzed and publicly available raw RNA-Seq data derived from four eukaryotic lineages: two plants, a yeast, a fly and a nematode. In addition, we analyzed new RNA-Seq data derived from our model organism, Lymnaea stagnalis. For each dataset, we performed de novo assemblies with three independently developed and widely used software packages (Trinity [15], CLC Genomics Workbench V8.5 and IDBA-tran V1.1.1 [16]). The outputs of these assemblies were then processed through our pipeline. We demonstrate both bioinformatically (using a range of annotation based comparisons) and by validation in the lab for the L. stagnalis data, that this approach does indeed capture the most 'biologically correct' set of transcripts. 
Raw data acquisition We used Illumina NGS data previously reported from three well-established model organisms: Drosophila simulans, Caenorhabditis sp and the unicellular eukaryote Saccharomyces cerevisiae. To increase the phylogenetic diversity of this selection we also included two plants, Hippophae rhamnoides and Nicotiana benthamiana. We also sampled the foot tissue of an individual Lymnaea stagnalis from our lab culture, and extracted total RNA following the protocol described in [17]. A stranded Truseq polyA library was constructed and paired end sequencing was performed on the Illumina HiSeq2000 platform. 46.5 million pairs raw reads were generated. 42.3 millon of these passed trimming and quality filtering and were used in all subsequent assembly analyses (Additional file 1: Table S1). The raw RNA-seq data for Drosophila simulans, Caenorhabditis sp., Saccharomyces cerevisiae, Hippophae rhamnoides and Nicotiana benthamiana were obtained from the NCBI sequence read archive (SRA) (respective accession numbers: SRR1956911, ERR690851, SRR1924287, SRP011938 and SRA066161 (single end data omitted). The FASTQ data files were extracted using the SRA tool kit provided by NCBI. For all datasets, individual reads were quality filtered using Trimmomatic V0.32 [18] with the following parameters: LEADING:5 TRAILING:5 MINLEN:36 (step 2 in Fig. 1). For L. stagnalis, TruSeq primer sequences were clipped with the following parameter: ILLUMINACLIP:primer_file:2:30:10. The five datasets used in this study contained between 46,166,144 and 230,477,122 pairs of Illumina RNA-seq reads with read lengths of 100 bp, except for S. cerevisiae which had read lengths of 50 bp (Additional file 1: Table S1). Between 83 and 100% of the read pairs passed Trimmomatic quality checks (Additional file 1: Table S1). These quality filtered reads were used for our analyses. Generation of simulated Illumina reads from genomic references We also generated artificial reads derived from the reference genomes of two well-established model organisms: Drosophila melanogaster and Caenorhabditis elegans. Genomic reference sequences and gff annotations were donwloaded from NCBI database for D. melanogaster (GCF_000001215.4_Release_6_plus_ISO1_MT) and C. elegans (GCF_000002985.6_WBcel235). Gff annotations were transformed into gtf format using the 'rsem-gff3-to-gtf ' command from the Rsem package with the option mRNA for the RNA-pattern parameter [19]. Some annotations had to be deleted because strand information was not consistent with other records of the same transcript or CDS. The gtf files contained 30,421 transcripts for D. melanogaster and 28,014 for C. elegans. The D. melanogaster genome is composed of 1870 sequences and the C. elegans genome is composed of 7 sequences. Transcripts were extracted from D. melanogaster and C. elegans genomes using 'rsem-prepare-reference' from the Rsem package with the options mRNA for gff3-RNApatterns and RefSeq for trusted-sources. Fifty Million read pairs were generated using the Flux simulator complete pipeline with simulated expression [20]. Library construction and sequencing simulation parameters for D. melanogaster are provided in Additional file 2. These artificially generated reads were also analyzed to calculate the read density per transcript. In order to represent variation in gene expression levels Flux simulator does not simulate reads on all input transcripts. We therefore removed transcripts that lacked simulated reads for all downstream analyses. 
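For illustration, the quality-filtering step described above could be scripted as follows. This is a hedged sketch, not the commands actually run for the study: the file names, jar path and thread count are placeholders, and only the Trimmomatic settings quoted in the text (ILLUMINACLIP:primer_file:2:30:10 for the L. stagnalis library, LEADING:5, TRAILING:5, MINLEN:36) are taken from the Methods.

```python
# Hypothetical wrapper around the Trimmomatic V0.32 call described in the Methods; paths are placeholders.
import subprocess

def trim_paired(r1, r2, out_prefix, primer_file=None, jar="trimmomatic-0.32.jar", threads=4):
    """Quality-filter one paired-end library with the trimming parameters used in this study."""
    steps = ["LEADING:5", "TRAILING:5", "MINLEN:36"]
    if primer_file:  # adapter clipping was only applied to the L. stagnalis TruSeq library
        steps.insert(0, f"ILLUMINACLIP:{primer_file}:2:30:10")
    cmd = ["java", "-jar", jar, "PE", "-threads", str(threads),
           r1, r2,
           f"{out_prefix}_1P.fastq.gz", f"{out_prefix}_1U.fastq.gz",
           f"{out_prefix}_2P.fastq.gz", f"{out_prefix}_2U.fastq.gz",
           *steps]
    subprocess.run(cmd, check=True)

# Example usage (placeholder file names):
# trim_paired("lib_R1.fastq.gz", "lib_R2.fastq.gz", "lib_trimmed", primer_file="truseq_primers.fa")
```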
As for Illumina datasets, simulated reads were quality filtered using Trimmomatic V0.32 [18] with the following parameters: LEADING:5 TRAILING:5 MINLEN:36 (step 2 in Fig. 1). In both datasets, 99.4% of the read pairs passed Trimmomatic quality checks (Additional file 1: Table S1). These reads were used for our analyses. Transcriptome assemblies We selected three assembly packages with unique assembly strategies for our investigation: Trinity V2.0.3 [15], CLC Genomics Workbench V8.5 and IDBA-tran [16]. While all of these packages employ the De Bruijn method to perform their assemblies, CLC and Trinity use a single k-mer method whereas IDBA-tran uses a multiple k-mer method. In addition, while CLC and IDBA-tran produce a single De Bruijn graph (per k-mer for IDBA-tran and for the whole dataset for CLC), Trinity produces one De Bruijn graph per transcript, which are subsequently processed independently in order to extract all splice isoforms and to separate paralogous genes. For each of the eight datasets we performed one assembly with each assembly package, resulting in three independent assemblies per dataset (step 3 in Fig. 1). CLC assemblies were run using a word of 20 base pairs (bp), a bubble size of 50 bp, with reads mapped back to the transcriptome using default parameters. IDBA_tran assemblies were run with k-mer values ranging from 20 to 100 bp with a step size of 10 bp. Trinity assemblies were run with default parameters and a k-mer value of 25. For each of these independent assemblies we recorded a variety of statistics (number of transcripts, smallest transcript, largest transcript, median transcript size, total assembly size, N50; step 4 in Fig. 1). Concatenated-assembly generation Once the three individual assemblies for a given dataset had been generated we next produced a concatenated assembly. To do this we harmonized all assembly outputs into the same format. Transcript names were also modified so that the origin of each sequence in the concatenated-assembly could be traced (step 4 in Fig. 1). We then performed an intra-assembly clustering step in order to remove all strictly redundant transcripts present within each of the individual sub-assemblies for each dataset (step 5 in Fig. 1). For this clustering step we used CD-HIT-EST [21] with ten threads (-T), a maximum memory of 2549 megabytes (-M), local sequence identity (-G 0) with identity parameter of 100% (-c 1.00), minimal coverage ratio of the shorter sequence of 100% (-aS 1.00) and minimal coverage ratio of the longest sequence of 0.005% (-aL 0.005). The minimal ratio of the longest sequence was chosen in order to allow clustering of the whole range of transcript sizes. The resulting unique transcripts derived from each of the 3 assemblies for each dataset were then concatenated (step 6 in Fig. 1). TransDecoder V2.0.1 [22] was then used to detect open reading frames (ORFs) greater than 100 amino acids (step 7 in Fig. 1). The resulting coding sequence (CDSi.e. with 5' and 3' UTRs removed), were then clustered again using CD-HIT-EST with minimal coverage ratio of the longest sequence of 0.005% (-aL 0.005), but a slightly lower sequence identity than the previous clustering step (-c 0.98) in order to take in consideration the Illumina sequencing error rate (step 8 in Fig. 1). The only parameter that can vary between clustering runs at this stage was the minimal coverage ratio of the shorter sequence (-aS). This parameter had values that ranged from 75 to 100% (100, 99, 98, 97, 96, 95, 90, 85, 80 and 75%). 
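The concatenated-assembly workflow can be summarised as two CD-HIT-EST clustering passes around a TransDecoder ORF-calling step. The sketch below re-expresses those steps in Python; it is not the published concatenator.pl (Additional file 3), the binary paths, file names and helper names are placeholders, and only the clustering parameters quoted above are taken from the text.

```python
# Hypothetical re-sketch of the concatenation workflow (Fig. 1, steps 5-8); not the published concatenator.pl.
import subprocess
from pathlib import Path

CDHIT = "cd-hit-est"                   # placeholder paths to the two required binaries
TRANSDECODER = "TransDecoder.LongOrfs"

def cdhit(fasta, out, c, aS, aL=0.005, threads=10, mem=2549):
    """One CD-HIT-EST pass with the identity/coverage settings described in the Methods."""
    subprocess.run([CDHIT, "-i", str(fasta), "-o", str(out),
                    "-T", str(threads), "-M", str(mem), "-G", "0",
                    "-c", str(c), "-aS", str(aS), "-aL", str(aL)], check=True)
    return Path(out)

def concatenate(assemblies, workdir, aS_final=0.95):   # aS_final is one of the tested values (0.75-1.00)
    work = Path(workdir); work.mkdir(exist_ok=True)
    # Step 5: remove strictly redundant transcripts within each assembler's output (100% identity).
    deduped = [cdhit(fa, work / f"{Path(fa).stem}.nr.fasta", c=1.00, aS=1.00) for fa in assemblies]
    # Step 6: concatenate the non-redundant transcripts (names are assumed to carry an assembler prefix).
    merged = work / "concatenated.fasta"
    merged.write_text("".join(p.read_text() for p in deduped))
    # Step 7: call ORFs of at least 100 amino acids with TransDecoder.
    subprocess.run([TRANSDECODER, "-t", merged.name, "-m", "100"], check=True, cwd=work)
    cds = work / f"{merged.name}.transdecoder_dir" / "longest_orfs.cds"
    # Step 8: cluster the CDS at 98% identity; -aS is the tunable coverage ratio of the shorter sequence.
    return cdhit(cds, work / "concatenated.cds.clustered.fasta", c=0.98, aS=aS_final)

# concatenate(["clc.fasta", "trinity.fasta", "idba_tran.fasta"], "consensus_run")
```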
An -aS value was retrospectively selected in order to generate the most concise assembly (see below under in silico testing of assemblies). The resulting cluster info file (*.clstr) was retained in order to identify the transcript that generated the longest CDS of each cluster, and also all other transcripts of this cluster for further analyses. We mined the cluster information file to determine the assembly origins of each CDS and to calculate the CDS extension size (see below). The consensus of each cluster was then classified into one of seven categories: 1. The cluster consensus was present in all three assembler outputs 2. The cluster consensus was present in CLC and IDBA_tran outputs 3. The cluster consensus was present in CLC and Trinity outputs 4. The cluster consensus was present IDBA_tran and Trinity outputs 5. The cluster consensus was present only present in the CLC output 6. The cluster consensus was present only present in the IDBA_tran output 7. The cluster consensus was present only present in the Trinity output The perl script (concatenator.pl) used to perform all of these steps is provided in the Additional file 3. As input this script requires the path to a directory containing the assembly outputs to concatenate, and the paths to two binary files: CD-HIT-EST and TransDecoder. Variable options include the nucleotide identity and the minimal coverage ratio of the shortest sequence for the CDS clustering step (step 8 in Fig. 1). Transcriptome assembly quality control In order to test the quality of the assemblies generated by our pipeline we adopted two approaches, an annotatability based approach (applied to all datasets), and in vitro validation (applied to our L. stagnalis dataset). Annotatability of assemblies These analyses were performed with two different goals in mind. The first was to retrospectively determine the best minimal coverage ratio (aS value) for the final clustering step (in order to minimize redundancy and loss of information). To this end, we performed BLASTx searches for each of the above listed aS values, and BUSCO analyses for all assemblies based on Illumina datasets [14]. For BLASTx searches the e-value was set to 1e-3. A perl script was used to count the number of CDSs with a BLASTx hit. In addition, the number of unique BLASTx hits were counted. These values were compared across the different assemblies in order to identify at which aS value the concatenated-transcriptome began to lose information. The second goal was to evaluate any improvement that our concatenated-assembly approach gave relative to each of the individual assemblers. We applied Transdecoder to the transcripts generated by each individual assembler with the same parameters as described above. Subsequent BLASTx searches were also performed as described above for the concatenated-assembly. In addition, BUSCO analyses of individual and concatenated transcriptomes were also compared. in vitro validation of the L. stagnalis assemblies We performed a small scale in vitro validation of our new L. stagnalis transcriptome data using 10 randomly selected transcripts from each of the following categories outlined above: 1, 5, 6 and 7. Although this is a small sample compared to the overall transcriptome size, the resulting trends were informative. Transcripts were selected randomly using a perl script (Additional file 4). We designed primer pairs for each of these 40 selected transcripts with a melting temperature of 60°C using Primer3 [23]. 
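The cluster categorization and the random draw of clusters for validation were performed with the perl scripts provided as additional files; a hedged Python sketch of the same idea is shown below. It assumes transcript IDs carry an assembler prefix (clc_/idba_/trinity_) and follows the category numbering listed above; both are assumptions about naming rather than the exact behaviour of concatenator.pl.

```python
# Mine the final CD-HIT .clstr file to assign each uniCDS cluster to one of
# the seven origin categories and draw random clusters for validation.
import random
import re
from collections import defaultdict

CATEGORY = {frozenset(["clc", "idba", "trinity"]): 1,
            frozenset(["clc", "idba"]): 2,
            frozenset(["clc", "trinity"]): 3,
            frozenset(["idba", "trinity"]): 4,
            frozenset(["clc"]): 5,
            frozenset(["idba"]): 6,
            frozenset(["trinity"]): 7}

def cluster_categories(clstr_path):
    members = defaultdict(set)
    cluster = None
    with open(clstr_path) as fh:
        for line in fh:
            if line.startswith(">Cluster"):
                cluster = line.split()[1]
            else:
                name = re.search(r">(\S+?)\.\.\.", line).group(1)
                members[cluster].add(name.split("_")[0].lower())
    return {c: CATEGORY[frozenset(origins)] for c, origins in members.items()}

cats = cluster_categories("uniCDS.fa.clstr")
for wanted in (1, 5, 6, 7):  # categories sampled for in vitro validation
    pool = [c for c, cat in cats.items() if cat == wanted]
    print(wanted, random.sample(pool, min(10, len(pool))))
```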
RT-PCR was performed on foot total RNA isolated from three L. stagnalis individuals (RNA derived from the individual used for NGS sequencing was not used in this exercise). Reverse transcription reactions were performed in a final volume of 25 μL as follows. One microgram of high quality total RNA was combined with 200 μmols of random hexamer and water to a final volume of 10 μL. This mix was put at 70°C for 5 min in order to melt RNA secondary structure and allow primer annealing. The mix was then cooled to room temperature. We then added to each reaction Promega 5X MMLV-RT buffer (final concentration 1X), dNTPs (final concentration of 0.4 mM), 200 Units of MMLV-RT H − mutant (Promega), and water to a final volume of 25 μL. For each reaction we performed a positive reverse transcription (RT+) containing all components mentioned above, and a negative reaction where MMLV-RT was replaced by water (RT-) to control for contaminating genomic DNA. Both RT+ and RT-reactions were incubated at room temperature for 10 min, and then heated to 42°C for 90 min. The reactions were then heated to 70°C for 15 min to inactivate the MMLV-RT. Single stranded cDNA was stored at -20°C. PCR reactions were performed in a final volume of 25 μL containing the following: a final concentration of 1X enzyme reaction buffer, 0.2 mM dNTPs, 0.2 μM forward and reverse primers, 0.5 U Q5 polymerase (NEB), 1 μL of cDNA template and water to a final volume of 25 μL. Thermocycling was were performed in a Senso-Quest thermocycler with the following steps: 94°C for 10 min, 35 cycles with denaturation at 94°C for 30 s, primer annealing at 55°C for 30 s, DNA synthesis at 72°C for 3 min with a final elongation step at 72°C for 10 min. PCR products were loaded onto a 2% agarose gel containing ethidium bromide and electrophoresed at 130 V for 40 to 50 min and then visualized under UV light. For each primer pair, results were considered congruent when all three replicate RT+ reactions contained a distinct band at the expected size, and all three replicate RT-reactions were negative. A result was considered incongruent in any other case for the RT+ reactions. Reactions with negative and incongruent results were repeated a second time to confirm the results. Individual transcriptome assemblies In order to broadly compare the outputs of the individual assemblers (CLC, Trinity and IDBA_tran) with our concatenated assemblies, we calculated some standard assembly metrics that are commonly used to characterize these kinds of datasets [13]. While each assembly output displayed different characteristics, a consistent pattern could be observed. Assemblies produced by Trinity always produced the highest numbers of transcripts and the largest transcriptome sizes (as measured by cumulating transcript lengths), whereas CLC generated assemblies with the lowest numbers of transcripts (except for the S. cerevisiae and C. elegans samples), and the smallest transcriptome sizes (Additional file 5: Table S2). Number of input reads did not have any influence on these metrics. Indeed, S. cerevisiae dataset has approximately twice the number of input reads than all other samples, and the smallest transcriptome output regardless of the assembly software. In general CLC and IDBA_tran produced 2.2 to 4.6 and 1.8 to 3.6 times fewer transcripts than Trinity respectively. N50 values for all assemblers lay between 405 and 4056 bp. IDBA_tran consistently generated the longest N50s, and CLC generally the smallest (except for the S. cerevisiae, H. rhamnoides and C. 
elegans datasets; Additional file 5: Table S2). However we must point out that the number of transcripts and N50 values will be biased by differences in the smallest transcript size assembled by each software (300 bp for IDBA-tran, 211 for CLC and 201 for Trinity; Additional file 5: Table S2), and also by the biological realities of these transcriptomes -longer N50 values do not necessarily reflect a better transcriptome assembly. The longest transcript sizes varied from 8609 to 51,362 bp, with the longest transcripts generated by Trinity (except for the H. rhamnoides, N. benthamiana, D. melanogaster and C. elegans datasets where it was generated by IDBAtran). Interestingly, for some datasets the longest transcript size varied by more than two fold according to the assembler used (Additional file 5: Table S2). These general observations confirm previous reports that the use of different assemblers (even though they are all based on the construction of de Bruijn graphs), generate significantly different final assemblies [12,[24][25][26]. This led us to explore the possibility of combining these assemblies and removing any redundancy. Concatenated transcriptome assemblies The main goal of our concatenated assembly approach was to improve assembly accuracy without generating a bloated assembly. In order to first remove intraassembly redundancy, a stringent clustering step (100% sequence identity on 100% of the shorter sequence length) was performed for each individual sub-assembly (step 5 in Fig. 1). For all datasets, the redundancy rate was zero for all IDBA_tran assemblies and below 0.02% for all CLC assemblies (Additional file 6: Table S3). For Trinity transcriptomes, the redundancy rate was always significantly higher and ranged between 0.02 and 30% (Additional file 6: Table S3). The redundancy in the Trinity assemblies was also higher in the two simulated datasets (27 and 30%) than in the Illumina datasets (maximum 11%) (Additional file 6: Table S3). Higher intra-Trinity redundancy is probably due to the fact that Trinity is the only assembler to produce one de Bruijn graph per transcript, and subsequently processes them one by one, whereas CLC and IDBA_trans produce only one graph overall. The non-redundant transcripts produced by each assembler for each dataset were then pooled and TransDecoder was used to detect putative ORFs with a size of ≥100 amino acids. The resulting datasets had concatenated transcriptomes with 25,854 to 885,944 transcripts, and TransDecoder detected between 22,180 and 379,596 putative ORFs (Additional file 7: Table S4). Both simulated datasets fell within the range described by the Illumina datasets, while the proportion of the simulated transcriptomes predicted to be coding was higher (Additional file 7: Table S4). This is most probably due to the fact that simulated reads were derived from mRNA molecules. The next critical step was to cluster the CDSs produced by Transdecoder in order to obtain the most concise coding transcriptome while minimizing information loss (step 8 in Fig. 1). To do this we used CD-HIT-EST with the nucleotide identity level set to 98% in order to be more conservative than the average Illumina sequencing error rate of 1%. The size ratio of the longest transcript to the overall transcript was set to 0.5% in order to include the shortest transcripts. The size ratio of the shortest transcript to the overall transcript (-aS) varied from 100 to 75% (see below). 
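The -aS sweep described above can be run as a simple loop, extending the single clustering call sketched earlier; file names are placeholders.

```python
# Cluster the predicted CDSs once per candidate -aS value so that the most
# concise assembly can be chosen retrospectively (step 8 in Fig. 1).
import subprocess

AS_VALUES = [1.00, 0.99, 0.98, 0.97, 0.96, 0.95, 0.90, 0.85, 0.80, 0.75]

for a_s in AS_VALUES:
    out = f"uniCDS.aS{int(a_s * 100)}.fa"
    subprocess.run(
        ["cd-hit-est", "-i", "longest_orfs.cds", "-o", out,
         "-c", "0.98", "-aL", "0.005", "-aS", f"{a_s:.2f}",
         "-G", "0", "-T", "10", "-M", "2549"],
        check=True)
```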
To evaluate the amount of information lost at this step, we used annotation-based metrics [13] that make more biological sense than metrics such as N50 or transcript size (however these can be found in Additional file 5: Table S2). BLASTx searches against Swiss-Prot database for each aS value were performed to determine the impact of the aS value in the above clustering step. It showed that the number of unique database entries decrease at 99% for both Illumina derived and simulated datasets (Additional file 8: Table S5). In addition, BUSCO analyses also showed that the completeness of each assembly began to decrease at an aS value of 99% for each sample. On the basis of these analyses (and to be the most conservative), the smallest aS value was set to 100% for all datasets. Nevertheless, it should be kept in mind that according to the dataset and the type of downstream analysis to be performed, a lower aS value may be more appropriate. After this clustering exercise, between 54 and 68% of the CDSs from each dataset were found to be redundant at the nucleotide level (Table 1). C. elegans dataset is in the range of the Illumina datasets whereas D. melanogaster is two point higher than the highest Illumina dataset, which is D. simulans. Our concatenated coding transcriptomes ranged in size from 9744 transcripts for S. cerevisiae to 127,526 transcripts for N. benthamiana (Table 1). Whatever the raw data origin, the number of transcripts in the concatenated assembly is less than the number of uniCDSs, and the number of CDSs in the concatenated assembly is more than the number of uniCDSs (Table 1). This is because transcripts often possessed more than one CDS (Table 1). Transcripts with multiple CDSs also influenced the number of redundant CDSs in our concatenated assemblies (between 3 and 22%, Table 1). The proportion of CDS redundancy for the simulated D. melanogaster data is within the range of all Illumina datasets, while the simulated C. elegans is 5% lower than the smallest Illumina dataset (H. rhamnoides 8%). Transcripts with multiple CDSs may be the result of sequencing or assembly errors, the activity of transposable elements such as group-II intron or transposases that get inserted in genes [27], or operon transcription [28]. Compared to the individual assemblies generated by CLC, Trinity and IDBA-tran (Additional file 5: Table S2), the concatenated assemblies of L. stagnalis, D. simulans and N. benthamiana contained fewer transcripts than any of the individual sub-assemblies, whereas for S. cerevisiae, Caenorhabditis sp, H. rhamnoides, D. melanogaster and C. elegans the number of transcripts within the concatenated assemblies were within range of those produced by the individual assemblers. Considering the total transcriptome sizes, the concatenated assemblies were similar to the individual assemblies, but were always larger than the CLC generated assemblies (Table 1; Additional file 5: Table S2). For the S. cerevisiae dataset the concatenated transcriptome was larger than all individual assemblies (Table 1; Additional file 5: Table S2). Finally, the N50s of the concatenated assemblies were higher than all of the individual assemblies except for the S. cerevisiae and C. elegans dataset. This suggests that most of the transcripts removed during our concatenation and filtering steps had small sizes. These statistics also show that our pipeline did not increase the overall transcriptome size compared to the individual assemblers. 
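For reference, the basic size metrics quoted here (number of sequences, cumulative size, N50) can be computed with a short generic helper such as the one below; it is a sketch, not the script used to produce the tables in the additional files.

```python
# Generic assembly-size metrics from a plain FASTA file.
def fasta_lengths(path):
    lengths, current = [], 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if current:
                    lengths.append(current)
                current = 0
            else:
                current += len(line.strip())
    if current:
        lengths.append(current)
    return lengths

def n50(lengths):
    total, running = sum(lengths), 0
    for size in sorted(lengths, reverse=True):
        running += size
        if running * 2 >= total:
            return size

lengths = fasta_lengths("uniCDS.aS100.fa")
print(len(lengths), sum(lengths), n50(lengths))
```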
In some cases the overall transcriptome size even decreased considerably (Table 1). This phenomenon has also been previously observed in other plant datasets [6,10]. The N50 values also suggest that our pipeline generates coding transcriptomes that have larger average transcript sizes than assemblies generated by the individual assemblers [13]. In order to further assess the performance of our pipeline, we compared transcripts generated by the three individual assemblers and our pipeline with the original transcripts from which artificial reads were generated for both the D. melanogaster and C. elegans datasets (Table 2). These comparisons were performed with BLASTn and we only considered hits with a nucleotide identity of 98% covering at least 50% of the original transcript. In both datasets, CLC failed to recover the highest proportion of genuine transcripts (21% for D. melanogaster and 34% for C. elegans), while our concatenated assemblies failed to recover the lowest proportion of genuine transcripts (11% for D. melanogaster and 29% for C. elegans). In general, these concerningly high values are similar to observations previously made on human and worm de novo transcriptome assemblies [3]. For both the D. melanogaster and C. elegans datasets, most of the missing genuine transcripts in the concatenated assembly (85% for D. melanogaster and 80% for C. elegans) had read coverages of less than 10X, whereas most of the successfully recovered transcripts (82% for D. melanogaster and 89% for C. elegans) had read coverages higher than 10X. Interestingly, and in contrast to the missing genuine transcripts, up to 27% of the assembled transcripts were not present in the original transcript set (representing bioinformatically 'invented' transcripts). IDBA_tran produced the lowest proportion of invented transcripts (14% for both D. melanogaster and C. elegans), whereas Trinity produced the highest proportion of invented transcripts in D. melanogaster (25%) and CLC in C. elegans (25%). In C. elegans, our concatenated assembly had a higher proportion of invented transcripts than any single assembler, whereas in D. melanogaster it had a lower proportion than CLC and Trinity (Table 2). (Footnotes to Table 1: a, step number in Fig. 1; b, proportion of discarded CDSs indicated in brackets; c, proportion of transcripts with >1 CDS indicated in brackets; d, proportion of non-unique CDSs indicated in brackets.) Evaluation of concatenated assemblies In order to study the composition of the final uniCDS clusters in our concatenated assemblies we assigned all clusters to one of seven categories (Fig. 2). The resulting pattern was consistent across all datasets and all aS ratios used (75-100%) in the clustering step (data not shown). CDS clusters primarily belonged to either category 1 (the cluster was present in all three individual sub-assemblies following concatenation and redundancy filtering) or category 6 (the cluster was only present in the Trinity assembly) (Fig. 2). Of all three individual assemblers, Trinity consistently generated the most unique clusters (except for C. elegans), while CLC consistently generated the fewest unique clusters (except for C. elegans) (Fig. 2). In order to compare these distributions between samples, we performed Kolmogorov-Smirnov statistical tests. All paired comparisons were statistically non-significant except for four that always involved at least one of the plant transcriptomes (one for H. rhamnoides and three for N. benthamiana) (Additional file 9: Table S6).
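One way such a comparison can be run is sketched below with SciPy's two-sample Kolmogorov-Smirnov test applied to the per-category proportions of two samples; the counts are made-up placeholders, not the values plotted in Fig. 2, and the exact statistical setup used in the study may differ.

```python
# Illustrative two-sample Kolmogorov-Smirnov comparison of two category
# distributions (categories 1-7). Counts are arbitrary example values.
import numpy as np
from scipy.stats import ks_2samp

counts_a = np.array([5200, 300, 250, 900, 400, 2600, 700])
counts_b = np.array([4800, 280, 310, 950, 380, 3100, 650])

stat, pval = ks_2samp(counts_a / counts_a.sum(), counts_b / counts_b.sum())
print(f"KS statistic = {stat:.3f}, p = {pval:.3f}")
```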
Distribution comparisons between concatenated assemblies from both simulated datasets without plant Illumina datasets were always non-significant (Additional file 9: Table S6). Fig. 2 Categorization of concatenated clusters according to their presence/absence in the individual sub-assemblies. Category 1: clusters found in all three assemblers; category 2: clusters found in CLC and Trinity; category 3: clusters found in CLC and IDBA; category 4: clusters found in IDBA-tran and Trinity; category 5: clusters found in CLC; category 6: clusters found in Trinity and category 7: clusters found in IDBA This categorization exercise led us to ask whether any one of these categories contained a higher proportion of "biologically correct" transcripts than others? In order to address this question, we performed an in vitro validation using the L. stagnalis dataset. We tested ten randomly selected clusters from categories 1 (clusters detected in all three assemblers), 5, 6 and 7 (clusters unique to either CLC, Trinity or IDBA_tran respectively). The positive validation rate for categories 5, 6 and 7 ranged from 40 to 70%, and the negative validation rate ranged from 30 to 60% (Table 3). Category 1 had a positive validation rate of 80%, and a negative validation rate of 0% (Table 3). These results suggest that clusters found by only one assembler (categories 5, 6 or 7) are likely to be either very lowly expressed or are assembly errors, while those found in all three assemblers (category 1) are more likely to be genuine molecules, giving further credence to the concept of our bioinformatic approach. We also retrospectively investigated the completeness of each individual assembly relative to our concatenated assemblies. The results of this analysis were striking. Averaging across all eight datasets, 50.3% ± 16.1% of CDS clusters in the concatenated assembly were present in the CLC assemblies, 62.3% ± 7.1% were present in the IDBA_tran assemblies and 77.5% ± 8.0% were present in the Trinity assemblies (Fig. 3a). Both simulated concatenated assemblies were in the range of all Illumina derived assemblies, excepted for CLC where the proportion of detected CDSs was higher than in any of the 6 other samples (Fig. 3a). On face value this result suggests that Trinity alone provides the best overall picture of a coding transcriptome. However, when we looked retrospectively at the effect of our pipeline on CDS extension, we found IDBA_tran to be the best performer for all datasets (except for both Caenorhabditis datasets; Fig. 3b). Between 13 and 44% of the CDSs from each assembler were extended during our concatenating process ( Table 4). The proportion of extended CDSs from the simulated transcriptomes were within the range of all Illumina derived assemblies, excepted for CLC in C. elegans which was 3% lower than the smallest Illumina dataset (IDBA_tran in S. cerevisiae) ( Table 4). We also compared the annotatability of our concatenated transcriptomes relative to assemblies generated by each of the three individual assemblers using BLASTx sequence similarity searches against Swiss-Prot [13]. The results of these analyses showed that annotatability was always higher in the concatenated assemblies compared to all of the individual assemblies (Table 5). For all Illumina derived datasets, the proportion of CDSs with a BLASTx hit expressed as a percentage of that found in the corresponding concatenated assembly ranged between 94% for the Trinity assembly of the S. 
cerevisiae dataset to 36% for the CLC assembly of the L. stagnalis dataset (Table 5). This trend also held true for the D. melanogaster and C. elegans simulated datasets (Table 5). We were aware that an increase in the proportion of CDSs returning a BLASTx hit does not necessarily mean that annotation diversity also increases. Indeed, an overall increase in the number of BLASTx hits could be due to a greater number of mis-assembled isoforms or paralogs present in a given assembly. To account for this phenomenon we investigated annotation diversity by calculating the number of unique database entries for all BLASTx searches. Again, in all cases the number of unique BLASTx hits was highest in the concatenated assemblies (Table 5). For the Illumina datasets, the number of unique database hits in the individual assemblies expressed as a percentage of that found in the corresponding concatenated assembly ranged between 98% (for the Trinity assembly of the Caenorhabditis sp. dataset) and 72% (for the CLC assembly of the L. stagnalis dataset; Table 5). These results demonstrate that an overall increase in the rate of annotation is accompanied by an increase in annotation diversity. This phenomenon was also observed in the analysis of a N. benthamiana transcriptome [10]. It should be noted that the increase in annotation diversity in our concatenated assemblies was less extreme than the increase in the overall annotatability (Table 5). This implies that most of the increase in the overall annotation is due to CDS isoforms that were not found by a given individual assembler. We also performed an analysis of assembly completeness using the single-copy ortholog benchmark BUSCO [14]. In addition to the simple presence/absence pattern of BUSCO entries, this analysis also provides interesting information regarding the number of duplicated and fragmented entries. The results of this analysis also confirmed the results obtained with our BLASTx searches; the number of detected BUSCO entries was always higher in the concatenated assemblies than in all of the individual assemblers for all Illumina datasets and the simulated datasets (Table 6). In addition, the number of fragmented copies was always lower in all concatenated assemblies than in the individual sub-assemblies, except for the Caenorhabditis sp. dataset, where the number of fragmented copies was equal in the concatenated and IDBA_tran assemblies, and the C. elegans dataset, where the number of fragmented copies was lower in Trinity and equal in IDBA_tran (Table 6). The CLC sub-assemblies always contained the fewest duplicated copies, but CLC was also always the single assembler with the fewest total number of BUSCO entries, except for the S. cerevisiae and C. elegans datasets (Table 6). Our concatenated assemblies always contained a higher number of duplicated copies than all three individual assemblers. This is apparently a weakness of our methodology that must be traded off against an assembly with more copies and fewer fragmented copies (Table 6). Our concatenated assemblies produced from the simulated datasets reflected the same patterns seen in the Illumina derived data (Table 6).
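Figures of this kind can be tabulated from the BUSCO short-summary files with a small parser such as the hedged sketch below; the file paths are placeholders and the exact wording of the summary lines can differ between BUSCO versions.

```python
# Collect complete/duplicated/fragmented/missing BUSCO counts per assembly,
# assuming the usual short_summary label strings.
import re

def busco_counts(summary_path):
    counts = {}
    keys = ["Complete BUSCOs", "duplicated BUSCOs",
            "Fragmented BUSCOs", "Missing BUSCOs"]
    with open(summary_path) as fh:
        for line in fh:
            for key in keys:
                if key in line:
                    counts[key] = int(re.search(r"(\d+)", line).group(1))
    return counts

for label in ("concatenated", "clc", "idba", "trinity"):
    print(label, busco_counts(f"run_{label}/short_summary_{label}.txt"))
```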
Because the NCBI databases have evolved significantly over the last two years, we downloaded the previously reported [10] cumulative transcriptome of N. benthamiana (http://benthgenome.qut.edu.au/), repeated the BLASTx and BUSCO searches and compared these updated results to our assembly of the same raw data. This comparison revealed that essentially the same proportion of both assemblies returned a BLASTx hit against the swiss-prot database (75.28% versus 75.22%, Table 7). Nevertheless, 250 more unique database entries were detected in our concatenated transcriptome (Table 7). These two assemblies shared 13,938 entries, while our assembly possessed 2534 unique entries and the Nakasugi et al assembly possessed 2284 unique entries (Table 7). This picture was supported by the BUSCO analysis: both assemblies shared 929 BUSCOs entries (a total of 14 BUSCOs entries were missing in both assemblies suggesting this dataset is largely complete), with five entries unique to our assembly and eight unique to the Nakasugi assembly. In addition, the number of duplicated copies was lower in our assembly than in the assembly reported by Nakasugi et al. (745 versus 785 respectively). Conclusion As far as we are aware this is the first study to characterize the effects of combining multiple de novo transcriptome assemblies in order to both maximize the information content, and minimize the redundancy of the resulting coding transcriptome for a variety of eukaryotes. A similar method was previously reported for transcriptomes derived from plants in order to address assembly difficulties associated with polyploidy [10]. Our approach however requires only three alternative assemblies in comparison with many tens of assemblies. In general our methodology produces a more concise and information-rich coding transcriptome assembly that will make subsequent analyses more efficient; from the comparisons we conducted here on six independent eukaryotic datasets using three popular RNA-Seq assembly packages we generated on average 1.8X fewer transcripts, and significantly increased the degree and diversity of annotatability in comparison to any of the three individual assemblers. In addition, we tested our approach on two simulated datasets generated from reference genomes, confirming the results observed from 'real world' Illumina datasets. We believe our approach (encoded by the simple perl script provided here) will allow researchers with minimal bioinformatics experience to extract the most information from their RNA-Seq datasets. A weakness we observe in our approach is the generation of slightly more "false" transcripts and redundancy than seen in the individual assemblers we employed. This phenomenon (present in all methods used to assemble RNA-Seq data) will have an impact on subsequent analyses, for example differential gene expression (DGE). In the case of DGE analysis, this weakness can be countered to some extent by allowing multiple read mappings as implemented by Rsubread [29]. This also serves to emphasize the point that such analyses based on NGS data should always be confirmed by independent validation experiments. reading frames; PCR: Polymerase chain reaction; RNA: RiboNucleic Acid; SRA: Sequence read archive
2017-08-03T01:09:51.600Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "efd7cd937c124facd118a3fc4660360b620b515c", "oa_license": "CCBY", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-016-1406-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "efd7cd937c124facd118a3fc4660360b620b515c", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Computer Science", "Medicine" ] }
241434850
pes2o/s2orc
v3-fos-license
Pollution Free Operation of Rail Vehicle with Diesel Engine using Fuel Cell Presently, energy used in rail transportation is fossil in nature, in case of electric vehicle, electricity which is generated mostly from coal/gas while in diesel engine driven vehicle oil /gas. Fossil energy sources inherently suffers from disadvantages such as limited in nature, damage to environment due to exhaust of greenhouse gases, noise pollution etc. Fuel cell not only overcome above demerits of diesel engine but offer other advantages such as energy recovery during braking, better dynamic response. A feasibility study of use fuel cell in place of diesel engine is presented in this paper. It briefly discuss fuel cell operation, its suitability for transient energy requirement of transportation application, necessity of energy storage system, simulation of potential of recovery of energy during braking. Present status of technology of drives, power conditioning is reviewed and a circuit topology for conversion of existing diesel engine based vehicle into fuel cell system is presented. Advantages of fuel cell hybrid electric vehicle (FCHEV) over conventional transportation vehicle is also discussed. I. INTRODUCTION This is an With change in style of life and ever growing industrialization is resulting in increased demand of energy at alarming rate, whereas reserve of conventional energy source such as coal, oil, gas are depleting very high rate beside damage to environment by air and noise pollution in. Hence there is immediate need for development alternative energy sources to overcome above concerns. A lot of development has taken place in this direction and some such sources. Renewable energy sources identified with good potential are, solar with batteries/ grid synchronized, wind generator, tidal energy of sea, biomass etc. Fuel cell from biomass family has potential to meet requirement of transportation application, such as compactness, availability 24x7x365 time frame and suitable to work on move. Indian Railways trains utilize energy either from 25 KV overhead catenary or from diesel engines. Since electricity is generated far away from the train hence, it does not create air and noise pollution at its site of use, therefore is not a health hazard to society. Furthermore, nonpolluting sources of energy such as water, nuclear, solar etc. are being explored for electricity generation besides development of modern technology for better emission control is being employed to limit air pollution. Hence electric transportation is not an immediate concern. Whereas diesel engine driven vehicles pass thru cities and town cause health hazard to the society and shall be addressed on priority. Fuel cell offers reliable and practical solution for use in transportation application, as it not only provides sustained renewable source of energy but also almost eliminate generation of greenhouse gases (Sox/Nox) hence no pollution to environment, produces almost no noise being combustion free system, no wear and tear of components so requiring less maintenance. In technical advance countries, fuel cell technology in car/busses/two-wheelers is being used commercially [1][2][3][4] though in limited numbers in comparison to conventional vehicles. (for details please refer www.fuelcells.org /uploads/fcbuses-world1.pdf) mainly due to higher cost, limited infrastructure for fuel supply and maintenance but fast spreading world over. 
It expected that as number increases cost will automatically come down and infrastructure will improve. Fuel Cell Hybrid Electrical Vehicle (FCHEV) buses/cars biggest advantage is zero emission from source to wheel and compare well in terms of efficiency with Electric or Hybrid Electric Vehicle while in terms of emission is way ahead, as later create considerable emission (for details please refer icrepq.com/ icrepq07/228-moghbelli.pdf). Indian government is pursuing introduction of FCHEV on large scale in mission mood from 2022 onwards. However, fuel cell technology in rail transport is in infancy stage and getting matured slowly. Few experimental prototype rail vehicles have been developed in recent [5][6][7]. Recently, Germany and UK has introduced intercity fuel cell based trains name Hydrail (http://www.railwaytechnology.com/features/ feature 122016 /) and Hydroflex (spectrum.ieee.org, AUG 2019, pp 06-07). China has also announce commercial operation of such trains. In a comparative study conducted for electric and fuel cell based hybrid system in terms of overall cost, emission and performance & capital cost revealed that considering infrastructure cost of electric system of electric vehicle, fuel cell based hybrid rail system cost is less, (for detail (pl see docs.trb.org/prp/13-1394.pdf and www.scirp.org/ journal/ PaperDownload.aspx). Pollution Free Operation of Rail Vehicle with Diesel Engine using Fuel Cell Naseam Haider Jafri, Sushma Gupta II. DIESEL ENGINE BASED RAIL VEHICLE Normally diesel engine based trains are operated either on long distance for goods movement for example in USA, Siberia etc (as large investment needed for electrification of tracks becomes economically unjustified) or in third world country due to constrain of capital investment. Roughly 40% routes in India are still non-electrified. However government has taken electrification of route in mission mode to avoid dependency for oil on foreign countries beside pollution caused by diesel engine operated trains. In India, DEMUs (Diesel electric Multiple Units) is used for passenger transport service on intercity routes, where distance between station stops is small 2 to 15 kilometer and total route is order 100-500 KM. DEMU is consists of minimum two basics unit on either end to facilitate movement of train in either direction. No of basic unit in a trains could be selected on passenger volume. Basic unit diagram is shown in figure 1, it consists of a Driving a Power Car (DPC) and no of passenger cars based on power capacity of DPC. Figure 1 Typical block diagram of diesel power car DPC has a diesel engine-alternator set, which supply DC power to two DC/AC converters connected in parallel, each of them fed 3phase Variable Voltage Variable Frequency (VVVF) supply to two motors connected in parallel mounted on each axle of bogie. DPC usually has two bogies. Normally power rating of diesel engine and DC link voltage of DEMU in range of 1400-2200 HP and 1500-2000 volt respectively. A digital controller ensures proper operation of complete train. 
Diesel engine based transportation vehicles inherently have the following limitations/disadvantages: a) Exhaust of greenhouse gases (SOx/NOx) causing air pollution and thereby creating a health hazard b) Generation of considerable noise and thereby sound pollution c) Inability to recover energy during braking, one of the major disadvantages (as neither a load nor an energy storage system is available) d) High maintenance requirements, being an internal combustion engine, due to wear and tear of rotating parts e) Poor efficiency from fuel to wheel f) Poor dynamic response, being a mechanical system, resulting in poor acceleration of the train g) Carrying of fuel stock as dead load, thereby reducing hauling capacity III. FUEL CELLS BASED HYBRID SYSTEM A typical block diagram of a Fuel Cell Hybrid Electric Vehicle (FCHEV) is depicted in figure 2. It consists mainly of the following sub systems. A. Fuel Cell and Balance of Plant (BOP) A simplified fuel cell is shown in figure 3 [8]. The fuel cell consists of two electrodes on either side of an electrolyte layer. Hydrogen fuel is fed to the anode and oxygen from air is fed to the cathode continuously. The hydrogen fuel is decomposed into positive ions and electrons. The intermediate electrolyte membrane permits only the positive ions to flow from the anode to the cathode side and acts as an insulator for electrons; the free electrons move to the cathode side through an external electrical circuit, thereby producing electricity. The hydrogen positive ions react with oxygen at the cathode to form pure water. The chemical reactions involved at the anode and cathode are: Anode reaction: 2H2 -> 4H+ + 4e-; Cathode reaction: O2 + 4H+ + 4e- -> 2H2O. The fuel cell, being an electrochemical energy source, exhibits a drooping voltage-current characteristic of roughly constant power, except at both high voltage (low current) and high current, due to polarization effects, as shown in figure 4 [8,9]. Several fuel cells are stacked together, connected in series and parallel combinations, to produce the required voltage and current. A fuel cell stack requires for its operation many other items of equipment, such as a compressor, regulator, diffusor etc., known as the Balance of Plant (BOP) [3]. B. Energy storage system (ESS) A rail transportation vehicle operates in three distinct modes as shown in figure 5: i. At constant torque, to pull the train to the desired level of acceleration ii. Then at constant power, to gain speed until the rated voltage of the motor is reached iii. Finally at weakened field, to reach maximum speed As evident from the above characteristic, none of power, current or voltage is constant over the operation. Thus transient power, torque and current are required for successful operation. The requirement of tractive effort and current for specified performance depends upon the mode of operation and the load requirements; e.g., in conventional local trains with DC traction motors, the current drawn in constant torque operation typically goes as high as 2.5 times the rated current and reduces with increase in speed, whereas the voltage increases with speed until it reaches the rated value and remains constant thereafter. The simulated time-domain variation of current for a 180 Amp rated motor over a complete typical suburban route is shown in figure 6. Since the fuel cell energy source has a slow time-domain characteristic, it cannot on its own supply the transient energy, current or torque requirements of a transportation application. Therefore, the fuel cell system has to be supplemented by some energy source which is able to deliver energy, as well as store available energy, at the required rate at the desired moment.
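Before turning to the supplementary storage, the drooping fuel cell characteristic noted above (figure 4) can be illustrated with a common textbook static polarization model combining activation, ohmic and concentration losses; the model form is a standard approximation and the coefficients and cell count below are arbitrary examples, not data from the cited figures.

```python
# Illustrative static polarization model of a PEM fuel cell stack.
import numpy as np

def cell_voltage(i, e_oc=1.0, a=0.06, r=0.25e-3, m=3e-5, n=1.2e-2):
    """i: stack current in A; returns single-cell voltage in V."""
    activation = a * np.log(np.maximum(i, 1e-6))
    ohmic = r * i
    concentration = m * np.exp(n * i)
    return e_oc - activation - ohmic - concentration

current = np.linspace(1, 500, 100)       # stack current, A
n_cells = 400                            # cells in series (assumed)
v_stack = n_cells * cell_voltage(current)
p_stack = v_stack * current / 1e3        # stack power, kW
print(f"peak stack power ~ {p_stack.max():.0f} kW")
```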
The supplementary system is called Energy Storage System (ESS) and Fuel Cell System along with ESS is called Fuel Cell Hybrid System (FCHS) [10,11]. Various ESS having specific energy delivery and storage have been discussed in literature [12] with various combination [13,14]. Lithium-ion battery pack with super capacitor combination is one of the best combination and has been used in most of the development in this area. Fuel cell is operated at near optimal level to meet average load requirement whereas battery and ultra-capacitor delivery energy in medium and short duration respectively. However, type of service being performed by the vehicle decide quantum of benefit and complexity of implementation [10]. Energy recovered and performance improvement is remarkably high in sub urban application for mass movement and in hilly region but low enough in plane and long distance trains. C. Traction Drive The purpose of power conditioning unit is to convert input power into a desired fashion at output. In this case input power is DC voltage supply combination of fuel cell and ESS, which is converted into 3 phase variable voltage variable frequency (VVVF) supply at terminals of motor. Besides, it has to ensure operation of ESS and fuel cell in safe operational region and optimal fashion. Several topologies are reported in literature [10,13]. FCS is connected to ESS which may be battery or combination with Ultra Capacitors (UC) via bidirectional converter to have independent control on charging and discharging rates and supply power to DC link at predefined voltages. DC link voltage could be converted to VVVF supply using 3 phase converter either directly or via step up converter figure 7a, or Z-source inverter using less no of switches and providing protection [5,14]. Induction motor with sensor /sensor less vector control is preferred in industry to take advantage of robust designed and matured controls. Various type of optimization of power from FCS and ESS with focus on fuel consumption, performance have been discussed [15,16]. D. Proposed Drive Modern diesel engine driven vehicles are normally provided with IGBT based converter with VFFF vector control and induction motor. One of the important issue to be addressed for conversion of exiting diesel engine based transportation drive is, to utilize existing equipment to extent possible for economic feasibility and assets utilization. In order to utilize existing drive having DC link voltage in range of 1500-1800 Volts DC, topology presented in figure 8, could be employed. Wherein fuel cell stack is connected to DC link bus through a block diode to prevent reverse flow of current into fuel cell. Two independent bidirectional converters are connected each with batteries and ultra-capacitor to have independent controls of charging and discharging. Voltage of this DC link is boasted to desired voltages level by single phase converter pair with high frequency link or normal transformer replacing diesel engine-alternator cum rectifier. It will also provide isolation between high and low voltage circuit as desired feature in traction drives. A bogie control may be adopted, where in motors of on bogie are connected in parallel with one power converter. However for new vehicle axel control, wherein each motor is fed with independent converter could be employed. Further isolation transformer may be avoided by selecting a motor to match prevalent rating of fuel cell provided design permit. 
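A minimal rule-based sketch of the power split just described (fuel cell held near its optimal average operating point, battery covering medium-term transients, ultracapacitor covering the remainder, and braking power absorbed entirely by the ESS) is given below for the proposed arrangement; all ratings and thresholds are illustrative assumptions, not design values from this work.

```python
# Rule-based power-split sketch for the fuel cell + battery + ultracapacitor
# drive described above. All numbers are illustrative assumptions.
def split_power(p_demand_kw, p_fc_avg_kw, p_batt_max_kw):
    """Return (fuel cell, battery, ultracapacitor) power in kW.
    Negative demand (regenerative braking) is absorbed by the ESS."""
    p_fc = p_fc_avg_kw if p_demand_kw > 0 else 0.0   # FC near optimum
    residual = p_demand_kw - p_fc
    p_batt = max(-p_batt_max_kw, min(p_batt_max_kw, residual))
    p_uc = residual - p_batt                         # fast transients
    return p_fc, p_batt, p_uc

for demand in (1200.0, 400.0, -800.0):  # accelerating, cruising, braking
    print(demand, split_power(demand, p_fc_avg_kw=600.0, p_batt_max_kw=500.0))
```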
This arrangement will also offer flexibility in mounting of equipment on vehicle and weight balance. IV. SIMULATION OF ENERGY RECOVERY POTENTIAL IN BRAKING OPERATION Typical operation cycle of a transportation drive consist of acceleration to increase speed to reach quickly to destination and deceleration to reduce the speed in order stop or to meet the speed limitation of route. Rate of acceleration and deceleration depends upon application and varies in suburban, intercity, long distance/good trains. In trains having DC motors, during braking dynamic energy of train is either dissipated through frictional brake resulting in high wear and tear of brake shoes or in load resistor dissipating as heat. With invent of high rating power devices like GTO, IGBT, Induction motor with VFFF drives were developed, which are not only robust requiring less maintenance in absence of brushes and commutator but also facilitate recovery of energy during braking provided energy could be stored or used with other loads. This feature could be implemented in electric driven vehicle due to presence of strong grid of power supply capable of absorbing energy but not in diesel engine driven vehicles as neither storage nor utilization of energy in other loads is possible. Which is a big drawback of diesel engine driven vehicle. However, in Fuel Cell Hybrid Electric Vehicle (FCHEV) this limitation doesn't occur due to presence of ESS. Recovered energy in braking operation is quite considerable. In order to have an idea of potential of recoverable energy during braking, an actual operation of suburban train for round trip Church gate-Andheri-Church gate) route in Mumbai (India) for which all data such as route profile, track resistance and train schedule were readily available has been simulated for energy consumed in motoring and regenerated during braking operation with operation restrictions such as speed limit, stops etc Energy consumption in total route has been worked by calculating required tractive effort to overcome resistance of train movement and to achieve desired performance defined in equation 1. Tre q = Tr + Tacc (1) T is total travel time Required tractive effort is calculated over the complete route with increment in speed. Energy is calculated for motoring and regenerative braking operations and thereby net energy. V. RESULTS Simulation results for energy consumed during motoring, regenerated in braking and thereby net energy is given in table 1. Energy consumption as function of travel time is plotted in figure 9 & as function of travel distance is plotted in figure 10 respectively. It may be noted that about 35% energy could be recovered using regenerative braking system, which goes unrecovered in diesel engine based vehicles. However energy recovery to above level is not always possible and depends on track geometry and application. VI. DISCUSSION The Electric & diesel engine (internal combustion) driven vehicles are being commonly used in Indian railways on electrified and non-electrified routes respectively. A comparison of above with proposed fuel cell based vehicle for various parameters is presented in table 2. It is evident from above table that Fuel Cell Hybrid Electric Vehicle (FCHEV) is winner with flying colors considering its features of pollution free operation, use of renewable green fuel and regeneration of energy during braking operation. VII. 
CONCLUSIONS The use of Fuel Cell System supplemented with Electrical Energy Storage System has good potential in transportation applications especially as alternative to diesel engine based vehicle for obvious advantage of almost nil air & sound pollution besides ability to regenerate power during braking. Combination of Li-Ion batteries with super capacitors is found to be most technically appropriate solution for ESS. The induction motor being most reliable, robust & being widely used in transportation is obvious choice. Isolated topology using converter -transformer combination is a good choice for rail vehicle, which has inherent advantage of isolation between fuel cell stack and motor circuit. Optimum energy consumption from FCS and ESS could be achieved by appropriate control management system. However technology of Hybrid Fuel Cell based transportation system is yet to be matured, at present is in infancy, requiring further development in all related areas such as availability of hydrogen and its supply network, fuel cell stack including balance of plant for presenting a realistic, commercially viable challenge to diesel engine based rail transportation vehicles. Further, fuel cell and ESS accommodation in the space vacated by diesel engine system and weight balance is to be evaluated along with commercial viability.
2020-01-30T09:03:24.030Z
2020-01-10T00:00:00.000
{ "year": 2020, "sha1": "58dd893f6aa5b1475ac6b71a207a43bd70342d0d", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijitee.c8347.019320", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "58dd893f6aa5b1475ac6b71a207a43bd70342d0d", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
235240848
pes2o/s2orc
v3-fos-license
Long-Term Survival Effect of the Interval between Postoperative Chemotherapy and Radiotherapy in Patients with Completely Resected Pathological N2 Non-Small-Cell Lung Cancer Simple Summary In patients with completely resected stage III pN2 non-small cell lung cancer (NSCLC), adjuvant chemotherapy of 4–6 cycles was recommended prior to post-operative radiotherapy (PORT). However, some were given concurrently or early-sequentially with PORT. The objectives of this study were to verify the benefit of adjuvant sequential chemotherapy and radiotherapy (SCRT) relative to that of concurrent chemoradiotherapy (CCRT) in an Asian population and to identify the optimal timing of initiation of PORT as part of adjuvant SCRT. A longer interval (>104 days and <180 days) between the initiation of adjuvant chemotherapy and PORT was associated with improved OS compared with CCRT. No locoregional recurrence-free survival (LRFS) difference related to the interval between the initiation of adjuvant chemotherapy and PORT was observed. In older patients (aged >60 years), the benefit of delayed PORT initiation was more significant. We suggest that PORT should be postponed in the completed-resected pN2 elderly patients. Abstract (1) Purpose: To investigate the effects of the time interval between initiation of adjuvant chemotherapy and radiotherapy on survival outcomes in patients with completely resected stage IIIA pN2 non-small-cell lung cancer (NSCLC); (2) Methods: Data on 2515 patients with completely resected stage IIIA pN2 NSCLC in 2007–2017 were extracted from the Taiwan Cancer Registry Database. The survival outcomes in patients who underwent concurrent chemoradiotherapy (CCRT) and sequential chemotherapy and radiotherapy (SCRT) with either a short (SCRT1) or long (SCRT2) interval between treatments were estimated using Kaplan–Meier, Cox regression, and propensity score matching (PSM); (3) Results: Multivariate analyses of OS showed that SCRT2 (hazard ratio [HR] 0.64, p = 0.017) was associated with improved overall survival (OS). After PSM, the median OS periods were 64 and 75 months in the SCRT1 and SCRT2 groups, respectively, which differed significantly from that of 58 months in the CCRT group (p = 0.003). In elderly patients, SCRT2 significantly improved survival relative to CCRT before PSM (p = 0.024) and after PSM (p = 0.002); (4) Conclusions: A longer interval between initiation of adjuvant chemotherapy and postoperative radiotherapy (PORT; SCRT2) improved OS relative to CCRT; the benefits were greater in elderly patients (age >60 years). Introduction Non-small-cell lung cancer (NSCLC) is the leading cause of cancer-related mortality worldwide [1]. For stage IIIA-pN2 NSCLC, surgical resection followed by adjuvant chemotherapy is the mainstay of treatment [2,3]. However, the role of postoperative radiotherapy (PORT) as part of multimodal therapy for completely resected IIIA pN2 NSCLC remains controversial. Its benefit has been a subject of debate since a meta-analysis of data from 2128 patients enrolled in nine randomized trials addressed its adverse effects in early-stage pN1 NSCLC [4]. Several subsequent studies were conducted to evaluate the effect of PORT in terms of improvement of locoregional control and overall survival (OS) [5][6][7][8][9][10][11]. Due to lack of strong evidence supporting the use of PORT for completely resected pN2 NSCLC, its use declined from 65% in 1992 to 37% in 2002 [5]. Ideal timing of PORT initiation also remains controversial. 
Adjuvant chemotherapy of 4-6 cycles was recommended prior to PORT, however, some were given concurrently or early-sequentially with PORT [12,13]. Two retrospective studies conducted in Asia demonstrated the effectiveness of early PORT, which benefited the OS in patients with stage IIIA pN2 NSCLC when followed by or administered concurrently with postoperative chemotherapy (POCT) [12,13]. However, the sequential chemotherapy and radiotherapy (SCRT) was associated with improved OS compared with adjuvant concurrent chemoradiotherapy (CCRT) by previous Adjuvant Navelbine International Trialist Association (ANITA) subgroup analyses, which demonstrated the benefit of PORT following adjuvant chemotherapy in patients with pN2 disease [14]. Most recently, the LungART trial, which focused on completely-resected pN2 disease, released its preliminary report of reduced evidence on the efficacy of PORT [15]. Adjuvant radiotherapy commenced within 4-8 weeks of surgery and SCRT were both included in the LungART trial [15]. Decisions regarding the optimal timing of PORT initiation must be made with balanced consideration of need for disease control by adequate adjuvant chemotherapy and possible reduction of the locoregional benefit of PORT. In the setting of SCRT for patients with completely resected IIIA pN2 NSCLC, this timing remains a subject of debate. The genetic makeup of tumors differs between Caucasian and Asian patients. For example, sensitizing epidermal growth factor receptor (EGFR) mutations are found in approximately 10% of Caucasian patients compared with up to 50% of Asian patients with NSCLC [16]. The effects of PORT administered as parts of adjuvant CCRT and SCRT need to be examined in large-scale studies conducted in Asian populations. The objectives of this study were to verify the benefit of adjuvant SCRT relative to that of CCRT in an Asian population and to identify the optimal timing of initiation of PORT as part of adjuvant SCRT in patients with completely resected stage III pN2 NSCLC. To our knowledge, this nationwide population-based study involves the largest cohort where the majority underwent intensity-modulated radiotherapy (IMRT) to evaluate the effect of interval between postoperative chemotherapy and radiotherapy. Data Source and Study Population Data on patients with NSCLC that was newly diagnosed between 1 January 2007 and 31 December 2017 were extracted from the Taiwan Cancer Registry Database (TCRD), Cancers 2021, 13, 2494 3 of 14 a nationwide database of oncology outcomes that captures the data from 97% of all newly diagnosed cancer cases in Taiwan [17]. The TCRD dataset includes clinical information and contains detail radiotherapy information not available in other Taiwan National Health Insurance Research Dataset (NHIRD). The follow-up period was extended from the index date, defined as the date of NSCLC diagnosis, to 31 December 2018. Survival during this period was examined via linkage to death certificates registered in the National Death Database. Our institute's review board approved the study protocol (EC1070305-E). The information on informed patient consent waived due to the retrospective nature of this study. From this dataset, data on patients with non-metastatic pN2 NSCLC who underwent microscopically negative-margin (R0) resection and at least lobectomy, adjuvant chemotherapy, and PORT were included. 
To minimize treatment variability, we excluded data of patients who received PORT doses <45 Gy and those who started adjuvant chemotherapy >90 days after surgery. To evaluate the impact of PORT timing on OS, the patient cohort was divided into the CCRT (first cycle of chemotherapy administered within 14 days of PORT initiation), SCRT1 (first cycle of chemotherapy administered 15-103 days before PORT), and SCRT2 (first cycle of chemotherapy administered 104-180 days before PORT) groups. The median interval between the first chemotherapy cycle and PORT in the SCRT1 and SCRT2 groups was 103 days. The maximum interval of 180 days accommodated PORT initiation up to 8 weeks after six cycles of chemotherapy, allowing some delay between chemotherapy cycles. Patients who initiated chemotherapy 14 days after PORT initiation were excluded from the study. In addition, we excluded those who were lost to follow-up or died within 3 months of diagnosis. Patients with no disease recurrence who were followed for <3 months after PORT were excluded from the CCRT group to avoid immortal time bias. Data on the following patient characteristics were collected: age, sex, year of diagnosis, treatment facility type, surgery type, Eastern Cooperative Oncology Group (ECOG) performance status (PS), smoking habit, tumor grade, histology, tumor size, tumor location, pathological T stage, pathological N stage, surgical margin status, radiation treatment time, status of target therapy usage, and total radiation dose. EGFR mutation information was not available in the TCRD until 2011. Information on the primary endpoint of OS, defined as the period from the index time of diagnosis to the date of death, was obtained from the TCRD and the Ministry of the Interior database. Statistical Analysis Analysis of variance and chi-square (X 2 ) test were used to evaluate inter-group differences in continuous and categorical variables, respectively. Univariate and multivariate Cox proportional-hazard modeling with hazard ratio (HR) calculation was used to identify factors associated with locoregional recurrence-free survival (LRFS), distant metastasisfree survival (DMFS), and OS. Such models were also employed to examine associations beween groups and the survival outcome while controlling for clinical (e.g., smoking, tumor size, and histology) and demographic (e.g., ECOG PS) variables. These variables represented significant predictors of survival in univariate and multivariate analyses. OS, LRFS, and DMFS were estimated using Kaplan-Meier analysis, and differences therein were assessed using the log-rank test. All tests were two tailed, and p < 0.05 was considered to represent statistical significance. Propensity score matching (PSM) was used to account for differences in baseline patient characteristics among treatment groups. Matching was performed based on patient characteristics and disease factors, including age, sex, tumor size, surgery type, treatment facility type, tumor site, and treatment time, using the method described by Rosenbaum and Rubin [18]. All calculations were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC, USA) and SPSS version 22.0 (SPSS Inc., Chicago, IL, USA) software. Patient Selection and Characteristics In total, 2515 patients with completely resected stage IIIA pN2 NSCLC were identified in the TCRD. Patients who underwent neoadjuvant chemotherapy, or chemoradiation, or other pre-operative therapy were excluded from our study. 
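A hedged sketch of the cohort grouping and survival analyses described above is shown below, using Python's lifelines package as a stand-in for the SAS/SPSS workflow actually used; the DataFrame column names (interval_days, os_months, death, age, tumor_size, ecog_ps) and the input file are assumptions for illustration.

```python
# Group assignment by chemotherapy-to-PORT interval, Kaplan-Meier curves,
# log-rank test and a multivariate Cox model (illustrative sketch only).
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

def assign_group(days_chemo_to_port):
    """Days from the first adjuvant chemotherapy cycle to PORT initiation
    (positive when chemotherapy began first)."""
    if days_chemo_to_port < -14:
        return None                 # chemo started >14 days after PORT: excluded
    if days_chemo_to_port <= 14:
        return "CCRT"
    if days_chemo_to_port <= 103:
        return "SCRT1"
    if days_chemo_to_port <= 180:
        return "SCRT2"
    return None                     # outside the study window: excluded

df = pd.read_csv("cohort.csv")
df["group"] = df["interval_days"].apply(assign_group)
df = df.dropna(subset=["group"])

# Kaplan-Meier estimates per group and a log-rank test across groups.
for name, sub in df.groupby("group"):
    KaplanMeierFitter().fit(sub["os_months"], sub["death"], label=name)
print(multivariate_logrank_test(df["os_months"], df["group"], df["death"]).p_value)

# Multivariate Cox proportional-hazards model with a few covariates.
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "age", "tumor_size", "ecog_ps"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```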
After exclusion of those not given adjuvant CCRT or SCRT, 439 patients remained eligible for further analysis (Figure 1). The cohort was divided into CCRT, SCRT1, and SCRT2 groups; demographic characteristics are summarized by group in Table 1. Sixty-four percent of patients with completely resected stage IIIA pN2 disease received SCRT after PORT, of whom 142 and 139 patients were assigned to the SCRT1 and SCRT2 groups, respectively. The most common histological diagnosis was adenocarcinoma (n = 344, 78%), and most patients were treated after 2010 and received PORT at a dosage of 45-55 Gy, delivered as intensity-modulated radiation therapy (IMRT). No significant difference was observed among the three groups in the distribution of histology types (p = 0.633), year of diagnosis (p = 0.816), ECOG PS (p = 0.567), sex (p = 0.882), smoking habit (p = 0.168), tumor site (p = 0.325), tumor size (p = 0.595), EGFR mutation status (p = 0.297), or PORT dose (p = 0.415). More patients in the CCRT group had well- to moderately differentiated tumors (p < 0.001) and received IMRT (p = 0.025). Medical centers adopted SCRT more frequently than did regional hospitals (p = 0.033).

Impact of Interval between Post-Operative Chemotherapy and Radiotherapy on Survival After PSM, data from 408 patients were available for analysis (Figure 2B). Demographic and cancer characteristics were well balanced among the three groups. The median OS durations in the SCRT1 and SCRT2 groups were 64 (95% CI, 43.1-84.9) and 75 (95% CI, 67.4-82.5) months, respectively, which differed significantly from the median OS duration of 58 (95% CI, 44.6-71.4) months in the CCRT group (log-rank test, p = 0.003; Figure 2B). Elderly patients in the SCRT2 group had significantly better survival than did those in the CCRT group before PSM (log-rank test, p = 0.024; Figure 4A), and this survival advantage remained significant after PSM (log-rank test, p = 0.002; Figure 4B). No such survival benefit was observed in younger patients before (log-rank test, p = 0.856; Figure 4C) or after (log-rank test, p = 0.871; Figure 4D) PSM.

Discussion This study investigated the impact of a longer interval between adjuvant chemotherapy and PORT on the prognosis in patients with completely resected stage IIIA pN2 NSCLC. Crude 5-year OS proportions in the CCRT, SCRT1, and SCRT2 groups were 42%, 48%, and 62%, respectively, and were comparable to OS values obtained in retrospective studies on PORT and POCT administration in patients with stage IIIA N2 disease [12,13,19,20]. Among the patients with completely resected IIIA pN2 NSCLC, most recurrent tumors were located outside of the surgical area and accounted for most mortalities. Several randomized controlled trials have shown that adjuvant chemotherapy plays a key role in prolonging disease-free survival and OS [3,21,22]. However, high locoregional recurrence rates of 20-40% have been reported, even after adjuvant chemotherapy for completely resected IIIA pN2 NSCLC [14,21,23]. Consistent with the hypothesis that PORT improves locoregional control, which would translate to an OS benefit, retrospective studies of NCDB data have demonstrated that modern PORT at adequate dosages was associated with better OS in patients with completely resected IIIA pN2 NSCLC (5-year OS, 27.8% vs. 34.1%; p < 0.001) [8,9]. Furthermore, studies based on NCDB data have found that the survival outcome was associated with the timing of PORT, with better 5-year OS observed in patients treated with SCRT than in those treated with adjuvant CCRT for completely resected stage IIIA pN2 disease [19,20]. The work by Francis et al. based on NCDB data supports the detrimental effect of adjuvant CCRT relative to SCRT for completely resected IIIA pN2 NSCLC (median OS duration, 32.5 vs. 58.8 months; p < 0.001) [20]. In another NCDB data analysis, Moreno et al. found that the median OS duration was significantly improved in patients undergoing SCRT compared with those undergoing CCRT (53 vs. 37 months, p < 0.001) [19]. Although the influence of the sequencing of adjuvant chemotherapy and RT in patients with completely resected NSCLC has been investigated [12,13,19,20], the optimal sequencing schedule, and especially the timing of PORT as part of SCRT, remains a subject of debate.
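The survival comparisons reported above (Kaplan-Meier estimates with log-rank tests across the three timing groups) can be sketched as follows. This assumes the lifelines package and a dataframe with os_months, death, and timing_group columns; the authors themselves used SAS and SPSS, so this is only an illustration of the analysis, not their code.

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test
import matplotlib.pyplot as plt

def compare_os(df):
    """Plot Kaplan-Meier OS curves per timing group and run a log-rank test (illustrative)."""
    ax = plt.gca()
    for name, sub in df.groupby("timing_group"):
        km = KaplanMeierFitter(label=name)
        km.fit(sub["os_months"], event_observed=sub["death"])
        km.plot_survival_function(ax=ax)
        print(name, "median OS (months):", km.median_survival_time_)

    # Log-rank test across all three groups
    result = multivariate_logrank_test(df["os_months"], df["timing_group"], df["death"])
    print("log-rank p-value:", result.p_value)
```

The same pattern, applied within the elderly and younger subgroups before and after matching, would reproduce the subgroup comparisons summarized in Figure 4.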
Among trials conducted to evaluate the efficacy of adjuvant chemotherapy in patients with stage III N2 NSCLC, the International Adjuvant Lung Trial, in which three or four cycles of cisplatin-based adjuvant chemotherapy were administered, demonstrated a 5-year survival benefit of 4.1% (HR 0.86; 95% CI, 0.76-0.98; p = 0.03) [3]; the ANITA trial, in which four cycles of adjuvant cisplatin were administered in combination with vinorelbine, demonstrated an absolute 5-year survival benefit of 8.6% [21]; and Ou et al. [22] administered four cycles of vinorelbine/carboplatin or paclitaxel/carboplatin doublet adjuvant chemotherapy and demonstrated an absolute survival advantage of 12.0% at 5 years. The common duration of the four cycles of adjuvant chemotherapy was 12 weeks, and the timing of PORT initiation was 2-3 weeks after the completion of chemotherapy. The cut-off point for SCRT2 in our study accommodated the completion of the four cycles of adjuvant chemotherapy and subsequent PORT (84 days + 20 days). The DMFS benefit observed in the SCRT2 group may reflect a greater probability of completing the course of adjuvant chemotherapy, which translates into improved OS, without the detrimental effect on LRFS that might be expected with delayed PORT initiation.

Subgroup analyses in the present study showed that elderly patients who received SCRT2 benefited the most and had significantly improved survival compared with those who received CCRT, before and after PSM. Generally, younger patients have a greater capacity to tolerate surgery, subsequent chemotherapy, and PORT. In previous studies conducted in Asian populations, early PORT (concurrent with or followed by chemotherapy) had an OS benefit in patients with stage IIIA pN2 NSCLC [12,13], and younger age (mean <60 years) might help to maintain the locoregional OS benefit. In contrast, several recent studies conducted with NCDB data, most of which examined cohorts with mean ages >60 years, yielded results demonstrating the importance of postponing PORT until after chemotherapy completion [19,20]. Our study produced similar results, showing that older patients benefited from a longer interval between the initiation of adjuvant chemotherapy and PORT.

Compared with the most recent results of LungART, our entire cohort had a favorable 3-year OS: 75% in our PORT cohort versus 66.5% in the PORT arm and 68.5% in the no-PORT arm of LungART [15]. Several differences between the cohorts may explain this. First, the LungART trial allowed adjuvant radiotherapy to begin within 4-8 weeks of surgery, which permitted early PORT [15]. Second, the mean age in LungART was 61 years [15]; according to our findings, elderly patients do not gain a survival advantage from early PORT. In the LungART trial, cardiopulmonary toxicity was thought to offset the benefit in mediastinal relapse-free survival [15]. The 3D conformal radiotherapy technique was adopted in LungART [15], whereas only 17% of patients in our study underwent 3D-conformal radiotherapy. The majority of patients in our cohort underwent IMRT, and such modern techniques are likely necessary to lower toxicity to surrounding normal organs [9,24]. In the previous NCDB analysis by Corso et al., only 17% of patients received IMRT, whereas the remainder received 3D-conformal radiotherapy [8]; the 5-year OS was 34.1% in the NCDB compared with 53% in our study (TCRD) [8]. Adjuvant radiotherapy should therefore be delivered more safely rather than abandoned.
Our study showed the long-term survival effects of different intervals between adjuvant chemotherapy and radiotherapy in a setting where modern PORT techniques were routinely adopted.

This study has several limitations. Data on chemotherapy regimens and the number of cycles were not recorded in the TCRD. However, chemotherapy practice patterns in recent years have been examined in other studies based on the Taiwan National Health Insurance Research Database (NHIRD) [25,26]. According to the study by Liang et al., platinum-based doublet chemotherapy was provided to the majority of patients (66.9%), most commonly in combination with gemcitabine (33.8%) [26]. The second and third most common regimens were vinorelbine alone (13.0%) and platinum with docetaxel (11.6%) [26]. Our study period spanned 2007 to 2017, and the frequency of platinum with pemetrexed was presumably high among patients with adenocarcinoma, as this regimen is associated with longer OS than other platinum-based regimens [26][27][28][29]. Emerging evidence shows that targeted therapies provide survival benefits in EGFR-mutant NSCLC [30,31]; however, we excluded patients who received targeted therapy from our study. Additionally, this study was a retrospective analysis of non-randomized data and did not report safety data, and although we used PSM to account for confounding among the covariates examined, confounding by unmeasured covariates may have persisted. For example, some patients in the CCRT and SCRT1 groups may have received suboptimal chemotherapy.

Despite these limitations, our study has several strengths. The majority of our cohort received IMRT, compared with only 17% in a previous NCDB study [8]. Such modern radiotherapy techniques would lower the treatment-related mortality associated with PORT [7,9,24,32,33]. IMRT is beneficial in node-positive disease compared with 3D-CRT [24]. To our knowledge, this is the largest cohort study applying modern IMRT techniques to pN2 patients. The TCRD is a population-based database, and our results can be generalized to other cohorts. In addition, locoregional and distant recurrence events are registered in the TCRD, enabling more detailed analysis. To our knowledge, this study is the first to demonstrate the OS benefit of delayed PORT initiation after the administration of adjuvant chemotherapy for completely resected IIIA pN2 NSCLC, especially among older patients, in an Asian population. In addition, the availability of information on recurrence events across subgroups in the population helped us determine whether delayed PORT initiation after adjuvant chemotherapy had a negative impact on LRFS and identify SCRT2 subgroups with better DMFS (Supplemental Figure S1B).

Conclusions In the context of postoperative treatment for completely resected stage IIIA pN2 NSCLC, a longer interval (104-180 days) between the initiation of adjuvant chemotherapy and PORT was associated with improved OS compared with CCRT. No LRFS difference related to the interval between the initiation of adjuvant chemotherapy and PORT was observed. In older patients (aged >60 years), the benefit of delayed PORT initiation was more pronounced. We suggest that PORT be postponed until after adjuvant chemotherapy in elderly patients with completely resected pN2 disease.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/cancers13102494/s1, Figure S1: Forest plots of aHRs showing the effect of PORT timing on (A) local regional recurrence-free survival, and (B) distant metastasis-free survival. Informed Consent Statement: Patient consent was waived due to retrospective nature of this study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restriction of privacy and ethical policy. Conflicts of Interest: The authors declare no conflict of interest.
Comparison of Semi-Automated and Manual Measurements of Carotid Intima-Media Thickening Carotid intima-media thickening (CIMT) is a marker of both arteriosclerotic and atherosclerotic risks. Technological advances have semiautomated CIMT image acquisition and quantification. Studies comparing manual and automated methods have yielded conflicting results possibly due to plaque inclusion in measurements. Low atherosclerotic risk subjects (n = 126) were recruited to minimise the effect of focal atherosclerotic lesions on CIMT variability. CIMT was assessed by high-resolution B-mode ultrasound (Philips HDX7E, Phillips, UK) images of the common carotid artery using both manual and semiautomated methods (QLAB, Phillips, UK). Intraclass correlation coefficient (ICC) and the mean differences of paired measurements (Bland-Altman method) were used to compare both methodologies. The ICC of manual (0.547 ± 0.095 mm) and automated (0.524 ± 0.068 mm) methods was R = 0.74 and an absolute mean bias ± SD of 0.023 ± 0.052 mm was observed. Interobserver and intraobserver ICC were greater for automated (R = 0.94 and 0.99) compared to manual (R = 0.72 and 0.88) methods. Although not considered to be clinically significant, manual measurements yielded higher values compared to automated measurements. Automated measurements were more reproducible and showed lower interobserver variation compared to manual measurements. These results offer important considerations for large epidemiological studies. Introduction Vascular risk assessment has become integral to good clinical practice. Conventional risk factors which are derived from a patient's family and smoking history, blood pressure, and measurement of blood glucose and lipid levels have been used successfully to derive a person's future risk of developing atherosclerotic cardiovascular disease [1,2]. Measurement of carotid intima-media thickness (CIMT), a marker of atherosclerosis risk, can improve individual risk assessment and quantify pathology and/or drug therapy efficacy [3,4]. Because it is noninvasive, easy to perform, and highly repeatable, a number of epidemiological studies have adopted CIMT as a surrogate marker of cardiovascular risk [5][6][7]. Technological advances over the past number of years have improved image acquisition and measurement methods [8]. In tandem with these new methodologies, a number of studies have emerged comparing older manual and newer automated/semiautomated methods [8][9][10][11][12]. However, in some of these previous studies the statistical methods may have been unsuitable or the cohort may have been biased. Our aim was to compare manual and semiautomated methods of measuring CIMT in healthy male and female subjects with very low cardiovascular risk. The rationale for the low risk subjects was to minimise the potential influence of plaque on CIMT measurements. In addition, we examined the intraobserver and interobserver variation of each method. Material and Methods One hundred and twenty-six (68 male and 58 female) subjects were recruited from the general population. The study was approved by Trinity College Dublin Ethics Committee. Written informed consent was obtained from all subjects prior to testing protocols. 
Subjects were included if they were lifelong never-smokers, free from cardiovascular disease, and normotensive (<140/90 mmHg), had normal lipid profile (LDLc < 4.0 mmol/L), normal fasting glucose (fasting glucose < 6.2 mmol/L), and moderate alcohol intake (male < 21 units per week; female < 14 units per week). Subjects were excluded if they were receiving treatment for or had a history of hypertension, hyperlipidaemia, and diabetes or were taking any medications that affected haemodynamic and/or metabolic responses. High-resolution B-mode ultrasound images of the right and left common carotid artery were used to measure carotid intima-media thickness. Patients were scanned in the supine position using 7-12 MHz linear array transducer (Philips HDX7E, Phillips, UK). CIMT was calculated using both manual (Manual) and semiautomated (Automated; QLAB, Phillips, UK) methods. Manual CIMT measurements were recorded from the far wall at 1 cm, 1.5 cm, and 2 cm intervals proximal to the carotid bulb [13]. Automated measurements were also recorded from the far wall, using the same image, from the identical 1 cm section proximal to the carotid bulb. The carotid bulb was defined as the point where the far wall deviated from the parallel plane of the distal CCA. Mean manual and automated CIMT measurements for the right and left CCA were calculated from three consecutive cardiac cycles [14]. Pearson product-moment correlation coefficient and the intraclass correlation coefficient (ICC) were used to examine the relationship between manual and automated methods [15]. The associations of the differences of the mean of the paired measurements (Bland-Altman method) were used to examine absolute differences between the two methods (MedCalc, Belgium). The technical error of measurement (TEM) and ICC of ten randomly selected subjects were used to identify intraobserver reproducibility and interobserver reliability of the two methods [15,16]. An unpaired t-test was used to compare gender differences (MedCalc, Belgium). Values are reported as mean ± SD unless otherwise stated.

Results Subject characteristics and cardiovascular risk factors are outlined in Table 1. There were 65 male and 54 female subjects with a mean age of 40.5 years. No differences in age, diastolic blood pressure, total cholesterol, and LDLc were observed between genders. However, BMI, systolic blood pressure, triglyceride, and glucose were higher and HDLc was lower in males compared to females (p < 0.049). Evaluation of the differences of paired means (Bland-Altman method) identified an absolute mean bias and SD of −0.023 ± 0.052 mm between manual and automated CIMT measurements with limits of agreement of 0.078 to −0.125 mm (Figure 1). The TEM, quantifying the interobserver reproducibility and intraobserver variability, was lower for automated (3.71% and 1.52%) compared to manual (8.11% and 6.30%) methods. As a consequence, the interobserver and intraobserver ICC was greater for automated (R = 0.94 and 0.99) compared to manual (R = 0.72 and 0.88) methods.

Discussion This study highlights that manual measurements yield higher values compared to automated measurements even in subjects with very low atherosclerotic risk. The mean differences of both methods were not clinically significant and no systematic errors were observed. In the absence of a gold standard measurement such as using a phantom, it is unclear which method best approximates real values.
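The Bland-Altman comparison reported above (mean bias and limits of agreement between the two methods) can be computed as in the following sketch. The study itself used MedCalc; the Python version below is only illustrative, and the example values are made up, not study data.

```python
import numpy as np

def bland_altman(manual: np.ndarray, automated: np.ndarray):
    """Mean bias and 95% limits of agreement between paired CIMT measurements (mm)."""
    diff = automated - manual          # sign convention: automated minus manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, sd, loa

# Hypothetical paired measurements in mm
manual = np.array([0.55, 0.60, 0.52, 0.58])
automated = np.array([0.53, 0.57, 0.50, 0.56])
bias, sd, loa = bland_altman(manual, automated)
print(f"bias = {bias:.3f} mm, SD = {sd:.3f} mm, LoA = ({loa[0]:.3f}, {loa[1]:.3f}) mm")
```

The sign of the bias depends on which method is subtracted from which; here automated minus manual is used, matching the observation that manual values tend to be higher.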
The results also demonstrate that automated CIMT calculations are more reproducible and show lower interobserver variation compared to manual calculations. These results offer important considerations where patients may be scanned by different technicians and where the accumulation of small variations may impact results, especially in large scale epidemiological studies. In the present study, Pearson product-moment correlation coefficient demonstrated a strong association between both methods; however, a strong ICC was not observed ( < 0.85) [17]. Pearson product-moment correlation coefficient is not considered to be a robust determination of association whereas ICC represents perfect agreement [16]. This was further emphasised by the mean bias of the Bland-Altman plot where manual measurements yielded greater, although not clinically significant, values compared to automated measurements. Previous studies report no differences between automated versus manual CIMT methodologies whereas other studies report significantly greater values using manual techniques [9,10,12]. Seçil et al. [12] reported significantly greater values for manual CIMT calculations compared to automated calculations. The authors reported that manual measurements were significantly higher (1.3-8.7%) compared to automated measurements. In the same study the authors also reported higher interobserver correlation coefficients for automated methods compared to manual methods. Freire et al. [9] reported no differences between automated and manual CIMT calculations; however automated methods provided lower interobserver and intraobserver variation coefficients. Puchner et al. [10] reported significant correlation ( = 0.86; < 0.01) between automated and manual methodologies with no observable systematic bias in the mean differences (mean difference 0.023±0.034 mm). The authors also reported lower interobserver and intraobserver variation coefficients for automated methods (6.6% and 5.6%) compared to manual methods (14.1% and 11.1%). More recently, Yanase et al. [11] reported similar values for manual and automated methods; yet automated calculations had lower standard deviations and variation coefficients indicating better reproducibility. Furthermore, automated methods were better correlated with Framingham and Prospective Cardiovascular Munster study (PROCAM) risk scores. For the present study, in order to minimise potential measurement inconsistencies caused by abnormal CIMT, focal thickening, or the presence of atheromatous lesions, only subjects with very low cardiovascular risk were recruited. In addition, manual CIMT measurements were averaged from three anatomic sites, over several cardiac cycles from both left and right sides. Despite these precautions, it is possible that outliers may have caused an overestimation of manual CIMT [18]. For automated methods, several hundred measurements are recorded, and so, averaged values would be less susceptible to individual outlier errors [18]. This study does not examine serial changes in CIMT over given time intervals. Such measurements are used in clinical practice as surrogate markers of vascular risk [19]. Larger increments in CIMT are more associated with greater risk of vascular events [20]. However, it is also important to make a clear distinction between changes in CIMT and progression of atheromatous plaque. 
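The reproducibility statistics discussed above (technical error of measurement and intraclass correlation between observers) could be computed along the following lines. This is a sketch only: the paper used MedCalc, the conventional TEM formula is assumed, and the pingouin package is used here for the ICC; the example numbers are invented.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; not used in the original study

def tem(obs1: np.ndarray, obs2: np.ndarray):
    """Absolute and relative technical error of measurement for paired repeats."""
    d = obs1 - obs2
    abs_tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    rel_tem = 100 * abs_tem / np.mean(np.concatenate([obs1, obs2]))  # %TEM
    return abs_tem, rel_tem

# Long-format table: one CIMT value per subject per observer (hypothetical values, mm)
long_df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3],
    "observer": ["A", "B"] * 3,
    "cimt": [0.52, 0.53, 0.60, 0.58, 0.47, 0.48],
})
icc_table = pg.intraclass_corr(data=long_df, targets="subject",
                               raters="observer", ratings="cimt")
print(icc_table[["Type", "ICC"]])
```

Lower %TEM and higher ICC for the automated method would correspond to the better interobserver agreement reported above.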
As atherosclerosis has focal changes more so than uniform changes, variation in CIMT at different segments or changes in maximal CIMT may better represent progression of atherosclerotic disease [18]. Changes in vascular wall properties, characterised by CIMT, represent different disease processes. Standardised definitions of focal plaque structures such as luminal encroachment of 50% or >0.5 mm should be adopted to help differentiate the two distinct diseases processes [18]. Most large scale epidemiological studies have adopted manual methodologies to quantify CIMT with only one study using semiautomated edge detection software [6,[21][22][23]. Based on the results of our study, it is fair to suggest that future studies, particularly interventional and longitudinal studies, should consider adopting automated CIMT methodologies. Conclusion In conclusion, semiautomated measurements of CIMT yield significantly lower values compared to manual measurements and produce lower intraobserver and interobserver variation. Although the differences between manual and automated methods are small and may not be clinically significant, these observations offer important considerations for large epidemiological or longitudinal studies.
Lexical-Semantic Features Of Hyponymy In The Short Stories “The Voyage” And “Dunyoning Ishlari” ( Deeds Of The World) This article focuses on the semantic category hyponymy which is a word or phrase whose semantic field is included within that of another word, its hyperonym or hypernym or more shortly it is a term used to designate a particular member of a broader class in linguistics and lexicography. And current research is aimed at discovering the types of hyponymy category and their comparison in short stories “The Voyage” (in English) and “Deeds of the world” (in Uzbek). INTRODUCTION In the Uzbek linguistics, a number of studies have been conducted in the field of studying the language as a whole. The basics of the system lexicology have been reflected in a number of scientific studies carried out in different periods of science development. Therefore, in her scientific findings prof. R.Safarova divided the ways of system lexicology development in the Uzbek language into the following phases: The American Journal of Social Science and Education Innovations (ISSN -2689-100x) Published: September 30, 2020 | Pages: 606-613 Doi: https://doi.org/10.37547/tajssei/Volume02Issue09-92 IMPACT FACTOR 2020: 5. 525 a) The first phase. The difference between word and lexeme, the semes and the ways of separating them into the main parts , and reflects on exploring semantic structure of some word pairs b) The second phase. The development stage of system lexicology is characterized by combining words into thematic and lexical-semantic groups and studying meaning by dividing it into component parts. At the same time, the principles and fundamentals of system linguistics, system lexicology namely researching lexical units by grouping into lexical-semantic groups were developed in the lexicology of the Uzbek language. The same lexical paradigms, and lexical paradigms have been identified as lexical-semantic groups in the lexicology of the system. Relying on this principle mainly lexical paradigms formed by synonymous senses, a group of words with antonymic meaning, a various thematic and lexical-semantic lines, lexical paradigms were the source of research as particular lexical-semantic groups in the system lexicology. In linguistics and lexicography a hyponym (from Greek hupó, "under" and ónoma, "name") is defined as a word or phrase whose semantic field is included within that of another word, its hyperonym or hypernym (from Greek hupér, "over" and ónoma, "name"). In simpler terms, a hyponym shares a type of relationship with its hypernym. For instance, pigeon, crow, eagle and seagull are all hyponyms of bird (their hypernym); which, in turn, is a hyponym of animal. Words that are hyponyms of the same broader term (that is, a hypernym) are called co-hyponyms. The semantic relationship between each of the more specific words (such as daisy and rose) and the broader term (flower) is called hyponymy or inclusion. [9] Hyponymy is not restricted to nouns. The verb to see, for example, has several hyponyms -glimpse, stare, gaze, ogle, and so on. Edward Finnegan points out that although "hyponymy is found in all languages, the concepts that have words in hyponymic relationships vary from one language to the next". Hyponymy refers to a much more important sense relation by describing what happens when we say "An X is a kind of Y", "A daisy is a kind of flower", or simply, "A daisy is a flower". And there is also stated that "Hyponyms are more specific words that constitute a subclass of a more general word". 
[4] e.g. maple, birch, and pine are hyponyms of tree. MATERIALS AND METHODS In linguistics, semantic analysis is the process of relating syntactic structures, from the levels of phrases, clauses, sentences and paragraphs to the level of the writing as a whole, to their language-independent meanings. It also involves removing features specific to particular linguistic and cultural contexts, to the extent that such a project is possible. The elements of idiom and figurative speech, being cultural, are often also converted into relatively invariant meanings in semantic analysis. Semantic analysis can begin with the relationship between individual words. This requires an understanding of lexical hierarchy, including hyponymy and hypernymy, meronomy, polysemy, synonyms, antonyms, and homonyms.  the cranes standing up so high  and a cart with a small drooping horse A crane is any of a family (Gruidae of the order Gruiformes) of tall wading birds superficially resembling the herons but structurally more nearly related to the rails. Horse in the meantime is a large solid-hoofed herbivorous ungulate mammal (Equuscaballus, family Equidae, the horse family) domesticated since prehistoric times and used as a beast of burden, a draft animal, or for riding. Birds and mammals as described in the definitions are two istinct characters of an animal. Therefore, these two words are classified as animal hyponymy. Type of hyponymy (clothes)  put on her flannel dressing-gown grandma was quite ready  an old sailor in a jersey standing by gave her his dry A dressing-gown is a robe worn especially while dressing or resting. While, a jersey is any of various close-fitting usually circularknitted garments especially for the upper body. The word worn and the word closefitting in the explanation mentioned are the characters of clothes. Hence, these words "dressing-gown" and "jersey" are classified as clothes hyponymy. Type of hyponymy (occupation)  And an old sailor in a jersey standing by gave her his dry  Such a very nice stewardess came to meet them A sailor is a traveler by water while, a stewardess is a woman who performs the duties of a steward, especially one who attends passengers (as on an airplane). The words "sailor" and "stewardess" are both types of occupation. Therefore, these two words are categorized as occupation hyponymy.  But their sweet smell was part of the cold morning. The above examples show how similar one word to another in terms of its hyponymy, however they are used differently depending on the context of the sentences. Based on the findings, it is concluded that in the short story of the Voyage and Dunyoning ishlari (Deeds of the world), there are 22 types of hyponymy category. CONCLUSION As one of the outstanding linguists stated language is a vehicle for communication between people. Therefore current finding will help students of English to know more about hyponymy and the types of hyponymy category so that they can use the range of vocabulary in written on in spoken as well. The descriptive qualitative methodology was used throughout the research, types of hyponymy were investigated by classifying the categories of hyponymy. Based on the results it was revealed that the most dominant type of hyponymy is "part of body" and the least dominant type of hyponymy category are "bird, drink, fruit and occupation". With
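For readers who want to experiment with hyponym-hypernym relations of the kind analysed above, the sketch below queries the WordNet lexical database through NLTK. This is only an illustration of the concept of lexical hierarchy; the study itself classified the words from the two short stories manually by category rather than with WordNet.

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

bird = wn.synset("bird.n.01")

# Direct hyponyms of "bird" (co-hyponyms of one another)
print([s.name() for s in bird.hyponyms()][:5])

# Check whether "pigeon" lies below "bird" in the hierarchy
pigeon = wn.synset("pigeon.n.01")
is_hyponym = bird in pigeon.closure(lambda s: s.hypernyms())
print("pigeon is a hyponym of bird:", is_hyponym)

# Hypernym chain from "pigeon" up to the root concept
chain = pigeon.hypernym_paths()[0]
print(" -> ".join(s.lemmas()[0].name() for s in chain))
```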
Model rainfall-runoff in the constraints Amazon The development of a hydrological model in Amazon region has been a challenge for researchers since historical series of hydrological data in the Amazon region are still insufficient. The aim of this study was based on different samples of settings for calibration and validation of IPH II model, using limited historical series of data of daily average water flow registered in Caiabi river hydrologic basin, a tributary stream of Teles Pires riverMato Grosso. The total area of the study location is 440.98 km2, and there were installed three meteorological automatic stations for climate monitoring, and one linigraph in the basin end for monitoring the altimetric quotas and also to estimate the daily water flow of the river. The precipitation data, evapotranspiration and water flow used to feed the IPH II model were collected between 09/18/2015 and 04/30/2016, using sixty percent of the initial historical data to calibrate and forty percent to validate the model has shown better statistic performance, however rearranging the data and establishing the sixty percent of the central data for the calibration was verified that there was an increase in the statistical performance of the model making the simulations of the IPH II model were successful. It was indicated by the results that the sample methodology for calibration of hydrologic models can bring substantial improvement to the performance. Introduction Reliable historical series of data related to river flows are scarce in Brazil due to the large territorial extension, large quantities of rivers and the lack of investments in hydrological monitoring.In the Amazon watershed, the quantitative monitoring stations are smaller when compared to the Southern and Southeastern regions of the country (NETO et al., 2008), increasing the difficulties in the application of hydrological models in the Amazonic region. In the absence of reliable data, the application of semi-distributed and distributed hydrological models can present unreliable results as these models require a large amount of spatial and temporal data.The water-flow hydrological models are more suitable for this reality since they require small amount of input data and present great potential to extend series of flows in the watershed where there is monitoring of precipitations (TUCCI, 2009).The rainfall-runoff models are important hydrological studies to improve processes that occur with rain and runoff, can be useful in solving problems such as drainage infrastructure, flood forecasting, urban planning and use of the soil (CHANG et al., 2018) Pereira et al. (2016b) compared the performance of the two hydrological models -semi distributed (SWAT) and rainfall-runoff (IPH II) in the simulation of flow in hydrograph basin of Pombo river in the state of Minas Gerais, finding better performance in the rainfall-runoff model after using four years in the calibration of the standards of the model and two years for its validation.Chlumecký et al. (2017) evaluating some methods of calibration of rainfall model can conclude that this type of model can efficiently describe the environmental processes involved in surface runoff.Machado et al. (2017) applied rainfall model in two hydrographic basins of the state of São Paulo and concludes that the results are satisfactory despite the simplicity of the model. 
The monitoring and the use of the hydrologic models are important to the applicability of damming, protective dikes, drainage channels, fore sighting of drought and floods and support to the granting procedure what have been showing great demand.In Brazil the usage of rainfall-runoff models is still restricted to the monitoring of the power plant reservoir making use of precipitation and another climate information (MELLER et al., 2014).The hydrologic models are significant while estimating the river flow on short and medium terms and the hydrological response in hydrographic basins considering the changes in usage and land occupation. In order to apply the hydrological models for these endings it is mandatory quality and reliability while predicting the hydrological behavior so it can be used as a tool in the management of hydric resources (PEREIRA et al., 2016a). Considering those aspects, the calibration of the model parameters is necessary so the simulated values are coherent to the observed data by using either the manual or the automatic way.(TUCCI, 2009) Taking that into account, the aim of this paper is to assess the calibration and validation of the rainfall-runoff IPH II model under different settings of data in order to enable the usage of limited series of data of Amazonic watershed to simulate the daily water flows. Study Area The place of study corresponds to the watershed of the river Caiabi (BHRC), inserted in the hydrographic region of the Amazon, being part of the watershed of the Teles Pires river with the area of 440.98 km 2 and perimeter of 182.65 km (DOR-NELES, 2015), located in the middle of the North region of the State of Mato Grosso according to Figure 1. According to the Thornthwaite classification, the climate of the region is of the type B2wA"a" (Humid presenting moderate hydric deficiency in the winter, with evapotranspiration potential greater than or equal to 1140 mm with less than 48% of the evapotranspiration concentrated in the summer).The rainfall average is 1974 mm concentrated in the summer/autumn and hydric deficiencies in the winter/spring.(SOUZA et al., 2013). The climatic and fluviometric information of BHRC was monitored by the stations listed on table 1. IPH II Model The IPH II (TUCCI, 2005) is a concentrated and deterministic model that requires as input variables precipitation data and reference evapotranspiration.The precipitation of the BHRC was calculated by the method of the arithmetic average with the data of the meteorological stations São José, Fetter and Bedin.The method of estimation of the reference evapotranspiration (ET 0 ) used in this study was that of Camargo (1999), recommended to region according to the work of Tanaka et al. (2016). The IPH II model uses specific algorithms related to evaporation and interception loss, separation of runoff, propagation of the superficial runoff and propagation of the underground runoff, according to detailed description in Bravo et al. (2006).Follows below a brief description of the IPH II model. When the evapotranspiration is lower than precipitation in the model, it is deducted from precipitation and, when the evapotranspiration is greater than the precipitation met by the interception reservoir (permeable areas and depressions), at the time depletion of the reservoir occurs, the evapotranspiration is treated by water in the soil by means of the linear relation presented in equation 1. 
E_t = ET_0 · (S_t / S_max) (1)

where E_t is the evapotranspiration from the soil at time t; ET_0 is the reference evapotranspiration; S_t is the water content of the soil at time t; and S_max is the maximum water content of the soil. The modified Horton algorithm is used to separate the superficial runoff, resulting in two equations (Equations 2 and 3) that relate storage with infiltration and percolation. In these relations, S_t is the water content in the soil at time t (mm); h is e^(-K), where K (h^-1) is a parameter that characterizes the exponential decay of the infiltration curve and depends on the characteristics of the soil; I_0 is the infiltration capacity of the soil when the water content is S_0 (mm d^-1); and I_b is the infiltration capacity when the soil is saturated (mm d^-1). The surface volume propagation is made to the main section of the basin with the Clark method, which consists of a combination of the time-area histogram (HTA) with a simple linear reservoir (TUCCI, 2005). Superficial runoff is propagated by the simple linear reservoir method (Equation 7), where Q_S(t) is the superficial flow at instant t (mm d^-1); K_S is the average emptying time of the superficial reservoir (d); and V_S is the effective precipitation at time t obtained from the HTA (mm). The propagation of underground runoff is also obtained by the simple linear reservoir method (Equation 8), where Q_Sub(t) is the underground flow at instant t (mm d^-1); K_Sub is the average emptying time of the underground reservoir (d); and V_p is the percolated volume (mm). The parameters I_0 (initial infiltration capacity), I_b (minimum infiltration capacity), h (parameter that characterizes the exponential decay of the infiltration curve and depends on the characteristics of the soil), K_Sub (average aquifer emptying time), K_Sup (surface runoff delay time), T_c (concentration time), R_max (initial loss reservoir volume) and Alpha (model parameter used in calculating the percentage of precipitation that drains superficially) were obtained by calibration.

Data In this study the data entered into the model were precipitation, evapotranspiration and flow. Precipitation and evapotranspiration were monitored by three Davis meteorological stations, and the flows were estimated by a rating curve from the stages recorded by an OTT Thalimedes linigraph. The data series used starts on 09/18/2015 and ends on 04/30/2016. The data were arranged for calibration and validation of the IPH II model with the respective percentages 50-50%, 60-40%, 70-30% and 80-20%, so that the best arrangement could be identified; further tests were then carried out in an attempt to improve the performance of the model.

Results and Discussion The quantitative measures of the performance of the IPH II model are displayed in Table 2, for the different settings of the data series used in the calibration and validation stages.
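As a rough illustration of the reservoir components just described, the sketch below implements a linear soil-moisture reduction of evapotranspiration and a simple linear reservoir discretised with an exponential recession, which is the standard form of these elements. The exact equations and parameter handling of IPH II may differ, so this should be read as a conceptual sketch rather than the model's code.

```python
import numpy as np

def soil_evapotranspiration(et0: float, s_t: float, s_max: float) -> float:
    """Linear reduction of reference ET by relative soil moisture (Equation 1 form)."""
    return et0 * s_t / s_max

def linear_reservoir(inflow: np.ndarray, k: float, q0: float = 0.0) -> np.ndarray:
    """Route an inflow series (mm per step) through a simple linear reservoir.

    k is the average emptying time in time steps; the discretised solution
    Q(t) = Q(t-1)*exp(-1/k) + V(t)*(1 - exp(-1/k)) is the usual form used for
    surface (K_sup) and groundwater (K_sub) reservoirs.
    """
    decay = np.exp(-1.0 / k)
    q = np.empty_like(inflow, dtype=float)
    prev = q0
    for i, v in enumerate(inflow):
        prev = prev * decay + v * (1.0 - decay)
        q[i] = prev
    return q

# Tiny example: a 5 mm effective-rainfall pulse routed with an emptying time of 2 days
print(linear_reservoir(np.array([5.0, 0.0, 0.0, 0.0]), k=2.0))
```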
The 60-40% settings presented the highest determination coefficients (R2) in the calibration being 0.065% higher than the second place; In the validation the 80-20% setting was the one that showed the highest coefficient of determination.These values are similar to those found by Germano et al. (1998) while applying the IPH II model in the simulation of flows in small urban watersheds.According to Legates and MACCBE (1999), being the coefficient of determination an indicative of precision, the conclusions based only on that coefficient may be mistaken, making necessary the application and interpretation of a set of statistical indexes to avoid mistakes (PEREIRA et al., 2016b). 50-50% Calibration 0,996 0,439 0,415 0,410 0,272 0,339 0,804 The evaluation of the accuracy of the simulations of the flows generated by the different settings, through the coefficient of Nash-Sutcliffe (E), indicated superiority of the settings 60-40% in relation to the others.According to Silva et al. (2008), the values of the Nash-Sutcliffe coefficient (E) above 0.75 indicates reliable performance of the model, and between 0.36 and 0.75 acceptable performance what demonstrates that, the IPH II model even being calibrated and validated with a limited series of data, showed satisfactory performance found, when the arrangement was used with 60% of the data for calibration and 40% for validation. Several hydrological modeling works are found in the literature, with a wide variation of the values of Nash-Sutcliffe (E), all attribute these variations to inconsistency of the input data of the model, errors of obtaining data in the measures stations, the non-distribution of soil parameters in the watershed and also in distributed models (BLANCO et al., 2007;PAIVA et al., 2011;ASADZADEH et al., 2016;PEREIRA et al., 2016a). Applying the Nash-Sutcliffe log, the values found in the validation improved (0.548) in the 60-40% arrangement, this is the case in which the Nash-Sutcliffe log makes the weight of the errors of the smallest and highest equivalent flows, demonstrating the accuracy of the model while simulating the downturn flows of hydrogram (KRAUSE et al., 2005). The deviations presented (MAE) by the different settings used to calibrate and validate the model IPH II, showed that the 50-50% arrangement obtained the best adjustment, with its predictions deviation around 5% in relation to the observed data.However, the 60-40% arrangement was the one which presented less deviation in the validation phase, with 12.6% when compared to the average of the observed data.The deviations found were minor in the study of Pereira et al. (2016b) in which the deviation represented 18% of the average observed in the calibration and 20.6% in the validation. The spreading (RMSE), demonstrated that the variation of the simulated values for the same observed value (Oak et al.,2015) in the calibration with the arrangement 50-50%, presented the smallest error and in the validation of the 60-40% arrangement. The index of Wilmott (d) (table 2) showed that calibration and validation using the 60-40% arrangement obtained good adjustments (0.936 and 0.784), because the closer to 1 the better the accuracy of the model (Wilmott, 1982). 
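The goodness-of-fit statistics used above can be reproduced with straightforward formulas; a sketch is given below with observed and simulated daily flows as NumPy arrays. The Willmott index follows its standard 1982 definition, and the small epsilon in the log version is an assumption to avoid log of zero; the authors' exact implementations may differ slightly.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_log(obs, sim, eps=1e-6):
    """Nash-Sutcliffe on log-transformed flows (gives more weight to low flows)."""
    return nse(np.log(obs + eps), np.log(sim + eps))

def mae(obs, sim):
    """Mean absolute error."""
    return np.mean(np.abs(obs - sim))

def rmse(obs, sim):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def willmott_d(obs, sim):
    """Willmott (1982) index of agreement."""
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - sim) ** 2) / denom

# Usage idea: split the daily series 60-40 into calibration and validation segments
# and evaluate each metric on both segments, as in Tables 2 and 3.
```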
According to the statistics Standards, the 60-40% setting presented better performance than the others.While trying to enhance results, the process was inverted using 60% of the final data of the series for calibration and 40% of the initial data for validation.By analyzing Table 3 we can see that the Determination Coefficient (R 2 ), Nash-Sutcliffe (E), Nash-Sutcliffe logarithm (E log ) and Willmott Index (d), presented lower values when compared to the findings of the calibration-validation 60-40%.The MAE (absolute average error) and the RMSE (average square error) presented higher values in calibration and lower values in validation phases.While using the 60% range of data in the middle of the data series and validating with 20% of the initial and final data, the IPH II model underestimated the values of the observed flow rates in both the calibration and the validation, being the coefficients of (R 2 ), Nash-Sutcliffe (E) and Nash-Sutcliffe log (E log ) increased by around 30.9, 36.9 and 42.9%, when compared to the 60% Cal-40% Val setting in the validation. The absolute average error (MAE) and the Average Square Error (RMSE) in the calibration of the 20% Val-60% Cal-20% Val setting were higher than in the 60% Cal-40% Val setting, considering that in validation the errors decreased around 21.2 and 5.2% in relation to the setting 60% Cal-40%Val. Considering the hydrographs of the daily average flows observed and simulated by the IPH II model in Figure 2, it is observed that the use of the 20% Val-60% Cal-20% Val setting, the sampling of the data used in the calibration can better represent the larger flows and smaller, reflecting a better validation performance (TUCCI, 2009). It is possible to observe agreement between the data observed and simulated by IPH II, but in all settings the model had difficulty in representing the peak values, mainly in the validation.According to Pereira et al. (2016b) peak values are naturally difficult to simulate by hydrological models due to rainfall variability and low concentration time in river basins.This difficulty is also found by Andrade (2013) and Pereira et al. (2014aPereira et al. ( , 2014b)). Table 4 shows the calibrated parameters of the IPH II model, of the 60-40% arrangement variations. The parameter I 0 ranged from 10.000 to 37.971 mm h-1, indicating that on the 20%Val-60% Cal-20% Val calibration model it is identified less humidity in the soil, obtaining higher values of water infiltration in the soil at the beginning of precipitations (Jarvis et al., 2013) being those influenced by the low frequency of rainfall and not allowing the soil drenching (LIU;CHEN, 2015). The values of I b , ranged from 5.122 to 9.999 mm h -1 , and were higher than those found for Germano et al. (1998), 0.1 to 0.6 mm h -1 , in small urban watersheds; and Pereira et al. (2016b), who found 2.440 mm h -1 in a hydrographic basin area of 1650 km 2 . According to Alagna et al. 
(2016), I b referred to infiltration speed when all the porosity of the soil is filled with water.Under such conditions, the pore continuity directly affects values of infiltration.The land use in the area of the hydrographic basin of Caiabi river, in its vast majority, is formed by rotative agriculture of soy and corn, what may contribute to stability of aggregates (MALIK et al., 2012) and total porosity-micro and macroporosity (WENDLING, 2012), increasing the continuity of pores, and so improving the hydraulic conductivity of these soils.These facts may justify the largest I b values found in relation to those found for Germano et al. (1998) and Pereira et al. (2016b).(I0) = initial infiltration capacity (Ib) = minimum infiltration capacity, h = soil type, (Ksub) = average time of emptying of the aquifer, (Ksup) = lag time of runoff, (Rmáx) = volume of initials losses of reservoir, Alpha parameter used in the calculation of the percentage of precipitation that drain superficially (Bravo et al., 2006). Conclusion The different settings used for the calibration and validation of the IPH II model demonstrated to be a potential technique in order to calibrate and validate the rainfall-runoff hydrological models, under restricted data conditions. Regardless the short series of data used in this study, statistical indexes show that the simulations made with the IPH II model were satisfactory for the hydrographic basin of river Caiabi, indicating a functional tool for managing water resources in the Amazon region lacking spatially and temporally data. Despite the simplicity of the rainfall models, they presented precise results even though there were data restrictions, placing them as an alternative of use in the regions that have little monitoration, such as the Brazilian Amazon. Acnowledgements We would like to thank the Fundação de Amparo a Pesquisa do Estado de Mato Grosso for the Masters´ scholarship given to the first author, the partnership made with the Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA-Agrossilvipastoril), who lent the linigraph used in this study and the rural producers for taking the area for hydro-meteorological equipment installation, and to the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) by means of the financing of a great part of the research with the promotion granted in Edital CNPq N o .014/2011. Figure 1 - Figure 1 -Location map and instrumentation of the basin area of river Caiabi h) .(I 0 -I b )]
Atmospheric Chemistry and Physics Abstract. The relationship between nucleation events and spectral solar irradiance was analysed using two years of data collected at the Station for Measuring Forest Ecosystem-Atmosphere Relations (SMEAR II) in Hyytiala, Finland. We analysed the data in two different ways. In the first step we calculated ten nanometer average values from the irradiance measurements between 280 and 580 nm and explored if any special wavelengths groups showed higher values on event days compared to a spectral reference curve for all the days for 2 years or to reference curves for every month. The results indicated that short wavelength irradiance between 300 and 340 nm is higher on event days in winter (February and March) compared to the monthly reference graph but quantitative much smaller than in spring or summer. By building the ratio between the average values of different event classes and the yearly reference graph we obtained peaks between 1.17 and 1.6 in the short wavelength range (300--340 nm). In the next step we included number concentrations of particles between 3 and 10 nm and calculated correlation coefficients between the different wavelengths groups and the particles. The results were quite similar to those obtained previously; the highest correlation coefficients were reached for the spectral irradiance groups 3--5 (300--330 nm) with average values for the single event classes around 0.6 and a nearly linear decrease towards higher wavelengths groups by 30%. Both analyses indicate quite clearly that short wavelength irradiance between 300 and 330 or 340 nm is the most important solar spectral radiation for the formation of newly formed aerosols. In the end we introduce a photochemical mechanism as one possible pathway how short wavelength irradiance can influence the formation of SOA by calculating the production rate of excited oxygen. This mechanism shows in which way short wavelength irradiance can influence the formation of new particles even though the absolute values are one to two magnitudes smaller compared to irradiance between 400 and 500 nm. Introduction Atmospheric aerosols are amongst other constituents responsible for light scattering, cloud formation and heterogeneous chemical effects and they are a key factor in balancing global climate (e.g.Houghton et al., 1996).There are two main sources for atmospheric aerosols: the emission of particlesnatural or anthropogenic -and the gas-to-particle transfer by homogeneous or heterogeneous nucleation of supersaturated vapours.The formation of secondary aerosols has been extensively studied in different environments in the last decades (e.g.free troposphere: Clarke, 1993;marine: Raes et al., 1997;coastal: O'Dowd et al., 1998; continental boundary layer: Kulmala et al., 2001b;Nilsson et al., 2001).Several nucleation mechanisms have been developed in the past few years to explain the observations of particle bursts in the atmosphere.The best understood way up till now is the binary nucleation of H 2 SO 4 and H 2 O (Kulmala et al., 1998) or the ternary nucleation of H 2 O, NH 3 and H 2 SO 4 (Korhonen et al., 1999).According to Kulmala et al. 
(2000) binary nucleation theory is not able to predict the observed nucleation rates in the atmosphere at typical tropospheric sulphuric acid concentrations (10 5 − −10 7 cm −3 , Weber et al., 1998;Weber et al., 1999).Ternary nucleation, however, gives significantly higher nucleation rates and thus can better predict the formation of new particles at typical tropospheric conditions (ammonia at a level of a few ppt).Kulmala et al. (2000) suggest that nucleation occurs almost everywhere in the atmosphere, at least during the daytime and leads to a reservoir of thermodynamically stable clusters (TSCs), which under certain conditions grow to detectable sizes.However we still do not exactly know under what kind of meteorological and physical conditions the growth of these TSCs will occur and which precursor gases are necessary.c European Geosciences Union 2002 M. Boy and M. Kulmala: Influence of spectral solar irradiance In our last publication (Boy and Kulmala, 2002) we suggested that UVA solar radiation is one key parameter for the formation of new particles.We calculated ratios of UVA to different solar bands (PAR -photo synthetically active radiation, reflected PAR, global, reflected global and net radiation) and plotted these ratios against the number concentrations of particles between 3-5 nm during the time the particle bursts occurred.Our analysis for that work was based on radiation sensor data from 1999.In January 2000 we installed a radiospectrometer -measuring solar irradiance between 280-580 nm -in Hyytiälä and in this experimental series we used continuous measurements made with this instrument to investigate more detailed information about what part of the solar spectrum has the highest influence on the formation of newly formed aerosols. Measurements Data were collected at the Station for Measuring Forest Ecosystem-Atmosphere Relations (SMEAR II) in Hyytiälä, Finland. The station is located in Southern Finland (61 • 51 N, 24 • 17 E, 181 m asl), within extended areas of pine-dominated forests.For a detailed description of the SMEAR II station and instrumentation, we refer to Vesala (1998).The conditions at the site are typical for a background location, however, occasionally measurements were polluted by the station buildings (0.5 km) and the city of Tampere (60 km) both located in a west-south-west direction from the instruments. Nucleation events have been classified into A, B and C classes (Mäkelä et al., 2000) and an extra group (marked by S) for days with small indications that the formation of new particles had occurred but not enough indications to classify the formation as an event.Class A events are categorised by high amounts of 3 nm particles and continuous growth to larger particle sizes.Class B events show the same behaviour with less clarity and class C events are marginal nucleation events where one of the two stages was not clearly observed. This type of classification is quite subjective and takes into account the uncertainties and limitations of the instrumentation.Because of this, there will always exist an overlap between the classes.There are new numerical methods which have been published (Birmili et al., 2001) to classify different event days by the maximal number concentration of particles in the nucleation mode, the background aerosol concentration and the characteristic times for the concentration curves of the newly formed particles increase and decrease. 
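As a rough illustration of such numerical criteria, the sketch below flags candidate event days from the quantities the cited methods use: the nucleation-mode concentration, its duration above a threshold, and its level relative to the background. The 400 cm^-3 level echoes the threshold used in Table 1 of this paper; the remaining numbers are arbitrary placeholders, not values from Birmili et al. (2001) or from this study.

```python
import pandas as pd

def flag_event_days(n_nuc: pd.Series, n_background: pd.Series,
                    conc_threshold: float = 400.0, min_hours: float = 1.0) -> pd.Series:
    """Toy classifier: a day is a candidate event day if the nucleation-mode
    concentration (cm^-3) stays above a threshold for a minimum duration and
    clearly exceeds the background concentration. Thresholds are placeholders."""
    df = pd.DataFrame({"nuc": n_nuc, "bg": n_background})
    above = df["nuc"] > conc_threshold
    # DMPS records every 10 minutes: 6 records per hour
    long_enough = above.groupby(df.index.date).sum() >= min_hours * 6
    exceeds_bg = (df["nuc"] > 2 * df["bg"]).groupby(df.index.date).any()
    return long_enough & exceeds_bg
```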
These methods may have some advantages compared to our technique of looking at all the days and deciding in a more or less subjective way the class of the event.However there are still disagreements in the scientific community about the best way numerical solutions can be used for classification and all numerical methods need to be modulated to the location.For these reasons we used for this work the old classification system for the events.In Table 1 all events of 2000 and 2001 are listed including the start and the end time of the particle bursts and some extra parameters which will be explained later.The monthly distribution of A to C events for 2000 and 2001 (Fig. 1) shows two peaks: the first one in spring (March till May) with 40% of event-days and a second smaller one in autumn (August and September) with 25% of the events. The spectral solar radiation data were measured using a radiospectrometer system produced by Bentham (England).The system consists of the following four components: The whole system is placed above the tree level in a small wooden cottage on a 10 m high building to insure an undisturbed solar irradiance throughout the year.The diffuser is protected by a quartz-glass which has a high transmittance (94-96%) in the measured wavelength-range and dry air is streaming permanently into the dome to prevent condensation.The calibration of the glass dome is made by measuring Table 1.Date, start and end time of the particle bursts; Time of the day spectral irradiance (300-339 nm) reaches 600 mW m −2 ; Time of the day particle concentration (3-10 nm) exceeds 400 particle cm −3 ; Maximal spectral irradiance (in 300-339 nm) and maxima particle number concentration (in 3-10 nm) for all A-and B-events of 2000 solar radiation on cloudless days with and without the glass about twice per year. The calibration sources include two calibrated Bentham CL6-H lamps (150 W, 250-2500 nm, in a housing with mounting for a direct connection to diffuser), a current stabilised power supply 250 W with automatic current ramp up/down facility and a mercury calibration lamp with a mounting for direct connection to the DM150.The signal calibration is carried out once a month with one of the two CL6-H lamps and once every 3 months the second CL6-H lamp is used as a reference emitter to recalibrate the first lamp if necessary.The wavelengths are checked also once every three months and in the two years a maximum wavelength shift of 0.4 nm at 253.65 nm was detectable. The spectroradiometer has been making measurements every 30 minutes since 28 January 2000.The scans are from 280-580 nm and the step-width is 1 nm.The row data are stored and recalibrated afterwards to enable later corrections of the data if necessary. A Differential Mobility Particle Sizer (DMPS) system (located near the mast) monitors aerosol size distributions at a height of 2 m from ground level giving a continuous view of the distribution and evolution of sub-micrometer aerosol particles.The DMPS system used here actually consists of two components.The first one includes a TSI 3025 UFCPC and a Hauke-type short DMA (Differential Mobility Analyzer) and measures particles between 3 and 20 nm in dry diameter.The second includes a TSI 3010 CPC and a Hauke-type medium DMA capable of measuring particles between 20 and 500 nm.Particle size distribution is recorded every 10 min.A detailed description of this system is given in Jokinen and Mäkelä (1997) and Mäkelä et al. (1997). 
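Before the analysis, the 1 nm resolution scans have to be reduced to the 10 nm wavelength groups and daily solar energies used in the next section, and brought onto the same half-hour grid as the particle data. A minimal sketch of this bookkeeping is shown below; the variable names, the synthetic example spectrum and the unit handling are illustrative assumptions rather than the actual processing chain used for the SMEAR II data.

```python
import numpy as np

# One spectroradiometer scan: irradiance in mW m-2 nm-1,
# measured from 280 to 580 nm at 1 nm steps (301 values).
wavelengths = np.arange(280, 581)          # nm

def group_average(scan, group_width=10):
    """Average a 1 nm resolution scan into consecutive wavelength groups
    (280-289, 290-299, ... nm), as used in the analysis below."""
    n_groups = scan.size // group_width
    return scan[:n_groups * group_width].reshape(n_groups, group_width).mean(axis=1)

def daily_energy(scans, times_h, group_width=10, t_start=8.0, t_end=16.0,
                 scan_interval_s=1800.0):
    """Integrate the group-averaged irradiance over 08:00-16:00 local time.
    `scans` is (n_scans, 301); `times_h` gives the local time of each scan."""
    mask = (times_h >= t_start) & (times_h <= t_end)
    groups = np.array([group_average(s, group_width) for s in scans[mask]])
    return groups.sum(axis=0) * scan_interval_s   # roughly mJ m-2 nm-1 per group

# Example with synthetic data: 48 half-hourly scans of a smooth spectrum
times = np.arange(0, 24, 0.5)
scans = np.maximum(0.0, np.sin(np.pi * times / 24.0))[:, None] * \
        np.linspace(0.1, 500.0, wavelengths.size)[None, :]
E_G = daily_energy(scans, times)
```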
Concentrations of ozone were measured with a TEI 49 (Thermo Environmental Instruments) gas analyser based on O 3 specific absorption of UV light.Air samples were collected from the mast at heights of 4.2 m, 16.8 m and 67.2 m every 5 min.Temperature (measured with PT-100-sensors) were collected every 50 s at these three heights as well. Concerning the classification of the events The importance of solar irradiance for the formation of new particles and the growth of these particles to the Aitken mode has been described in many papers (Birmili and Wiedensohler, 2000;Clement et al., 2000;Kulmala et al., 2001a).However, it is still an open question as to what part of the solar spectrum is responsible for the realization of these processes.To date, nearly all publications have used measurements of global solar irradiance as the radiation parameter. In this work we analysed data from a radiospectrometer measuring solar radiation from 280 to 580 nm with a step width of one nm to gain detailed information about which wavelength range of the solar spectrum has more influence on the formation of new aerosols. In the first step we divide the measured spectral solar irradiance ISPR into 30 groups with 10 nm wavelength ranges and calculate the average solar irradiance per group per scan per nanometer by According to our own experience gained from analysing data for two years and the results of Mäkelä et al. (2000) nearly all of the particle bursts occurred between 08:00 and 16:00 LT (see also Table 1).For these reasons in Eq. ( 2) we calculate the solar energy E G for each wavelength group and day during this time period. Figure 2 shows the E G curves for 15 A-Events in 2000 and 2001 (monthly representative selection).The curves show the same trend but there is a difference of more than one magnitude in all wavelength groups between the highest solar irradiance on 18 May 2000 and 5 February 2000.Although we can produce these plots for all event and non-event days for the two years, the plots will not give us differences in the spectral distribution for different days.Therefore we normalise every day by dividing all wavelengths groups by the mean value of this day between 330 and 380 nm.We then obtain Av, the average of this wavelengths interval, and can then calculate a normalised solar energy EN G for every day and wavelength group: The reasons we choose the 330 to 380 nm wavelength interval is the nearly linear trend with different slopes for all these curves throughout the year and the fact that irradiance in this range is mostly diminished by the scattering of permanent gases in the atmosphere and not by water vapour (Seinfeld and Pandis, 1998).In order to compare different event days with all days we now calculate first in Eq. ( 4) an average normalised spectral solar energy graph. with the number of all measured days NE = 546.The graph of EN G,N can now be used as a reference graph of normalised spectral distribution for all days in 2000 and 2001.Now we divide every wavelength group of the A-event days in Fig. 2 with the corresponding values from Eq. ( 4) by The results of Eq. ( 5) are plotted for all A-events in Fig. 
3.The data for wavelengths numbers smaller than 300 nm are uncertain since in this wavelength range we are most of the year at the detection limit of the instrumentation with values smaller than 1 mW m −2 .However, we recognise on all days in April until September (green, red, magenta and yellow curves) a steep rise toward smaller wavelength numbers starting between 330 and 340 nm.This can be equated with an increase in solar radiation by a factor up to 2. On the other side the event days in February and some of the days in March (blue curves) behave in the opposite way with a decrease of spectral irradiance at wavelengths below 340 nm.The right hand side of all the curves are more bunched and mixed than the left side with a weak slope toward higher wavelength numbers.The rest of the A-and B-event days show a similar trend to those in Fig. 3 with a steep increase below 340 nm from April to September and a decrease in the same wavelength range in the autumn and winter months.The reason for this behaviour is physical.In the Finnish autumn and winter the solar zenith angle is always larger than 60 • and so the pathway of the solar beam through the atmosphere is much longer compared to summer or spring.Rayleigh scattering is more effective for smaller wavelength than for larger once and this has the consequence that our calculated yearly reference curve is inadequate.To avoid this we calculate by Eq. ( 6) equal to Eq. ( 4) and Eq. ( 5) a spectral solar reference curve for every month and the ratios for every event day to the corresponding month: with MD the amount of measured days per month in 2000 and 2001 (e.g.February = 57).In Fig. 4 the calculated ratios of R A,M (j, A(d, m)) are plotted for the A-events of Fig. in February and the events in March 2000 show a steep increase in short wavelength radiation in comparison with the reference curve for the corresponding month.The highest increase is by a factor of 2 for the day in March with the smallest amount of solar irradiance (see Fig. 2).We also found that in spring and summer higher and smaller values of spectral radiation between 300 and 340 nm on the event days are well mixed.If we combine the results from Fig. 3 and 4 we can conclude that short wavelength irradiance between 300 and 340 nm is higher on most event days in winter compared to a normalised reference graph for all days of the corresponding month but quantitative still much smaller than in spring or summer.It appears that on days with a low amount of solar energy the relative high values of irradiance between 300 and 340 nm seem to be important.The same results presented here for some of the A-events in 2000 and 2001 can be seen for all the event days.This indicates that for the formation of new particles the responsible solar radiation band is short wavelength irradiance in UV-B and the first 10 to 20 nm of UV-A.We calculate now for all event classes throughout the two years a comprehensive mean value for every wavelength group according to Eq. ( 4) with NE being now the number of measured event days per class and divide then these values by the values of our reference graph of normalised spectral irradiance for all days in 2000 and 2001 (EN G,N ).The average irradiance per class and the results of the above calculations are shown for all classes in Fig. 
5.The amount of solar energy is for A-, B-and C-events in all wavelength groups about 2 times higher compared to the average for all the days in the two years and about 3 times higher compared to the non-event days.The graphs further show an increase in the short wavelength range between 300 to 340 nm and a continuously light increase towards higher wavelengths. Concerning the number concentration of the particles So far we have only compared the spectral solar irradiance with a classification of the single days into those with events and non-events.Furthermore, we have included half hour average values of number concentrations of different particle size ranges (size ranges: 3-5 nm, 3-6 nm, 3-10 nm and 3-50 nm).We have calculated correlation coefficients between the number concentration of the particles and the irradiance in the different wavelength groups for every day of the two years.The results with the highest correlations are those, which use a particle size of 3-10 nm.The differences in the correlation coefficients between the smaller size ranges and the 3-10 nm range is negligible (< 0.02).It could be due to the fact that at this small range 3-5 nm or 3-6 nm we are measuring particles at the detection limit of the DMPS system.So our results are more reliable if larger particles such as 3 to 10 nm were included for the following discussion.The size range 3 to 50 nm also includes the Aitken mode aerosols and here the correlation coefficients reach only half the previous values.This may be due to the fact that in this size range beside condensational growth, coagulation plays an important role and the influence of solar radiation is less important. Figure 6 gives the average correlation coefficients between the number concentrations of particles between 3 and 10 nm and the spectral solar irradiance for the different classes in a histogram plot.The solid lines in each subplot mark the maximum value of the correlation coefficient in each class. The number of measured days per class is included in each subplot.All of the three event classes (A, B and C) have the highest correlation in the wavelength groups 3-5, which corresponds to the short wavelength range between 300-329 nm.After 330 nm the gaps between the solid lines and the bars in the first three subplots increase slightly towards higher wavelengths and reach a maximum around 30% at 580 nm.The absolute values of the correlation coefficients are not very high (around 0.6 for A-events) but we have to remember that these numbers are averages over all events per class and that in this context more than for the absolute values, the differences of the correlation coefficients between the single wavelength groups are interesting. Case study for 5 May 2002 We will use now an example-day (5 May 2000) and present reasons for the higher correlation coefficients of the short wavelength bands.In order to do this we plotted the daily particle number concentrations for particles (3-10 nm), short (300-339 nm) and longer wavelength irradiance (460-469 nm) in Fig. 
7.The graph firstly shows that longer wavelength irradiances start to increase earlier as shorter (green compared to red curve).This is a physically well-known effect and because particle bursts always appear after sunrise and stop mostly long before sunset (see Table 1) this leads to higher correlation coefficients between UV solar radiation and the particle number concentrations.The second reason for the higher coefficients can be explained by the three peaks in the short wavelength irradiance curve marked by black lines (a, b and c).The irradiance peaks at line (a) and (c) in the morning and in the afternoon also appear in the particle curve during the next hour.The peak at line (b) around noon has no time lag and occurs at the same time in the radiation and particle concentration.Such patterns of solar radiation and particle number concentration curves occur on many event days.Most times the particles trends and peaks better fit the shorter wavelength of the solar spectrum by having the smallest time lags between peaks around noon when the pathways for the solar beam through the atmosphere reach their minima.However there are still event days where the correlation is quite small, but before making any conclusions we should consider two important facts: -The radiospectrometer and the DMPS system are about 200 m apart from each other and the influence of moving clouds are not negligible for the solar irradiance at this distance. -For both data sets (spectral irradiance and particles) half hour average values were used.A time step is necessary for handling all the data in reasonable computer time, however many interested features of the daily trend are neglected. Time lag between irradiance and particle increases At the end of this session we use for each parameter -short wavelength radiation and particle concentration -one selected value to calculate the time difference between the two curves.For the particle we took the time when the concentration exceeds 400 particles cm −3 (blue dot in Fig. 7) and for the irradiance we chose 600 mW m −2 (red dot in Fig. 7).The chosen values for irradiance and number concentration of particles are subjective selections for the situation in Hyytiälä, Finland and are not competitative for other locations.Figure 8 shows for all A-and B-event days for both parameters the specified times and the time lack (see also Table 1).There is a trend -especially in 2001 -that the time differences (green dots) with values around 0.5-2 h are smaller in winter till the beginning of spring than in summer and autumn (2-7 h).However, for the first event day on 5 February the particle concentration exceeds already two hours earlier the amount of 400 particles cm −3 before the irradiance reaches 600 mW m −2 .A more detailed analysis of different parameters of this day showed that on 5 February the highest ozone concentrations (39 ppb) between 1 January till 11 March were measured and that the concentration of H 2 O was as small as 10 17 molecules cm −3 .Further biological activity measured by CO 2 flux measurements in chambers were going on at this day.Bonn et al. 
(2002) investigated in laboratory experiments the highest ozonolysis-rates of monoterpenes and special of exocyclic monoterpenes (βpinene and sabinene) for low water vapour and high ozone concentrations.Exactly this physical situation can be seen on day 36 of 2000.This feature and the results above indicate that there are most probably different chemical and photochemical mechanisms responsible for the production of the condensable vapour/s.From the high conformity between the short wavelength spectrum and the particles it appears that radiation leads to the formation of new aerosols on many event days, however, on other event days different mechanisms such as the ozonolysis of monoterpenes seem to be more important. A potential mechanism explaining the indirect influence of short wavelength irradiance on the formation of SOA In the previous session we showed that the short wavelengths range between 300 and 330 or 340 nm seems to be the most important spectral solar radiation band concerning the formation of new particles or the growth of new clusters to the detectable 3 nm size.In this session we will continue with this result and present a photochemical reaction mechanisms, as a hypothesis explaining the possible indirect influence of short wavelength irradiance on the production of newly formed SOA.This is only one out of possible many different mechanisms participating in the formation of aerosols and the reason for presenting it here was to show one way besides any other photochemical reactions or the onset of vertical fluxes how solar irradiance and specially short wavelength irradiance can influence the formation of SOA. First we calculate with the radiospectrometer data (average values of 5 nm ranges) between 280 and 350 nm for every half hour and day the average number of photons per 5 nm intervals. with W L(k) the wavelength per group (= 282.5, 287.5,. . ., 347.5 nm), h the Planck constant and c o the speed of light in a vacuum.Further we include half hour average data for ozone and temperature in our analyses and calculate with the absorption cross section (ACS O3 - Molina and Molina, 1986) and the quantum yield (Q O3 -JPL publication 00-003, 2000) of ozone the photolyse rate for O 3 at the same times as in Eq. ( 7). With the photolyse rate and the half hour average values of ozone we now calculate in Eq. ( 9) the production rate of O( 1 D) as a function of time and wavelength Figure 8 shows as an example the production rate of O( 1 D) for 15 May 2000.In the x-axis a maximum occurs around noon when the solar zenith angle has its minimum and in the y-axis a maximum occurs around 310 nm with a gradient of one magnitude in less than 15 nm in both directions.The maximum in the y-axis is a combination between the decreasing of the absorption cross-section and the quantum yield of ozone with higher wavelengths on one side and the steep increase of the spectral irradiance in the same direction on the other side.Although the absolute values of solar irradiance in the wavelength range from 300-330 nm is one to two magnitudes smaller than the maximum values between 450-500 nm this spectral solar band is the only one which enables the production of excited oxygen radicals in the troposphere.The produced O( 1 D) most often collides with N 2 and O 2 , removing its excess energy and quenching back to its ground state by theory by calculating the maxima of O( 1 D) production per day and plot these data for the whole year in Fig. 
9.Most of the event days and especially the events in winter show very high values for excited oxygen production rate compared to the none-event days in the same month.However, there are still many none-event days with high values for this parameter; but it is well known that besides radiation other variables like the condensational sink, the temperature and the concentration of some till now unknown precursor species also influence the formation process of new aerosols. Summary and conclusions We analysed two years of solar spectral irradiance data and number concentrations of particles in different size ranges.It has been showed for the classification in events and nonevents that there exists an increase in short wavelength solar radiation on event days.By normalising all daily average spectral radiation curves with the mean value of 330-380 nm, calculating ratios between the normalised values of events to the reference curve for all days of the two years or to reference curves for all days of the corresponding month respectively, we obtained the following results: The short wavelength irradiance between 300 and 340 nm on many event days in autumn and winter shows an increase compared to the reference curve for the corresponding month.During the rest of the year this trend disappears, however the absolute amounts of solar irradiance in this range is still as much as one magnitude higher in spring and summer.Using the same normalised values as before and calculating the ratios of the average of the different event classes to the reference curve of the two years we found a peak between 1.17 and 1.6 in the short wavelength range for all classes (Fig. 5) and a weak continuous increase towards higher wavelengths. Furthermore we calculated for every day the correlation coefficients between number concentrations of particles (3-10 nm) and the different wavelengths groups.The graphs showed the same results as above with the highest correlation for the short wavelength range being between 300 and 330 nm.A more specific answer to the reasons for the differences in the correlation coefficients was given by using an example day (5 May 2000).Plotting the curves for particle number concentration and the spectral irradiance for short (300-339 nm) and longer (460-469 nm) wavelengths brought two aspects into focus.First, a temporal later increases of shorter wavelengths and second a higher agreement on most event days between the peaks of the particle curves and the peaks of the short wavelengths groups.The first effect is a well known physically aspect and it is attributed to the fact that particle bursts always occur after the sunrise and vanish long before sunset.The second point indeed more indicates a direct influence of the short wavelength solar spectrum on the formation of newly formed aerosols.Further we plotted the time differences when the solar radiation (300-339 nm) exceeds a value of 600 mW m −2 and the particle number concentration increases to 400 particles cm −3 .The results showed smaller time lags in winter and spring compared to summer and autumn with one day 5 February where the particle number concentration exceeded the selected value already two hours earlier than the irradiance.However this day also had the highest ozone concentration during winter (39 ppb), a very low amount of water vapour (< 10 17 molecules cm −3 ) and biological activities.Bonn et al. 
(2002) found in laboratory experiments that ozonolysis of monoterpenes had the highest rates at small concentrations of H 2 O and high concentrations of ozone.All conditions were present on 5 February.This indicates that during special periods different chemical and photochemical mechanisms are responsible for the production of condensable vapours and so for the formation of new aerosols. In session 4 we presented a hypothesis to explain how the evaluated part of solar irradiance affects the production of condensable vapours and so the formation of new aerosols.For this reason we used half hour average values of ozone and temperature to calculate the production rate of excited oxygen atoms.The results of this analysis showed a maxima of O( 1 D) around noon and at 310 nm with a decrease of more than one magnitude below or above 295 and 325 nm, respectively.O( 1 D) is the main source in the troposphere for the production of hydroxyl radicals and the above introduced part of the solar spectrum is the only way to produce excited oxygen atoms in the troposphere.OH radicals are the most reactive species in the troposphere steering many atmospheric chemical reactions and could also be involved in the formation of new particles through chemical reactions, which produce the condensable vapours. We conclude this publication with a correction concerning our last paper (Boy and Kulmala, 2002).As mentioned in the introduction we suggested in that paper UV-A to be the responsible solar radiation parameter for the formation of new aerosols by using a data set of different radiation sensors for 1999.Comparing the UV sensors data of 2000 and 2001 with the calculated UV radiospectrometer data we realised that UV-A measurements of the sensor were continuously approximately 10% too high and UV-B showed a strong dependency on the solar zenith angle.It is not possible afterwards to find the reasons for the overestimation of UV-A by the sensor, however a possible explanation could be the expanding of the sensor-filter (normally from 320-400 nm) into the UV-B range.This would explain the higher amounts of UV-A compared to other solar measurements during the time of the particle bursts, which occurred in 1999 and agree completely with the results of the present work. -- A DM150 double monochromator with 300 mm focal length, fixed slit, remote operated swing-away mirror, holographic gratings (2400 g/mm blazed at 250), internal 6-position stepping-motor-driven filter wheel, filter set for UV solar measurement (selected for order sorting and optimum stray light detector hysteresis) and an end window pmt bialkali photo cathode -Benthams 200 series Detection electronics; including the 217-T power supply & display, 215 high voltage power supply, 228A integrating A to D converter and 267 programmable d.c.amplifier -Input optics; including a ptfe diffuser (200-800 nm) and an UV transmitting fibre optic (2 m, 4 mm dia to 13 × 1 mm) Data transfer equipment; including a 488/IEEE interface card for use with a PCMIA expansion socket on a PC, two IEEE/488 Cables, radiospectrometer control and data acquisition, display and manipulation software Fig. 2 . Fig. 2. Spectral solar irradiance for a representative selection of A-event days in 2000 and 2001. Fig. 5 . Fig. 5. (a) Average spectral solar irradiance for all event and non-event classes of 2000 and 2001; (b) Ratio of the graphs from (a) to a yearly reference graph from Eq. (4) (E G,N ). Fig. 6 . Fig. 
6. Correlation coefficients for event classes between the number concentration of particles (3-10 nm) and the wavelength groups from Eq. (1) (I_S,G).
Fig. 7. Daily pattern of spectral solar irradiance and number concentration of particles for 5 May 2000.
Fig. 8. Time at which the short wavelength irradiance (300-339 nm) exceeds 600 mW m−2 and the particle number concentration (3-10 nm) exceeds 400 particles cm−3 for the A- and B-events of 2000 and 2001; the difference between these two times is also shown.
Fig. 10. Date, Doy, start and end times of the particle bursts, time at I600, Imax [mW m−2], time at N400 and Nmax [cm−3] for 2000 and 2001.
VALORIZATION OF RED BEETROOT VALUE CHAIN THROUGH AGRO-FOOD PROCESSING

Beetroot is a highly nutritious vegetable crop, but it is a perishable commodity due to its high moisture content. The development of processed products could therefore be highly advantageous for valorizing the red beetroot value chain. For this purpose, jam with 0.5% (RJC5) and 1% (RJC1) cinnamon powder, puree with 1% thyme (RPT) and 1% mint (RPM) leaves powder, and raw pickle (RRG) and boiled pickle (BRG) with garlic were processed from red beetroot. A control treatment (RJC, RPC, RRC and BRC, for jam, puree, raw pickle and boiled pickle, respectively) was also processed without any additive. Consistency and phenolic and flavonoid compounds were determined for the red beetroot jam and puree treatments. Vitamin C content and DPPH activity were determined for the jam, puree and pickle treatments, while hardness was determined for the pickle treatments. Sensory evaluation was conducted for both the jam and pickle treatments; for the sensory attributes of the red beetroot puree treatments, tahina salad was prepared with the puree treatments as an additive. The results showed that adding cinnamon did not affect the RJ consistency, whereas the consistency of both the RPT and RPM treatments was higher than that of the RPC treatment. Vitamin C content was enhanced in the RJC5, RJC1, RPT and RRG treatments. Regarding DPPH activity, the RJC, RPM, RRG, BRG and BRC treatments showed significantly high DPPH values. The RRC and RRG treatments exhibited excellent hardness compared with the BRC and BRG treatments. In the sensory evaluation, the RRG treatment recorded excellent scores for the taste, texture and flavour parameters, while the RJC and RJC5 treatments and the tahina salad with the RPC treatment obtained good overall acceptability scores.

INTRODUCTION

A 'value chain' in agriculture identifies the set of activities which bring a basic agricultural product from production in the field to final consumption, with value added to the product at each stage. Agro-food processing is considered a way of supporting the valorization of a crop value chain: it reduces the rate of postharvest crop losses, links producers with markets and meets customer needs, and thereby valorizes the crop value chain. At the same time, in recent years there has been considerable interest in differentiated healthy food products that improve health benefits (Stolzenbach et al., 2013).

Red beetroot (Beta vulgaris L.), which belongs to the Chenopodiaceae family and is known as red beetroot, beet, garden beet or table beet, is a traditional vegetable in many parts of the world. Today, beetroot is regularly consumed as part of the normal diet (Liliana and Oana-Viorela, 2020).
Red beetroot has a great health benefit, which is important for the development and growth of human body, it contains large amounts of bioactive and nutritional compounds such as carbohydrates, fibers, vitamins, minerals and betalains, it could be used as a source of natural pigments (Dhiman et al., 2021, Kusznierewicz et al., 2021and Trishitman et al., 2021).Beetroot considered to be a premium dietary supplement as it is rich in phytochemical compounds (carotenoids, phenolic acids, ascorbic acid) which shown pharmacological applications and used in traditional medicine for hundreds of years to treat constipation, gut and joint pain (Hamedi and Honarvar, 2019) and due to its anti-anemic, lipid and blood pressure-lowering effects (Dhiman et al., 2021), also, its extracts exhibit antihypertensive and hypoglycemic activity as well as excellent antioxidant activity (Babarykin et al., 2019). Jam, pickle and puree is the most food preservation process that could be suitable to be applied under home conditions or small-scale food production system, so, it could support women and stockholders.The aim of this study is to valorize the value chain of red beetroot through developing of red beetroot food products to enhance the availability of red beetroot throughout the year to the consumer, reduce post-harvest losses and raise the added value of the red beetroot. MATERIALS AND METHODS The fresh red beet root, cinnamon bark powder, thyme leaves powder, mint leaves powder, garlic, tahina, citric acid, salt and sugar were purchased from local markets, Giza, Egypt.The vegetal parts from red beetroot were removed and washed to remove residues of sand or dirt then packed in polyethylene bags and stored in a refrigerator at 4⁰C, until processing. Processing of Red Beetroot Jam The red beetroot jam (RJ) was processed according to Sindumathi and Amutha (2014), where the washed red beetroot was peeled and grated using a kitchen processor.The grated red beetroot was then put into an open stainlesssteel pan and a required amount of sugar (55%) was added and heated continuously under low flame.When the total soluble solids (TSS) reached 60° Brix, citric acid (0.6%) was added and stirred continuously.Heating was stopped when the TSS reached 67-68° Brix.The mixture was hot filled into 300 ml previously sterilized glass jars and the jars were inverted to ensure sterilization of the vertical space then they had left to cool under ambient temperature.The prepared jam was stored at a refrigerated temperature (4 ±2°C) until analysis.Two treatments of the red beetroot jam were processed by adding 0.5% (RJC5) and 1% (RJC1) of cinnamon powder.A control red beetroot jam treatment (RJC) was processed without any additive. Processing of Red Beetroot Puree Washed red beetroot was cut into cubes and soaked in water and boiled for 10 minutes, then cooled to room temperature.The cooled red beetroot cubes were blended using a kitchen blender to obtain a fine puree paste as described by Guldiken et al. (2016).Two treatments of red beetroot puree (RP) were processed by adding 1% thyme (RPT) and 1% mint (RPM) leaves powder while a control treatment (RPC) was processed without any additives.Dried glass jars were filled with the RP treatments, covered and sterilized at 121°C for 30 minutes (Talcott and Howard, 1999) in stainless steel pressure cooker.RP treatment jars were cooled to room temperature and then stored refrigerated at a temperature of 4±2°C until analysis. 
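The formulations above are defined by simple mass fractions (55% sugar and 0.6% citric acid for the jam; 0.5% or 1% cinnamon for the jam treatments; 1% thyme or mint for the puree treatments), so scaling a batch is straightforward arithmetic. The helper below is only an illustration of that arithmetic; it assumes all fractions are taken relative to the mass of prepared beetroot, which the text does not state explicitly.

```python
def jam_batch(grated_beetroot_kg, sugar_frac=0.55, citric_acid_frac=0.006,
              cinnamon_frac=0.0):
    """Illustrative ingredient amounts for one jam batch.

    Fractions follow the recipe in the text (55% sugar, 0.6% citric acid);
    cinnamon_frac is 0.005 or 0.01 for the RJC5 / RJC1 treatments.
    Taking the fractions relative to the grated beetroot mass is an
    assumption; the paper does not state the reference basis explicitly.
    """
    return {
        "sugar_kg": grated_beetroot_kg * sugar_frac,
        "citric_acid_kg": grated_beetroot_kg * citric_acid_frac,
        "cinnamon_kg": grated_beetroot_kg * cinnamon_frac,
    }

# e.g. a 5 kg batch of the RJC1 treatment (1% cinnamon)
print(jam_batch(5.0, cinnamon_frac=0.01))
```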
Processing of Red Beetroot Pickle Washed red beetroot was cut into small, homogeneous slices using a slicer (approx.thickness 5 mm).Red beetroot sliced pieces were divided into two groups, the first group (RR) was pickled directly in the raw state while the second group (BR) was boiled for five minutes and cooled to room temperature before pickling.Samples from two groups were distributed into clean dried glass jar.Minced garlic cloves (1%) were added to both groups (RRG and BRG), while the control treatment of both groups was processed without minced garlic (RRC and BRC), for both groups, respectively.Each jar was filled with a salty brine (120 g NaCl /L of water) to obtain an overall salt concentration of 5-6% in the end-product (Srivastava and Singh, 2016). Analytical Methods The fresh red beet root sample was analyzed for moisture, ash, total fiber, total protein and ether extract according to A.O.A.C. (2005).Total carbohydrate was calculated by difference.For the RJ and RP treatments, consistency and phenolic and flavonoid compounds were determined as follows: Consistency was measured using viscometer, V60002, FFUNGILAB, Spain (Spindle R7) 20 rpm, torque was 100% maintained at the Food Safety and Quality Control laboratory (FSQC), Faculty of Agriculture, Cairo University. For the RJ, RP, RR and BR treatments, vitamin C and DPPH were determined as follows: Vitamin C was estimated according to Baja and Kaur (1981) where 5 g of each sample were extracted with 100 ml of oxalic acid -EDTA solution.The extract was filtered through a filter paper and then centrifuged.A 5 ml aliquot was then transferred into a 25 ml calibrated flask and mixed with other reagents (0.5 ml of metaphosphoric acidacetic acid solution and 1 ml of 5% V/V sulphuric acid), followed by 2 ml of ammonium molybdate reagent.After 15 minutes the absorbance was measure at 760 nm against a reagent blank. DPPH radical scavenging activity was estimated as described by Brand-Williams et al. (1995).Concentrations ranging from 0.1, 0.2 and 0.4 mg/ml were prepared with methanol from each sample.The extract (100 µl) and DPPH radical (100 µl, 0.2 mM) was dissolved in methanol.The mixture was stirred and left to stand for 15 minutes in the dark, then the absorbance was measured at 517 nm against a control which carried out using 2 ml DPPH solution without the test sample.The DPPH free radical scavenging ability was subsequently calculated as follows: DPPH scavenging ability (%) = (Ac -At)/Ac × 100. Where Ac: absorbance of control At: absorbance of samples Hardness of The Red Beetroot Pickle Treatments Hardness (N) was measured using Instron Universal Testing Machine (Model 2519-105, USA) at Research Park (CURP), Faculty of Agriculture, Cairo University.Six replicates from each sample were taken.The machine test speed was 200 mm/min and hardness (N) was recorded electronically. 
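The DPPH result is obtained directly from the two absorbance readings at 517 nm, so the calculation reduces to the formula above applied to each replicate. A small sketch is given below; the absorbance values are hypothetical and only illustrate the computation and the triplicate averaging used throughout the paper.

```python
import numpy as np

def dpph_scavenging(abs_control, abs_sample):
    """DPPH radical scavenging activity (%) = (Ac - At) / Ac * 100,
    as defined in the Analytical Methods section."""
    ac = np.asarray(abs_control, dtype=float)
    at = np.asarray(abs_sample, dtype=float)
    return (ac - at) / ac * 100.0

# Hypothetical triplicate absorbances at 517 nm for one treatment
ac = [0.820, 0.815, 0.823]          # DPPH solution without sample (control)
at = [0.410, 0.395, 0.402]          # DPPH solution with extract
activity = dpph_scavenging(np.mean(ac), np.array(at))
print(f"DPPH scavenging: {activity.mean():.1f} +/- {activity.std(ddof=1):.1f} %")
```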
Sensory Evaluation The RJ, RR and BR treatments were analyzed for their sensory profiles.For that, several attributes (color, texture, taste, flavour and overall acceptability) were evaluated.Whilst the sensory attributes of RP treatments were evaluated by estimate the color, texture, taste, flavour and overall acceptability of tahina salad processed using the RP treatments.Each of these attributes was rated on a hedonic scale ranging from 1 to 10, where the number 1 corresponded to the lower limit (less intense, less pleasurable), 5 corresponded to the middle limit (middle intense, middle pleasurable) and 10 corresponded to the higher limit (very intense, very pleasurable) according to Guine et al. (2016). Statistical Analysis The data obtained were subjected to statistical analysis of variance (ANOVA).All analyses were performed in triplicate.All tests were conducted at the 5% significant level according to Armonk (2011). Gross Chemical Analysis of Red Beet Root Red beetroot considered to be a good source of human important nutrient.The chemical composition of red beetroot varies based on the cultivar.Sawicki et al. (2016) mentioned that the disposition of nutritional components in red beetroot differs based on the plant's anatomical portion (leaves, stem, root, peel). Table (1) illustrates that red beetroot contained a high content of total carbohydrates (8.8%), crude fiber (3.2%), crude protein (1.2%) and a small amount of ether extract (0.2%), which made it a good choice of nutrient source in weight loss diet.Moisture and total ash contents found to be 86.1% and 0.5%, respectively.Moftah et al. (2023) reported the same results for moisture, carbohydrate and total protein contents expect the total fiber content that was significantly lower (1.9%).Bangar et al. (2022) registered that the ash and total fiber of red beet root were 1.08% and 2.8%, respectively.The results are approaching with USDA (2020), which illustrated that, raw red beetroot contains 87.58%, 1.61%, 0.17% and 9.59% moisture, protein, total lipid and carbohydrate, respectively.Guldiken et al. (2016) mentioned that moisture of fresh red beetroot was 87%.Differences in nutritional values attributed to the properties of the soil and other environmental conditions (Abdo et al., 2020). Consistency of Red Beetroot Jam and Puree Treatments The consistency measurement is an important guidance to the product development (Shahnawaz and Shiekh, 2011).Consistency is related to non-Newtonian or semi solid fluids with suspended particles and long chain soluble molecules and is measured practically by distribution or flow of the product.The United States' standards define the consistency of semi solid products as their ability of holding the liquid section in suspension (Gould, 1983). Fig. (1) shows the consistency values of both RJ and RP treatments.It was found that, for the RJ treatments, the highest consistency value was obtained with the RJC1 treatment followed by RJC then RJC5 treatments.The results are in accordance with Salama et al. (2019), who illustrated that, the consistency of gurma melon jam with 1% cinnamon was more than the consistency of gurma melon jam with 0.5% cinnamon. Whereas the RPT and RPM treatments demonstrated a highly increment consistency value in comparison with the RPC treatment.Barrette et al. (1998) published that the consistency depends mainly on the insoluble solids to total soluble solids ratio.Nada et al. 
(2016) also clarified that, the apparent viscosity decreases as temperature increased for all strawberry puree jam samples studied.(RJC) control red beetroot jam, (RJC5) red beetroot jam with 0.5% cinnamon, (RJC1) red beetroot jam with 1% cinnamon, (RPC) control red beetroot puree, (RPT) red beetroot puree and (RPM) red beetroot puree. Phenolic and Flavonoid Compounds of Red Beetroot Jam and Puree Treatments A histrionic HPLC chromatogram peaks of RJ and RP treatments were shown in Table (2 (RJC) control red beetroot jam, (RJC5) red beetroot jam with 0.5% cinnamon, (RJC1) red beetroot jam with 1% cinnamon, (RPC) control red beetroot puree, (RPT) red beetroot puree with 1% thyme and (RPM) red beetroot puree with 1% mint. Likewise, data in Table (2) and peaks in Fig. (2b) represent the HPLC chromatogram of RP treatments which shows that, catechin, chlorogenic acid, caffeic acid, hesperidin and kaempferol were identify in the RP treatments but with a different concentration.The highest catechin (5.437 mg/kg) and kaempferol (19.951 mg/kg) concentrations were observed with the RPC treatment followed by RPT (3.475 mg/kg) and RPM (1.640 mg/kg) treatments for the catechin concentration but followed by RPM (6.380 mg/kg) and RPT (4.223 mg/kg) treatments for kaempferol concentration, respectively. The highest chlorogenic acid concentration (3.048 mg/kg) was observed with RPT treatment while the highest caffeic acid concentration (8.617 mg/kg) was observed with RPM treatment.Hesperidin concentration found to be decreased in RPC treatment (0.726 mg/kg) followed by RPT (5.097 mg/kg) and RPM (24.588 mg/kg) treatments, respectively.Moreover, vanillic acid and o-cumaric acid were identified in the RPT treatment (1.604 mg/kg and 0.808 mg/kg) and RPM treatment (3.741 mg/kg and 0.458 mg/kg), respectively. Also Fig. (2b) shows that, the RPM treatment was uniqueness with some compounds that not found in the other RP treatments which was ferulic acid, resveratrol, rosemarinic acid, myricetin, apigenin with 5.056 mg/kg, 4.288 mg/kg, 2.181 mg/kg, 0.718 mg/kg and 0.119 mg/kg concentrations, respectively, while RPT treatments was particularly had quercetin (2.919 mg/kg) and rutin (0.681 mg/kg).P-cumaric acid (119.698mg/kg) was only identified in the RPC treatment.Tahira et al. (2011) reported that mint contains rosmarinic acid and caffeic acid as a major phenolic acid and, ferulic acid in a considerable concentration.Ravichandran et al. (2012) clarified that cinnamic acid, vanillic and caffeic acid was found in red beet extracts and stated that, concentrations of phenolic acid vary depending on treatments.Baião et al. (2017) reported in a review that red beetroot contains caffeic acid, p-cumaric acid, phydroxybenzoic acid, syringic acid and vanillic acid.Nieto (2020) reviewed the quercetin, syringic acid, caffeic acid, hesperidin and kaempferol as the main phenolic acids and flavonoid components in thyme plant. Vitamin C of Red Beetroot Jam, Puree and Pickle Treatments Vitamin C is one of the most antioxidant effective factors in food.Temperature and duration of thermal processing impacts vitamin C content (Nemzer et al., 2011, Njoku et al., 2011, Paciull et al., 2016and Pavlović et al., 2017).Therefore, vitamin C content of RJ, RP, RR and BR treatments were evaluated.Data of vitamin C are presented in Fig. 
(3).There was a significant difference in vitamin C content for both RJ and RP treatments.A highly significant increment in vitamin C content was recorded for RJC5 and RJC1 treatments in comparison with the RJC treatments.The results obtained (RJC) control red beetroot jam, (RJC5) red beetroot jam with 0.5% cinnamon, (RJC1) red beetroot jam with 1% cinnamon, (RPC) control red beetroot puree, (RPT) red beetroot puree with 1% thyme and (RPM) red beetroot puree with 1% mint, (RRC) control raw red beetroot pickle, (RRG) raw red beetroot pickle with 1% garlic, (BRC) control boiled red beetroot pickle and (BRG) boiled red beetroot pickle with 1% garlic. are in agreement with Byarushengo et al. (2014), who announced that cinnamon essential oil can compensate the loss happened in vitamin C during pineapple jam process.Also, Shokry et al. (2018) observed that, using cinnamon and clove fortify the loss in vitamin C content in pomegranate jam. On the other side, the RPT treatment possesses a significant highly vitamin C content, counter to the RPM treatment which recorded a significant lower vitamin C content than RPC treatment.wherefore, adding thyme found to be more powerful in supporting vitamin C loss in RP processing more than the mint addition.The same trend of thyme and mint effect was observed by Rehman et al. (2019), who reported a significant increase in ascorbic acid content for apple jam treated with thyme compared with apple jam treated with mint and clarified that, a deficiency in ascorbic acid was occurred with an increase in the percentage of mint added to the apple jam. Fig. (3) clarifies that vitamin C content of the RR treatments is significantly higher as compared with the BR treatments.Furthermore, for the RR treatments, the RRG treatment recorded a significantly higher vitamin C content more than RRC treatment at variance the BRG treatment which scored a slightly lower vitamin C content than the BRC treatment.So, adding garlic did not support the degradation in vitamin C content that occurred as a result of the heat treatment.The results of the BR treatments are slightly in accordance with Srivastava and Singh (2016), who found that vitamin C content in boiled RR was 8.85 mg/100 g, also mention that vitamin C content for the fresh beetroot was 7.95 mg/100 g.The loss in vitamin C content may be due to effect of temperature during thermal processing (Uckiah et al., 2009 andRehman et al., 2019). 
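The significance statements in this and the following sections rest on the one-way ANOVA at the 5% level described in the Statistical Analysis section. A minimal sketch of such a test on triplicate data is shown below; the vitamin C numbers are invented solely to illustrate the procedure and do not reproduce the measured values.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate vitamin C values (mg/100 g) for three jam treatments;
# the numbers are made up purely to show the test used in the paper.
rjc  = [7.9, 8.1, 8.0]
rjc5 = [9.4, 9.6, 9.5]
rjc1 = [9.1, 9.0, 9.3]

f_stat, p_value = stats.f_oneway(rjc, rjc5, rjc1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:                  # 5% significance level, as in the paper
    print("Treatment means differ significantly")
```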
DPPH Radical Scavenging Activity of Red Beetroot Jam, Puree and Pickle Treatments Antioxidant activity of RJ, RP, RR and BR treatments were estimated by DPPH free radical scavenging activity determination.DPPH is a stable free radical that reduced in the presence of antioxidants, resulting in color changing from purple to yellow (Do et al., 2014).Data in Table (3) reveal the DPPH values of RJ, RP, RR and BR treatments.The highest significant value of DPPH was found with the RJC treatment followed by RJC5 then RJC1 treatments, whilst the RPM treatment possessed a significant increment in DPPH value followed by RPT then RPC treatments.Moreover, the finding of the present study displays that adding garlic significantly enhanced the DPPH value when compared with the control sample of both RR and BR treatments, this is because garlic had the strongest antioxidant capacity as reported by Sayin and Alkan (2015).Sawicki and Wiczkowski (2018) reported that, there is a decrement in the rate of betalains loss during boiling and fermentation of the unpeeled red beet and so the unpeel step for the red beetroot enhances the DPPH percent as well as the antioxidant effect. Hardness of Red Beetroot Pickle Treatments Pickle texture and the changes happened during processing are a key of importance to determine because it affect the consumer acceptance (Farahnaky et al., 2012).Hardness of the the RR and BR treatments were evaluated at zero time, 30 days and 60 days of pickling.Fig. ( 4) clarifies that, the hardness of the RRC and RRG treatments were significantly more than the BRC and BRG treatments.Also, it was noticed that, adding garlic to the RR and BR affected the pickle hardness, where it was found that, the hardness of RRG and BRG treatments were significantly lower than the RRC and BRC treatments all over the 60 days.Rahman et al. (2022) observed more than 50% reduction in hardness of blanched nutmeg pickle whereas the time of the blanching time increase the hardness decrease and reported that this may be happened due to the tissue softening under high temperature.Badwaik et al. (2016) reported that, the turgor pressure increases within the cell structure during the blanching treatments which forces the cell membrane against the cell wall and cause a loss in fruit texture.Rahman et al. (2014) informed that the presence of enzymes responsible for hydrolyzed plant tissues might be affect the decrement in pickle hardness.(RRC) control raw red beetroot pickle, (RRG) raw red beetroot pickle with 1% garlic, (BRC) control boiled red beetroot pickle and (BRG) boiled red beetroot pickle with 1% garlic. Sensory Evaluation of Red Beetroot Jam, Puree and Pickle Treatments Sensory evaluation used to assess the product quality and consumer expectations about the product.Data in Table (4) represent the sensory evaluation of all red beetroot treatments.For RJ treatments, it was noticed that there were no significant differences between RJC and RJC5 treatments for color, texture, flavour and overall acceptability, but significantly differences in taste.The RJC1 treatment was significantly lower in flavour and overall acceptability when compared with RJC and RJC5 treatments.For taste parameter, RJC5 and RJC1 treatments found to have a slightly significant lower taste as compared with RJC treatment. 
For the sensory evaluation of RP treatments, which was evaluated by adding the RP treatments to tahina salad, the mean lower significant sensory parameters scores were observed with the RPM treatments, where the RPC was significantly higher than the RPT treatments, where taste, texture and overall acceptability of the RPC treatment was significantly higher than the RPT treatment. Furthermore, RRG treatments found to have the highest significant taste, texture and flavour scores, followed by RRC treatments for texture parameter and BRC treatment for taste and flavour parameters. CONCLUSION The present study revealed that, using cinnamon with RJ, thyme and mint leaves powder with RP exhibit a good DPPH activity with an excellent content of polyphenol and flavonoid compounds beside the acceptable consistency.Also, adding cinnamon and thyme support vitamin C content for both RJ and RP treatments, respectively.RR treatments state an excellent vitamin C and hardness properties and possess a good DPPH value as compared with boiling BR treatments.Furthermore, adding garlic reinforcement vitamin C content for the RRG treatment and DPPH value of RRG and BRG treatments but affected both RR and BR treatments hardness.Regarding the sensory evaluation, the highest overall acceptability was found with RJC, RJC5, RRG and BRC treatments, whilst the tahina salad contained RPC treatment scored a high overall acceptability followed by tahina salad contained RPT treatment then tahina salad contained RPM treatment.Generally, preserving red beetroot through agri-food processing reduced postharvest loss and supports the red beetroot value chain with a recommendation of more analysis that should be done in order to provide a safety product. Table ( 1 ). Gross chemical composition of raw red beet root (fresh weight basis).
The dynamic neural code of the retina for natural scenes Understanding how the visual system encodes natural scenes is a fundamental goal of sensory neuroscience. We show here that a three-layer network model predicts the retinal response to natural scenes with an accuracy nearing the fundamental limits of predictability. The model’s internal structure is interpretable, in that model units are highly correlated with interneurons recorded separately and not used to fit the model. We further show the ethological relevance to natural visual processing of a diverse set of phenomena of complex motion encoding, adaptation and predictive coding. Our analysis uncovers a fast timescale of visual processing that is inaccessible directly from experimental data, showing unexpectedly that ganglion cells signal in distinct modes by rapidly (< 0.1 s) switching their selectivity for direction of motion, orientation, location and the sign of intensity. A new approach that decomposes ganglion cell responses into the contribution of interneurons reveals how the latent effects of parallel retinal circuits generate the response to any possible stimulus. These results reveal extremely flexible and rapid dynamics of the retinal code for natural visual stimuli, explaining the need for a large set of interneuron pathways to generate the dynamic neural code for natural scenes. distinct receptive field, and a final layer that represented the responses of individual ganglion cells.We found that CNN models could predict the responses of ganglion cells to either natural scenes or white noise nearly up to a fundamental limit of precision set by intrinsic neural variability, and were substantially more accurate than linear-nonlinear (LN) models 6 or generalized linear models (GLMs) 1 (Figure 1B, C).Based on results varying the number of cell types in the first two layers, eight cell types were chosen as the minimum number that achieved the maximal model performance (Fig. 1D). CNNs internal units are highly correlated with interneuron responses To examine whether the internal computations of CNN models were similar to those expected in the retina, we computed receptive fields for first and second layer model cell types in CNNs trained on responses to natural scenes.We found that the receptive fields of CNN model cells had the well-known structure of retinal interneurons 7,8 , with a spatially localized center-surround structure (Fig. 2A-B), a mix of On and Off responses, and both monophasic and biphasic temporal filters. In inferotemporal cortex, units of CNNs have been shown to be correlated with a linear combination of the activity of individual neurons 9 making it difficult to draw conclusions about individual neurons by an examination of CNN units.We compared the activity of CNN units to interneuron recordings performed on separate retinae that the model was never fit to (Fig. 
3A).The stimulus presented to the retina and separately to the model was a spatiotemporal white noise checkerboard, a stimulus that has no spatiotemporal correlations except for the 50 µm size and 10 ms duration of square stimulus regions.We compared each interneuron recording with 8 units of the first layer and 8 units of the second layer at each location to find the most correlated unit in the model at the location of the cell.We found that each recorded interneuron was highly correlated with a particular unit type, and only at a single location (Figure 3B-E).Spatiotemporal receptive fields were highly similar between recorded interneurons, and their most correlated model cell type (Fig. 3B).The magnitude of this correlation approached the variability of the interneurons themselves, as assessed by using an LN model fit to the interneuron to predict another segment of the interneuron's own response (Figure 3 C,D).This correlation was specific for individual unit types, as could be observed by ranking the model cell types from most to least correlated, and finding that the second and lower most correlated model cell types were substantially less correlated with the interneuron than the most correlated cell type (Fig. 3 D,E).This high correlation did not arise by chance as a "null" CNN model fit by shuffling spikes relative to the stimulus did not produce internal units correlated with interneuron responses (Fig. 3D, E).Therefore, fitting a CNN model to the natural scene responses of retinal ganglion cells alone models an entire population of interneurons, many of which have high correlation with measured interneuron responses created with a different stimulus and a different retina. A wide range of retinal phenomena are engaged by natural stimuli Numerous nonlinear computations have been identified by presenting artificial stimuli to the retina, including flashing spots, moving bars and white noise.However we neither understand to what degree natural vision engages these diverse retinal computations elicited by artificial stimuli, nor understand the relationship between these computations under natural scenes and underlying retinal circuitry.We tested models fit either only to natural scenes or white noise by exposing them to a battery of structured stimuli previously used in the literature to identify and describe retinal phenomena.We focused on effects shorter than 400 ms, which was the longest timescale our model could reproduce as limited by the first layer spatiotemporal filter.Remarkably, the CNN model exhibited fast contrast [10][11][12] adaptation (Fig. 4A), latency encoding 3 (Fig. 4B), synchronized responses to motion reversal 13 (Fig. 4C), motion anticipation 14 (Fig. 4D), the omitted stimulus response 15 (Fig. 4E), frequency doubling in response to reversing gratings 16 (Fig. 4F) and polarity reversal 17 (Fig. 
4G). All of these response properties arose in a single CNN model simply as a by-product of optimizing the models to capture ganglion cell responses to natural scenes. CNN models trained on white noise did not exhibit all of these phenomena, in particular failing to capture fast contrast adaptation, latency encoding and the omitted stimulus response, indicating that natural scene statistics trigger nonlinear computations that white noise does not. Even though these natural scenes consisted only of a sequence of images jittered with the statistics of fixational eye movements (the stimulus contained no explicit object motion or periodic patterns), the CNNs still exhibited motion anticipation and reversal, and the omitted stimulus response.

The only retinal phenomenon tested that was not captured by the model was the object motion sensitive (OMS) response 5, a computation thought to discriminate object motion from retinal motion due to eye movements. We hypothesized that the absence of an OMS response in the model was due to the lack of differential motion in the training stimulus, and trained additional models on the retinal response to movies of swimming fish that include differential motion. We found that these models did indeed exhibit an OMS response (Fig. 4H). Thus the model reveals whether retinal computations triggered by one stimulus occur in another, in particular during natural scenes.

Interneuron contributions to a dynamic visual code
Receptive fields in sensory neuroscience are typically thought of as representing a static sensory feature, although it is known that this feature can change slowly due to adaptation to the statistics of the stimulus 18-20. A particularly advantageous property of CNN models is that rapid dynamics of visual sensitivity can be examined by computing the instantaneous receptive field (IRF), which can be easily calculated as the gradient of the model output with respect to the current stimulus (Fig. 5A). This can be done at each moment of time, allowing us to examine for the first time the full dynamics of the receptive field and assign those dynamics to the action of interneurons.

IRFs changed with extremely rapid dynamics on the scale of tens of ms, as judged by comparing the correlation coefficient between IRFs at different time delays. The dynamics of the IRF were limited by stimulus correlations, in that for an uncorrelated stimulus (white noise), the IRF changed from its previous value with a time constant of ~30 ms (Fig. 5B). To examine these rapid changes in feature selectivity, we clustered the IRFs computed at each time point, revealing that during natural stimuli the retina signaled in different modes that changed rapidly depending on the stimulus frame (Fig. 5C-E). By computing the average stimulus for each IRF cluster, we unexpectedly found new phenomena triggered by different ethologically relevant stimuli. The presence of an edge changed the IRF to be maximally sensitive to motion of that edge, and the direction of motion and orientation preference changed with the intensity gradient of the edge (Fig. 5C, E). IRFs also showed much stronger direction and orientation selectivity than could be observed in the mean receptive field (MRF) (Fig. 5E). The location of the IRF also showed substantial variation compared to the MRF (Fig. 5E). Furthermore, changes in local stimulus intensity reversed the polarity of the IRF (Fig. 5E), indicating a local effect distinct from previously reported polarity reversal triggered by peripheral stimuli 17.
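As a sketch of how such an instantaneous receptive field can be computed with automatic differentiation, the snippet below assumes a hypothetical PyTorch module `model` mapping a spatiotemporal stimulus clip to predicted firing rates; the names and tensor shapes are illustrative rather than taken from the study's code.

```python
import torch

def instantaneous_receptive_field(model, stimulus_clip, cell_index):
    """Gradient of one ganglion cell's predicted rate with respect to the stimulus.

    stimulus_clip: tensor of shape (1, n_frames, height, width), the stimulus history
                   (e.g. 400 ms of frames) preceding the current time bin.
    Returns a (n_frames, height, width) tensor: the IRF at this moment in time.
    """
    stim = stimulus_clip.clone().requires_grad_(True)
    rate = model(stim)                     # assumed output shape: (1, n_cells)
    rate[0, cell_index].backward()
    return stim.grad.detach()[0]

# IRFs collected at many time points can then be clustered (the paper reports
# 12 clusters per cell), e.g. with k-means on the flattened IRFs:
#   from sklearn.cluster import KMeans
#   labels = KMeans(n_clusters=12).fit_predict(irfs.reshape(len(irfs), -1))
```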
Because the IRF is a mathematical sum of the features conveyed by different interneuron pathways in the model, we investigated the source of these dynamic receptive fields by performing an exact decomposition of a ganglion cell's response into the Interneuron Contributions (INCs) of each of the 8 model cell types in the first layer at each time point (Fig. 6), using the method of Integrated Gradients 21,22 (see methods). Intuitively, the INC is the product of the sensitivity of the model interneuron to the stimulus and the sensitivity of the model output to the model interneuron, and is an application of the multivariable chain rule. Thus, to determine the effect of an interneuron on the circuit's output, this analysis takes account of both a cell's input (its receptive field) and its output (projective field) 23. To assess the model interneuron's contribution over a range of stimuli rather than a single point in stimulus space, the INC is integrated over a straight path of increasing contrast from the zero stimulus, a grey screen, to the particular stimulus frame. This new type of analysis is different from simply examining the representation of a stimulus in a neural population, and reveals how an interneuron population uses that stimulus representation to change the model circuit's output.

We identified the patterns of INCs that generate the code for natural stimuli, white noise and artificially structured stimuli. We found surprisingly that the interneuron patterns generating responses to some artificial stimuli live within the space of those elicited by natural stimuli but not within the space of white noise (Fig. 6B, C), showing that these artificially structured stimuli are indeed ethologically relevant to understanding the retinal code under natural scenes. This further explains why models fit to natural scenes but not white noise recapitulated the previously described phenomena triggered by these structured stimuli: white noise is insufficient to explore the stimulus space that triggers these phenomena, but natural scenes are. Thus, natural scenes drive the set of interneuron contributions into a set of states that encompasses previously explored artificial stimuli, showing the relevance of stimuli of unknown functional relevance such as the omitted stimulus response 15.

These results capture the dynamic retinal code of natural scenes, and connect that code to much of the retinal phenomenology previously described. This approach reveals the extensive rapid changes of the neural code on a previously inaccessible timescale, and enables a direct determination of the contribution of cell types to any arbitrary stimulus. Because model cell types have high correlation with retinal interneurons, this approach will serve as the foundation to define how interneuron patterns generate the dynamic neural code for natural scenes.

Methods
Visual Stimuli. A video monitor projected the visual stimuli at 30 Hz controlled by Matlab (Mathworks), using Psychophysics Toolbox 24. Stimuli had a constant mean intensity.
Images were presented in a 50 x 50 grid with a square size of 25 µm at a frame rate of 100 Hz. Static natural jittered scenes consisted of images drawn from a natural image database 25 and drifted in two dimensions with the approximate statistics of fixational eye movements 5. The image also changed to a different location every one second, representing a saccade-like transition. Natural movies consisted of fish swimming in an aquarium, and contained both drift and saccade-like transitions that matched static jittered natural scenes. For analysis of model responses to artificial stimuli (Fig. 3), unless otherwise stated, stimuli were chosen to match published values for each phenomenon.

Electrophysiology. Retinal ganglion cells of larval tiger salamanders of either sex were recorded using an array of 60 electrodes (Multichannel Systems) as previously described 26. Intracellular recordings were performed using sharp electrodes as previously described.

Model training. We trained convolutional neural network models to predict retinal ganglion cell responses to either a white noise or natural scenes stimulus, simultaneously for all cells in the recorded population of a given retina 27. Model parameters were optimized to minimize a loss function corresponding to the negative log-likelihood under Poisson spike generation, L = (1/T) Σ_t [ r̂(t) − r(t) log r̂(t) ], where r(t) and r̂(t) are the actual and predicted firing rates of the retinal ganglion cells at time t, respectively, with a batch size of T, chosen to be 50 s. To help with model fitting, we smoothed retinal ganglion responses during training with a 10 ms standard deviation Gaussian, the size of a single time bin in our model.

The architecture of the convolutional neural network model consisted of three layers, with 8 cell types (or channels, in the language of neural networks) per layer. Each layer consisted of a linear spatiotemporal filter, followed by a rectification using a rectified linear unit (ReLU). For each unit, an additional parameter scaled the activation of the model unit prior to the rectified nonlinearity. This scaling parameter could vary independently with location.

Optimization was performed using Adam 28, a variant of stochastic gradient descent. Models were trained using TensorFlow 29 or PyTorch 30 on NVIDIA Titan X GPUs. Training an individual model to convergence required ~8 hours on a single GPU. The networks were regularized with an L2 weight penalty at each layer and an L1 activity penalty at the final layer, which helped maintain a baseline firing rate near 0 Hz.

During optimization, the spatial components of linear filters were implemented as a series of stacked linear convolutions, each consisting of a series of 3 x 3 filters. Thus, seven sequential 3 x 3 filters were applied to generate a 15 x 15 filter. After optimization, these sequential filters were collapsed into a single linear filter. Therefore, this procedure did not change the final architecture of the model, but improved the model's performance, presumably by reducing the number of parameters.
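A minimal PyTorch sketch of an architecture with these properties (three layers, 8 channels in the first two, 15 x 15 and 11 x 11 filters, per-unit scaling before each rectification, a dense readout, and the Poisson loss above). The class name, the 40-frame temporal window, and the softplus output nonlinearity are our assumptions for illustration, not details taken from the authors' code.

```python
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    """Three-layer CNN: spatiotemporal convolution -> spatial convolution -> dense readout."""
    def __init__(self, n_cells, n_lags=40, grid=50, c1=8, c2=8):
        super().__init__()
        # Layer 1: spatiotemporal filter, with time lags treated as input channels.
        self.conv1 = nn.Conv2d(n_lags, c1, kernel_size=15)
        # Layer 2: spatial filter.
        self.conv2 = nn.Conv2d(c1, c2, kernel_size=11)
        s1 = grid - 15 + 1                      # spatial size after layer 1
        s2 = s1 - 11 + 1                        # spatial size after layer 2
        # Per-unit scaling of activations prior to each rectification.
        self.scale1 = nn.Parameter(torch.ones(c1, s1, s1))
        self.scale2 = nn.Parameter(torch.ones(c2, s2, s2))
        self.readout = nn.Linear(c2 * s2 * s2, n_cells)
        self.out_nonlin = nn.Softplus()         # keeps predicted firing rates non-negative

    def forward(self, x):                       # x: (batch, n_lags, grid, grid)
        x = torch.relu(self.conv1(x) * self.scale1)
        x = torch.relu(self.conv2(x) * self.scale2)
        return self.out_nonlin(self.readout(x.flatten(1)))

def poisson_loss(pred_rate, true_rate):
    """Negative Poisson log-likelihood (up to an additive constant), averaged over bins."""
    return (pred_rate - true_rate * torch.log(pred_rate + 1e-8)).mean()
```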
We split our dataset into training, validation, and test sets, and chose the number of layers, number of filters per layer, the type of layer (convolutional or fully connected), size of filters, regularization hyperparameters, and learning rate based on performance on the validation set. We found that increasing the number of layers beyond three did not improve performance, and we settled on eight filter types in both the first and second layers, with filters that were much larger (Layer 1: 15 x 15 and Layer 2: 11 x 11) compared to traditional deep learning networks used for image classification (usually 5 x 5 or smaller). Values quoted are mean ± s.e.m. unless otherwise stated.

Linear-Nonlinear Models. Linear-nonlinear models were fit by the standard method of reverse correlation to a white noise stimulus 6. We found that these were highly susceptible to overfitting the training dataset, and imposed an additional regularization procedure of zeroing out the stimulus outside of a 500 µm window centered on the cell's receptive field.

Generalized Linear Models. Generalized linear models (GLMs) were fit by minimizing the same objective as used for the CNN, the Poisson log-likelihood of the data under the model. We performed the same cutout regularization procedure of only keeping the stimulus within a 500 µm region around the receptive field (this was critical for performance). The GLMs differed from the linear-nonlinear models in that they have an additional spike history feedback term used to predict the cell's response (Pillow et al., 2008). Instead of the standard exponential nonlinearity, we found that using the soft rectified function log(1+exp(x)) gave better performance.

Interneuron contributions. To identify the contribution of each model neuron to the processing of specific visual stimuli, we used the recently developed method of Integrated Gradients 21,22 to decompose a ganglion cell's firing rate into the Interneuron Contributions (INCs) of each of the 8 model cell types by performing a path integral. Mathematically, the trained deep learning model represents a nonlinear function r = F(s), where r is the output firing rate and s is the movie input. Using the line integral F(s) − F(0) = ∫_0^1 ∇F(αs) · s dα, where the path takes the straight line s(α) = αs from the zero (grey-screen) stimulus to the stimulus frame s, and assuming F(0) ≈ 0, we obtain an equality that decomposes the firing rate r. Our goal is to quantify the contributions of the first layer model units a_c^[1] = ReLU(w_c^[1] * s + b_c^[1]), where "[1]" refers to an index of the layer, "c" refers to channel, w_c^[1] is the linear convolutional filter, and b_c^[1] is the bias parameter. Therefore we further apply the chain rule to define the INC of the c-th channel at spatial location x as INC_c(x) = ∫_0^1 [∂F/∂a_c^[1](x)] [da_c^[1](x)/dα] dα, evaluated along the same path. Finally, the spatially averaged INCs form a vector with eight elements, which is taken as the contribution of that model cell type to the model output at that instant of time.
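A sketch of this decomposition as a discretized path integral, assuming the hypothetical `RetinaCNN` module sketched above with its first-layer convolution exposed as `model.conv1`; the step count and all names are illustrative rather than the authors' code.

```python
import torch

def interneuron_contributions(model, stimulus, cell_index, n_steps=50):
    """Decompose one ganglion cell's response into first-layer channel contributions.

    Approximates a path integral from the zero (grey-screen) stimulus to `stimulus`
    in `n_steps` increments of contrast, attributing the change in model output to
    each first-layer channel.  stimulus: tensor (1, n_lags, H, W).
    Returns one value per channel (summed over space; the paper reports a spatial average).
    """
    captured = {}
    handle = model.conv1.register_forward_hook(
        lambda module, inp, out: captured.update(layer1=out))

    alphas = torch.linspace(0.0, 1.0, n_steps + 1)

    # First-layer activations at each point along the contrast path.
    acts = []
    with torch.no_grad():
        for a in alphas:
            model(a * stimulus)
            acts.append(captured["layer1"].clone())

    contributions = torch.zeros(acts[0].shape[1])
    for k in range(n_steps):
        x = (alphas[k + 1] * stimulus).requires_grad_(True)
        out = model(x)
        grad_a = torch.autograd.grad(out[0, cell_index], captured["layer1"])[0]
        delta_a = acts[k + 1] - acts[k]
        contributions += (grad_a * delta_a).sum(dim=(0, 2, 3))   # sum over batch and space

    handle.remove()
    return contributions
```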
Figure 1. Convolutional neural networks provide accurate models of the retinal response to natural scenes. (A) Convolutional neural network model trained to predict the firing rate of simultaneously recorded retinal ganglion cells from the spatiotemporal movie of natural scenes. The first layer is a spatiotemporal convolution, the second is a spatial convolution, and the third is a final dense layer, with rectifying nonlinearities in between each layer. Each location within the model also has a single parameter that scales the unit's activation prior to the rectifying nonlinearity (see Model training).

Figure 2. Structure of receptive fields of model cell types. (A) Receptive fields of model units in Layer 1 computed by presenting a white noise stimulus to the model, and shown as the spatial and temporal average of the space-time separable approximation to the receptive field. (B) Same for Layer 2.

Figure 3. Model internal units are correlated with interneuron responses. (A) Schematic of experiment. Models were fit to natural scenes or white noise stimuli. Bipolar or amacrine cells from a different retina were recorded intracellularly responding to a different white noise sequence. (B) Spatiotemporal receptive fields of example interneurons recorded from a separate retina, and the model unit that was most correlated with that interneuron. The model was never fit to the interneuron's response. (C) Top. Correlation map of a model cell type with the response of an interneuron recorded from a different retina to a white noise stimulus. Each pixel is the correlation between the interneuron and a different spatial location within a single model cell type. Bottom. Responses compared for the most correlated model unit and the interneuron. (D) The average correlation between different interneuron types (7 bipolar, 26 amacrine) and model cell types ranked from most correlated model unit (left) to least (right). Dashed lines indicate the correlation between an interneuron's response and an LN model fit to a separate segment of the recording from the same interneuron. Thus, the correlation between model units and interneurons approaches the variability of the interneurons themselves. Dotted lines indicate the correlation between interneuron responses and a null model, fit after taking the spikes of a ganglion cell in 5 second blocks and shifting them randomly relative to the stimulus. (E) Average correlation between interneuron recordings and the most correlated CNN unit from a different retina, an LN model fit to the same interneuron, and the null model.

Figure 4. CNN models reveal that many nonlinear retinal computations are engaged in natural scenes. After fitting a model to natural scenes, a number of artificially structured stimuli were presented to the model. (A) Contrast adaptation. Left. LN model of a ganglion cell responding to a uniform field stimulus with low or high contrast, showing adaptive changes in temporal filtering and gain. Middle, Median temporal frequency taken from the Fourier transform of the temporal filter, averaged over a population of ganglion cells as a function of contrast. Results shown for models fit to natural scenes and white noise. Right, Averaged gain measured as the slope of the nonlinearity as a function of contrast, showing that CNN models decrease their gain with contrast when fit to natural scenes, but not when fit to white noise. (B) Latency encoding. Left: Flash response with intensities ranging from weak to strong. Right: Latency of the peak response vs.
stimulus intensity for models trained on natural scenes or white noise. (C) Motion reversal. Stimulus consists of a moving bar that abruptly reverses direction at different positions. Left. Published results of a population of ganglion cells showing a synchronous response (arrow) to the reversal. Also shown is the population response of CNN model cells. (D) Motion anticipation. Population

Figure 5. Dynamic mode switching of retinal receptive fields. (A) Diagram of the instantaneous receptive field (IRF) as the sensitivity of the ganglion cell to the stimulus at each moment. (B) Average correlation coefficient between IRFs at different times separated by a time interval Δt for white noise and natural scenes. Also shown for comparison are average correlations between stimulus frames. (C) Four IRF clusters for a single cell. Top Row: Average spatiotemporal stimulus that drove an IRF cluster, shown as a sequence of stimulus frames from 200 ms to 100 ms preceding a spike. Bottom Row: The mean spatiotemporal IRF in each cluster. Left: Two different IRF clusters showing motion sensitivity (Top: motion up, Bottom: motion down), which were driven by an edge. Right: IRF clusters driven by intensity changes, showing a biphasic Off receptive field when the background intensity changed, and a biphasic On receptive field when the stimulus center brightened. (D) t-SNE analysis of IRFs, colored by cluster identity from k-means clustering of IRFs performed separately. (E) Top left. For 26 neurons with 12 IRF clusters each, the radial axis shows the Direction Selectivity Index for IRFs and the mean receptive field (MRF), plotted against the preferred angle of motion. Top right. Same for the Orientation Selectivity Index. Bottom left. Normalized value of the first peak (either positive or negative) of the temporal filter, plotted against the time of the peak for IRFs and MRF. Bottom right. The position of the center of mass of IRFs relative to the center of mass of the MRF (black point at zero). 50 µm corresponds to ~one visual degree.

Figure 6. Interneuron contributions to natural and artificial scenes. (A) Diagram of the concept of Interneuron Contributions (INCs), which represent how much each model unit (cell) contributes to the model's output for each particular stimulus (see methods). We focused on the contribution of Layer 1 model units, and averaged over all units of a given type. (B) INCs for the 8 cell types of the model's first layer for a natural stimulus sequence. Each colored row shows the contribution of a cell type in layer 1 of the model. (C) t-SNE plot including natural scenes, white noise, and several artificially structured stimuli that can be summarized by a single 400 ms stimulus sequence. Each point in the
2020-04-01T09:21:12.283Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "4e599bf805f0765d7398d171806df64f7647aec7", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2019/12/17/340943.full.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "df0036a22d84305a201dc3f3c642ca0ce740c39d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science" ] }
270413764
pes2o/s2orc
v3-fos-license
Statistical‐Physical Adversarial Learning From Data and Models for Downscaling Rainfall Extremes

Quantifying the risk from extreme weather events in a changing climate is essential for developing effective adaptation and mitigation strategies. Climate models capturing different scenarios are often the starting point for assessing physical risk. However, accurate risk assessment for mitigation and adaptation often demands a level of detail they typically cannot resolve. Here, we develop a dynamic data‐driven downscaling (super‐resolution) method that incorporates physics and statistics in a generative framework to learn the fine‐scale spatial details of rainfall. Our approach transforms coarse‐resolution (0.25°) climate model outputs into high‐resolution (0.01°) rainfall fields while efficaciously quantifying the hazard and its uncertainty. The downscaled rainfall fields closely match observed spatial fields and their distributions. Contrary to conventional thinking, our results suggest that coupling simple statistics and physics to learning improves the efficacy of downscaling midlatitude rainfall extremes from climate models.

Introduction
The susceptibility to extreme weather events will likely worsen in a changing climate under continued warming from greenhouse gas emissions (Seneviratne et al., 2021). The adverse effects of weather extremes are broad, including but not limited to food security, urban infrastructure, public health, and ecological sustainability. For example, the rising severity and frequency of cyclones and extreme rainfall events lead to more frequent and severe floods, causing economic damage and loss of human lives and livelihood (Neumann et al., 2015). Quantifying the hazard of weather extremes in a changing climate is vital for estimating risk and optimizing climate change adaptation and mitigation strategies.

Conceptually, weather extremes are rare events within a non-stationary, nonlinear stochastic process. Unfortunately, being few and far between, observational records of extreme events are often too short to determine future risk, rendering the historical record alone inadequate for modeling it. One could use physically based numerical climate models to infer the frequency and severity of rare extremes in nature (John et al., 2022). However, coarse-resolution models often do not capture essential fine-scale detail while becoming computationally expensive at the needed high resolutions. Thus, there is strong interest in downscaling (super-resolution) methods (Wilby & Wigley, 1997) with improved efficacy.
Machine Learning (ML) can provide the needed efficacy. The suggested methodology is to learn a downscaling function using long-range simulations of a relatively small ensemble of high- and low-resolution climate model pairs under different climate scenarios. After that, downscaling many low-resolution climate model outputs to high-resolution fields is rapid. When trained with uncertainty-aware methods (Trautner et al., 2020), the downscaling function would also help rapidly quantify risk. However, high-resolution reference models are not ground truth. Resolving the compounding effect of model bias and downscaling imperfections is essential to improve trust and support the ML pathway. This paper takes an initial step in addressing the issue. Here, we adopt a hindcasting approach using extreme historical events in coarse-resolution reanalysis model outputs and fine-scale observations to train a downscaling function. We compare downscaled results from more recent years with measurements to test performance. Although observational data may have some biases, they are typically small, so high-resolution climate model bias becomes a non-issue. Since we expect the learning process to correct low-resolution model bias, using observational data effectively yields a verifiable approach that is also immediately applicable. Training to the present time using a coarse model and projecting the risk into the immediate future (inter-annual to decadal timescale) is vital to many applications, such as underwriting insurance and disaster response planning.

However, model- and data-trained downscaling has additional issues that need to be resolved. Downscaling is already an ill-posed inverse problem, so the inability of ML solutions to satisfy basic physical principles without additional support complicates the situation. By definition, there is a severe lack of data at the rare extremes, but developing synthetic data augmentation strategies for "detail-preserving regularization" is challenging. An added training/learning challenge is that one-to-one correspondence between climate model fields (inputs) and observational data (outputs) does not always exist. Additionally, missing/spurious events, timing errors, and intensity differences are possible.

Here, we propose a novel dynamic data-driven downscaling approach (Blasch et al., 2018) that overcomes these issues. While the methodology is extensible to many meteorological fields (e.g., near-surface wind and heat stress), this study focuses on rainfall extremes in the mid-latitudes, a fundamental driver of inland flooding.

Our approach consists of the following steps (see Figure 1):
1. Physics of Orographic Precipitation: a simple upslope lift or spectral model captures the basic structure of rainfall fields and provides high-resolution estimates of orographic precipitation. A coarse climate model field input yields a relatively spatially detailed orographic rainfall component.
2. Dynamic Data-Driven conditional Gaussian process (CGP): Indexing the training data (coarse model outputs and high-resolution observations) on a manifold and employing a conditional ensemble-based Gaussian process (Ravela, 2016; Trautner et al., 2020) produces "first-guess" rainfall estimates.
3. Generative Adversarial Learning: In the third step, the "first-guess" downscaled rainfall and orographic rainfall estimates are input to an adversarial learning framework. We produce high-resolution deterministic rainfall from coarse model inputs with little training data by priming a two-stage Generative Adversarial Network (GAN) with physics and statistics.
4. Optimal Estimation for Bias Correction: In the fourth step, we inject stochastic perturbations of residual excess rainfall into the deterministic output to produce an ensemble-based optimal estimate for bias correction (Ravela & McLaughlin, 2007; Ravela et al., 2010; Trautner et al., 2020). We show that the distribution of extreme annual rainfall captured by downscaled predictions closely matches the observation.

In the steps mentioned above, using physics and statistics reduces the need for extensive training data for the GAN, compensating for the spatial details and nonlinearity that neither mechanism captures alone. Test results suggest that, compared to dynamical downscaling, which is computationally expensive, or statistical downscaling, which is scale limiting, or using detailed physics followed by simple statistics, simple physics and statistics with learning capture high-resolution spatial patterns with fidelity and match the observed distributions. Using a reanalysis model as input (ERA5) and observations (Daymet) as training output immediately enables interannual risk assessment for insurance and disaster risk response, with potential for future applicability to long-term climate risk.
Related Work
Wilby and Wigley (1997) wrote one of the earliest comprehensive reviews of downscaling methodology for precipitation simulated by climate models, where they broadly categorized the methods into four groups: regression methods, weather pattern-based approaches, stochastic weather generators, and limited area modeling. In practice, a downscaling process can be hybrid, combining more than one of those techniques. Limited area modeling, also known as dynamical downscaling, involves embedding a relatively high-resolution numerical model inside the coarse-resolution model (e.g., Giorgi et al., 2009). The computationally expensive nature of dynamical downscaling methods has led to the popularity of non-expensive statistical downscaling approaches, especially regression methods, which entail establishing linear or nonlinear relationships between coarse-resolution predictor variables and a fine-resolution predictand (precipitation) from historical records. In more recent literature (e.g., Anandhi et al., 2008), regression methods are considered a part of a broader category, named transfer functions, including modern ML approaches for downscaling. This section reviews a non-exhaustive list of relevant statistical and machine-learning methods for precipitation downscaling.

Statistical Downscaling
Bias Correction and Spatial Disaggregation (BCSD) was one of the earliest successful statistical downscaling techniques. It is a simple yet effective parametric downscaling method, which begins with bias correction of the distribution of coarse-resolution rainfall to match the high-resolution rainfall, followed by spatial interpolation (Wood et al., 2002). Despite its simplicity, BCSD has been shown to outperform some relatively complex statistical methods (Bürger et al., 2012). Multiple linear regression (MLR) has been used (Hessami et al., 2008; Najafi et al., 2011) to take the predictive abilities of climate variables other than rainfall into account. MLR approaches are often accompanied by a bias correction technique, such as quantile mapping, and a dimensionality reduction technique, such as Principal Component Analysis or Independent Component Analysis. Najafi et al. (2011) show that, if proper predictors are chosen, MLR techniques can be an efficient method for downscaling. Non-parametric methods such as k-nearest neighbors (Gangopadhyay et al., 2005), kernel density estimators (Lall et al., 1996), kernel regression (Kannan & Ghosh, 2013), non-homogenous Markov models (Mehrotra & Sharma, 2005), and Bayesian model averaging (Zhang & Yan, 2015) have also been widely used for rainfall downscaling. Mannshardt-Shamseldin et al. (2010) have used Generalized Extreme Value Theory with regression methods to downscale extreme precipitation.
ML-Based Downscaling
Besides the standard statistical methods, neural methods such as the multilayer perceptron (Xu et al., 2020), artificial neural networks (Schoof & Pryor, 2001), and quantile regression neural networks (Cannon, 2011) have been adapted for rainfall downscaling. Alternative ML approaches such as random forests (X. He et al., 2016), support vector machines (Anandhi et al., 2008; Tripathi et al., 2006) and genetic programming (Sachindra & Kanae, 2019) have also been applied to the downscaling problem with varying degrees of success. None of the models mentioned above has consistently outperformed the others in terms of performance and interpretability (Baño-Medina et al., 2020).

With the advent of deep learning techniques, a new suite of ML-based approaches, such as Recurrent Neural Networks (Q. Wang et al., 2020), Long Short-Term Memory (Miao et al., 2019), autoencoders (Vandal et al., 2019), and U-nets (Sha et al., 2020), has become available for rainfall downscaling. Because of their deep layered structure, deep learning methods are well-suited for extracting high-level feature representations from high-dimensional climate data sets. Several Convolutional Neural Network (CNN)-based approaches, developed for single-image super-resolution, have also been brought into the climate science domain, as they can explicitly capture the spatial structure of climate variables. Although the methods primarily apply to computer vision, the insights also apply to the precipitation downscaling problem. Super-resolution CNN (Dong et al., 2015) was one of the first successful approaches developed in this field. Many subsequent models, such as Very Deep Super-resolution (Kim et al., 2016), Enhanced Deep Super-resolution (Lim et al., 2017), and the Deep Back-projection Network (Haris et al., 2018), were built upon it. The adversarial super-resolution method Super-resolution Generative Adversarial Network (Ledig et al., 2017) showed that a GAN could better model the high-frequency distribution of the image and improve sharpness and perceptual quality. Enhanced Super-resolution GAN (ESRGAN) (X. Wang et al., 2018) improves upon this approach by providing a better GAN-loss formulation. These CNN- and GAN-based methods have since emerged in various rainfall downscaling studies (Singh et al., 2019; Vandal et al., 2017; Watson et al., 2020). Video super-resolution methods have been applied to rainfall downscaling to establish temporal relationships (Teufel et al., 2023).
Physics-Based Downscaling
Physics-based approaches for rainfall downscaling analytically estimate high-resolution orographic precipitation components to augment climate model-simulated large-scale rainfall fields. The literature discusses the impact of orography on regional precipitation patterns and orographic precipitation modeling (Roe, 2005). A simple model for orographic rainfall estimation is the upslope model (Collier, 1975), which assumes the condensation rate is proportional to the vertical wind velocity and that the condensed rainwater falls immediately to the ground. Considerations of downstream hydrometeor drift (Sinclair, 1994) may reduce the upslope model's biases. The linear theory for modeling orographic precipitation (spectral method) (Smith & Barstad, 2004), which improves on the upslope model, introduces a time delay component between condensation and rainfall, as well as vertical moisture dynamics. However, the spectral model is sensitive to its parameters (Paeth et al., 2017). One of the most significant disadvantages of the spectral model is its inability to account for the spatial variability of the climate variables and parameters. On the other hand, the upslope model incorporates spatial variability of the input variables, which makes it worthwhile despite its biases.

Methods
Our approach downscales low-resolution rainfall data from the European Centre for Medium-Range Weather Forecasts Reanalysis (Hersbach et al., 2020) (ERA5). The high-resolution rainfall fields are comparable to the Daymet gridded daily rainfall data set (Thornton et al., 2022), which serves as the ground truth. Because of model and data collection biases, ERA5 and observed precipitation do not have one-to-one correspondence at a daily scale, which makes learning a downscaling function between them challenging. A two-step downscaling process overcomes this problem (Figure 1). In the first step (GAN-1), rainfall is downscaled from 0.25° to 0.1° resolution, using ERA5 data as the predictor and the corresponding ERA5-Land data as the ground truth. ERA5-Land provides a replay of the land surface component of ERA5 at a finer resolution and has high spatial and temporal correspondence with ERA5 (Muñoz-Sabater et al., 2021). The downscaled rainfall is initially estimated by actively searching data on a manifold to learn the downscaling function incrementally using an iterative CGP. Upon convergence, the "first-guess" downscaled rainfall field and a physics-based estimation of orographic rainfall are processed by an adversarial learning framework (GAN-1) to refine finer-scale details. In the second step (GAN-2), upscaled Daymet rainfall fields become predictors for the corresponding high-resolution Daymet fields to train downscaling from 0.1° to 0.01° resolution. These two trained models provide the pathway from ERA5 predictors to Daymet-resolution downscaled rainfall. Even though this transformation may contain bias, our approach corrects it in the final bias correction step.
Conditional Gaussian Process
We use an iterative CGP regressor to build a dynamic data-driven downscaling method. Figure S1 in Supporting Information S1 shows a schematic representation. First, we index the training pairs of low- and high-resolution rainfall fields on a manifold (Ma & Fu, 2012; Ravela, 2016) to query and retrieve nearest neighbors and actively estimate the downscaling function. At each iteration, the downscaled rainfall field upscales again, which targets new data on the manifold for the next learning iteration. Upon convergence, it produces a "first-guess" downscaled rainfall field.

Let a low-resolution rainfall field be L_query, and its high-resolution counterpart H_query be the field to be generated. We make a nearest neighbor search through the low-resolution fields (L_train) of the manifold for the closest match to L_query, which we denote L_k. We also obtain the high-resolution counterpart associated with L_k, denoted H_k. We then iteratively improve L_k and H_k until L_query and L_k converge, after which we assign H_k to the desired "first-guess" downscaled field H_query. In the iterative update (Equation 1), D and U are downscaling and upscaling functions, respectively, and the rate α is set as a scaling constant. In this study, we used averaging and pooling as the upscaling function and, as the downscaling function, the conditional Gaussian process regressor

D(L) = H̄_train + C_HL C_LL^(−1) (L − L̄_train),

where H̄_train and L̄_train are the sample means of the training fields, C_LL is the sample conditional covariance of L_train, and C_HL is the cross-covariance between H_train and L_train. To overcome dimensionality issues (Yadav et al., 2020), ensemble-based reduced-rank square-root methods (Ravela & McLaughlin, 2007; Ravela et al., 2010) are employed.

Upslope Orographic Precipitation Estimation
We use the upslope model (Roe, 2005) for estimating orography-induced precipitation, assuming that precipitation is proportional to the total condensation rate in a vertical column of a saturated atmosphere induced by the vertical wind velocity. The orography-induced condensation rate S (Equation 3) is obtained by integrating the product of the orographic vertical wind velocity w and the vertical gradient of the saturation moisture content ρ q_s over the atmospheric column from the surface pressure p_s to the top-of-atmosphere pressure p_toa. Here, ρ is the air density, q_s is the saturation specific humidity, and p is the atmospheric pressure level. In this study, we assume the top of the atmosphere to be at the 200 hPa level.

The orography-induced vertical wind velocity at the surface is estimated by

w = u Z_e + v Z_n,   (4)

where u and v are the zonal and meridional components of the horizontal wind at the surface, and Z_e and Z_n are the slopes of the surface in the eastward and northward directions. We interpolate the elevation of the surface to the resolution of ERA5-Land before estimating the slope. w is presumed to decrease linearly from the surface to the top of the atmosphere, where it becomes zero. The saturation moisture content (i.e., ρq_s) is estimated by

ρ q_s = (p / (R_d T)) (R_d / R_v) (e_s / p) = e_s / (R_v T),   (5)

where R_d is the gas constant of dry air (287.04 J/kg/K), R_v is the gas constant of saturated air (461.5 J/kg/K), T is the air temperature, and e_s is the saturation vapor pressure. e_s is estimated from the Clausius-Clapeyron relation

e_s = e_s0 exp[ (L_v / R_v) (1/T_0 − 1/T) ],   (6)

where L_v is the latent heat of vaporization (2.26 × 10^6 J/kg) and e_s0 is the reference saturation vapor pressure at the reference temperature T_0. When T_0 is 273.16 K, e_s0 is 611 Pa. Equations 4-6 enable orography-induced vertical wind velocity and saturation moisture content calculations at discrete pressure levels up to 200 hPa, where ERA5 model outcomes are available. We compute the gradient at each pressure level using second-order finite differences to estimate the integral in Equation 3.
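A minimal NumPy sketch of this upslope estimate, following the relations above (linear decay of w with pressure, Clausius-Clapeyron saturation vapor pressure, second-order finite differences for the vertical gradient). Array shapes, grid-spacing arguments, and unit conventions are illustrative assumptions, not the study's code.

```python
import numpy as np

RD, RV = 287.04, 461.5        # gas constants for dry and saturated air (J/kg/K)
LV = 2.26e6                   # latent heat of vaporization (J/kg)
ES0, T0 = 611.0, 273.16       # reference saturation vapor pressure (Pa) at T0 (K)

def saturation_vapor_density(temperature_k):
    """rho * q_s = e_s / (R_v T), with e_s from the Clausius-Clapeyron relation (Eqs. 5-6)."""
    e_s = ES0 * np.exp((LV / RV) * (1.0 / T0 - 1.0 / temperature_k))
    return e_s / (RV * temperature_k)

def upslope_condensation(u_sfc, v_sfc, elevation, dx, dy, temperature, pressure):
    """Column-integrated orographic condensation rate (upslope model sketch).

    u_sfc, v_sfc : surface wind components (ny, nx), m/s
    elevation    : terrain height (ny, nx), m; dx, dy grid spacing, m
    temperature  : air temperature on pressure levels (n_levels, ny, nx), K
    pressure     : 1-D pressures (Pa), ordered surface first down to ~200 hPa
    Returns a (ny, nx) field in kg m^-2 s^-1 (equivalent to mm of water per second).
    """
    z_n, z_e = np.gradient(elevation, dy, dx)            # northward and eastward terrain slopes
    w_sfc = u_sfc * z_e + v_sfc * z_n                     # Eq. 4: vertical velocity at the surface

    p_sfc, p_toa = pressure[0], pressure[-1]
    # w decays linearly from its surface value to zero at the top of the atmosphere.
    w = w_sfc[None, :, :] * (pressure[:, None, None] - p_toa) / (p_sfc - p_toa)

    rho_qs = saturation_vapor_density(temperature)        # (n_levels, ny, nx)
    d_rho_qs_dp = np.gradient(rho_qs, pressure, axis=0)   # second-order finite differences
    # Integrate over the column; the minus sign accounts for pressure decreasing upward.
    cond = -np.trapz(w * d_rho_qs_dp, pressure, axis=0)
    return np.maximum(cond, 0.0)                          # only condensing columns produce rain
```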
Spectral Method for Orographic Precipitation Estimation
The linear method for orographic rainfall estimation (the spectral method) incorporates the time delay between conversion from cloud water to hydrometeor and from hydrometeor to precipitation, as well as vertical moisture dynamics (Smith, 2003). The Fourier-domain transfer function relating terrain to precipitation is

P̂(k, l) = C_w i σ ĥ(k, l) / [ (1 − i m H_w)(1 + i σ τ_c)(1 + i σ τ_f) ],

where P̂(k,l) is the Fourier transform of the orographic precipitation, ĥ(k,l) is the Fourier transform of the terrain elevation, k and l are horizontal wavenumbers, σ = uk + vl is the corresponding intrinsic frequency, u and v are zonal and meridional components of wind, C_w = ρq_s is the thermodynamic uplift sensitivity factor, a coefficient relating condensation rate to vertical motion (see Equation 5), τ_c and τ_f are the cloud-water conversion and hydrometeor fallout time delays, m is the vertical wavenumber, and H_w is the depth of the moist layer penetrated by vertical wind. The vertical wavenumber is m^2 = (N_m^2 / σ^2 − 1)(k^2 + l^2), where N_m^2 is the moist static stability, given by N_m^2 = (g/T)(γ − Γ_m), g is gravitational acceleration, T is temperature, γ is the environmental lapse rate, and Γ_m is the moist adiabatic lapse rate. The moist-layer depth follows from the water-vapor scale height, H_w = R_v T^2 / (L_v γ), where R_v is the gas constant of saturated air (461.5 J/kg/K) and L_v is the latent heat of vaporization (2.26 × 10^6 J/kg).

Adversarial Learning
In a GAN, a Generator (G) and a Discriminator (D) network each play a game. Here, the game's outcome is to produce high-resolution rainfall from low-resolution input. The Generator (G(L; α_G)) maps the input low-resolution (L) rainfall to a super-resolution reconstruction using a deep convolutional and upsampling network with parameters α_G. The Discriminator (D(L; α_D)), repeatedly fed fine-resolution rainfall ground truth (H) and the super-resolution generator output, learns to tell them apart. The output of the Discriminator is a scalar value, which represents the probability of a rainfall field being "real" (i.e., ground truth). As the two networks train iteratively as adversaries, the Generator's rainfall fields become more realistic, and the Discriminator better distinguishes between them. This is a minimax optimization problem, in which the Generator tries to increase log D(G(L)) while the Discriminator tries to reduce it by optimizing their respective parameters α_G and α_D. The adversarial loss functions of the networks follow the standard formulation of Goodfellow et al. (2014),

min_G max_D  E_H[ log D(H) ] + E_L[ log(1 − D(G(L))) ].

However, in this study, we use the Relativistic Average GAN (RaGAN) approach, which compares the discriminator outcome of a real image (H) with the average over the fake images (G(L)) and vice versa. Compared to its non-relativistic counterpart, the Relativistic Discriminator increases stability and generates higher-quality samples (Jolicoeur-Martineau, 2018). The discriminator loss function (L_D) in our study is

L_D = −E_H[ log sigmoid( C(H) − E_L[C(G(L))] ) ] − E_L[ log(1 − sigmoid( C(G(L)) − E_H[C(H)] )) ],

where C(·) denotes the raw (pre-sigmoid) discriminator output. The loss function for the Generator (L_G) combines the corresponding relativistic adversarial loss, with the roles of real and generated fields exchanged, and a pixel-wise ℓ1 loss between G(L) and H, with their relative weighting controlled by λ ∈ R, a regularization factor tunable as a hyperparameter.
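A compact PyTorch sketch of relativistic average adversarial losses of this kind, with the pixel-wise ℓ1 term added to the generator objective. It is a generic illustration of RaGAN as described above rather than the study's implementation; `disc` is assumed to return raw (pre-sigmoid) scores, and weighting the ℓ1 term by `lam` is one plausible choice.

```python
import torch
import torch.nn.functional as F

def ragan_discriminator_loss(disc, real, fake):
    """Real scores should exceed the average fake score, and vice versa."""
    d_real, d_fake = disc(real), disc(fake.detach())
    loss_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return loss_real + loss_fake

def ragan_generator_loss(disc, real, fake, lam=10.0):
    """Relativistic adversarial term (roles of real and fake swapped) plus pixel-wise L1."""
    d_real, d_fake = disc(real), disc(fake)
    adv_real = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    adv_fake = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return adv_real + adv_fake + lam * F.l1_loss(fake, real)
```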
The network architectures of the Generator and Discriminator follow ESRGAN (X. Wang et al., 2018); Figure S2 in Supporting Information S1 shows a schematic representation. The basic building block of the ResNet-style (K. He et al., 2016) generator is a dense residual block, which is multiple convolutional blocks connected with dense connections. The ESRGAN (X. Wang et al., 2018) approach replaced them with the Residual in Residual Dense Block (RRDB), which consists of stacked dense residual blocks connected with skip connections. The convolutional component of the Generator stacks RRDBs with a global skip connection. It operates on the low-resolution space, followed by an upsampling part that increases the resolution of the rainfall field. In our model, for GAN-1, we have delegated the upsampling job to the CGP, and the GAN performs only the convolutional operations on the high-resolution space. GAN-2 does not use the CGP but instead uses a sub-pixel convolution (Shi et al., 2016) (also known as pixel-shuffle) based upsampling network. For the Discriminator, we have used a VGG-style (Simonyan & Zisserman, 2014) deep convolutional network that converts a given real/fake rainfall field into a single value, which is interpretable as the probability that the rainfall field is "real." Unlike ESRGAN, we do not use a pre-trained VGG network to compute a perceptual loss, as such networks are unsuitable for capturing climate data features.

Hazard Quantification
Due to the lack of one-to-one correspondence between ERA5 and Daymet, the final downscaled rainfall and the ground truth are not event-wise comparable. However, one can compare the rainfall risk (hazard) estimated from both of them. In climate studies, risk typically includes hazard, exposure, and vulnerability; mathematically, the risk is a distributional representation (e.g., exceedance probability) of a variable of interest. We use the term "risk" in the latter sense. To assess the risk of an extreme rainfall event, a two-parameter Generalized Pareto distribution (Hosking & Wallis, 1987) is fit to the annual return periods (R) calculated from the empirical cumulative distribution function (E) of rainfall r at each grid point of each spatial grid and time window of interest. The probability density function of the two-parameter Generalized Pareto distribution is

f(r; σ, ξ) = (1/σ) (1 + ξ r/σ)^(−1/ξ − 1),

with scale σ and shape ξ. We compare the average annual return period curves simulated by the fitted Pareto distributions for both the high-resolution truth and the super-resolution predictions and estimate the difference between their means (bias) and the standard error.
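A sketch of hazard estimation along these lines with SciPy, fitting a two-parameter Generalized Pareto distribution to rainfall exceedances at one grid point and evaluating depths at selected annual return periods. The 95th-percentile threshold and the conversion from exceedance rate to return period are illustrative choices, not necessarily those of the study.

```python
import numpy as np
from scipy import stats

def return_period_curve(daily_rain, return_periods=(2, 5, 10, 25, 50, 100, 1000)):
    """Rainfall depth (same units as input) at the requested annual return periods."""
    daily_rain = np.asarray(daily_rain)
    threshold = np.quantile(daily_rain, 0.95)             # peaks-over-threshold selection
    excess = daily_rain[daily_rain > threshold] - threshold
    # Two-parameter GPD: location fixed at zero; fit shape (xi) and scale (sigma).
    xi, _, sigma = stats.genpareto.fit(excess, floc=0.0)

    years = len(daily_rain) / 365.25
    rate = len(excess) / years                             # mean exceedances per year
    curve = {}
    for rp in return_periods:
        q = 1.0 - 1.0 / (rp * rate)                        # quantile of the excess distribution
        curve[rp] = threshold + stats.genpareto.ppf(q, xi, loc=0.0, scale=sigma)
    return curve

# Illustrative usage with synthetic daily rainfall (roughly 20 years).
rng = np.random.default_rng(1)
print(return_period_curve(rng.gamma(shape=0.4, scale=8.0, size=20 * 365)))
```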
Bias Correction
ERA5 underestimates the rainfall hazard because it tends to underestimate the intensity of severe storms, leading to bias. To reduce this bias, we model the observed distribution of rainfall excess beyond the 99.9th percentile, conditioned on spatial location and season (month of the rainfall event). Injecting perturbations from the excess rainfall distribution into the deterministically downscaled rainfall fields reduces the bias. Additionally, it yields higher-order moments useful for an optimal estimation-based bias correction method (Equation 15) that provides further improvement. We estimate the excess rainfall distribution on the training data set and develop the optimal estimation-based bias correction equations on the validation data set. In the optimal estimator, y_t, y_p, and y*_p represent the annual return period curves for the ground truth, the mean of the stochastic injections (the prior), and the bias-corrected (posterior) rainfall, respectively. Again, reduced-rank square-root methods are helpful for highly resolved (and, therefore, high-dimensional) risk-curve (e.g., return periods, exceedance probabilities) ensembles (Ravela & McLaughlin, 2007; Ravela et al., 2010). Using a quantile mapping method, we back-project the bias-corrected return period curve onto the rainfall fields. The process of learning the correction and back-projection is applied once. Even though it may be possible to learn and apply it iteratively, doing so risks overfitting.

Evaluation Methods
Prior research on ML-based super-resolution techniques is mainly focused on computer vision problems and evaluates models based on metrics such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) (X. Wang et al., 2018). However, those metrics are unsuitable for assessing extreme rainfall patterns and their distributions. We propose the following evaluation methods, which are more suitable for our use cases. The supplementary material includes additional metrics for comparison.

Empirical CDF-Based Metrics
For this study, we define an extreme event as the 90th percentile or higher mean rainfall in the study region. For each extreme event, we generate the ECDF of rainfall values on all spatial grids for the downscaling model's high-resolution reference and super-resolution prediction. In a non-ideal scenario, these two CDFs do not overlap. The mean horizontal difference between these two CDFs is known as the bias, and the supremum vertical difference is known as the Kolmogorov-Smirnov (KS) statistic (Massey, 1951). Each event can be represented as a point in the bias-KS statistic plane. A better-performing method produces a point cloud closer to the origin, that is, one with relatively less bias and a smaller KS statistic. In practice, the point clouds generated by various downscaling methods may overlap significantly, and it may be challenging to visualize their differences. We summarize each point cloud by its mean, based on which we can assess the performance of each downscaling method at a glance.

Mutual Information-Based Metric
Entropy measures the expected amount of information or uncertainty inherent to a random variable outcome. The entropy H(X) (Cover & Thomas, 2006) of a random variable X is given by H(X) = −Σ_x p(x) log p(x), where p(·) represents the probability function. The mutual information I(X, Y) between two random variables X and Y represents how much the uncertainty of X is reduced when Y is known, I(X, Y) = H(X) − H(X|Y). We can normalize the mutual information based on the entropy of the individual random variables.
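A sketch of these two families of metrics for a single event, operating on flattened arrays of co-located observed and downscaled rainfall; the histogram binning and the square-root entropy normalization are illustrative choices, since the study does not spell them out here.

```python
import numpy as np
from scipy import stats

def ecdf_bias_and_ks(truth, pred):
    """Mean horizontal ECDF difference (bias) and the Kolmogorov-Smirnov statistic."""
    bias = np.mean(np.sort(pred) - np.sort(truth))   # quantile-wise offset; equal-sized samples
    ks = stats.ks_2samp(truth, pred).statistic       # supremum vertical CDF difference
    return bias, ks

def normalized_mutual_information(truth, pred, bins=32):
    """Mutual information between co-located rainfall values, normalized by the entropies."""
    joint, _, _ = np.histogram2d(truth, pred, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy)
```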
The super-resolution produced by a better-performing downscaling model will have higher mutual information with the high-resolution truth. The measure as used here is sensitive to spatial shifts.

Results
We employ a two-step downscaling method to tackle the lack of correspondence between ERA5 and Daymet extreme rainfall fields. In the first step, downscaled rainfall is initially estimated by iteratively learning the downscaling function by actively searching data on a manifold using a CGP. After convergence, the downscaled rainfall field and a physics-based estimate of orographic rainfall are refined using an adversarial learning framework (GAN-1). In the second step (GAN-2), upscaled Daymet rainfall fields are utilized as predictors for the corresponding high-resolution Daymet fields to train downscaling. These two trained models are applied in succession to obtain a super-resolution reconstruction of rainfall that is further bias-corrected to capture the observed distribution of extremes.

4.1. Step 1: Downscaling From 0.25° to 0.1°
We evaluate the following methodological choices for the first step of downscaling:
1. Bicubic Interpolation
2. Physics-based downscaling (Upslope Method)
3. Physics-based downscaling (Spectral Method)
4. Gaussian Process Regression (GP)
5. GAN
6. GP + GAN
7. GP + GAN + Upslope
8. GP + GAN + Spectral
To evaluate the methods, the geographic region surrounding Cook County (Chicago), Illinois, USA, a flood-prone area, is selected as the primary domain of this study. Additionally, we chose another study area in Denver, Colorado, USA, a mountainous region, to examine the efficacy of the orographic precipitation models.

Figure S3 in Supporting Information S1 presents a qualitative assessment of the performance of the above methods by comparing low-resolution input, super-resolution output, and high-resolution reference for a particular extreme event. We notice that the downscaled rainfall fields capture the spatial pattern of observed rainfall well when a learning method is involved. However, the naive bilinear interpolation method and the pure physics-based model without statistical post-processing cannot capture the spatial pattern of the rainfall at a finer scale. Models incorporating adversarial networks seem to outperform their alternatives.
In Figures 2 and 3, we present a quantitative comparison among the models based on the Empirical Cumulative Density Function (ECDF)-based and mutual information-based metrics described in Section 3.7. The normalized mutual information is individually computed for each spatial grid, and box plots represent their spread. The top and bottom lines of each box represent the 25th and 75th percentiles, and the red middle line represents the 50th percentile of the mutual information. The bias and the KS statistic represent the deviation of the ECDF of the downscaled rainfall events from that of the observed ones. We consider a model better-performing when the corresponding mutual information is high and the bias and KS statistic are low. As expected, the bilinear interpolation method and the pure physics-based models perform worst on all metrics. The performance of the rest of the models is comparable. There is a slight improvement in performance for the combined GP + GAN method compared to its GP-only and GAN-only counterparts. GP + GAN + Physics models outperform all other models for both the mutual information and ECDF-based metrics. The improvement from the more sophisticated physics model (Spectral method) over the simpler physics model (Upslope method) is minimal. The above findings are consistent for both the Chicago and Denver regions. We performed a t-test on each pair of methodological choices to examine whether the changes were statistically significant. Figures S5 and S6 in Supporting Information S1 show that the improvements of the CGP + GAN + Physics methods are statistically significant for the mutual information-based metric but not for the ECDF-based metrics in both regions.

4.2. Step 2: Downscaling From 0.1° to 0.01°
For the second step of downscaling, the following three methods are evaluated:
1. Bicubic Interpolation
2. CGP
3. GAN
Other methods from the previous step are skipped because of their computationally demanding nature and the lack of climatic data availability at this scale.

Figure S4 in Supporting Information S1 compares the above methods with a specific event's high-resolution truth (Daymet rainfall). While the interpolation method can capture the large-scale spatial structure of the rainfall pattern, it fails miserably when it comes to the high-frequency details. The CGP approach captures the high-frequency structure relatively better, but it suffers from high noise in the output, which deteriorates the overall performance. The GAN approach can reasonably capture high-frequency rainfall, despite room for improvement. Figure 4 presents the ECDF and mutual information-based metrics for the Chicago and Denver regions. CGP outperforms interpolation by a large margin in bias and KS statistic but performs poorly on mutual information due to high noise. GAN outperforms both methods on all metrics.
Combined Downscaling
Once both downscaling models are trained, we apply them in succession to obtain the final downscaling outcomes. A qualitative assessment of the method's performance can be seen in Figure 5, where we compare a rainfall event simulated by ERA5 and its downscaled outcomes. A direct comparison with observation is impossible due to the lack of correspondence between ERA5 and Daymet rainfall. Instead, we evaluate the performance of the combined downscaling method in capturing extreme rainfall hazard, following Section 3.5. Figure 6 compares the Pareto-distribution-simulated mean annual extreme rainfall return period curves for observations, deterministic downscaling, stochastic downscaling, and the optimal estimate. Deterministic predictions underestimate the risk curve because ERA5 underestimates the magnitude of some very extreme (>99.9th percentile) rainfall events; however, the stochastic injection of residual extreme rainfall and optimal bias correction close the gap. The mean corrected return-period curves are derived along with an upper and lower bound representing one standard error. This error bound serves to quantify the uncertainty surrounding the magnitude of rainfall across all events in the testing period. The final annual projections show around 6.8% bias and 10% standard error at Chicago and 6.4% bias and 12% standard error at Denver, even at an estimated extreme 1000-year return period. It is important to note that the data may not contain a 1000-year rainfall event; the value is a prediction from the fitted distribution. Finally, we back-project the bias-corrected return-period curve onto the rainfall field to obtain the final bias-corrected downscaled rainfall predictions. An example rainfall field from this final prediction can be seen in Figure 5.

Discussion
This study leverages data, physics, and ML to develop an approach for rainfall downscaling and estimating extreme rainfall hazards. At the outset, we sought to overcome the difficulty that insufficient training data at the extremes would raise, the nonlinearity that straddling a substantial separation of scales entails, and the lack of correspondence between the source and target fields in the training data. In contrast to conventional physics-only, statistics-only, and physics-followed-by-statistics methods, we couple simplified physics and statistics with generative learning, which is novel and promising. Physics (upslope/spectral), statistics (CGP), and learning (GAN) fulfill distinct roles. In our downscaling method, a generative adversarial learning model captures the relatively fine-scale structures of the rainfall field. Priming the learning model with a CGP alleviates the need for extensive data augmentation strategies, given sparse historical extreme rainfall data. Priming with orography accrues similar advantages and additionally improves the physical consistency of the result. While each step is somewhat limited in skill, the steps compensate for each other's limitations, doing better together than any one alone and improving the model significantly in a statistical sense (see Figures 2-4, Figures S5 and S6 in Supporting Information S1). For simplicity, one could take alternative approaches, such as directly upscaling Daymet rainfall to ERA5 resolution for training. However, the proper upscaling function is nonlinear, and ignoring this can introduce significant bias in the learned downscaling mapping.
Our work suggests that if physics and statistics can handle processes at the "larger" scales, learning appears to do a fine job at the "finer" ones.When tested with data containing correspondence (ERA5-to-ERA5Land or upscaled observations to observations), each GAN is skillful in detailed reconstruction.However, we must be mindful that the final result in the absence of correspondence will have uncertainties beyond those induced by observational noise.Input bias (from ERA) and the downscaling operator couple to inflate the uncertainties beyond intrinsic noise levels.In applications such as risk, aggregated feedback in the form of bias correction ameliorates the problem.In other applications, such as precipitation nowcasting or short-term forecasting, an online framework, such as data assimilation, may be needed to reduce uncertainties. Our method applies to the current climate.In principle, it also applies to future climate hazard studies by directly using the downscaling learned in the present climate or adapting it to train with a few additional high-and lowresolution climate model runs.Our approach can incorporate multiple sources to quantify uncertainty.This includes Monte Carlo samples of coarse resolution model input fields or their perturbations, which our system rapidly downscales.Backprojecting resampled return period curves quantifies additional uncertainties in rainfall fields.This way, rapid, robust assessments of regional rainfall-driven risk estimation become tractable.Estimating parameter uncertainty in the learning model (Trautner et al., 2020) would improve uncertainty quantification.We anticipate that incorporating physics for other processes, such as convective rainfall, will increase the model's efficacy.Additionally, adapting the loss function to capture finer-scale details remains an area for investigation. 
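The Monte Carlo route to uncertainty quantification mentioned above can be sketched as follows: perturb the coarse input field, push every sample through the (fast) learned downscaler, and summarize the spread of the resulting hazard curves. The multiplicative Gaussian perturbation and the placeholder downscale_fn / hazard_fn interfaces are assumptions for illustration; in practice the samples would come from ensemble members or estimated input errors.

```python
import numpy as np

def mc_hazard_uncertainty(coarse_field, downscale_fn, hazard_fn,
                          n_samples=50, noise_std=0.1, rng=None):
    """Mean and spread of hazard curves obtained from perturbed coarse inputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    curves = []
    for _ in range(n_samples):
        noise = 1.0 + noise_std * rng.standard_normal(coarse_field.shape)
        curves.append(hazard_fn(downscale_fn(coarse_field * noise)))
    curves = np.asarray(curves)
    return curves.mean(axis=0), curves.std(axis=0)
```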
Conclusions In contrast to statistical, physical, or detailed simulation followed by statistics, we developed a downscaling approach that uses simplified physics and statistics to prime a two-stage GAN, post-correcting the downscaled output with an optimal estimation-based bias correction scheme.We apply this to downscale ERA5 model precipitation to Daymet resolution without explicit correspondence between the fields.While KS/bias comparisons of distributions are not sensitive to geometric shifts, the mutual information measure is sensitive to spatial patterns.Ablation experiments using correspondence on the model or data downscaling sides using mutual information indicate that coupling simplified physics (e.g., Upslope or Spectral) with CGP and GAN is statistically significant.However, as expected, the proposed and alternative distribution comparisons are weakly informative.We tested the approach in two mid-latitude regions: Denver, where orography is significant, and Chicago, where it isn't, with similar benefits.However, we found no significant difference in performance from the choice of the physical scheme, suggesting that their large-scale patterns are of relative importance.Similarly, including CGP in the combination is significant, though it performs less effectively.This suggests that CGP and physics might compensate each other while reducing the data burdens on the GAN, effectively serving as a priming mechanism.Due to Daymet's smooth fields, an additional example using CHIRPS (Text S2 and Figure S7 in Supporting Information S1) shows that the downscaling approach can reproduce the patterns in the training data.Our work can be applied to downscale from climate models to observed data or from low-resolution to high-resolution climate models, which we believe can enable short-term and long-term climate risk assessment for sustainability-related decision-making applications. Figure 1 . Figure1.Overview of the proposed downscaling methodology.Coarse-resolution (0.25°× 0.25°) climate model outputs are downscaled to fine-resolution (0.01°× 0.01°) rainfall in two steps (GAN-1 and GAN-2).ERA5 rainfall is initially downscaled in the first step using conditional Gaussian Processes and combined with orographic rainfall estimates from the upslope model.The outputs pass to a Generative Adversarial Network (GAN-1), which produces fine-resolution rainfall fields.In the next step, another adversarial network (GAN-2) is trained on upscaled Daymet rainfall fields and then applied to the output of GAN-1 to produce even finer-resolution rainfall.Over a Validation period, rainfall return-period distributions computed from downscaled and observed rainfall fields train bias correction functions.Finally, Bias-corrected rainfall risk curves are back-projected onto rainfall fields.The white ellipses denote methods, and the colored double-boxes denote input or output to the models.The dotted arrows represent actions performed only during training and not in operation. Figure 4 . Figure 4. Performance comparison of three downscaling methods (Bicubic Interpolation, conditional Gaussian process, and Generative Adversarial Network), for downscaling 0.1°rainfall to 0.01°, at the Chicago (top row) and Denver (bottom row) region, based on (a, c) ECDF-based metrics and (b, d) Mutual Information-based metric. Figure 5 . Figure 5. 
Qualitative comparison of rainfall event simulated by ERA5 and downscaled predictions.The top row showcases an event in the Chicago region on 12 July 2017, and the bottom row shows an event in the Denver region on 12 September 2013.(a, e) Low resolution rainfall simulated by ERA5.(b, f) Intermediate resolution rainfall produced by the first step of downscaling.(c, g) High-resolution rainfall produced by the second step of downscaling.(d, h) Final downscaled rainfall produced by bias correction. Figure 6 . Figure 6.Comparison of mean annual return period curves between high-resolution ground truth (Daymet) and superresolution prediction at (a) Chicago and (b) Denver.The solid lines represent the mean and the shaded area represents the standard error.
2024-06-13T15:42:31.233Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "2370078068508964b94d598f0a7524f4d3dd1e64", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1029/2023ms003860", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "eafa55f5702bfd17586e0bff336662340cef002f", "s2fieldsofstudy": [ "Environmental Science", "Physics", "Computer Science" ], "extfieldsofstudy": [] }
14420425
pes2o/s2orc
v3-fos-license
Kinetic Monte-Carlo simulations of sintering We simulate the sintering of particle aggregates due to surface diffusion. As a method we use Kinetic Monte-Carlo simulations in which elasticity can explicitly be taken into account. Therefore it is possible to investigate the shape relaxation of aggregates also under the influence of an external pressure. Without elasticity we investigate the relaxation time and surface evolution of sintering aggregates and compare the simulations with the classical Koch-Friedlander theory. Deviations from the theoretical predictions will be discussed. INTRODUCTION One interesting aspect of powder processing is thermal sintering: Due to atomic diffusion solid bridges between particles form and grow. This lowers the surface free energy. If one waits long enough, complete coalescence of the particle aggregate will occur. The typical time scales depend on the particle sizes and shapes, the material type and the prevailing temperature. Here we address open questions concerning the sintering dynamics of large aggregates. It has long been known, that the time for the sintering of two identical particles is proportional to the 4th power of the radius of the final particle. This is at least true for large particles above the roughening temperature (Nichols 1966). For lower temperatures the equilibration time increases exponentially (Combe et al. 2000). In earlier investigations elasticity has not explicitly been taken into account, i.e. the atoms were restricted to discrete lattice positions. However, an elastic deformation changes the diffusion constant and hence influences the sintering process. For example, the lattice constant of nano-particles may differ from its bulk value due to surface tension, or particles may be compressed differently in a powder under external load, depending on their position in the contact force network. Therefore we developed a program where the atoms are allowed to be displaced from their lattice sites in order to minimize the total elastic energy of the system. MODEL We use a three dimensional Kinetic Monte-Carlo (KMC) method to simulate the sintering of aggregates of nano-particles on a fcc lattice. This means that grain boundaries are neglected in this paper. In our simulation the atomic displacements are calculated by finding the nearest local energy minimum by means of a conjugate gradient method (Press et al. 1992). In order to save computing time, we update only the neighborhood of diffusing atoms, and relax the whole system elastically only in the beginning and, if necessary, again after rather large time intervals. The activation energies for the hopping rates then depend on the relaxed real positions of the atoms. The atoms interact via a Lennard-Jones potential. We calculate the binding energies E b,i as the sum of the pair interactions up to the fourth nearest neighbor on the underlying fcc-lattice . In a diffusion step an atom hops from its initial position (nominal lattice site i) to an unoccupied one, which is next to i on the underlying fcc-lattice. The activation energy for such a move is the difference between the energy at the saddle point and the energy at the initial position of the atom. For the calculation of the saddle point energy E sp one has to distinguish two cases: The final site may be stable (at least three occupied neighbors) or unstable. In the latter case one can actually not find a local energy minimum at this site (see Fig.1). 
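For readers unfamiliar with the machinery, a minimal bond-counting sketch of the hopping rates and the event selection used in kinetic Monte-Carlo is given below; the strain-dependent saddle-point energies for the two cases just introduced are specified in the continuation of the text (Eqs. 1 and 2). The bond energy, attempt frequency, and temperature are placeholder values, not the parameters used in this work.

```python
import numpy as np

K_B = 8.617e-5      # Boltzmann constant in eV/K
E_BOND = 0.1        # assumed binding energy per occupied neighbor (eV)
NU = 1.0e13         # assumed attempt frequency (1/s)

def hop_rate(n_neighbors_initial, e_saddle, temperature):
    """Rate of one hop, Gamma = nu * exp(-beta * E_act), with the activation energy
    taken as the saddle-point energy minus the (bond-counting) initial binding energy."""
    beta = 1.0 / (K_B * temperature)
    e_initial = -n_neighbors_initial * E_BOND
    return NU * np.exp(-beta * (e_saddle - e_initial))

def choose_event(rates, rng=np.random.default_rng(0)):
    """Gillespie-type selection: pick a hop with probability proportional to its rate
    and advance the clock by an exponentially distributed time increment."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    return event, rng.exponential(1.0 / total)
```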
Then the atom continues to move from this intermediate site to a randomly chosen stable final position next to it (with some probability it may actually jump back to its starting position i). (Figure caption: the radius of particle one is 10 atoms; the radius of particle 2 is varied from 4 to 20, as denoted at the bottom of the graph.) Accordingly, the saddle point energy for the first case is taken as

E_sp = E_sp^0 + α (E'_b,i + E'_b,f)/2. (1)

The first term is a constant parameter. The second one describes the strain dependence of the saddle point energy. E'_b,i and E'_b,f are the strain-dependent contributions to the binding energy at the initial ('i') and final ('f') atom positions. The average is taken to guarantee symmetry, and α is an empirical parameter. This ansatz is justified by the finding that the binding as well as the saddle point energy depend approximately linearly on the strain (Schroeder & Wolf 1996). If the hop ends in an unstable, intermediate position, this is approximately regarded as the saddle point. We calculate the energy of the atom at the intermediate site, giving it the average displacement of the stable atoms on the neighboring sites. This energy is taken as the saddle point energy in the second case. In the KMC simulation every process is selected according to its corresponding rate, which is calculated from the activation energy E_act by

Γ = ν exp(−β E_act), (2)

where ν is a fixed attempt frequency and β = 1/(k_B T), with the Boltzmann constant k_B and the absolute temperature T. Without elasticity E_b,i corresponds to the bond counting model (Newman & Barkema 1999) and the strain-dependent terms in (1) vanish.

For two particles of equal radius r it is well known that the equilibration time, in which the two particles coalesce into a single one, is

τ ∝ r^4 ∝ N^(4/3), (3)

where N ∝ r^3 is the total number of atoms. But what happens if the two diameters differ? In Fig. 2 the equilibration time is plotted versus the number of atoms. If (3) were valid, the double-logarithmic plot should give a straight line with slope 4/3. Instead we find that τ ∝ r̄^4 with the reduced radius r̄ of the two particles, defined in Eq. (4) (see Fig. 3).

Surface evolution during sintering. The surface evolution during the sintering process is usually described by the Koch-Friedlander theory (Koch & Friedlander 1990):

dA/dt = −(A(t) − A_eq)/τ. (5)

Here A(t) is the surface of the agglomerate, A_eq the surface in equilibrium, and τ a relaxation time. However, this equation is interpreted and applied in several different ways in the literature. Sometimes τ and A_eq are viewed as constants, meaning the global relaxation time and the equilibrium surface area the whole aggregate will have after coalescence. In this case Eq. (5) describes an exponential decay. In many cases τ and A_eq are regarded as time-dependent, local quantities. One assumes that the aggregate coarsens homogeneously by pairwise coalescence of particles. As explained above, the time constant τ is then determined by the current radius of the constituent particles, which after n pairwise coalescence steps is 2^(n/3) times the radius of the initial primary particles. The sintering thus becomes slower. Using a continuously varying τ according to Eq. (6), where the constant solid volume V of the aggregate determines A_eq, the theory has recently been successfully applied to experimental sintering data of Ni particles to calculate the activation energy of the relevant diffusion process, whereas the evaluation with a constant τ gives unreasonable energy values (Tsyganov et al. 2004). What is missing is a microscopic justification of Eq. (5). Therefore we simulated the sintering of agglomerates and measured the surface area.
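To illustrate the two readings of Eq. (5) discussed above, the sketch below evolves the surface area once with a constant τ (a plain exponential decay) and once with a τ that grows as relaxation proceeds, as a crude proxy for particle coarsening. The specific growth law chosen for τ is an assumption made only for illustration; the fits reported below use the surface areas actually measured in the simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kf_constant_tau(t, a0, a_eq, tau):
    """Analytical solution of dA/dt = -(A - A_eq)/tau for constant tau and A_eq."""
    return a_eq + (a0 - a_eq) * np.exp(-t / tau)

def kf_variable_tau(t_span, a0, a_eq, tau0, n_eval=200):
    """Numerical solution with a tau that increases as the aggregate coarsens."""
    def rhs(t, a):
        progress = np.clip((a0 - a[0]) / (a0 - a_eq), 0.0, 1.0)
        tau = tau0 * 2.0 ** (4.0 * progress)     # illustrative coarsening law only
        return [-(a[0] - a_eq) / tau]
    sol = solve_ivp(rhs, t_span, [a0], t_eval=np.linspace(t_span[0], t_span[1], n_eval))
    return sol.t, sol.y[0]
```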
Typical snapshots are shown in Fig. 4. We fitted the surface area with the analytical solutions obtained for constant τ or with Eq. (6), respectively. The constant A_eq was obtained as the average asymptotic surface area for large times. One can distinguish two different stages in which either a constant or a variable τ gives a better description of the surface dynamics. At early times, when the particles successively merge together and the average radius increases, the fit with a variable τ gives a better description (Fig. 5(a)). When the average diffusion length does not increase significantly any more, a constant τ is a better assumption (Fig. 5(b)), but surprisingly its value is smaller than at the end of the early-time regime with variable τ.

Influence of elasticity on sintering. In order to understand the influence of an external stress we allowed elastic displacements of the atoms as described above. We set up a configuration of two particles with periodic boundary conditions in x-direction. The strain is controlled by the total system size in x-direction, which is kept constant during each simulation. In Fig. 6 an initial and a final configuration are shown. One expects that the final configuration is reached faster under compression than under tension. We measure the average squared radius R^2(t) along the x axis. The value R^2(t) reflects how far away the system is from the thermodynamic equilibrium state. Its time dependence is plotted in Fig. 7. It is scaled with the initial value, R^2(t = 0), in order to eliminate the effect of the Poisson ratio for a better comparison of the curves for different initial strains. The initial configuration (upper part of Fig. 6) consists of two spheres whose surface is not in thermal equilibrium. Therefore the curves initially rise above 1 due to surface roughening. Looking at the relaxation behaviour we find that both systems evolve asymptotically similarly.

Figure 6. Sintering of two clusters with periodic boundary conditions in x-direction. An external stress (compressive/tensile) is applied by changing the system size in x-direction.

However, the relaxation process is faster for compressive strain, which becomes obvious from the larger slope at the beginning of the coalescence. The equilibrium state is therefore reached earlier. The reason is that α > 1 in Eq. (1) for hopping diffusion in Lennard-Jones systems, and the binding of atoms on the surface becomes weaker under lateral compression of the surface (Schroeder & Wolf 1996). This implies that the activation energy decreases under lateral compression. The cylindrical configuration reached at the end of our simulation is probably only metastable: due to the periodic boundary conditions it corresponds to an infinitely long solid cylinder, which should undergo a Rayleigh instability and split up into separate spheres. We think that thermal fluctuations in this direction are the reason why R increases again for larger times.

CONCLUSIONS AND OUTLOOK

We developed a 3d-KMC simulation program to model the sintering of nano-particles. The special feature of the program is that atoms are not fixed at their lattice sites, in contrast to common KMC codes. This allows us to analyse the effect of elasticity on the sintering behavior. The validation of the Koch-Friedlander theory leads to interesting results: the assumption of a constant τ can only be applied if the characteristic diffusion length does not change.
At the beginning of the sintering process of an agglomerate consisting of many clusters, the characteristic length scale changes. In this case the assumption of a variable τ gives a better description of the surface evolution.

Figure 7. Average squared radius perpendicular to symmetry axis, scaled by initial value for the configuration shown in Fig. 6.

We found that the equilibration time for two clusters of different size follows the r̄^4 power law with the reduced radius r̄ of Eq. (4). The influence of elasticity for our set of parameters is only felt in the case of an external force. There it is found that compressive stress leads to faster relaxation. Without external stress it seems that the influence of the surface tension is negligible. This need not always be the case, as our potential does not show strong surface tension. Therefore a next step would be to choose a potential that shows a stronger surface tension. So far grain boundaries were neglected, as we used a continuous fcc lattice. In the current development of the code these grain boundaries will be implemented.
2014-10-01T00:00:00.000Z
2005-03-14T00:00:00.000
{ "year": 2005, "sha1": "d2b9d7c23c3858245d077a30f99e34f8c9d7aeba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d2b9d7c23c3858245d077a30f99e34f8c9d7aeba", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
8951246
pes2o/s2orc
v3-fos-license
ALRD: AoA Localization with RSSI Differences of Directional Antennas for Wireless Sensor Networks In this paper, we fit RSSI values into a parabola function of the AoA between 0° and 90° by applying quadratic regression analysis. We also set up two-directional antennas with perpendicular orientations at the same position and fit the difference of the signal RSSI values of the two antennas into a linear function of the AoA between 0° and 90° by linear regression analysis. Based on the RSSI-fitting functions, we propose a novel localization scheme, called AoA Localization with RSSI Differences (ALRD), for a sensor node to quickly estimate its location with the help of two beacon nodes, each of which consists of two perpendicularly orientated directional antennas. We apply ALRD to a WSN in a 10 × 10  m indoor area with two beacon nodes installed at two corners of the area. Our experiments demonstrate that the average localization error is 124 cm. We further propose two methods, named maximum-point minimum-diameter and maximum-point minimum-rectangle, to reduce localization errors by gathering more beacon signals within 1 s for finding the set of estimated locations of maximum density. Our results demonstrate that the two methods can reduce the average localization error by a factor of about 29%, to 89 cm. Introduction A wireless sensor network (WSN) consists of tiny sensor nodes equipped with computational, communication, and sensing capabilities, whereby each sensor node can collect data about the environment, such as temperature, vibration levels, light, electromagnetic strength, and humidity.The sensed data is then transmitted to the sink node through a chain of multiple intermediate nodes that help forward the data.Due to their capabilities and versatility, WSNs have been widely used in many areas, such as military affairs, healthcare, and environmental monitoring.In many applications, apart from sensed data, the location information of the deployed sensor node is also desirable as it can be used to improve routing efficiency.Hence, the discovery of the locations or positions of sensor nodes is one of the most critical issues for WSNs. Localization is the process of determining the absolute or relative physical location of a specific node or the target node.Although a global positioning system (GPS) [1] can provide precise location information, the costly hardware and large size make it unsuitable for WSNs.Furthermore, a GPS can only be used outdoors since it depends on signals directly received from satellites for localization.Besides the GPS, numerous localization methods have also been proposed.Most of these deploy some beacon (or anchor) nodes, which periodically broadcast beacon signals containing their own locations to help other sensor nodes with the localization. 
Localization schemes can be classified as range based or range-free.In range-free schemes, the sensor node location is estimated solely on network connectivity.Such schemes need no extra hardware, but their accuracy is too low, and they usually rely on a large deployment of beacon nodes to improve the accuracy.Conversely, range-based schemes usually have better accuracy.They measure the time of arrival (ToA) [2,3], time difference of arrival (TDoA) [4][5][6], angle of arrival (AoA) [2,4,15,17,18,20,21,27], and received signal strength indicator (RSSI) [3, 7-14, 16, 20, 21] to estimate the distances or angles between pairs of nodes, which in turn are used to calculate the locations of nodes.Most kinds of measurement are taken with extra auxiliary hardware.For example, ToA and TDoA are very sensitive to timing errors, and, hence, their measurement relies on highly accurate synchronized timers.The AoA, which is defined as the angle International Journal of Distributed Sensor Networks between the propagation direction of an incident RF wave and a reference direction, can be measured by an array of antennas.Unlike the previously mentioned three kinds of measurement, RSSI can be outputted by most commercial offthe-shelf sensor nodes. RSSI-based localization methods can be further classified into groups, such as propagation model [8,9], proximity [10][11][12] or fingerprinting [13,14,28].Propagation model localization methods analyze the relationship between RSSI values and distances to learn parameters such as the path loss exponent of the propagation path-loss model in the calibration phase.The calibrated propagation model is then applied to convert the signal strength to the estimated distance between transmitter and receiver in the localization phase.In proximity localization methods, an unknown node broadcasts a localization packet to initiate the localization process.Nearby location-known reference nodes then report the RSSI values measured from the packet to a nominated node.The order of reported RSSI values is then used to determine the location of the unknown node.Fingerprinting localization methods measure RSSI values from a set of static nodes during a calibration phase at several locations.The measured RSSI values at a particular location are then used to fingerprint the location.In the localization phase, a node measures RSSI values from the same static nodes and then estimates its location by finding the fingerprinting that is the closest match with the measured RSSI values. 
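As a concrete example of the propagation-model approach mentioned above, the sketch below calibrates the common log-distance path-loss model RSSI(d) = A - 10 n log10(d) from RSSI-distance pairs and inverts it to estimate distance. The calibration numbers in the example are made up for illustration; a real deployment would use measurements from the target environment.

```python
import numpy as np

def fit_path_loss(distances_m, rssi_dbm):
    """Least-squares fit of RSSI(d) = A - 10 * n * log10(d); returns (A, n)."""
    x = -10.0 * np.log10(distances_m)
    n, a = np.polyfit(x, rssi_dbm, 1)        # slope = path-loss exponent, intercept = A
    return a, n

def rssi_to_distance(rssi_dbm, a, n):
    """Invert the calibrated model to estimate the transmitter-receiver distance."""
    return 10.0 ** ((a - rssi_dbm) / (10.0 * n))

# Made-up calibration data and a sample query:
d = np.array([1.0, 2.0, 4.0, 8.0])
r = np.array([-40.0, -46.5, -52.8, -59.1])
a, n = fit_path_loss(d, r)
print(rssi_to_distance(-50.0, a, n))         # estimated distance in meters
```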
In this paper, based on the concept of integrating AoA Localization and fingerprinting localization for reducing errors, we propose a novel localization scheme, called AoA Localization with RSSI Differences (ALRD).It estimates the AoA for localization in 0.1 s by comparing the RSSI values of beacon signals received from two perpendicularly oriented directional antennas installed at the same place.In the proposed ALRD, we fit RSSI values received from a directional antenna into a parabola function of an AoA between 0 ∘ and 90 ∘ .We also set up a beacon node with two perpendicularly oriented directional antennas and fit the difference of the signal RSSI values of the two antennas into a linear function of the absolute value of one AoA between 0 ∘ and 90 ∘ .With the parabola and linear functions, a sensor node can then self-localize itself quickly (within 0.1 s) by observing RSSI values of the beacon signals emitted by the two beacon nodes.The fitting functions can easily be stored in a WSN node, despite their limited storage space, and their inverse functions can be used to speed up the localization process.Hence, ALRD is suitable for mobile sensing and actuating applications in an open and stable environment since it allows a sensor node to fast localize itself with small localization errors.Our experiments demonstrate that the average localization error is 124 cm when deployed in a 10 × 10 m indoor area. We further propose two methods, namely maximumpoint minimum-diameter and maximum-point minimumrectangle, to reduce ALRD localization errors by gathering more beacon signals within 1 s for finding the set of estimated locations of maximum density.Such estimated locations are then averaged to obtain the final location estimation.Experimental results obtained demonstrate that the two methods can reduce the average localization error by a factor of about 29%, to 89 cm.Hence, ALRD is suitable for mobile sensing and actuating applications, as it allows a sensor node to quickly localize itself with lower localization errors.The rest of this paper is organized as follows.We review some AoA determination schemes in Section 2. In Section 3, we describe the proposed localization scheme, ALRD, in detail.Section 4 shows our experimental results.Then, we describe improvements to ALRD and compare it with other schemes in Section 5. Finally, we conclude the paper in Section 6. Related Works In this section, we review some research that determines AoAs for localization.Amundson et al. developed the RIMA system that uses radio interferometry measurements to estimate the AoA [15].RIMA estimates the AoA by measuring the TDoA of an interference signal generated by a antenna array.The system consists of a beacon node and a target node.The beacon node is formed by grouping three sensor nodes to form an antenna array.The three sensor nodes are arranged in a manner such that their antennas are mutually orthogonal.Two of the sensor nodes transmit a pure sinusoidal signal at slightly different frequencies to create a low-frequency interference signal.The other sensor node and the target node both measure the phase of the low-frequency signal.The difference in the phase readings measured by these two nodes is then used to estimate the AoA from the beacon node to the target node.Although RIMA can accurately measure the AoA within 1 s, it requires very accurate time synchronization between the beacon node and the target node, which is very difficult to achieve. 
Two methods, Estimating Direction-of-Arrival (EDoA) [5] and Rotatable Antenna Localization (RAL) [16], utilize the property of directional antennas to estimate the AoA of a signal.EDoA estimates the AoA of an incoming signal by using a mechanically actuated parabolic reflector.The receiver, which is fixed to a parabolic reflector rotated by a step motor, is used to observe the RSSI values of signals emitted from a transmitter.When the orientation of the reflector is aligned with the direction from the receiver to the transmitter, the receiver will observe the highest RSSI value.Hence, the AoA can be obtained by searching for the reflector orientation in which the highest RSSI value is observed.Their experimental results show that the error in measuring the AoA has a mean of about 4 ∘ and a standard deviation of about 8 ∘ in both indoor and outdoor environments.However, EDoA needs to take a long time to rotate the reflector for searching the highest RSSI value.In RAL, a beacon node is equipped with a rotatable directional antenna.It regularly rotates its antenna to emit beacon signals in different directions.A sensor node determines the angle from the beacon node to itself by observing the RSSI values of the received beacon signals, which contain the location of the beacon node and the current orientation of its antenna.Similar to EDoA, RAL can determine the AoA by determining the strongest signal.By using the estimated AoAs and locations of two distinct beacon nodes, a sensor node can then calculate its own location with a localization error of 76 cm within a 10 × 10-meter indoor area.Two enhanced methods were further proposed to reduce the localization error by a factor of 10% [16].EDoA and RAL both need a long time to rotate the antenna or the reflector for observing the variation of the RSSI values while estimating the AoA.Therefore, EDoA and RAL are only suitable for localizing static sensor nodes. The Proposed Scheme Existing localization schemes using AoAs may take a long time to finish the localization or need very accurate time synchronization.In this section, we propose ALRD that finish localization quickly by learning how the RSSI values vary with the AoA in advance without the need of time synchronization between the anchor node and the target node. Preliminary. As shown in Figure 1, we define the AoA as the angle from the propagation direction of an incident RF wave to the orientation of the directional antenna emitting the RF wave.The AoA is positive if it is counterclockwise and negative otherwise.Figure 2 shows a plot of RSSI values over AoA; we observe that if the distance between the sensor node and the directional antenna is fixed, the RSSI varies like a parabolic function of AoA, referenced to the orientation of the directional antenna, between −90 ∘ and 90 ∘ with a an axis of symmetry at AoA = 0 ∘ .Furthermore, we set up two perpendicularly oriented directional antennas installed at the same location (seen in Figure 3).From the results obtained (as shown in Figure 4), we also observe that the difference of the signal RSSI values received by a sensor node, localizing between the orientations of two-directional antennas, varies like a linear function of the absolute value of the AoA between 0 ∘ and 90 ∘ .It can be noted that absolute values of AoAs are used and that when one absolute AoA value is , the other absolute AoA value is 90 − .We then take only one absolute AoA value as the representative without ambiguity. RSSI Gathering and Analyzing. 
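In contrast to these rotation-based approaches, ALRD relies on functions fitted beforehand: a quadratic RSSI-versus-AoA curve for a single directional antenna and a linear curve for the RSSI difference of two perpendicular antennas, as introduced earlier and detailed in the next section. A minimal sketch of the fitting and the AoA inversion follows; the synthetic parabola and the example RSSI readings are illustrative assumptions, since the real functions are obtained from per-distance calibration measurements.

```python
import numpy as np

def fit_alrd_functions(angles_deg, rssi_single_antenna):
    """Fit f(theta) (quadratic) for one antenna and g(theta) (linear) for the
    difference f(theta) - f(90 - theta) of two perpendicular antennas."""
    quad = np.polyfit(angles_deg, rssi_single_antenna, 2)
    diff = np.polyval(quad, angles_deg) - np.polyval(quad, 90.0 - angles_deg)
    lin = np.polyfit(angles_deg, diff, 1)
    return quad, lin

def estimate_aoa(rssi_horizontal, rssi_vertical, lin):
    """Invert the linear fit to estimate the absolute AoA from one beacon's signal pair."""
    slope, intercept = lin
    return (rssi_horizontal - rssi_vertical - intercept) / slope

# Synthetic calibration: a parabola peaking along the antenna orientation (theta = 0).
theta = np.arange(0.0, 91.0, 1.0)
rssi = -45.0 - 0.004 * theta**2
quad, lin = fit_alrd_functions(theta, rssi)
print(estimate_aoa(-50.0, -60.0, lin))       # estimated AoA in degrees
```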
Before deployment, ALRD needs to gather and analyze RSSI values of the signals that a sensor node receives from a directional antenna at different distances and angles.The measured RSSI values are then analyzed to generate the fitting functions.These fitting functions are then stored into the storage of each sensor node for localization.For better accuracy, the RSSI gathering and analyzing tasks are needed to execute at each new system deployment location, since the environment changes may make the measured RSSI values have some differences.The tasks are described as follows. (1) Gathering RSSI values: as shown in Figure 5, we set up a directional antenna that can be rotated by an angle from the -axis (or the east direction) (0 ∘ ) to the -axis (or the north direction) (90 (5) Storing functions: the quadratic and linear approximation functions are loaded into the storage of the sensor nodes before they are deployed. ALRD Setup. Figure 6 shows the setup for ALRD.We assume that all sensor nodes are randomly deployed in a planar square area of interest.Two beacon nodes, 1 and 2 , are deployed in the lower left and lower right corners of the area, respectively.Each beacon node is equipped with two-directional antennas with perpendicular orientations.The antennas of the beacon node in the lower left (or right) corner have either an upright or a horizontal to the right (or left) orientation.The antenna with the upright orientation is called the vertical antenna, whereas, the antenna with the left or right orientation is called the horizontal antenna. Each beacon node is assumed to know its location and orientations of the two antennas.The beacon nodes transmit beacon signals via the two-directional antennas regularly and alternately.The beacon signal contains the orientation of the antenna and the location of the beacon node, which are expected to reach the whole area of interest.Note that the setup in Figure 5 can be the basic building block for deploying ALRD in a large indoor localization environment.Figure 7 shows a deployment instance of beacon nodes in a large area.The beacon nodes consist of 1, 2, or 4 pairs of perpendicularly oriented directional antennas.Any two adjacent beacon nodes are D units away from each other in the horizontal direction and 2D units away from each other in the vertical direction.Based on the setup of Figure 6, we can see that any two adjacent beacon nodes in the horizontal direction can properly localize target nodes within an area of D by D units.For example, in Figure 7, the beacon nodes X and Y can properly localize target nodes within the shaded area.Hence, ALRD can help localize sensor nodes in a large area by installing a lot of beacon nodes. Localization Procedure. In ALRD, a sensor node executes the following steps to estimate its location. ( 1 and 2 , the sensor node can determine its location (, ) by calculating where is the known distance of 1 and 2 .Note that we can also calculate this by using only Experiment Results In this section, we describe the implementation of ALRD and the results of the experiments using the implementation.The beacon node used for RSSI gathering and analysis is also attached with a Maxim AP-12 panel antenna, which is rotated by a Fastech Ezi-Servo 28 L step motor, as shown in Figure 8(a).The beacon node used in localization is attached with two AP-12 panel antennas with perpendicular antennas, as shown in Figure 8(b).Its horizontal and vertical beamwidths are 65 ∘ and 28 ∘ , respectively. Experimental Setup. 
We installed the ALRD setup in a 10 × 10 m region of an indoor basketball court for conducting experiments as shown in Figures 9 and 10.Two beacon nodes were set up at two ends of the edge of the experiment area, and the localization accuracy was tested at 81 grid points (as arranged in Figure 10).Since the largest distance between a measurement point and a beacon node is about 12.73 m, we gathered and analyze the RSSI values of signals emitted at distances of 1, 2, . . ., 13 m for every degree from 0 ∘ to 90 ∘ .The RSSI value for each distance and each degree was obtained by averaging 100 measurements. The gathered RSSI values are shown in Figure 11.The interval for gathering the RSSI values is set as 1 m because the RSSI values will be indistinguishable if the interval is too small.To reduce the gathering time and space used for storing the approximation functions, we only measured RSSI values for one of the four antennas of the same type of the two beacon nodes in our experiments.The coefficients of determination, 2 , of all the approximation functions are shown in Figure 12.We note that the coefficients of determination are high and all exceed 0.96.Therefore, the approximation functions are very suitable for expressing the measured RSSI values and RSSI differences. Localization Errors. The localization accuracy is tested at the 81 grid points shown in Figure 10.The beacon nodes transmit beacon signals via each of their antennas 10 times per second.Therefore, a sensor node can localize itself 10 times per second.We take the average of 10 localization results and plot the cumulative distribution in Figure 13.The average localization error of the localization experiment is 124 cm.In Figure 14, we use different colors to represent the localization errors of the test points.The brighter color indicates the smaller localization error.As Figure 13 shows, the test points that lie in the middle of the region have smaller localization errors.This can be explained by Figure 4, in which the curves almost look like straight lines in the middle. Improvement As the results show, ALRD can let sensor nodes localize themselves by measuring RSSI values of signals from two beacon nodes in a short time (0.1 s).However, the measured RSSI values be influenced by environmental interferences so the estimated location may deviate from the real location and has a localization error.Thus, if a node spends more time measuring more RSSI values, then more location estimations can be made, which in turn can reduce the deviations.Based on the concepts introduced in [28], we propose two methods, namely, maximum-point minimum-diameter (MPMD) and maximum-point minimum-rectangle (MPMR), to remove some estimated location from a set of estimated locations for the purposes of reducing the localization error. Assuming that is the set of estimated locations, MPMD and MPMR remove some locations by finding a subset ⊆ with the largest density.The density of a set of estimated locations is defined as follows: where is the cardinality of the set, Dia is the diameter of the locations in the set, and Area is the area of the smallest axis-parallel rectangle containing all locations in the set.The diameter Dia of a set of locations can be obtained by finding a pair of locations with the longest distance between them.The Area of a set of locations can be obtained by finding the four extremes (i.e., the leftmost, rightmost, highest, and lowest extremes) and calculating the area of the rectangle bounded by them. 
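A possible implementation of the density-based filtering defined above is sketched below for MPMD (the authors' step-by-step listing follows in the next paragraph); MPMR would differ only in using the bounding-rectangle area instead of the diameter. The minimum-set-size guard is an added safeguard assumption, not part of the description in the paper.

```python
import numpy as np

def _pairwise(points):
    return np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

def mpmd(points, min_points=3):
    """Keep the densest subset (|S| / diameter) by repeatedly removing the point
    with the largest summed distance to the others, then average what remains."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > min_points:
        dist = _pairwise(pts)
        density = len(pts) / dist.max()
        candidate = np.delete(pts, np.argmax(dist.sum(axis=1)), axis=0)
        if len(candidate) / _pairwise(candidate).max() <= density:
            break                      # removing another point no longer increases density
        pts = candidate
    return pts.mean(axis=0)            # final location estimate
```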
The following steps are executed by a sensor node to apply MPMD or MPMR to obtain a subset of a set of locations such that has the largest density. Step 2. Derive subset of by removing the location with the maximum summation of the distances from itself to other locations in , and calculate the density of . Step 3. If > , then return ; otherwise, set = and go to Step 1. By using the set of estimated locations with the maximum density returned by MPMD and MPMR, we can then calculate the average of all locations to derive a new estimated location with a small localization error.Table 1 shows the average of the localization errors after applying MPMD and MPMR to 10 localization results collected by a sensor node within 1 second.Figure 15 shows the cumulative distributions of the ALRD localization errors and those improved by MPMD and MPMR.As shown, MPMD is more suitable than MPMR for sensor nodes because it makes similar improvements as MPMR but has less computational overheads. Table 2 compares ALRD with other three localization schemes, namely, EDoA [4], RAL [13], and RIMA [15].As we have mentioned previously, EDoA and RAL take a long time to localize sensor nodes because they have to rotate the antennas or the reflector.RIMA is able to accurately localize a target node in a short time, but it requires time synchronization between the beacon node and the target node.By contrast, ALRD can localize sensor nodes in a short time and provides relatively low localization errors without the need for time synchronization. Conclusion In this paper, we proposed AoA Localization with RSSI Differences (ALRD) to estimate angle of arrival (AoA) by comparing the received signal strength indicator (RSSI) values of beacon signals received from two perpendicularly oriented directional antennas installed at the same place.We have implemented and installed ALRD in a 10 × 10 m indoor environment.Our experimental results showed that a sensor node can estimate its location by using only four beacon signals within 0.1 s with an average localization error of 124 cm.Hence, ALRD conserves the time and energy spent on localization.Furthermore, we proposed two methods, namely, maximum-point minimum-diameter (MPMD) and maximum-point minimum-rectangle (MPMR), to reduce ALRD localization errors by gathering more beacon signals within 1 s to find the set of estimated locations of maximum density.The results demonstrated that MPMD and MPMR can reduce the localization error by a factor of about 29% to 89 cm.Thus, as ALRD allows a sensor node to quickly localize itself with lower errors; it is suitable for mobile sensing and actuating applications. As our experiments show, it is sufficient to gather RSSI values for only one of four antennas of the same type to achieve sufficient localization accuracy.By equipping antennas of the same type to all beacon nodes, the sensor nodes merely need to store the quadratic and linear approximation functions of one antenna.In the future, we will focus on applying ALRD to realize a large-area localization system.Moreover, we will also try to apply different types of directional antennas and their combinations to ALRD in the hope of further reducing localization error. Figure 1 : Figure 1: AoA of a sensor node and a directional antenna. Figure 2 :Figure 3 : Figure 2: RSSI values of signals received from a directional antenna. Figure 4 : Figure 4: Difference of the signal RSSI values received from two-directional antennas with perpendicular orientations. 
Figure 5 : Figure 5: The setup for gathering and analyzing RSSI values. ( 4 )Figure 8 : Figure 8: (a) The beacon nodes used in RSSI gathering and analysis.(b) The beacon node used in localization. 4. 1 . Implementation.The sensor nodes and the beacon nodes of the proposed ALRD scheme are implemented in nesC with TinyOS support on the Moteiv BAT mote sensor.The BAT mote sensor has a Texas Instruments MSP430 F1611 microcontroller running at 8 MHz with 10 kB RAM and 48 kB flash memory.It is equipped with the Chipcon CC2420 IEEE 802.15.4 compliant wireless transceiver using the 2.4 GHz band with a 250 kbps data rate.With an integrated onboard omnidirectional antenna, the BAT mote sensor has a maximum transmission range of 50 m (indoor) or 125 m (outdoor). Figure 12 :Figure 13 : Figure 12: The coefficients of determination, 2 , of the quadratic () and linear () approximation functions for different distances. ∘ ) and transmits beacon signals containing the rotating angle for every degree.A sensor is placed at the -axis at a distance of , 2, . . ., meters for receiving signals emitted from the antenna for several times (e.g., 100), where and are specified values (e.g., = 1 and = 10).The received signal RSSI values are averaged and stored.The gathered RSSI average values are denoted by (), where = 1, 2, . .., , and = 0 ∘ , 1 ∘ , . .., 90 ∘ .(2)Performing quadratic regression: for each distance , the gathered RSSI values are fitted approximately into a quadratic function () of the rotating angle , by quadratic regression analysis.(3) Calculating RSSI differences: for each distance , the RSSI difference () at angle is obtained by calculating () − (90 − ).In practice, () is approximately the difference of RSSI values between two signals that a sensor node receives from two perpendicularly oriented directional antennas installed at the same location.(4) Performing linear regression: for each distance , the RSSI difference () at angle is approximately fitted into a linear function () by linear regression analysis. 1) Receiving beacon signals: in order to localize itself, the sensor node needs to collect the signal RSSI values 1ℎ and 1V of the horizontal and vertical antennas of beacon node 1 .It also needs to collect the signal RSSI values 2ℎ and 2V of the horizontal and vertical antennas of beacon node 2 .∘ ≤ and ≤ 90 ∘ .Here, −1 (⋅) is an inverse function of the quadratic function (⋅) obtained in RSSI gathering and analysis.As shown in Figure 6, + should ideally be 90 ∘ .Therefore, the sensor node can obtain such that + is closest to 90 ∘ .Let the discovered be denoted by 1 .Similarly, by 2ℎ and 2V , the sensor node can find such that + is closest to 90 ∘ , where and are two absolute values of AoAs (where 0 ∘ ≤ and ≤ 90 ∘ ) corresponding to the horizontal and vertical antennas of the beacon node 2 and = −1 ( 2ℎ ) and = −1 ( 2V ).Let the discovered be denoted by 2 .(3) Estimating AoA: the distance estimate 1 obtained in Step 2 is used to choose a proper linear approximation function 1 for estimating the AoA of the sensor node corresponding to the beacon node 1 .The AoA corresponding to the horizontal antenna of 1 is calculated as 1 = −1 1 ( 1ℎ − 1V ), where −1 1 (⋅) is the inverse function of the linear function 1 (⋅) obtained in the RSSI gathering and analysis stage.Similarly, by 2 and ( 2ℎ − 2V ), the AoA corresponding to the horizontal antenna of the beacon node 2 can be calculated as 2 = −1 2 ( 2ℎ − 2V ). 
(Equation fragments from the localization procedure: the absolute AoA values corresponding to the horizontal and vertical antennas of beacon node 1 are obtained by inverting the fitted quadratic functions of the measured RSSI values, and a rough estimate of the distance to beacon node 1 is found by searching over d = 1, 2, . . ., n.)
Table 1: Comparisons of localization errors.
Table 2: Comparison of localization schemes.
2017-02-14T10:15:46.678Z
2012-06-25T00:00:00.000
{ "year": 2013, "sha1": "3f4cdaa23585db4f227797d8a624fa18ce24ce48", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1155/2013/529489", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "a1aa544bc4085e601caa0960e789f278516d6369", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
246947543
pes2o/s2orc
v3-fos-license
High Intensity vs Low Intensity COVID-19 Gymnastics to Increase Aerobic Capacity and Mental Toughness in Different Sex of Adult People The COVID-19 outbreak has made changes in the process of social life, including sports activities. Individuals become limited in carrying out activities so that they often choose not to do physical activities. Individual inactivity causes various internal problems, such as fatigue and mental problems. The author creates an exercise that aims to overcome this problem by manipulating the intensity in it, namely COVID-19 gymnastics. This study aims to test the COVID-19 exercise with low and high intensity on increasing aerobic capacity and mental toughness in different sex of adult people. The experimental method for 6 weeks was carried out in this study to 150 subjects (65 men and 85 women) by professional gymnastics experts. The Walk Test and Mental Toughness Inventory were used as research instruments which were given twice before and after being given treatment. MANOVA test with SPSS version 24 was used as the data analysis of this study. The results showed that COVID-19 exercise with high and low intensity affected increasing aerobic capacity and mental toughness in adult men and women. However, there is a difference in the effect of highintensity and low-intensity COVID-19 exercise on increasing aerobic capacity and mental toughness in adult men and women. This study concluded that COVID-19 exercise with high intensity and low intensity both had a positive effect on increasing aerobic capacity and mental toughness in adult men and women. However, low-intensity COVID-19 exercise has a better effect than high-intensity because it is considered safer for adults. The author suggests doing fun sports activities such as COVID-19 gymnastics to stay healthy physiologically and psychologically, especially during this pandemic. Keywords— High Intensity, Low Intensity, COVID19 Gymnastics, Aerobic Capacity, Mental Toughness, Adult People I. INTRODUCTION The COVID-19 outbreak has spread throughout the world and made it a global pandemic that has made changes in the processes of human social life, including in Indonesia [1,2]. This change makes individuals need to readjust to various regulations implemented by the government to reduce the number of transmission and spread of this virus. One of the existing regulations requires individuals to stay at home doing all kinds of activities that are usually done outside the home, this is intended to avoid crowds and maintain distance which is one of the effective steps to prevent the transmission of this virus [3]. Especially for adults, being at home alone will certainly invite a lot of problems, both psychologically and physiologically. They will tend to be inactive or inactive because of the limitations of movement and socialization that can be done, as well as the duration of more screen time due to the demands of work during the COVID-19 pandemic, which in other words is a sedentary lifestyle [4]. Individuals become limited in doing activities so they often choose not to do activities. Individual inactivity causes various internal problems, such as fatigue and mental problems. For many individuals, the consequences of adversity negatively affect both physical and mental health and are often associated with impairments in social, educational, and occupational functioning [5]. 
A study conducted on approximately 10 million Google surveys regarding progress in the search for psychological health shortly after the stay-at-home or lockdown policy revealed that subjects were more likely to be characterized by tension, negative musings, restlessness, and self-destructive ideas developed significantly before lockdown [6]. The COVID-19 pandemic is also directly related to various psychological problems and sleep disorders in individuals [7]. This will be very dangerous if it continues to be ignored because it will have a long impact on a person's life, so appropriate steps are needed based on an analysis of the existing situation and conditions. The above problems need to be addressed as soon as possible so that there will be no negative effects in the long term in human life, namely by having mental toughness and good aerobic capacity. Mental toughness is a general term that requires positive psychological resources, which is very important in various achievement contexts and the mental health domain [5]. Mentally tough individuals will have natural or developed psychological advantages that enable a person to be better at many things, in general, cope better than others with many demands and in particular, be more consistent and better than others at remaining determined, focused, confident, and in control under pressure [8]. In addition to mental toughness, adults also really need to have the good aerobic capacity, because it directly describes a person's level of fitness. If a person has a good level of fitness, then he will be able to do many things more optimally and avoid various problems, psychological and physiological [9]. This aerobic capacity can be measured by looking at the body's maximum ability to provide and consume oxygen (VO2max) which is the product of maximum cardiac output (L of blood/min) and the difference in arterial oxygen (ml O2/L of blood) [10]. The author makes an exercise that aims to overcome this problem, namely COVID-19 gymnastics. This COVID-19 exercise provides manipulating the intensity in it (low-intensity and high-intensity) which can be used as a way to overcome the problems mentioned above. COVID-19 exercise makes a person more active and receives various positive effects on the body [11][12][13]. This is a novelty because the author has not found a similar form of exercise that can be done specifically in situations and conditions like today. In addition, there are no studies that examine the effects of implementing this COVID-19 exercise. Thus, this study aims to examine COVID-19 exercise with low and high intensity on aerobic capacity and mental toughness in adults of different sexes. This is intended to get an accurate picture of how the effects occur specifically based on sex in adults. II. METHOD This study used an experimental method for 6 weeks (July-August 2021) using a pretest-posttest group design research design [14]. The subjects in this study were 150 subjects (65 male and 85 female adults) from various gymnastics clubs in Indonesia with the age of 31.4 ± 2.729 years. Subjects were selected using purposive sampling based on several considerations, namely age (25-40 years), marital status (married), and daily activities (dominated by activities at home). Subjects were then divided into two groups (low intensity and high-intensity group) consisting of 75 people each. 
The treatment given in the form of COVID-19 exercise is given for 6 weeks (July-August 2021) with a frequency of 3 sessions/week [15], which is given directly by a professional gymnastics expert. Each session consisted of 15 minutes of active movement and was repeated three times with a break of 2-3 minutes so that the total duration of active movement or treatment volume was 45 minutes with low to medium intensity ( Figure 1). The research instruments used were The Walk Test [16] and Mental Toughness Inventory [8] which were given twice before and after being given treatment. The MANOVA test with SPSS version 24 was used as the data analysis of this study [17]. Gymnastics Figure 1 shows a graph of the volume and intensity of the COVID-19 exercise given during the treatment. It can be seen that the volume given every week is the same, which is 45 minutes, while the intensity given increases every week, starting from 40-60% for low intensity and 60-85% for high intensity. Determination of this intensity refers to sports performance guidelines for health [9]. III. RESULT The author presents the results of data processing and analysis in the form of images, which can be seen in Figure 2. Figure 2 shows the mental toughness variable error bar by gender. It can be seen that there was an increase in the scores of the pre-test and post-test for both groups and gender. However, the low-intensity group (men 13.6% and women 6.4%) had a higher percentage increase than the highintensity group (men 9.6% and women 5.3%). Although both groups experienced an increase, the low-intensity group gave a greater increase. This shows that low-intensity COVID-19 exercise is better than high-intensity in increasing mental toughness, both for adult men and women. Furthermore, the error bar of the aerobic capacity variable can be seen in Figure 3. Figure 3 shows the error bar of the aerobic capacity variable by gender. It can be seen that there was an increase in the scores of the pre-test and post-test for both groups and gender. However, the low-intensity group (men 5.5% and women 12.4%) had a higher percentage increase than the highintensity group (men 4.8% and women 6.4%). Although both groups experienced an increase, the low-intensity group gave a greater increase. This shows that low-intensity COVID-19 exercise is better than high-intensity in increasing aerobic capacity, both for adult men and women. Next, the writer tested the hypothesis. Before testing the hypothesis using MANOVA, the authors conducted a normality test and homogeneity test first as a prerequisite test. All data in this study were declared to be normally distributed, as well as the results of the homogeneity test which stated that all data were homogeneous. Based on the results of the MANOVA test, in the male sex, the Sig. 0.028 < 0.05, it can be concluded that there is a significant effect of COVID-19 exercise with low and high intensity on mental toughness and aerobic capacity of male adults. Advances in Health Sciences Research, volume 45 While for the female sex, the Sig. 0.000 < 0.05, it can be concluded that there is a significant effect of COVID-19 exercise with low and high intensity on mental toughness and aerobic capacity of female adults. IV. DISCUSSION The existence of the COVID-19 pandemic will certainly make humans more careful in carrying out their activities, coupled with regulations made to suppress the transmission and spread of this virus. 
However, it's a shame if this is not handled wisely, because those who initially want to maintain their health will have a bad impact on health if they are not handled wisely, both psychologically and physiologically. The problem that exists shows that the COVID-19 pandemic has made a person less active and not exercising. This is due to several things, such as the number of tasks to be done, sports venues being closed, and so on. Like a previous study which stated that the majority of gym participants never used sterile wipes or products before or after using gym equipment (61.6%), and 35.4% of gym staff did not use sterilizing materials distributed through fitness centers, and most of the fitness center participants had experienced an episode of skin infection or respiratory infection in the fitness center during the last 12 months (22.2%), while 80.8% were not aware of the tinea microbe that causes athlete's foot, and 65.7% of them used bathing in the gym. gym after exercise [18]. There should be no reason not to exercise, exercise is very necessary for our body because a person's inactivity in physical activity will be very dangerous because it becomes one of the main modifiable risk factors worldwide and all deaths [19]. Regular participation in such activities is associated with a longer and better quality of life, reduced risk of various diseases, and many psychological and emotional benefits [20]. Studies suggest that sports participation is recommended as a form of leisure time for various age groups in an effort not only to improve physical health (such as the obesity crisis) but also to improve psychological and social health outcomes [21]. Therefore, the authors apply COVID-19 exercise as a solution to overcome this problem, namely by increasing the mental toughness and aerobic capacity of adults. The author tries to conduct a review based on gender differences to clarify the understanding of the benefits provided because there are differences in abilities between men and women [22][23][24]. In male adults, the results showed that the COVID-19 exercise given had a significant effect on the mental toughness and aerobic capacity of adults. The increase in the percentage of mental toughness occurred by 9.6% for high intensity and 13.6% for low intensity, while the increase in aerobic capacity occurred by 4.8% for high intensity and 5.5% for low intensity. This shows that low-intensity COVID-19 exercise is better than high-intensity exercise in male adults to improve mental toughness and aerobic capacity. The author sees that this is due to the age-appropriate intensity that tends to be less able to carry out high-intensity activities. However, the high-intensity COVID-19 exercise provided can still be done by adults because it is designed with movements that are not difficult. In female adults, the results of the study showed the same thing as male adults, namely that there was a significant effect of COVID-19 exercise on the mental toughness and aerobic capacity of adults. However, the difference occurs in a large percentage which tends to be larger than male adults [25]. COVID-19 exercise with high intensity provides an increase in the percentage of mental toughness by 5.3%, while high intensity by 6.4%. While the increase in the percentage of aerobic capacity occurred by 6.4% for high intensity and 12.4% for low intensity. This result is quite interesting where previous studies suggest that high intensity provides a greater increase than low intensity [12]. 
These results can again be explained by the suitability of the intensity for this age group, which tends to be less able to carry out high-intensity activities. Exercising properly, correctly, measurably, and regularly is something that humans need. Studies suggest that if this is done optimally, a person's physical fitness will increase [13]. COVID-19 exercise is therefore an alternative solution for individuals (especially adults) to be more active, particularly in situations and conditions like today. It is designed to be easy to do while still providing positive benefits. Studies suggest that aerobic exercise has a positive effect on brain function and various components of physical ability [11]. Physical activity and exercise can reduce stress and anxiety, increase happiness, self-confidence, brain power, memory, and muscle and bone strength, and help prevent and reduce heart disease, obesity, fluctuations in blood sugar, cardiovascular disease, and cancer [26]. Adult individuals should be more aware of the importance of exercise for their health and should not focus only on work and its demands. If an adult is not strong enough to deal with these pressures psychologically, conflict is likely to occur, so it is very important for the individual to maintain mental toughness [27]. Likewise with aerobic capacity: an adult without good aerobic capacity cannot carry out activities optimally because of a poor level of physical fitness [9]. Aerobic capacity is also an indicator of cardiovascular health; the higher a person's aerobic capacity, the higher the level of cardiovascular health, and vice versa [10,12]. Doing sports activities such as gymnastics during a pandemic like this still has a positive effect on individuals [28,29], so that the body is healthier and has a lower chance of being affected by the virus. V. CONCLUSION This study concluded that high-intensity and low-intensity COVID-19 exercise both had a positive effect on increasing aerobic capacity and mental toughness in adult men and women. However, low-intensity COVID-19 exercise has a better effect than high-intensity exercise because it is considered safer for adults. The authors suggest doing enjoyable sports activities such as COVID-19 gymnastics to stay healthy physiologically and psychologically, especially during this pandemic.
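For readers who wish to reproduce this type of analysis outside SPSS, the following is a minimal sketch in Python of the two computations reported above: the pre-test/post-test percentage increase per group and sex, and a one-way MANOVA on the two dependent variables. It is only an illustrative sketch; the file name and the column names (group, sex, mt_pre, mt_post, vo2_pre, vo2_post) are hypothetical and not taken from the study.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical long-format data: one row per participant.
# Assumed columns: group (low/high), sex (male/female), mt_* (mental toughness), vo2_* (aerobic capacity).
df = pd.read_csv("covid_exercise.csv")

# Percentage increase from pre-test to post-test, averaged per sex and intensity group.
df["mt_gain_pct"] = (df["mt_post"] - df["mt_pre"]) / df["mt_pre"] * 100
df["vo2_gain_pct"] = (df["vo2_post"] - df["vo2_pre"]) / df["vo2_pre"] * 100
print(df.groupby(["sex", "group"])[["mt_gain_pct", "vo2_gain_pct"]].mean())

# One-way MANOVA per sex: effect of intensity group on the two post-test scores.
for sex, sub in df.groupby("sex"):
    fit = MANOVA.from_formula("mt_post + vo2_post ~ group", data=sub)
    print(sex)
    print(fit.mv_test())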
Micro-Tomographic Investigation of Ice and Clathrate Formation and Decomposition under Thermodynamic Monitoring Clathrate hydrates are inclusion compounds in which guest molecules are trapped in a host lattice formed by water molecules. They are considered an interesting option for future energy supply and storage technologies. In the current paper, time lapse 3D micro computed tomographic (µCT) imaging with ice and tetrahydrofuran (THF) clathrate hydrate particles is carried out in conjunction with an accurate temperature control and pressure monitoring. µCT imaging reveals similar behavior of the ice and the THF clathrate hydrate at low temperatures while at higher temperatures (3 K below the melting point), significant differences can be observed. Strong indications for micropores are found in the ice as well as the THF clathrate hydrate. They are stable in the ice while unstable in the clathrate hydrate at temperatures slightly below the melting point. Significant transformations in surface and bulk structure can be observed within the full temperature range investigated in both the ice and the THF clathrate hydrate. Additionally, our results point towards an uptake of molecular nitrogen in the THF clathrate hydrate at ambient pressures and temperatures from 230 K to 271 K. Introduction Ice has been ubiquitous in colder climates worldwide and, since the advent of refrigeration, even in warmer regions [1,2]. In the universe, water is found mainly in the form of amorphous solid water (ASW) [3]. The versatility of the hydrogen bond becomes apparent when inspecting icy moons or the ice giants, which host a number of high-pressure ice polymorphs [4][5][6][7][8]. Apart from ices made of pure H 2 O, ice-like solids containing guest molecules are also seen in astrophysical environments, e.g., in cometary ice upon warming [9,10], in the mantle of icy moons [11][12][13] or on the Mars pole caps [14][15][16]. These ice-like solids are clathrate hydrates (CHs), sometimes also called gas hydrates or clathrates in short. Clathrates are inclusion compounds where a host lattice formed by water molecules provides space for guest molecules in cavities formed by tetra-, penta-, and hexagonal faces forming polyhedrons ("cages") [17]. These do not only occur naturally in space, but also in vast amounts on Earth, in particular in the permafrost and ocean floors. Over 130 different types of guest molecules are currently known, the most prominent of them are natural gas compounds, particularly methane [18][19][20]. Although estimates of the amount of methane stored in naturally occurring clathrates vary widely, even the most conservative estimates indicate that methane clathrates are a significant natural resource [21]. Many consider the exploitation of methane All samples have a melting point which is well above the base temperature of 238 K. Thus, in the initial phase of the ramp the samples remain solid. The samples densities change due to thermal expansion (α « 50ˆ10´6 K´1 in the case of ice [61], and 30% higher in the case of THF clathrate [62,63]), however, this effect is far smaller than the thermal expansion of the nitrogen atmosphere in the cell, which dominates pressure changes at temperatures below the melting points. The temperature field inside the cell is quasi-steady but non-uniform. Therefore, a simple model based on the ideal gas law at known temperature profile is derived in Section 4.4.1. It allows the prediction of the temperature-pressure relation in the experiment. 
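The model itself is derived in Section 4.4.1 and is not reproduced here. Purely as an illustration of the underlying idea, the sketch below (Python) predicts the cell pressure from the ideal gas law when the free gas volume is split into isothermal zones along the known temperature profile; the zone volumes and temperatures are made-up example values, not the measured profile, and the measured p(T) relation can then be compared with this kind of prediction.

# Ideal gas in a closed cell with a quasi-steady but non-uniform temperature field:
# the amount of gas n = (1/R) * sum_i p * V_i / T_i is conserved, so
# p = n * R / sum_i (V_i / T_i).
R = 8.314  # J/(mol K)

def cell_pressure(n_mol, volumes_m3, temperatures_K):
    # Pressure of a fixed amount of ideal gas distributed over isothermal zones.
    return n_mol * R / sum(V / T for V, T in zip(volumes_m3, temperatures_K))

# Example (assumed values): 1.5 cm^3 of free gas split into a cold and a warm zone.
volumes = [0.9e-6, 0.6e-6]                # m^3
T_initial = [243.0, 247.0]                # K
p_initial = 1.013e5                       # Pa, filling pressure
n = p_initial / R * sum(V / T for V, T in zip(volumes, T_initial))  # mol, fixed by the initial state

# Predicted pressure after heating both zones by 30 K.
T_heated = [T + 30.0 for T in T_initial]
print(cell_pressure(n, volumes, T_heated) / 100.0, "mbar")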
This is best observed in the case of pure nitrogen (see Figure 1a). The model pressure (dashed blue) slightly deviates from the measured pressure (solid blue) during heating due to the dynamics of the temperature field. The pressure changes and the supply voltage behavior (solid with dot marker, black) in the initial phase of the experiments are identical in all four cases (Figure 1a-d; note that different scaling factors are used). Note that during heating the supply voltage is always less negative than during cooling since heat transfer from the relatively warm surroundings through the imperfect insulation is positive. Melting of Ice/Clathrate The reported melting points for ice, THF clathrate (1:17 mole fraction), and DXL clathrate (1:17 mole fraction) are 273.15 K, 277.3 K, and 270.5 K, respectively [64,65]. Once the melting points are reached, one can observe smooth melting curves, which seem to be characteristic of the substances used. In order to interpret the pressure signal one needs to consider the density pairs of the solid and liquid substances at the melting point. These are (ρ_l = 1.000 g/cm³ / ρ_s = 0.916 g/cm³) for ice [65], (ρ_l = 0.99 g/cm³ / ρ_s = 0.966 g/cm³) for the THF clathrate, and (ρ_l = 0.99 g/cm³ / ρ_s = 0.971 g/cm³) for the DXL clathrate [66]. Clathrate densities were calculated assuming a large cage occupancy factor of one. In the case of ice a slow melting can be observed at temperatures slightly above 273 K (see Figure 1b): the sample's uptake of heat during melting helps the TEC to keep the sample below ambient temperature. Thus, the supply voltage is reduced (less negative) and shows a small but broad peak. Additionally, the pressure drops because liquid water has a higher density. This relates to a theoretical pressure difference of 15 mbar (cell volume 1.7 cm³) and is also reflected in the data. After all of the ice has become liquid, the pressure starts to rise again due to the thermal expansion of the gas. In the case of the THF clathrate, at a temperature of 277 K, a sudden increase of pressure indicates the decomposition of the THF clathrate (see Figure 1c) since the released gaseous THF adds to the total pressure. The measured increase of 20 mbar in total pressure is much smaller than the vapor pressure of the water-THF solution (52 mbar at 277 K [67]) minus the effect of volumetric contraction during melting (6 mbar). Note that during the decomposition the power supply signal shows a very small and broad peak, which is smaller than in the case of water. This reflects the difference in heat of fusion of water and THF clathrate, which is 262 kJ/kg for the THF clathrate and 333.5 kJ/kg for water [64,65,68]. Furthermore, this peak in supply voltage ends at about 282 K although the pronounced increase in pressure seems to stop at a lower temperature. This indicates that a small fraction of the THF was released as gas very quickly, while the rest of it slowly dissolved in water after the melting of the clathrate. In the case of the DXL clathrate, decomposition starts at approximately 271 K (see Figure 1d) and shows the same characteristics as the THF clathrate. The pressure increase of 10 mbar caused by a release of gaseous DXL is less pronounced than in the case of the THF clathrate. This difference can be explained by the generally lower partial pressure of DXL compared to THF.
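As a small worked example of how these density pairs translate into the quoted pressure changes, the sketch below estimates the pressure drop caused by the volume contraction of melting ice in the 1.7 cm³ cell. The sample mass of 0.25 g is an assumed value for illustration only, so the result only roughly reproduces the 15 mbar quoted above; the same bookkeeping applies to the THF and DXL cases.

# Pressure change from the volume change of a melting sample in a closed, isothermal gas volume.
m      = 0.25      # g, assumed sample mass (illustrative only)
rho_s  = 0.916     # g/cm^3, ice
rho_l  = 1.000     # g/cm^3, liquid water
V_cell = 1.7       # cm^3, cell volume
p      = 1013.0    # mbar, approximate cell pressure before melting

dV_sample = m / rho_l - m / rho_s                 # cm^3, negative: the sample shrinks on melting
V_gas     = V_cell - m / rho_s                    # cm^3, free gas volume before melting
dp        = p * dV_sample / (V_gas - dV_sample)   # mbar, Boyle's law for the gas atmosphere
print(round(dp, 1), "mbar")                       # about -16 mbar, i.e. a pressure drop of the expected size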
Still, the increase is lower than the vapor pressure of the DXL/water solution (17 mbar at 273 K [67]) minus the effect of volumetric contraction during melting (5 mbar). Note that the heat of fusion of DXL clathrate (261 kJ/kg) is very close to that of THF clathrate [64,68]. Also note that THF and DXL guests are capable of H-bonding to water. Such guests are known to stabilize defects in the clathrate structure which in turn leads to fast transport of guest molecules in the clathrate hydrate [38,69]. This might support the sudden initial release of guest gas in both THF and DXL clathrates, before they start to melt. Thermal Expansion after Melting After melting, thermal expansion of the gaseous atmospheres and the liquid solutions is observed. Gas expansion dominates the pressure-temperature relation. With the exception of the pure nitrogen experiment at least two components constitute the gas atmosphere after melting (nitrogen-water, nitrogen-water-THF, and nitrogen-water-DXL, respectively). Although the thermal expansion model derived in Section 4.4.1 remains valid, the gas composition and mass change during heating. Thus, in order to predict the p(T) relation the change in mass and gas composition should be known. However, since in all cases more than 90% of the gas composition is formed by nitrogen and since the specific gas constant for nitrogen is more than twice those of THF and DXL (nitrogen 297 J/kg K, THF 115 J/kg K, and DXL 112 J/kg K) the change of slope of the p(T)-curve will be below 5%. This implies that the change of slope seen in the case of decomposing THF and DXL clathrate is primarily caused by an ongoing evaporation of the guest gases from the solution and not by the change in composition. Therefore, the change of slope is mainly governed by the vapor pressure p_v of the guest/water solution. It is smallest for water (p_v < 17 mbar for T < 288 K, see Figure 1b), medium for aqueous DXL (p_v < 44 mbar for T < 288 K [67], see Figure 1d), and largest for aqueous THF (p_v < 92 mbar for T < 288 K [67], see Figure 1c). An increase in pressure attributed to evaporation of gas can even be observed once the top temperature of the ramp has been reached. The peak pressures of the individual experiments yield an estimate of the amount of gas released during the heating phase. To this end, the volumetric changes of the solid samples during melting are extrapolated from the respective melting point to the peak temperature via application of model Equation (5). It is assumed that the contraction of the sample happens immediately at the melting point and provides additional space for the nitrogen atmosphere. The volumetric contraction at the melting points has already been mentioned above. Pressures of special interest are given in Table 1. The results for ice and THF clathrate are consistent. In both cases the full vapor pressure of the solution is not reached. This is attributed to the temperature gradient in the cell that will lead to thermodiffusion, which raises the concentration of guest molecules over the liquid/gas interface and hence reduces the vapor pressure. In contrast, in the case of the DXL clathrate the peak pressure found is slightly above the vapor pressure of the DXL/water solution at 288 K. Thermal Contraction before Crystallization It is assumed that when the peak temperature is reached, the guest gas concentration is saturated in all cases considered.
With decreasing temperatures, the pressure reduces because of thermal contraction and a reduction of THF, DXL, and water vapor pressure. The fact that the slope of the p(T)-curve during cooling is slightly less steep than during thermal expansion after fusion may be explained by the fact that, although the state changes are very slow, perfect equilibrium is not attained along the ramps. Crystallization Following the temperature profile down to the base temperature, characteristic crystallization peaks can be observed. The water sample crystallizes at 266.5 K. A rapid change in density and the release of heat is seen in the inset of Figure 1b. Although the supply voltage for cooling is increased rapidly the heat cannot be removed fast enough by the TEC and hence the temperature rises. The pressure increase of 15 mbar caused by the crystallization is equal to the pressure decrease during melting. Two peaks of formation are found in the experiment with the THF clathrate (see inset of Figure 1c). The first one occurs 8 K below the melting point and can be attributed to the formation of THF clathrate. The shape of this peak is different from the peak seen in the case of pure ice. No increase in temperature can be observed. Immediately after the peak of the clathrate formation, an additional pronounced decrease of pressure indicates an ongoing but minor formation of the clathrate with THF from the atmosphere. This is followed by a small peak at approximately 265 K, showing the same characteristics as already seen with the ice peak in Figure 1b. Unlike in the case of THF clathrate the first peak of formation of DXL clathrate appears at 264 K (see inset Figure 1d). It displays the shape of an ice peak and is followed by a tiny peak at 261 K. Thermal Contraction after Crystallization Reducing the temperature further again results in a contraction of the atmosphere and the solid samples. The latter is too small to be measured with the setup used. Remarkably, a hysteresis-like behavior can be seen in both the pressure as well as the supply voltage signal. The behavior of the supply voltage signal has been explained above and stems from an imperfect insulation. The hysteresis in pressure needs a more sophisticated consideration and is presumably attributed to thermal and chemical non-equilibrium. Pressure Monitored µCT Imaging of Ice and THF Clathrate Ice and THF clathrate samples were investigated using a series of µCT scans over 196 h each and a prescribed temperature profile in order to investigate structural and surface changes. Simultaneously, continuous pressure monitoring allows the determination of phase changes. Figure 2 exemplarily shows results of the ice sample at two different points in time to illustrate the spatial resolution achieved. Figure 2. 3D snapshots of ice samples obtained from µCT scans in a long-term experiment over a period of 196 h. The course of the experiment is described in the text. (a) Initial state of the ice after loading the sample and performing scan S1; (b) Final state of the ice before melting (scan S9). Although big parts of the sample sintered together the initial structure is still visible. Ice Sample As a reference, ice particles are investigated. Figure 3 shows the temperature and pressure profile, corrected for pressure fluctuations caused by room temperature fluctuations (see Section 4.4.2). The ice sample was prepared as described in Section 4.1 and loaded at a temperature of 233 K together with a small amount of liquid nitrogen, which lowered the temperature of the cell to 225 K. After two minutes of waiting under a gaseous nitrogen flow the sample cell was tightly closed and heated to the start temperature of 243 K. The sample was kept at this temperature for 98 h before it was raised by 1 K/min to 270 K where it was kept for 94 h. During the overall time span of 192 h a series of nine scans, denoted by S1-S9, were carried out in 24 h intervals. After scan S9, the sample was heated above its melting point to 288 K and a last scan of the completely molten sample was conducted (S10). The melting of the ice is also visible in the pressure signal (see inset in Figure 3) which shows the same shape as already seen in the results of Section 2.1. Note the small bumps in the pressure signal during scan times, which are caused by X-ray radiation induced heating of approximately 0.5 K. Also note the rather large pressure decrease caused by the high permeability of the silicone O-ring (see Section 4.4.2). An ideal experiment with a lossless cell is modeled by the pressure signal p_abs,lossless in which Equation (6) is integrated over time and subtracted from p_abs. The time constants used are α_low = 0.0013 h⁻¹ for T ≤ 250 K and α_high = 0.0020 h⁻¹ for T > 250 K (see Section 4.4.2). During the heating phase (after 97 h of runtime), the pressure follows the temperature ramp as predicted by the model of thermal expansion derived in Section 4.4.1. The smooth pressure signal indicates absence of phase changes up to a runtime of 190 h. Figure 3. Sample and cabin temperature as well as cell pressure and TEC supply voltage obtained during a long-term experiment with an ice sample. The pressure has been corrected and normalized to a hypothetically constant cabin temperature (see Section 4.4.2). The markers S1-S10 indicate a series of consecutive µCT scans. The continuous decrease in pressure is due to diffusion of gas across the silicone O-ring. p_abs,lossless shows the pressure in a lossless cell, which is obtained by subtracting an integrated pressure loss rate ṗ_loss from p_abs. Small bumps in pressure, which correspond to an increase in cell temperature, are seen in each scan. This increase in temperature of approximately 0.5 K is caused by energy deposited by the X-rays. The inset at the far right shows a zoom into the pressure signal at the melting point of ice. Several tomographic cross-sectional reconstructions taken from selected scans are shown in Figure 4. Since the configuration of ice particles (see Figure 2) resembles a snow pack, the results are comparable with results from recent snow studies utilizing µCT [40][41][42]. Each depicted µCT slice has been taken at the identical position in space. The upper row shows significant ice crystal growth at the surfaces of the ice particles at a temperature 30 K below the melting temperature. This is similar to ref. [41], in which the temperature gradient is similar to the gradient of 0.1 K/mm found here. In this setting water vapor sublimates from tips at the surface, diffuses along the temperature gradient, and recrystallizes. Eventually this effect becomes a transport mechanism from warmer to colder ice surface sites which can also be observed in the tomograms (image bottom is the cold side). It is remarkable that under moderate conditions in snow packs this effect, also called dry snow metamorphism, results in a total replacement of 60% of the snow mass within 12 h [39]. Figure 4. Tomographic images obtained from CT scans (numbered S1, S3, S5, S6, S9, and S10) of the ice sample at different points in time. Each slice represents the exact same position in space. In S1-S5 the sample temperature is 243 K, while in S6-S9 it is 270 K. S10 shows the meniscus of the molten sample. The bright points in slice S10 stem from tiny metallic particles that produced metal artifacts which had to be corrected manually. At much higher temperatures (270 K), but still in the stability region, the ice surface becomes very mobile and tends to sinter with neighboring surfaces (see lower row, S6 and S9) which reduces surface energy [70]. At temperatures close to melting, premelted ice will exist in layers of a few nanometers in thickness [71]. Dry snow metamorphosis continues, but water vapor now evaporates from the liquid layer and recondenses elsewhere. Almost no creep is visible [72,73]. The ice particle packing keeps its configuration until it melts, which might be related to the low self-weight of the configuration. The last slice (S10) shows the meniscus of the molten ice sample together with artifacts caused by tiny metallic particles.
The artifacts are manually corrected before quantitative data analysis. Note the slightly curved bottom of the cell made of porous graphite, which is visible in all tomograms. THF Clathrate Sample The experiment described in Section 2.2.1 was repeated with a THF clathrate sample. The loading procedure was identical to the ice run. Immediately after loading the sample it was heated to 247 K where it was kept for 96 h. It was then heated to 274 K at a rate of 1 K/min. At this point the temperature was reduced immediately to 273 K again since the sample was found to be much more stable at 273 K and was stored there for 17 h. After one scan at 273 K the temperature was raised to 274 K again and four additional µCT scans of the solid clathrate were carried out within a period of 78 h. After scan S10 the temperature was raised to 288 K by 1 K/min which caused the sample to melt. A final scan (S11) was done of the liquefied sample. The temperature and pressure data of the experiment are shown in Figure 5. In comparison to the identical experiment with ice the biggest difference becomes visible in the pressure signal. The effects seen there cannot be explained by leakage alone. Almost no difference between p_abs and p_abs,lossless (computed using identical parameters as described in Section 2.2.1) can be observed in the first half of the experiment. This indicates an uptake of nitrogen rather than leakage and will be discussed in detail in Section 2.3. Figure 5. Sample and cabin temperature (Θ_Sample, Θ_Cabin) as well as cell pressure p_abs and TEC supply voltage U_S obtained during a long-term experiment with a THF clathrate sample. The pressure p_abs has been corrected by fluctuations in cabin temperature (see Section 4.4.2). p_abs,lossless shows the pressure in a lossless cell, which is obtained by subtracting an integrated pressure loss rate ṗ_loss from p_abs. The markers S1-S11 indicate a series of consecutive µCT scans. Markers P1 and P2 label points of irregular pressure behavior: at P1, the cell pressure crosses the atmospheric pressure p_atm, which cannot be explained by leakage. The increase in pressure at P2 is five times the increase in pressure caused by thermal expansion due to a temperature increase of 1 K. The inset at the far right shows a zoom into the pressure signal during the decomposition of the clathrate. Figure 6 shows slices of reconstructed µCT scans obtained from the scans S1, S3, S5, S6, S10, and S11. The effects already seen with the ice samples recur. At low temperatures (S1-S5) one observes the growth of small crystals on top of the surface while the bulk remains unchanged. It is not possible to tell whether the crystals are ice or clathrate. However, considering their shape and growth rates they seem to be the result of the same phenomenon discussed above. Unlike ice the THF clathrate becomes extremely mobile at a temperature 3 K below its melting point. Creep seems to be the dominant effect, which contrasts literature indicating clathrates to be more creep resistant than ice [18]. Furthermore, the clathrate's self-weight is negligible. This suggests the significant settling of THF clathrate to be due to mass transport by sublimation and recondensation faster than in ice. Additionally, the approximately three orders of magnitude greater concentration of defects in the THF clathrate than in ice (a result of occasional H-bonding by THF) might also enhance the mobility of the clathrate [38,69]. Note the gas bubble in the last slice. This is remarkable, since no gas space at the bottom is visible in the slices of scan S10. Since THF is liquid at these conditions, the bubble is unlikely to be filled with gaseous THF. However, if one assumes nitrogen stored in the clathrate (either in micropores or in empty cages of the sII-structure) it could form bubbles during decomposition, especially in our case, where the sample is heated from below. Figure 6. Tomographic images obtained from CT scans (numbered S1, S3, S5, S6, S10, and S11) of the THF clathrate sample at different points in time. Each slice represents the exact same position in space. In S1-S5 the sample temperature is 247 K, in S6 it is 273 K, and in S10 it is 274 K. S11 shows the meniscus of the molten sample together with gas bubbles, which can be found all over the liquefied sample. The bright point in slice S11 stems from a tiny metallic particle. The tomographic data obtained from both experiments in Section 2.2 are analyzed further to determine volumes and surface areas. They are extracted from the scans after segmentation into gas and solid using the random walk algorithm mentioned in Section 4.5. The error of the quantitative analysis after correcting for metal particles is estimated to be less than 2%. Figure 7a shows the evolution of ice/clathrate volume, while Figure 7b shows the surface area. At temperatures 30 K below their melting point, the volume of ice/clathrate remains constant while its surface area grows. This can be related to the growth of small crystals seen in the tomograms of Figures 4 and 6. Slightly below their melting points both samples begin to become mobile. In the case of ice, this mobility seems to have almost no effect on the volume. Apart from that, although significant transformations in the surface of the ice are observed, even the surface area remains stable after the temperature was changed. That is, the basic configuration of ice particles remains unchanged. In contrast, the volume and surface area of the THF clathrate decrease over time. In the last step both samples eventually become liquid. The drop in volume with the ice sample is 12% and thus 3% larger than we would expect from contraction during melting. With the THF clathrate sample a drop of 10% can be found, hence 6.5% larger than expected. We hypothesize this overestimation of volume is caused by sub-micrometer-sized pores in both the ice and the THF clathrate. These are smaller than the detection limit of the µCT setup used. While these pores are rather stable in the case of the ice, they tend to be filled over time in the case of the THF clathrate. This explains the decay in volume of the THF clathrate at a constant temperature of 274 K. Nitrogen Uptake in THF Clathrate The pressure-controlled cooling stage was tested with several different ice samples as well as THF clathrate. Whenever ice samples were investigated the pressure signal showed reproducible behavior. With THF clathrates the pressure signal of identical experiments strongly deviated from each other. These variations are attributed to the preparation process as well as the samples' history. The following observations were made: (1) The longer the clathrate samples are stored in the freezer at 253 K, the higher the pressure after full decomposition of the clathrate; (2) The larger the surface area of the sample, the faster the pressure decreases in the cell; (3) The higher the cell temperature, the smaller the pressure decrease (an unconventional reverse temperature dependence of leak rates within the range of operation); (4) Massive decreases in pressure at a temperature of 243 K do not stop at atmospheric pressure but produce negative gauge pressures far too high to be caused by thermal relaxation. Note that this can also not be explained by leakage, which actually prevents negative pressures. These observations can be explained if one considers an uptake of nitrogen in the THF clathrate structure. To our knowledge, this has not been observed at ambient pressure conditions in the temperature range from 230 to 273 K. However, it is known that this effect occurs at approximately 5 MPa and 268 K and has been used in a molecular sieving approach to separate gaseous hydrogen and nitrogen [74]. All observations mentioned above except for observation (2) are recorded in this study.
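Separating genuine leakage from such an uptake relies on the lossless-cell correction already used above (Sections 2.2.1 and 4.4.2): the temperature-dependent loss rate is integrated over time and the lost pressure is added back to the measured signal, so that any residual decrease can be read as uptake. A minimal sketch of this bookkeeping is given below; the time constants are those quoted in Section 2.2.1, but the first-order, overpressure-driven form of the loss rate and the pressure trace itself are assumptions made only for illustration (the actual loss model is Equation (6)).

import numpy as np

# Lossless-cell correction: integrate an assumed leak rate and add the lost pressure back.
def lossless_pressure(t_h, p_abs_mbar, p_atm_mbar, T_K, alpha_low=0.0013, alpha_high=0.0020):
    alpha = np.where(np.asarray(T_K) <= 250.0, alpha_low, alpha_high)     # 1/h, time constants from Section 2.2.1
    loss_rate = alpha * (np.asarray(p_abs_mbar) - p_atm_mbar)             # mbar/h, assumed overpressure-driven leak
    increments = 0.5 * (loss_rate[1:] + loss_rate[:-1]) * np.diff(t_h)    # trapezoidal integration over time
    lost = np.concatenate(([0.0], np.cumsum(increments)))
    return np.asarray(p_abs_mbar) + lost

# Placeholder traces (hourly samples); in the experiment these come from the logged data.
t = np.arange(0.0, 97.0, 1.0)
T = np.full_like(t, 247.0)
p_meas = 1150.0 - 2.0 * t                 # made-up decaying pressure signal
p_corr = lossless_pressure(t, p_meas, 1013.0, T)
# Any decrease remaining in p_corr cannot be explained by the leak model and is interpreted as nitrogen uptake.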
Observation (1) is illustrated in Figure 8, which shows the pressure signal of a thermal cycling experiment with THF clathrates (cf. Section 2.1, now using a heating rate of 5 K/min, a base temperature of 243 K and a peak temperature of 283 K). This time, instead of freshly forming the clathrates in the sample cell, they were either stored in the freezer for one week (subscript "g" in the pressure signal) or in liquid nitrogen for four days (subscript "l" in the pressure signal), before loading them into the sample cell. In the first case contact to liquid nitrogen was limited to a few seconds during the loading process. Note the high pressure p_abs,g in the first cycle after the THF clathrate decayed. The difference in pressure p_abs,g from loading to the first peak is approximately 190 mbar. This is approximately 40 mbar more than the difference observed in the THF clathrate experiment of Section 2.1 where the total temperature difference was even smaller. Additionally, the pressure does not drop to its initial value but remains 40 mbar above it. This is also different from what we observed in Section 2.1 where the initial pressure was restored. After the first cycle was finished the cell was opened for one second to release the excess gas. Subsequently two additional cycles were conducted which did not show that unexpected behavior but the results known from Section 2.1. We hypothesize that the additional pressure in the first cycle was caused by nitrogen which was taken up by the clathrate during the storage time in the freezer. After decomposition, the additional nitrogen remains in the atmosphere since another uptake happens on much longer timescales than the formation of the THF clathrate. After the cell was opened, the extra nitrogen was released. The THF clathrate formed subsequently is not different from those of the experiment in Section 2.1. Remarkably, the above observations cannot be made in the case of the liquid nitrogen storage conditions, since p_abs,l is entirely reversible in all three cycles. Figure 8. Thermal cycling of THF clathrate samples subjected to a trapezoidal temperature profile with base temperature 243 K, peak temperature 283 K, and a heating/cooling rate of 5 K/min: (1) The sample whose pressure signal has subscript "g" was stored in a freezer for one week before being loaded into the sample cell. Contact with liquid nitrogen was limited to a few seconds during the loading process. After one hour, the pressure inside the cell was relieved by quickly opening and closing it; (2) The pressure signal with subscript "l" stems from a sample which was stored in liquid nitrogen for four days after being freshly formed. Observations (3) and (4) are illustrated in Figure 5. The pressure decrease after the start of the experiment is much larger than in the case of the ice experiment. At the point labeled P1 the pressure eventually crosses the line of atmospheric pressure after 30 h at constant temperature. The sample cell had to be opened under a dry nitrogen flow for pressure relief since the pressure sensor used is not suitable for negative gauge pressures. This had to be repeated after 49 h and 73 h and is reflected in the kinks in the pressure signal at these times. After 96 h the sample cell was heated from 247 K to 274 K. Following the model derived in Section 4.4.1 this corresponds to an increase of 55 mbar. However, a difference in pressure of 125 mbar is observed. At the point labeled P2, the pressure increased by 11 mbar, although the temperature was raised by only 1 K. This is five times the value predicted by the thermal expansion model. Right after scan S7, no significant changes in pressure are observed for almost 72 h. Leakage is even less than in the case of the ice sample. This can also be interpreted as a very slow release of nitrogen from the clathrate structure. During decomposition of the clathrate the pressure rises by 270 mbar while following the temperature ramp from 274 to 288 K. In the same experiment done with a freshly formed clathrate (see Section 2.1) a pressure difference of 100 mbar is seen for the same change of temperature. The pressure loss model derived in Section 4.4.2 is used to correct the pressure signal to a lossless cell. One can then sum up the remaining losses found at the temperature of 247 K and reinterpret them as a nitrogen uptake of 200 mbar. Adding these uptakes to the pressure right before the slope of the heat ramp (96 h) results in a total ramp bottom pressure of 1170 mbar. Applying the thermal expansion model to this (in two steps to include the volume contraction during melting) yields a ramp peak pressure of 1266 mbar. This is only 56 mbar below the peak pressure. This difference can be attributed to the vapor pressure of the water-THF solution and is in good agreement with the results obtained in the THF clathrate experiment of Section 2.1. Furthermore, unlike in the ice experiments, large gas bubbles are found in the molten clathrate sample (see Figure 6(S11)). No voids are found in the lower region in the last scan before melting, which could help to explain this. Thus it is assumed that the bubbles are formed during the decomposition of the clathrate by escaping nitrogen gas. Altogether, the effect is rather small: a 200 mbar uptake relates to 0.34 mL of gaseous nitrogen under standard conditions. Assuming that every empty dodecahedral cage takes up one single nitrogen molecule, the total gas volume in a 250 mg sample would be 28 mL. That implies that only 1% of the empty cages are occupied, presumably in a thin layer at the surface. The depth of that layer would have to be 5 µm for a surface area of 563 mm² obtained from the first scan. However, in the case of micropores, the actual surface area would be much bigger and the penetration depth smaller. We furthermore assume that this uptake did also take place in the experiments of Section 2.1 in both the THF as well as the DXL clathrate. It would explain the comparably large hysteresis in the pressure signal as well as the disproportionally high peak pressure in the DXL clathrate cycle. The latter would imply that the effect of nitrogen uptake is more pronounced in the DXL clathrate case. Discussion In this section, the most important results are discussed, and opportunities offered by the setup used in this study are suggested.
In the first set of experiments we demonstrate that pressure and voltage signals provide critical information about the state of the sample and phase-change events. In similar experiments with differential scanning calorimeters (DSCs) or differential thermal analyzers (DTAs), one is usually unaware about pressures changes. However, in the case of forming and decomposing clathrates, an accurate pressure signal helps to understand the mechanisms involved in the formation and decomposition process. The full strength of the pressure-monitored cooling stage does not lie in the pressure signal alone. It is the provision of the complimentary quantities pressure, temperature, and power supply, which gives insight in interesting phenomena. In DSC studies of clathrate formation processes more than one peak of formation is often found [75]. It is difficult to relate, based on heat fluxes alone, these peaks to the formation of ice existing in islands, to homogeneous or heterogeneous nucleation, and to the formation of clathrates. The characteristic shapes in the pressure signal, found in our experiments, might help and can additionally provide estimates for density changes during the phase transformations. Although it is not straightforward to interpret the pressure signal in a multicomponent gas system at non-uniform temperatures a simplified model shows good agreement with the experimental data. The application of this model helps to estimate the amount of gas evaporated in the decomposition of the clathrates. With the ice and the THF clathrate the peak pressure found is less than one would expect from the summation of thermal gas expansion and vapor pressure of the water-THF solutions. This is attributed to the effect of thermodiffusion caused by the temperature gradient in the gas volume of the cell. Conversely, in the case of DXL clathrate the peak pressure is larger than that. An explanation for this could be additional nitrogen stored in the clathrate in either pores or unoccupied cages. After decomposition this nitrogen volume would add to the total pressure. Another interesting observation is that the release of guest gas to the atmosphere does not happen immediately after melting starts. Although it is assumed that, very soon after melting starts, enough liquid is available to attain full vapor pressure, less pressure was observed in both cases. The complementary information obtained with the cooling stage is extended by structural information gained from µCT imaging. By that it becomes possible to investigate ice and clathrate samples over a long period in a highly controlled fashion. The results show massive transformations of surface and bulk at temperatures 30 K and 3 K below the melting point. They happen on large time scales and are likely to be overlooked in short-term experiments. Results known from snow research are useful to explain not only the observations made with the ice sample, but are also applicable to the THF clathrate study. In porous media formed by snow/ice significant mass transport takes place by sublimation, temperature gradient induced diffusion, and recrystallization/recondensation. At temperatures 30 K below the melting point this process seems to be the cause of crystal growth at the ice/clathrate particles. In the case of THF clathrate particles it is yet unclear whether the crystals are formed by ice or clathrate. To our understanding, this process of metamorphism is dramatically increased with the THF clathrate at a temperature 3 K below its melting point. 
While the ice particle configuration is stable at the same thermal setting, the configuration of THF clathrate particles collapses. Temperature gradients and the higher vapor pressure at tips are the driving forces for this effect. Furthermore, heat consumed or generated during sublimation and recrystallization adds to the formation of temperature gradients. To our knowledge this is not widely considered in the growth and decomposition phenomena of clathrates. Self-preservation is most prominently explained by the formation of a protective ice layer. We propose considering a contribution of the effect described above to the formation of such a layer, since the sublimation pressure of water rises quickly from 27.7 Pa at 240 K [76]. In the attempt to explain the large deviations in the pressure signal between the ice and the THF clathrate experiments, we find strong indications for an uptake of nitrogen in the THF clathrate at ambient pressure and temperatures from 230 K to 271 K. Since the total amount of nitrogen uptake is about one percent of the possible maximum, we assume uptake at the surface but no diffusion into the bulk. This correlates with the observation of increasing uptake rates with increasing surface areas. Still, the effect is rather small and quite difficult to see in experiments that use flow meters to determine gas release/uptake rates. The setup presented will be useful for the investigation of many interesting phenomena. It should be straightforward to upgrade the cell with a pressure control and a valve leading to a gas analysis device. Our own plans are mainly formation and decomposition studies with different clathrates, including methane, where the phenomenon of self-preservation is still not completely understood. Results from snow research suggest that temperature gradients may have a tremendous influence on both formation and decomposition. These gradients are governed not only by the surroundings but by the clathrate structure itself. The method proposed in this work is well suited to studying this influence. Besides that, the method could also be promising for investigating the memory effect via carefully designed sample cell geometries [18].

Materials and Methods

Sample preparation, the commercially available µCT, and the custom-built measurement cell for thermodynamic live monitoring are described.

Sample Preparation

Anhydrous grade tetrahydrofuran and 1,3-dioxolane, both obtained from Sigma-Aldrich (St. Louis, MO, USA), are mixed with deionized water on a Mettler-Toledo XA204DR analytical balance (Mettler-Toledo, Columbus, OH, USA). The mole fraction is 1:16.65 (THF/DXL:H2O) in both cases. This ensures a slight excess of guest molecules during the formation of clathrates: in both cases the stoichiometric mole fraction is 1:17. Twenty-five milliliters of each solution are stored in a refrigerator at 281 K in liquid form. The overall storage time was four weeks, starting with the experiments of Section 2.1. In those experiments the THF/DXL solutions were filled directly into the sample cell using a µL pipette. The solid ice and clathrate particles for the experiments of Section 2.2 were obtained by freshly forming the ice/clathrates from the water/solution in a freezer at 253 K whenever needed. Chips from the frozen solution were crushed and filled into the sample cell containing some liquid nitrogen. The sample temperature was kept below 200 K throughout the filling procedure by working under liquid nitrogen.
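For reference, the guest and water masses that realize the 1:16.65 mole ratio can be computed directly. The sketch below is illustrative only; the molar masses are standard tabulated values, and the batch size is arbitrary:

```python
# Guest and water masses for a guest:water mole ratio of 1:16.65
# (slight guest excess relative to the 1:17 stoichiometry).
M = {"THF": 72.11, "DXL": 74.08, "H2O": 18.015}   # g/mol

def batch_masses(guest, n_guest_mol, ratio=16.65):
    """Return (guest, water) masses in grams for n_guest_mol of guest."""
    return n_guest_mol * M[guest], n_guest_mol * ratio * M["H2O"]

for guest in ("THF", "DXL"):
    m_g, m_w = batch_masses(guest, 0.1)   # 0.1 mol of guest as an example
    print(f"{guest}: {m_g:.2f} g guest + {m_w:.2f} g water "
          f"(guest mass fraction {m_g / (m_g + m_w):.1%})")
```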
µCT Setup

A commercially available GE nanotom-m is used to obtain high-resolution micro-tomographic scans of the samples [77]. Table 2 specifies the scan parameters. X-ray images from full sample rotation scans are then used to reconstruct the 3D structure of the samples using a GPU unit and the manufacturer's reconstruction software datosx2 (GE Sensing & Inspection Technologies, Wunstorf, Germany). Figure 9 shows a sketch of the custom-made pressure-monitored cooling stage and a picture of the stage in front of the X-ray tube. The samples are placed in a graphite vessel with an inner diameter of 9 mm. Graphite is used for its unique combination of low X-ray absorption and high thermal conductivity. Since the microporous structure of graphite is not gas tight, it is packed in a shell made of PEEK. The PEEK shell is connected to a pressure sensor (OMEGA PXM459-350HGI, OMEGA Engineering, Deckenpfronn, Germany) on the top side via an O-ring made of NBR. At the lower end the PEEK shell is linked to an aluminum heat sink. Here a silicone O-ring is used, since the operating range of NBR does not allow temperatures lower than −35 °C. The aluminum heat sink is cooled using a stack of two Peltier elements. Both have been purchased from Quick-Ohm, Wuppertal, Germany. The lower element (type QC-31-1.0-3.9MS) is more powerful than the upper one (type QC-17-1.4-3.7MS), since it needs to withdraw the electrical power of the upper one in addition to the heat from inside the sample cell. The current through the bottom element is set to be twice the current through the top one at all times. The upper current is set via a PID loop that controls the temperature returned by the thermocouple T0 (K-type; d = 1 mm). The hot side of the Peltier stack is cooled with water, which in turn is cooled in a chiller (LAIRD MRC300, Laird, London, UK) outside of the µCT cabin. The thermoelectric cooler (TEC) is powered by a controllable VOLTCRAFT VSP2410 laboratory power supply (Conrad Electronic AG, Wollerau, Switzerland). All sensors as well as the power supply are connected to an NI cRIO-9022 DAQ (National Instruments, Austin, TX, USA) running a LabVIEW program to collect the data. An additional thermocouple T1 (K-type; d = 1 mm) measures the temperature at the hot side of the Peltier stack. Furthermore, one thermocouple (K-type; d = 1 mm) and a barometric pressure sensor (OMEGA PX419-26HBI, OMEGA Engineering, Deckenpfronn, Germany) are placed inside the µCT cabin to measure the ambient temperature and the atmospheric pressure. For clarity, "stage" always means the sum of parts illustrated in Figure 9a, while "cell" means just the volume between the base of the graphite vessel and the tip of the pressure sensor.

Temperature Management

Many of the considerations in this paper rely on an accurate knowledge of the cell temperature. Almost all temperature sensors involve metals, and metals produce "metal streak artifacts" in scans where the majority of the region of interest consists of materials with little X-ray absorption, such as ice. This has to be avoided to maintain scan quality. Instead, we measure the temperature below the graphite vessel containing the sample. The sample temperature is then deduced from the reading of the K-type thermocouple T0. To do so, we first determined the errors of all thermocouples using the well-defined melting points of n-decane (T_m = 243.5 K), n-dodecane (T_m = 263.6 K), and water (T_m = 273.15 K), together with the boiling point of liquid nitrogen (T_b = 77 K) [78].
A quadratic fit through these four data points was then used to relate the thermocouple readout to the actual temperature. Since the errors of the thermocouples showed a strongly non-linear behavior at low temperatures, the standard deviation of the quadratic fit function with respect to the melting points is σ = 0.6 K. To compensate for dynamic effects, e.g., for the experiments described in Section 2.1, reference runs were done using the error-corrected thermocouples. The only differences between a reference run and the actual experiment are a surrogate pressure sensor, a different sample substance, and an additional K-type thermocouple immersed in the sample. In order to maintain a comparable thermal situation we chose a very thin thermocouple (d = 75 µm) to minimize heat flux. A glycerin-water mixture (250 µL, weight fraction 2:1) is used as the reference substance. This mixture has a freezing point of 226.7 K and a lower vapor pressure than water. The specific heat of the mixture is approximately 60% of the specific heat of water [79]. The effect of the varying thermal mass inside the cell can be neglected due to the small ratio of the sample to the overall thermal mass of the cooling stage. Figure 10c shows the results of this reference run. There are no differences in sample temperature between heating and cooling. The temperature of the sample is always above the temperature of the Peltier element due to the temperature gradient from the bottom to the top. This effect increases with decreasing Peltier temperature. In all experimental results discussed in this paper this difference was added to the measured temperature. This means the temperatures given in this text always relate to the sample mean temperature and are uncertain to 1 K at most.
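The calibration just described amounts to a small least-squares problem. A minimal sketch, with the reference temperatures from the text and hypothetical raw thermocouple readings standing in for an actual calibration run:

```python
import numpy as np

# Reference points (from the text): boiling point of N2 and melting
# points of n-decane, n-dodecane, and water.
T_true = np.array([77.0, 243.5, 263.6, 273.15])   # K
# Raw thermocouple readings at these points -- hypothetical values
# standing in for an actual calibration run.
T_read = np.array([79.1, 244.2, 264.0, 273.3])    # K

# Quadratic fit relating the readout to the actual temperature.
calibrate = np.poly1d(np.polyfit(T_read, T_true, deg=2))

residuals = T_true - calibrate(T_read)
print("residual std dev: %.2f K" % residuals.std())
```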
Temperature variations in the region of interest, i.e., the graphite cup, are investigated numerically. The steady-state heat conduction equation is solved using the open-source 3D solver ELMER [80]. Convection inside the cell can be neglected due to the stable temperature stratification and the small Grashof number of the problem. A temperature boundary condition T = 240 K at the cold side of the Peltier stack, as well as a heat transfer boundary condition to the ambient (ambient temperature T_a = 298 K, heat transfer coefficient h = 25 W/(m²·K)), is applied. Figure 10a,b shows the temperature field inside the cell from the simulation. The simulated temperature variation in the lower third of the cell is less than 1 K.

Pressure Management

The pressure sensor is used to detect the pressure effects of phase transitions and gas release. The pressure signal is also influenced by the temperature-dependent behavior of the dry nitrogen taking up the rest of the cell. In addition, the low temperatures at the bottom side of the cell require a high-permeability silicone O-ring, which causes considerable leakage. Hence we require: (1) a model for the thermal expansion of the nitrogen gas; and (2) quantitative information on leakage during the experiments.

Thermal Expansion and Contraction in Non-Uniform Temperature Fields

When applying a state equation, e.g., the ideal gas law, to the measurement cell, the non-uniform temperature field must be accounted for. The temperature field T(x) illustrated in Figure 10 is considered. Since the gas inside the cell is motionless, the pressure must be constant throughout the cell; hydrostatic pressure variations are neglected. Locally, the ideal gas law states

p = ρ R T(x) Z,    (1)

where ρ is the gas density, R is the mass-specific gas constant, and Z is the compressibility factor of a non-ideal gas. Assuming an absolutely tight cell, mass conservation and integration over the cell volume V yields

m = ∫_V ρ dV = (p/R) ∫_V dV / (T(x) Z)    (2)

for the total gas mass m.
Reformulation of Equation (2) leads to

p = m R / ∫_V dV / (T(x) Z).    (3)

The pressure can then be calculated for any known inhomogeneous temperature field and given mass m, either by numerical or analytical integration. The temperature field in Figure 10b is approximated by a one-dimensional temperature field, neglecting radial gradients:

T(z) = T_0 for 0 < z ≤ h_0,
T(z) = T_0 + (T_1 − T_0)(z − h_0)/(h_1 − h_0) for h_0 < z ≤ h_1.    (4)

It considers two regions: (1) the region inside the graphite cup (0 < z ≤ h_0), where a uniform temperature field is assumed; and (2) the region from h_0 to the tip of the pressure sensor, with a linear temperature variation. Integration yields

p = m R Z [ A_0 h_0 / T_0 + A_1 (h_1 − h_0) ln(T_1/T_0) / (T_1 − T_0) ]⁻¹,    (5)

where A_0 and A_1 refer to the cross-sectional areas of the two regions. Equation (5) is used as the model for thermal expansion and contraction in this study. A_0, A_1, h_1, and the total volume of the cell are determined using the tomographic reconstruction of the empty cell. The value h_0 = 3 mm was extracted from a fit to the data obtained in the empty-cell experiment of Section 2.1 and used on every other occasion, although with loaded samples some of the lower volume is occupied by the sample instead of nitrogen.

Pressure Loss due to Leakage

In order to account for pressure losses, pressure fluctuations introduced into the system by changes in ambient temperature have to be subtracted. Equation (5) can be used to extract the current total gas mass m, which changes over time due to leakage, evaporation, and condensation. This mass is then inserted back into Equation (5), where T_0 is still the bottom temperature but T_1 is a fixed mean ambient temperature. By that, the pressure has been corrected to a constant ambient temperature T_1. A result of this procedure is depicted in Figure 11a. After the pressure signal is corrected for fluctuations in ambient temperature, the linear ansatz

ṗ_loss = −α p_rel    (6)

is used to model the pressure loss rate as a function of the gauge pressure p_rel. The rate of change of p_rel is

ṗ_rel = ṗ_loss − ṗ_atm;    (7)

it depends on leakage as well as on the atmospheric pressure p_atm. Inserting Equation (6) in Equation (7) yields

ṗ_rel + ṗ_atm = −α p_rel.    (8)

Both the gauge pressure and the atmospheric pressure are measured during the experiments, and thus their rates of change are also known. The sum of ṗ_rel and ṗ_atm is obtained from a series of independent pressure experiments using an empty cell at two different temperatures. Numerical differentiation of the pressure signal was done after filtering the data with a Savitzky-Golay filter and a subsequent cubic spline interpolation [81]. The measured loss rates strongly depend on the age of the O-rings. While new O-rings show almost no loss, rings that have been in heavy use show significant loss. The worst results are displayed in Figure 11b as a function of p_rel and provide an upper bound on the amount of leakage. A linear fit to these data yields the time constants α_243 = 0.00412 ± 0.00001 h⁻¹ (cell bottom temperature 243 K) and α_283 = 0.00731 ± 0.00001 h⁻¹ (cell bottom temperature 283 K). From this we get the pressure loss rate as a function of the gauge pressure. Note that the loss rate at higher temperatures is slightly larger than at lower temperatures. This was found in all pressure tests and is probably related to the generally higher gas permeability of silicone at higher temperatures [82]. Nevertheless, since the sealing force is also reduced at lower temperatures, this effect will be small, if present at all [83].
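A short numerical sketch may make the two-region model of Equation (5) and the leak correction of Equation (6) concrete. The geometry values below are placeholders, not the cell's actual dimensions, which in the experiment come from the empty-cell reconstruction:

```python
import numpy as np

R_N2 = 296.8   # J/(kg K), specific gas constant of nitrogen
Z = 1.0        # compressibility factor, close to 1 for N2 here

def cell_pressure(m, T0, T1, A0=6.4e-5, A1=2.0e-5, h0=3e-3, h1=4e-2):
    """Equation (5): pressure (Pa) of gas mass m (kg) in the two-region
    cell. A0, A1 (m^2) and h1 (m) are placeholder values."""
    integral = A0 * h0 / T0
    if np.isclose(T0, T1):
        integral += A1 * (h1 - h0) / T0
    else:
        integral += A1 * (h1 - h0) * np.log(T1 / T0) / (T1 - T0)
    return m * R_N2 * Z / integral

def gas_mass(p, T0, T1):
    """Invert Equation (5) (linear in m) to track the gas mass."""
    return p / cell_pressure(1.0, T0, T1)

# Thermal expansion example: heating the cell bottom from 247 K to 274 K
# at a fixed ambient temperature of 295 K.
m = gas_mass(1.0e5, T0=247.0, T1=295.0)
print("pressure after heating: %.0f Pa" % cell_pressure(m, 274.0, 295.0))

# Leak correction, Equation (6): dp_loss/dt = -alpha * p_rel, so one
# hour of leakage at gauge pressure p_rel is undone by exp(alpha * dt).
alpha = 0.00412                 # 1/h, fitted at 243 K cell bottom temperature
p_rel, dt_h = 2.0e4, 1.0        # Pa, hours
p_rel_corrected = p_rel * np.exp(alpha * dt_h)
```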
Image Reconstruction and Post Processing

Image stacks obtained from µCT are reconstructed using GE's phoenix datosx2 reconstruction software (GE Sensing & Inspection Technologies, Wunstorf, Germany). In a first step a smaller region of interest, containing solely the interior of the graphite sample cell, is extracted from the 3D raw data. A Gaussian filter with the smallest possible kernel size of three voxels is then applied to reduce the noise level while maintaining all details. Image stacks (8-bit, jpg) are then written and analyzed in an in-house post-processing toolbox. Random walk segmentation as presented by Grady [52] is applied in 3D situations, which requires efficient memory management and solution methods for very large linear systems with approximately 300 × 10⁶ degrees of freedom. Figure 12 shows the result of the random walk segmentation on a single CT slice obtained from a THF clathrate sample of the experiment described in Section 2.2.2.

Figure 12. Segmentation of noisy CT images using the random walk segmentation filter described by Grady [52]: (a) Original CT slice taken from scan S5 of the THF clathrate experiment of Section 2.2.2. The bright phase is the THF clathrate, the dark phase is gaseous nitrogen; (b) The same slice after segmentation.

Sample volumes are calculated from the segmented data by voxel counting. The surface area A is computed using the derivative of the 2-point probability function, S₂, at its origin [84,85]:

A = −4 V (dS₂/dr)|_{r=0},

where V is the total volume of the region of interest. The 2-point probability function is computed for distances of 0 and 0.9 voxel side lengths using Monte Carlo integration with 10 million sampling points each. The derivative is approximated by the difference quotient of these two points.
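A compact sketch of this surface-area estimator, assuming a boolean 3D NumPy array as the segmented volume; the function name, default sample count, and voxel size are illustrative:

```python
import numpy as np

def surface_area(seg, voxel=5e-6, n_samples=1_000_000, seed=0):
    """Interface area of a boolean 3D segmentation via the two-point
    probability function S2: A = -4 V dS2/dr at r = 0, the derivative
    approximated by a difference quotient between r = 0 and r = 0.9
    voxel side lengths (the text uses 10 million sampling points).
    `voxel` is the voxel side length in meters (placeholder value)."""
    rng = np.random.default_rng(seed)
    shape = np.asarray(seg.shape)

    # S2(0): probability that a random point lies in the bright phase.
    s2_0 = seg.mean()

    # S2 at r = 0.9 voxels: random point pairs at that separation.
    r = 0.9
    p1 = rng.random((n_samples, 3)) * (shape - 1)
    d = rng.normal(size=(n_samples, 3))
    d *= r / np.linalg.norm(d, axis=1, keepdims=True)
    p2 = np.clip(p1 + d, 0, shape - 1)
    hit1 = seg[tuple(np.rint(p1).astype(int).T)]
    hit2 = seg[tuple(np.rint(p2).astype(int).T)]
    s2_r = (hit1 & hit2).mean()

    slope = (s2_r - s2_0) / (r * voxel)   # dS2/dr near the origin
    v_total = seg.size * voxel**3         # volume of the whole region
    return -4.0 * v_total * slope         # surface area in m^2
```

For the segmentation step itself, skimage.segmentation.random_walker provides a reference implementation of Grady's algorithm.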
Measurement of Time-Dependent CP-Violating Asymmetries and Constraints on sin(2β + γ) with Partial Reconstruction of B → D*∓π± Decays

We present a measurement of the time-dependent CP-violating asymmetries in decays of neutral B mesons to the final states D*∓π±, using approximately 232 million BB̄ events recorded by the BABAR experiment at the PEP-II e+e− storage ring. Events containing these decays are selected with a partial reconstruction technique, in which only the high-momentum π± from the B decay and the low-momentum π∓ from the D*∓ decay are used. We measure the parameters related to 2β + γ to be a_{D*π} = −0.034 ± 0.014 ± 0.009 and c^ℓ_{D*π} = −0.019 ± 0.022 ± 0.013. With some theoretical assumptions, we interpret our results in terms of the lower limits |sin(2β + γ)| > 0.62 (0.35) at 68% (90%) confidence level.

I. INTRODUCTION

The Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix [1] provides an explanation of CP violation and is under experimental investigation aimed at constraining its parameters. A crucial part of this program is the measurement of the angle γ = arg(−V_ud V*_ub / V_cd V*_cb) of the unitarity triangle related to the CKM matrix. The decay modes B → D*∓π± have been proposed for use in measurements of sin(2β + γ) [2], where β = arg(−V_cd V*_cb / V_td V*_tb) is well measured [3]. In the Standard Model the decays B0 → D*−π+ and B̄0 → D*−π+ proceed through the b → cūd and b → uc̄d amplitudes A_c and A_u. Fig. 1 shows the tree diagrams contributing to these decays. The relative weak phase between A_u and A_c in the usual Wolfenstein convention [4] is γ. When combined with B0-B̄0 mixing, this yields a weak phase difference of 2β + γ between the interfering amplitudes. In Υ(4S) → BB̄ decays, the decay rate distribution for B → D*∓π± is

f^ζ(η, ∆t) ∝ e^{−|∆t|/τ} [1 ∓ C cos(∆m ∆t) ± η S^ζ sin(∆m ∆t)],    (1)

where τ is the B0 lifetime averaged over the two mass eigenstates, ∆m is the B0-B̄0 mixing frequency, and ∆t is the difference between the time of the B → D*∓π± (B_rec) decay and the decay of the other B (B_tag) in the event. The upper (lower) signs in Eq. (1) indicate the flavor of the B_tag as a B0 (B̄0), while η = +1 (−1) and ζ = + (−) for the B_rec final state D*−π+ (D*+π−). The parameters C and S± are given by

C ≡ (1 − r*²)/(1 + r*²),   S± ≡ [2r*/(1 + r*²)] sin(2β + γ ± δ*).    (2)

Here δ* is the strong phase difference between A_u and A_c, and r* = |A_u/A_c|. Since A_u is doubly CKM-suppressed with respect to A_c, one expects r* ≈ 0.02. We report a study of the CP-violating asymmetry in B → D*∓π± decays using the technique of partial reconstruction, which allows us to achieve a high efficiency for the selection of signal events. We use approximately twice the integrated luminosity of our previous analysis of this process [5], and employ an improved method to eliminate a measurement bias, as described in Sec. III F 2.
Many of the tools and procedures used in this analysis were validated in a previous analysis dedicated to the measurement of the B0 lifetime [6]. In this analysis, terms of order r*², to which we currently have no sensitivity, have been neglected. The interpretation of the measured asymmetries in terms of sin(2β + γ) requires an assumption regarding the value of r*, discussed in Sec. VI.

II. THE BABAR DETECTOR AND DATASET

The data used in this analysis were recorded with the BABAR detector at the PEP-II asymmetric-energy storage rings, and consist of 211 fb⁻¹ collected on the Υ(4S) resonance (on-resonance sample) and 21 fb⁻¹ collected at an e+e− center-of-mass (CM) energy approximately 40 MeV below the resonance peak (off-resonance sample). Samples of Monte Carlo (MC) [7] events with an equivalent luminosity approximately four times larger than the data sample were analyzed using the same reconstruction and analysis procedure.

The BABAR detector is described in detail in Ref. [8]. We provide a brief description of the main components and their use in this analysis. Charged-particle trajectories are measured by a combination of a five-layer silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH) in a 1.5-T solenoidal magnetic field. Tracks with low transverse momentum can be reconstructed in the SVT alone, thus extending charged-particle detection down to transverse momenta of about 50 MeV/c. We use a ring-imaging Cherenkov detector (DIRC) for charged-particle identification and augment it with energy-loss measurements from the SVT and DCH. Photons and electrons are detected in a CsI(Tl) electromagnetic calorimeter (EMC), with photon-energy resolution σ_E/E = 0.023(E/GeV)^(−1/4) ⊕ 0.014. The instrumented flux return (IFR) is equipped with resistive plate chambers to identify muons.

In the partial reconstruction of a B → D*∓π± candidate (B_rec), only the hard (high-momentum) pion track π_h from the B decay and the soft (low-momentum) pion track π_s from the decay D*− → D0 π−_s are used. The cosine of the angle between the momenta of the B and the hard pion in the CM frame is then computed:

cos θ_Bh = (M_{D*}² − M_B² − M_π² + 2 E_B E_h) / (2 p_B p_h),

where M_x is the nominal mass of particle x [9], E_h and p_h are the measured CM energy and momentum of the hard pion, E_CM is the total CM energy of the incoming e+e− beams, and E_B = E_CM/2 and p_B = √(E_CM²/4 − M_B²) are the CM energy and momentum of the B. Events are required to be in the physical region |cos θ_Bh| < 1. Given cos θ_Bh and the measured momenta of the π_h and π_s, the B four-momentum can be calculated up to an unknown azimuthal angle φ around p_h. For every value of φ, the expected D four-momentum p_D(φ) is determined from four-momentum conservation, and the corresponding φ-dependent invariant mass m(φ) ≡ √(p_D(φ)²) is calculated. We define the missing mass m_miss ≡ (1/2)[m_max + m_min], where m_max and m_min are the maximum and minimum values of m(φ). In signal events, m_miss peaks at the nominal D0 mass M_D0, with a Gaussian width of about 3 MeV/c² (Fig. 2). The m_miss distribution for combinatoric background events is significantly broader, making the missing mass the primary variable for distinguishing signal from background. The discrimination between signal and background provided by the m_miss distribution is independent of the choice of the value of φ. With the arbitrary choice φ = 0, we use four-momentum conservation to calculate the CM D and B momentum vectors, which are used as described below.
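The kinematics above lend themselves to a short numerical illustration. The following toy sketch, with nominal mass values and made-up four-vectors supplied by the caller, computes cos θ_Bh and scans the azimuth φ for the missing mass; it is a sketch under the stated conventions, not the experiment's code:

```python
import numpy as np

M_B, M_DSTAR, M_PI = 5.2794, 2.0103, 0.1396   # GeV/c^2, nominal masses
E_CM = 10.58                                   # GeV, Upsilon(4S) CM energy

def cos_theta_Bh(p4_h):
    """Cosine of the CM angle between the B and the hard pion,
    from (p_B - p_h)^2 = M_D*^2 with E_B = E_CM/2."""
    E_h = p4_h[0]
    p_h = np.linalg.norm(p4_h[1:])
    E_B = E_CM / 2.0
    p_B = np.sqrt(E_B**2 - M_B**2)
    return (M_DSTAR**2 - M_B**2 - M_PI**2 + 2.0 * E_B * E_h) / (2.0 * p_B * p_h)

def missing_mass(p4_h, p4_s, n_phi=360):
    """m_miss = (m_max + m_min)/2 of the inferred D mass, scanning the
    unknown azimuth phi of the B about the hard-pion axis. Both
    four-vectors (E, px, py, pz) must be given in a CM frame whose
    z axis points along the hard pion."""
    ct = cos_theta_Bh(p4_h)
    st = np.sqrt(max(0.0, 1.0 - ct**2))
    E_B = E_CM / 2.0
    p_B = np.sqrt(E_B**2 - M_B**2)
    masses = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
        p4_B = np.array([E_B, p_B*st*np.cos(phi), p_B*st*np.sin(phi), p_B*ct])
        p4_D = p4_B - p4_h - p4_s        # four-momentum conservation
        m2 = p4_D[0]**2 - p4_D[1:] @ p4_D[1:]
        masses.append(np.sqrt(max(0.0, m2)))
    return 0.5 * (max(masses) + min(masses))
```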
B. Backgrounds

In addition to B → D*∓π± events, the selected event sample contains the following kinds of events:

• Peaking BB̄ background, defined as decays other than B → D*∓ρ±, in which the π_h and π_s originate from the same B meson, with the π_s originating from a charged D* decay. The m_miss distribution of these events peaks broadly under the signal peak.

• Combinatoric BB̄ background, defined as all remaining BB̄ background events.

• Continuum e+e− → qq̄, where q represents a u, d, s, or c quark.

C. Event Selection

To suppress the continuum background, we select events in which the ratio of the second to the zeroth Fox-Wolfram moment [10], computed using all charged particles and EMC clusters not matched to tracks, is smaller than 0.40. Hard-pion candidates are required to be reconstructed with at least twelve DCH hits. Kaons and leptons are rejected from the π_h candidate lists based on information from the IFR and DIRC, energy loss in the SVT and DCH, or the ratio of the candidate's EMC energy deposition to its momentum (E/p).

We define the D* helicity angle θ_D* to be the angle between the flight directions of the D and the B in the D* rest frame. Taking advantage of the longitudinal polarization in signal events, we suppress background by requiring |cos θ_D*| to be larger than 0.4. All candidates are required to satisfy m_miss > 1.81 GeV/c². Multiple candidates are found in 5% of the events. In these instances, only the candidate with the m_miss value closest to M_D0 is used.

D. Fisher Discriminant

To further discriminate against continuum events, we combine fifteen event-shape variables into a Fisher discriminant [11] F. Discrimination originates from the fact that qq̄ events tend to be jet-like, whereas BB̄ events have a more spherical energy distribution. Rather than applying requirements to the variable F, we maximize the sensitivity by using it in the fits described below. The fifteen variables are calculated using two sets of particles. Set 1 includes all tracks and EMC clusters, excluding the hard and soft pion candidates; Set 2 is composed of Set 1, excluding all tracks and clusters with CM momentum direction within 1.25 radians of the CM momentum of the D. The variables, all calculated in the CM frame, are: (1) the scalar sum of the momenta of all Set 1 tracks and EMC clusters in nine 20° angular bins centered about the hard-pion direction; (2) the value of the sphericity, computed with Set 1; (3) the angle with respect to the hard pion of the sphericity axis, computed with Set 2; (4) the direction of the particle of highest energy in Set 2 with respect to the hard pion; (5) the absolute value of the vector sum of the momenta of all the particles in Set 2; (6) the momentum |p_h| of the hard pion and its polar angle.

E. Decay Time Measurement and Flavor Tagging

To perform this analysis, ∆t and the flavor of the B_tag must be determined. We tag the flavor of the B_tag using lepton or kaon candidates. The lepton CM momentum is required to be greater than 1.1 GeV/c to suppress leptons that originate from charm decays. If several flavor-tagging tracks are present in either the lepton or kaon tagging category, the only track of that category used for tagging is the one with the largest value of θ_T, the CM angle between the track momentum and the momentum of the "missing" (unreconstructed) D. The tagging track must satisfy cos θ_T < C_T, where C_T = 0.75 (C_T = 0.50) for leptons (kaons), to minimize the impact of tracks originating from the decay of the missing D.
If both a lepton and a kaon satisfy this requirement, the event is tagged with the lepton.

We measure ∆t using ∆t = (z_rec − z_tag)/(γβc), where z_rec (z_tag) is the decay position of the B_rec (B_tag) along the beam axis (z) in the laboratory frame, and the e+e− boost parameter γβ is calculated from the measured beam energies. To find z_rec, we use the π_h track parameters and errors, and the measured beam-spot position and size in the plane perpendicular to the beams (the x-y plane). We find the position of the point in space for which the sum of the χ² contributions from the π_h track and the beam spot is a minimum. The z coordinate of this point determines z_rec. The beam spot has an r.m.s. size of approximately 120 µm in the horizontal dimension (x), 5 µm in the vertical dimension (y), and 8.5 mm along the beams (z). The average B flight distance in the x-y plane is 30 µm. To account for the B flight in the beam-spot-constrained vertex fit, 30 µm are added to the effective x and y sizes for the purpose of this fit.

In lepton-tagged events, the same procedure, with the π_h track replaced by the tagging lepton, is used to determine z_tag. In kaon-tagged events, we obtain z_tag from a beam-spot-constrained vertex fit of all tracks in the event, excluding π_h, π_s, and all tracks within 1 radian of the D momentum in the CM frame. If the contribution of any track to the χ² of the vertex is more than 6, the track is removed and the fit is repeated until no track fails the χ² < 6 requirement. The ∆t error σ_∆t is calculated from the results of the z_rec and z_tag vertex fits. We require |∆t| < 15 ps and σ_∆t < 2 ps.

F. Probability Density Function

The probability density function (PDF) depends on the variables m_miss, ∆t, σ_∆t, F, s_t, and s_m, where s_t = 1 (−1) when the B_tag is identified as a B0 (B̄0), and s_m = 1 (−1) for "unmixed" ("mixed") events. An event is labeled unmixed if the π_h is a π− (π+) and the B_tag is a B0 (B̄0), and mixed otherwise. The PDF for on-resonance data is a sum over the PDFs of the different event types:

P = Σ_i f_i P_i,    (4)

where the index i = {D*π, D*ρ, peak, comb, qq} indicates one of the event types described above, f_i is the relative fraction of events of type i in the data sample, and P_i is the PDF for these events. The PDF for off-resonance data is P_qq. The parameter values for P_i are different for each event type, unless indicated otherwise. Each P_i is a product,

P_i = M_i(m_miss) F_i(F) T′_i(∆t, σ_∆t, s_t, s_m),    (5)

whose factors are described below.

1. m_miss and F PDFs

The m_miss PDF for each event type i is the sum of a bifurcated Gaussian plus an ARGUS function [12]:

M_i(m_miss) = f_{Ĝi} Ĝ_i(m_miss) + (1 − f_{Ĝi}) Â_i(m_miss),    (6)

where f_{Ĝi} is the fractional area of the bifurcated Gaussian function. The functions Ĝ_i (Eq. (7)) and Â_i (Eq. (8)) are defined such that M_i is the peak of the bifurcated Gaussian, σ_{Li} and σ_{Ri} are its left and right widths, ε_i is the ARGUS exponent, M_{Ai} is its end point, and θ is the step function. The proportionality constants are such that each of these functions is normalized to unit area within the m_miss range. The m_miss PDF of each event type has different parameter values.

The Fisher discriminant PDF F_i for each event type is parameterized as the sum of two Gaussians. The parameter values of F_{D*π}, F_{D*ρ}, F_peak, and F_comb are identical.

2. Signal ∆t PDFs

The ∆t PDF T′_{D*π}(∆t, σ_∆t, s_t, s_m) for signal events corresponds to Eq. (1) with O(r*²) terms neglected, modified to account for several experimental effects, described below.
The first effect has to do with the origin of the tagging track. In some of the events, the tagging track originates from the decay of the missing D. These events are labeled "missing-D tags" and do not provide any information regarding the flavor of the B_tag. In lepton-tagged events, we further distinguish between "direct" tags, in which the tagging lepton originates directly from the decay of the B_tag, and "cascade" tags, in which the tagging lepton is a daughter of a charmed particle produced in the B_tag decay. Due to the different physical origin of the tagging track, cascade and direct tags have different mistag probabilities, defined as the probability to deduce the wrong B flavor from the charge of the tagging track. In addition, the measured value of z_tag in cascade-lepton tags is systematically larger than the true value, due to the finite lifetime of the charmed particle and the boosted CM frame. This creates a correlation between the tag and vertex measurements that we address by treating cascade-lepton tags separately in the PDF. In our previous analysis [5] we corrected for the bias of the S± parameters caused by this effect and included a systematic error due to its uncertainty. In kaon tags, z_tag is determined using all available B_tag tracks, so the effect of the tagging track on the z_tag measurement is small. Therefore, the overall bias induced by cascade-kaon tags is small, and there is no need to distinguish them in the PDF.

The second experimental effect is the finite detector resolution in the measurement of ∆t. We address this by convolving the distribution of the true decay time difference ∆t_tr with a detector resolution function. Putting these two effects together, the ∆t PDF of signal events takes the form of Eq. (9), in which ∆ε_{D*π} is half the relative difference between the detection efficiencies of positive and negative leptons or kaons, the index j = {dir, cas, miss} labels direct, cascade, and missing-D tags, and f^j_{D*π} is the fraction of signal events of tag-type j in the sample. For lepton tags the value f^cas_{D*π} = 0.12 ± 0.02 is obtained from the MC simulation; for kaon tags f^dir_{D*π} = 0. The function T^j_{D*π}(∆t_tr, s_t, s_m) is the ∆t_tr distribution of tag-type j events, and R^j_{D*π}(∆t − ∆t_tr, σ_∆t) is their resolution function, which parameterizes both the finite detector resolution and systematic offsets in the measurement of ∆z, such as those due to the origin of the tagging particle. The parameterization of the resolution function is described in Sec. III F 4.
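To illustrate the structure of such a convolved PDF (and not the paper's exact parameterization), a decaying flavor oscillation diluted by a mistag rate can be smeared numerically with a Gaussian resolution; all parameter values below are illustrative:

```python
import numpy as np

def toy_decay_pdf(dt, tau=1.536, dm=0.502, omega=0.1, a=-0.03, sigma=0.6):
    """Toy unmixed-event decay-time PDF: an exponential times a flavor
    oscillation, diluted by the mistag rate omega and carrying a small
    sine coefficient a (mimicking a_D*pi), convolved numerically with
    a single Gaussian resolution of width sigma (ps). Illustrative only."""
    t = np.linspace(-20.0, 20.0, 4001)
    core = np.exp(-np.abs(t) / tau) / (4.0 * tau) * (
        1.0 + (1.0 - 2.0 * omega) * (np.cos(dm * t) + a * np.sin(dm * t)))
    kern = np.exp(-0.5 * (t / sigma) ** 2)
    kern /= kern.sum()
    smeared = np.convolve(core, kern, mode="same")
    smeared /= np.trapz(smeared, t)      # renormalize after smearing
    return np.interp(dt, t, smeared)
```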
The direct- and cascade-tag ∆t_tr PDFs have the functional form of Eq. (10), where j = {dir, cas}, the mistag rate ω^j_{D*π} is the probability to misidentify the flavor of the B_tag, averaged over B0 and B̄0, and ∆ω^j_{D*π} is the B0 mistag rate minus the B̄0 mistag rate. The factor S^j_{D*π} describes the effect of interference between b → uc̄d and b → cūd amplitudes in both the B_rec and the B_tag decays (Eq. (11)), where a_{D*π}, b_{D*π}, and c_{D*π} are related to the physical parameters through Eq. (12), and r′ (δ′) is the effective magnitude of the amplitude ratio (effective strong phase difference) between the b → uc̄d and b → cūd amplitudes in the B_tag decay. This parameterization is valid to first order in r* and r′. In the following we refer to the parameters a_{D*π}, b_{D*π}, and c_{D*π}, and the related parameters of the background PDFs, as the weak phase parameters. Only a_{D*π} and b_{D*π} are related to CP violation, while c_{D*π} can be non-zero even in the absence of CP violation, i.e., when 2β + γ = 0. The inclusion of r′ and δ′ in the formalism accounts for cases where the B_tag undergoes a b → uc̄d decay and the kaon produced in the subsequent charm decay is used for tagging [13]. We expect r′ ∼ 0.02. In lepton-tagged events r′ = 0 (and hence b_{D*π} = 0), because most of the tagging leptons come from B semileptonic decays, to which no suppressed amplitude with a different weak phase can contribute. The ∆t_tr PDF for missing-D tags is given by Eq. (13), where ρ_{D*π} is the probability that the charge of the tagging track is such that it results in a mixed flavor measurement. In this analysis, we have neglected the term of Eq. (13) proportional to sin(∆m_{D*π} ∆t_tr). The systematic error on b_{D*π} due to this approximation is negligible, owing to the small value of f^miss_{D*π} reported below.

3. Background ∆t PDFs

The ∆t PDF of B → D*∓ρ± has the same functional form and parameter values as the signal PDF, except that the weak phase parameters a_{D*ρ}, b_{D*ρ}, and c_{D*ρ} are set to 0 and are later varied to evaluate systematic uncertainties. The validity of using the same parameters for T′_{D*ρ} and T′_{D*π} is established using simulated events, and stems from the fact that the π_h momentum spectrum of the B → D*∓ρ± events that pass our selection criteria is almost identical to the signal spectrum.

The ∆t PDF of the peaking background accounts separately for charged and neutral B decays (Eq. (14)), where T^{0′}_{peak} has the functional form of Eq. (9) and the subsequent expressions, Eqs. (10)-(13), but with all D*π-subscripted parameters replaced by their peak-subscripted counterparts. The integral in Eq. (14) accounts for the contribution of charged B decays to the peaking background, with R^+_{peak}(∆t − ∆t_tr, σ_∆t) being the three-Gaussian resolution function for these events, described below.
The combinatoric BB̄ background PDF T′_comb is similar to the signal PDF, with one substantial difference: instead of parameterizing T′_comb with the four parameters f^dir_comb, ω^dir_comb, ∆ω^dir_comb, and ρ_comb, we use a reduced set of three effective parameters. With these parameters and f^cas_comb = 0, the combinatoric BB̄ background ∆t PDF takes a form analogous to the signal case, where R_comb(∆t − ∆t_tr, σ_∆t) is the 3-Gaussian resolution function. As in the case of T_{D*ρ}, the weak phase parameters of the peaking and combinatoric backgrounds (a_peak, b_peak, c_peak and a_comb, b_comb, c_comb) are set to 0 and are later varied to evaluate systematic uncertainties. Parameters labeled with the superscripts "peak" or "comb" are empirical and thus do not necessarily correspond to physical parameters. In general, their values may differ from those of the D*π-labeled parameters.

The PDF T_qq for the continuum background is the sum of two components, one with a finite lifetime and one with zero lifetime, where f^δ_qq is the fraction of zero-lifetime events.

4. Resolution Function Parameterization

The resolution function for events of type i and optional secondary type j (j = {dir, cas, miss} for lepton-tagged signal events and j = {+, 0} for the peaking and combinatoric BB̄ background types) is parameterized as the sum of three Gaussians: a "narrow", a "wide", and an "outlier" Gaussian, with t_r = ∆t − ∆t_tr the residual of the ∆t measurement. The narrow and wide Gaussians (index k = n, w) are characterized by bias and width parameters b^k and s^k, determined by fits as described in Sec. III G. For the outlier Gaussian, in all nominal fits the values of b^o and s^o are fixed to 0 ps and 8 ps, respectively, and are later varied to evaluate systematic errors.

G. Analysis Procedure

The analysis is carried out with a series of unbinned maximum-likelihood fits, performed simultaneously on the on- and off-resonance data samples and independently for the lepton-tagged and kaon-tagged events. The analysis proceeds in four steps:

1. In the first step, we determine the parameters f_{D*ρ} + f_{D*π}, f_peak, and f_comb of Eq. (4). In order to reduce the reliance on the simulation, we also obtain in the same fit the parameters f_{Ĝqq} of Eq. (6), ε_qq of Eq. (8), σ_L of the signal m_miss PDF (Eq. (7)), and all the parameters of the Fisher discriminant PDFs. This is done by fitting the data with a PDF that omits the ∆t factor of Eq. (5), i.e., by ignoring the time dependence. The fraction f_qq of continuum events is determined from the off-resonance sample and its integrated luminosity relative to the on-resonance sample. All other parameters of the M_i PDFs, and the value f_{D*π}/(f_{D*π} + f_{D*ρ}) = 0.87 ± 0.03, are obtained from the MC simulation.

2. In the second step, we repeat the fit of the first step for data events with cos θ_T ≥ C_T, to obtain the fraction of signal events in that sample. Given this fraction and the relative efficiencies for direct, cascade, and missing-D signal events to satisfy the cos θ_T < C_T requirement, we calculate f^miss_{D*π} = 0.011 ± 0.001 for lepton-tagged events and f^miss_{D*π} = 0.055 ± 0.001 for kaon-tagged events. We also calculate the value of ρ_{D*π} from the fractions of mixed and unmixed signal events in the cos θ_T ≥ C_T sample relative to the cos θ_T < C_T sample.

3. In the third step, we fit the data events in the sideband 1.81 < m_miss < 1.84 GeV/c² with the 3-dimensional PDFs of Eq. (5).
The parameters of M_i(m_miss) and F_i(F), and the fractions f_i, are fixed to the values obtained in the first step. From this fit we obtain the parameters of T′_comb, as well as those of T′_qq.

4. In the fourth step, we fix all the parameter values obtained in the previous steps and fit the events in the signal region m_miss > 1.845 GeV/c², determining the parameters of T′_{D*π} and T′_qq. Simulation studies show that the parameters of T′_comb are independent of m_miss, enabling us to obtain them in the sideband fit (step 3) and then use them in the signal-region fit. The same is not true of the T′_qq parameters; hence they are free parameters in the signal-region fit of the last step. The parameters of T′_peak are obtained from the MC simulation.

IV. RESULTS

The fit of step 1 finds 18710 ± 270 signal B → D*∓π± events in the lepton-tag category and 70580 ± 660 in the kaon-tag category. The m_miss and F distributions for data are shown in Figs. 2 and 3, with the PDFs overlaid. The results of the signal-region fit (fourth step) are summarized in Table I, and the ∆t distributions for the data are shown in Fig. 4 for the lepton-tagged and the kaon-tagged events. The goodness of the fit has been verified with the Kolmogorov-Smirnov test and by comparing the likelihood obtained in the fit with the likelihood distribution of many parameterized MC experiments generated with the PDFs obtained in the fit to the data. Fig. 5 shows the raw, time-dependent CP asymmetry. In the absence of background and with high statistics, perfect tagging, and a perfect ∆t measurement, A(∆t) would be a sinusoidal oscillation with amplitude a_{D*π}. For presentation purposes, the requirements m_miss > 1.855 GeV/c² and F < 0 were applied to the data plotted in Figs. 4 and 5, in order to reduce the background. These requirements were not applied to the fit sample, so they do not affect our results. The fitted values of ∆m reported in Table I are in good agreement with the world average (0.502 ± 0.007) ps⁻¹ [9]. The fitted values of the B0 lifetime need to be corrected for a bias observed in the simulated samples, ∆τ = τ_fit − τ_gen = (−0.03 ± 0.02) ps for the lepton-tag and ∆τ = (−0.04 ± 0.02) ps for the kaon-tag events. After this correction, the measured lifetimes, τ(B0) = (1.48 ± 0.02 ± 0.02) ps and τ(B0) = (1.49 ± 0.01 ± 0.04) ps for the lepton-tag and kaon-tag events, respectively, are in reasonable agreement with the world average τ(B0) = (1.536 ± 0.014) ps [9]. The correlation coefficients of a^ℓ_{D*π} (c^ℓ_{D*π}) with ∆m and τ(B0) are −0.021 and 0.019 (−0.060 and −0.056).

V. SYSTEMATIC STUDIES

The systematic errors are summarized in Table II. Each item below corresponds to the item with the same number in Table II.

1. The statistical errors from the fit in Step 1 are propagated to the final fit. This also includes the systematic errors due to possible differences between the PDF line shape and the data points.

2. The statistical errors from the m_miss sideband fit (Step 3) are propagated to the final fit.

3-4. The statistical errors from the Step 2 fits are propagated to the final fit.
FIG. 4: ∆t distributions for the lepton-tagged (a-d) and kaon-tagged (e-h) events, separated according to the tagged flavor of B_tag and whether they were found to be mixed or unmixed: (a,e) B0 unmixed; (b,f) B̄0 unmixed; (c,g) B0 mixed; (d,h) B̄0 mixed. The solid curves show the PDF, calculated with the parameters obtained by the fit. The PDF for the total background is shown by the dashed curves.

5. The statistical errors associated with the parameters obtained from MC are propagated to the final fit. In addition, the full analysis has been performed on a simulated sample to check for a possible bias in the measured weak phase parameters. No statistically significant bias has been found, and the statistical uncertainty of this test has been assigned as a systematic error.

6. The effect of uncertainties in the beam-spot size on the vertex constraint is estimated by increasing the beam-spot size by 50 µm.

7. The effect of the uncertainty in the measured length of the detector in the z direction is evaluated by applying a 0.6% variation to the measured values of ∆t and σ_∆t.

8. To evaluate the effect of possible misalignments in the SVT, signal MC events are reconstructed with different alignment parameters, and the analysis is repeated.

9-11. The weak phase parameters of the B → D*∓ρ±, peaking, and combinatoric BB̄ backgrounds are fixed to 0 in the fits. To study the effect of possible interference between b → uc̄d and b → cūd amplitudes in these backgrounds, their weak phase parameters are varied in the range ±0.04 and the Step-4 fit is repeated. We take the largest variation in each weak phase parameter as its systematic error.

12. In the final fit, we take the values of the parameters of T′_peak from a fit to simulated peaking BB̄ background events. The uncertainty due to this is evaluated by fitting the simulated sample, setting the parameters of T′_peak to be identical to those of T′_comb.

13. The uncertainty due to possible differences between the ∆t distributions of the combinatoric background in the m_miss sideband and in the signal region is evaluated by comparing the results of fitting the simulated sample with the T′_comb parameters taken from the sideband or from the signal region.

14. The ratio f_{D*ρ}/f_{D*π} is varied by the uncertainty in the corresponding ratio of branching fractions, obtained from Ref. [9].

VI. PHYSICS RESULTS

Summarizing the values and uncertainties of the weak phase parameters, we obtain results from the lepton-tagged sample (Eq. (27)) and from the kaon-tagged sample fits (Eq. (28)). Combining the results for lepton and kaon tags gives the amplitude of the time-dependent CP asymmetry, a_{D*π} = −0.034 ± 0.014 ± 0.009, where the first error is statistical and the second is systematic. The systematic error takes into account correlations between the results of the lepton- and kaon-tagged samples arising from the systematic uncertainties related to detector effects, to interference between b → uc̄d and b → cūd amplitudes in the backgrounds, and from B(B → D*∓ρ±). This value of a_{D*π} deviates from zero by 2.0 standard deviations. Previous results on time-dependent CP asymmetries related to 2β + γ appear in Refs. [5,14]. This measurement supersedes the results of the partial reconstruction analysis reported in Ref. [5] and improves the precision on a_{D*π} and c_{D*π} with respect to the average of the published results.
We use a frequentist method, inspired by Ref. [15], to set a constraint on 2β + γ. To do this, we need a value for the ratio r* of the two interfering amplitudes. This is done with two different approaches. In the first approach, to avoid any assumptions on the value of r*, we obtain the lower limit on |sin(2β + γ)| as a function of r*. We define a χ² function that depends on r*, 2β + γ, and δ*:

χ²(r*, 2β + γ, δ*) = Σ_{j,k} ∆x_j (V⁻¹)_{jk} ∆x_k,    (30)

where ∆x_j is the difference between the result of our measurement of a^K_{D*π}, a^ℓ_{D*π}, or c^ℓ_{D*π} (Eqs. (27) and (28)) and the corresponding theoretical expression given by Eq. (12). We fix r* to a trial value r_0. The measurements of b^K_{D*π} and c^K_{D*π} are not used in the fit, since they depend on the unknown values of r′ and δ′. The measurement error matrix V is nearly diagonal and accounts for correlations between the measurements due to correlated statistical and systematic uncertainties. We minimize χ² as a function of 2β + γ and δ*, and obtain χ²_min, the minimum value of χ². In order to compute the confidence level for a given value x of 2β + γ, we perform the following procedure:

1. We fix the value of 2β + γ to x and minimize χ² as a function of δ*. We define χ′²_min(x) to be the minimum value of the χ² in this fit, and δ*_toy to be the fitted value of δ*. We define ∆χ²(x) ≡ χ′²_min(x) − χ²_min.

2. We generate many parameterized MC experiments with the same sensitivity as the data sample, taking into account the correlations between the observables expressed in the error matrix V of Eq. (30). To generate the observables a^K_{D*π}, a^ℓ_{D*π}, and c^ℓ_{D*π}, we use the values 2β + γ = x, r* = r_0, and δ* = δ*_toy. For each experiment we calculate the value of ∆χ²(x), computed with the same procedure used for the experimental data.

3. We interpret the fraction of these experiments for which ∆χ²(x) is smaller than the value of ∆χ²(x) observed in the data as the confidence level (CL) of the lower limit on 2β + γ = x.

The resulting 90% CL lower limit on |sin(2β + γ)| as a function of r* is shown in Fig. 6. The χ² function is invariant under the transformation 2β + γ → π/2 + δ* and δ* → π/2 − (2β + γ). The limit shown in Fig. 6 is always the weaker of these two possibilities.
In the second approach, we estimate r* as originally proposed in Ref. [2], and assume SU(3) flavor symmetry. With this assumption, r* can be estimated (Eq. (31)) from the Cabibbo angle θ_C, a ratio of branching fractions [16], and the ratio of decay constants f_{D*_s}/f_{D*} = 1.10 ± 0.02 [17]. This value depends on the value of B(D+_s → φπ+), for which we use our recent measurement [18]. Equation (31) has been obtained with two approximations. In the first approximation, the exchange-diagram amplitude E contributing to the decay B0 → D*+π− has been neglected and only the tree-diagram amplitude T has been considered. Unfortunately, no reliable estimate of the exchange term for these decays exists. The only decay mediated by an exchange diagram for which the rate has been measured is the Cabibbo-allowed decay B0 → D−_s K+. The average of the BABAR and Belle branching fraction measurements [16,19] confirms that the exchange diagrams are strongly suppressed with respect to the tree diagrams. Detailed analyses [20] of the B → Dπ and B → D*π decays in terms of the topological amplitudes conclude that |E′/T′| = 0.12 ± 0.02 for B0 → D−π+ and |Ē/T| < 0.10 for B0 → D*−π+ decays, where E′, Ē and T′, T are the exchange and tree amplitudes for these Cabibbo-allowed decays. We assume that a similar suppression holds for the Cabibbo-suppressed decays considered here.

The second approximation involves the use of the ratio of decay constants f_{D*}/f_{D*_s} to take into account SU(3)-breaking effects, and assumes factorization. We attribute a 30% relative error to the theoretical assumptions involved in obtaining the value of r* of Eq. (32), and use it as described below. We add to the χ² of Eq. (30) a term ∆²(r*) that takes into account both the Gaussian experimental errors of Eq. (32) and the 30% theoretical uncertainty, according to the prescription of Ref. [21], where ξ_r* ≡ (r* − r*_meas)/r*_meas. To obtain the confidence level, we have repeated the procedure described above, modified accordingly.

VII. SUMMARY

We present a measurement of the time-dependent CP asymmetries in a sample of partially reconstructed B0 → D*+π− events. In particular, we have measured the parameters related to 2β + γ to be a_{D*π} = −0.034 ± 0.014 ± 0.009 and c^ℓ_{D*π} = −0.019 ± 0.022 ± 0.013.

VIII. ACKNOWLEDGMENTS

We are grateful for the extraordinary contributions of our PEP-II colleagues in achieving the excellent luminosity and machine conditions that have made this work possible. The success of this project also relies critically on the expertise and dedication of the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and the kind hospitality extended to them. This work is supported by the US Department of Energy and National Science Foundation, the Natural Sciences and Engineering Research Council (Canada), Institute of High Energy Physics (China), the Commissariat à l'Energie Atomique and Institut National de Physique Nucléaire et de Physique des Particules (France), the Bundesministerium für Bildung und Forschung and Deutsche Forschungsgemeinschaft (Germany), the Istituto Nazionale di Fisica Nucleare (Italy), the Foundation for Fundamental Research on Matter (The Netherlands), the Research Council of Norway, the Ministry of Science and Technology of the Russian Federation, and the Particle Physics and Astronomy Research Council (United Kingdom). Individuals have received support from CONACyT (Mexico), the A. P. Sloan Foundation, the Research Corporation, and the Alexander von Humboldt Foundation.
FIG. 3: The F distributions for on-resonance lepton-tagged (top) and kaon-tagged (bottom) data. The contributions of the BB (dashed-dotted line) and the continuum (dashed line) PDF components are overlaid, peaking at approximately −0.6 and −0.1, respectively. The total PDF is also overlaid.
FIG. 5: Raw asymmetry for (a) lepton-tagged and (b) kaon-tagged events. The curves represent the projections of the PDF for the raw asymmetry. A nonzero value of a_{D*π} would show up as a sinusoidal asymmetry, up to resolution and background effects. The offset from the horizontal axis is due to the nonzero values of Δε_{D*π} and Δω_{D*π}.
FIG. 7: The shaded region denotes the allowed range of |sin(2β + γ)| for each confidence level. The horizontal lines show, from top to bottom, the 68% and 90% CL.
FIG. 8: Contours of constant probability (color-coded in percent) for the position of the apex of the unitarity triangle to be inside the contour, based on the results of Fig. 7. The cross represents the value and errors on the position of the apex of the unitarity triangle from the CKMFitter fit using the "ICHEP04" results excluding this measurement [22].
TABLE I: Results of the fit to the lepton- and kaon-tagged events in the signal region 1.845 < m_miss < 1.880 GeV/c². Errors are statistical only. See Sections III F 2, III F 3, and III F 4 for the definitions of the symbols used in this table.
TABLE II: Systematic errors in a^ℓ_{D*π} and c^ℓ_{D*π} for lepton-tagged events and a^K_{D*π}, b^K_{D*π}, and c^K_{D*π} for kaon-tagged events.
2019-04-14T02:27:20.748Z
2005-04-19T00:00:00.000
{ "year": 2005, "sha1": "4ab4491420e796576f2f42a1ce71059e43806ae2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ex/0504035", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4ab4491420e796576f2f42a1ce71059e43806ae2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257954409
pes2o/s2orc
v3-fos-license
Ultrasensitive colorimetric detection of tetracyclines based on in-situ growth of gold nanoflowers A colorimetric method based on in-situ generation of gold nanoflowers for the detection of tetracyclines (TCs) was proposed. We found that gold nanoflowers could be formed in the HAuCl4-NH2OH redox reaction directly without the addition of small-sized gold nanoparticles (Au NPs) as seeds when an alkaline borax buffer solution was employed as the reaction medium. Interestingly, the shape and size of the generated gold nanoflowers were regulated with TC. Briefly, large flower-like gold nanoparticles were formed with a low concentration of TC while small spherical gold nanoparticles were generated with a high concentration of TC. The generated gold nanoflowers exhibited different surface plasmon absorption (SPR) properties. Thus, a simple and rapid colorimetric method was established for the detection of TC antibiotics. This method exhibited high sensitivity for the detection of TC, oxytetracycline (OTC), and doxycycline (DC) with detection limits of 2.23 nM, 1.19 nM, and 5.81 nM, respectively. The proposed colorimetric method was applied to the determination of TC in both milk samples and water samples. Introduction Tetracycline drugs (TCs) are antibiotics with broad-spectrum antibacterial activity, which have been used to treat various animal diseases in animal husbandry [1]. In livestock and poultry production, TC antibiotics are widely used as feed additives to prevent and treat intestinal infections and to promote animal growth. However, the abuse of TCs will inevitably lead to antibiotic residues in animal-derived foods which will be uptaken by humans through the food chain causing antibiotic resistance [2]. In addition, the residue TC antibiotics can be excreted from animals causing environmental pollution and the emergence of super-bacteria and drug-resistant microorganisms. Simple and convenient methods for the rapid on-site detection of TCs are important. Various methods have been established for the routine detection of TCs, including microbiological method [3], chromatography methods such as high performance liquid chromatography (HPLC) [4], gas chromatography (GC) [5], HPLC coupled with mass spectrometry (HPLC-MS), GC-MS [6], capillary electrophoresis (CE) [7] and enzymelinked immunosorbent assay (ELISA) [8,9]. All of these methods have achieved sensitive detection of TCs, however, suffered from time-consuming processes, needing expensive equipment and professional operators. To achieve simple and rapid detection of TCs, colorimetric [10], fluorescent [11][12][13][14], electrochemical [15], chemiluminescent [16], surface-enhanced Raman scattering-based sensing methods [17][18][19] have been proposed for the detection of tetracycline residues in milk, serum, and water samples. Among these methods, the colorimetric sensing method is one of the most promising approaches for the on-site detection of TCs because the colorimetric signal can be observed by the naked eye directly. For example, Qi's group established a colorimetric sensing method for the detection of TC with a limit of detection of 0.2 μM based on the 3, 3', 5, 5'-tetramethylbenzidine (TMB)-H 2 O 2 colorimetric reaction catalyzed by gold nanoclusters having peroxidase-like activity [20]. Although colorimetric sensing methods are simple and do not require special equipments, most of the colorimetric methods suffer from low detection sensitivity. 
Au NPs are inorganic nanomaterials with unique optical properties that their surface plasmon absorption property 1 3 depends on the size, dielectric environment, and aggregation state of Au NPs [21,22]. Based on the SPR absorption property of Au NPs, Wang's group proposed a colorimetric sensing method to detect doxycycline and oxytetracycline based on the discovery that TCs could cause the aggregation of Au NPs in a weak alkaline environment [23]. In addition, TCs having phenol group were reported to reduce gold ions to gold atoms [24,25]. By employing the reduction property of TCs, several colorimetric methods have been developed based on the reduction of gold ions by TCs to gold atoms and form large Au NPs. For example, Shen's group employed citrate-capped Au NPs as seeds on which the reduced gold atoms were deposited forming large Au NPs. Based on the seeds-mediated approach, the colorimetric method could detect TC, OTC, and DC with LODs of 383, 369, and 541 nM, respectively [26]. To avoid the preparation of gold seeds in advance, our group developed an in-situ generation strategy which was combined with an NH 2 OH-HAuCl 4 reduction reaction and achieved sensitive detection of TC with LOD of 18.9 nM. Although the improved sensitivity was achieved, the colorimetric assay required incubation at 70 °C for 30 min and the detection process needed about 60 min [27]. To overcome these shortages, the present study proposed a rapid and convenient colorimetric method for detecting TCs. The NH 2 OH-HAuCl 4 reduction reaction is a classical approach to preparing large-sized gold nanoparticles [28,29]. In our preliminary experiments, we found that gold nanoflowers could be generated in the HAuCl 4 -NH 2 OH redox reaction directly even if there were no small-sized Au NPs as seeds when alkaline borate borax buffer solution was employed as the reaction mediate [30]. Interestingly, the size or shape of the generated gold nanoflowers could be regulated by TC, thus, resulting in a different SPR absorption band. Based on this phenomenon, a rapid colorimetric method for the detection of TCs was established and used to quantify the content of tetracycline in both milk samples and water samples. The detection process could be completed at 25 ± 5 °C within 20 min, in addition, also featured high sensitivity and good selectivity. Apparatus The UV-vis spectra and absorbance were recorded on a multifunctional microplate reader (SpectraMax M 2 , Molecular Devices Corporations, USA). Transmission electron microscopic (TEM) images were recorded on a high-resolution transmission electron microscope (FEI Tecnai G2 F20, USA). The diameters of gold nanoflowers were measured by a Zetasizer Nano ZS90 (Malvern, UK). Preparation of gold nanoflowers The gold nanoflowers were prepared based on a hydroxylamine reduction method. Briefly, 300 μL of 1 mM HAuCl 4 was mixed with 1.5 mL of 13 mM boric acid borax buffer solution. Then, 200 μL of different concentrations of NH 2 OH were added to the mixture solution. The color of the solution changed from colorless to blue immediately after the addition of NH 2 OH. The reduction reaction can be completed immediately at 25 ± 5 °C. The prepared gold nanoflowers were purified by centrifugation at 7000 rpm for 3 min and redispersed in ultrapure water. The purified gold nanoflowers were stored at 4 °C for further use. Gold nanoflowers with different morphologies can be prepared by changing the concentration ratio of NH 2 OH to HAuCl 4 . 
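As a quick sanity check on the recipe, the short Python sketch below converts the stated volumes into final concentrations in the reaction mixture. The text defines R_N/A as a concentration ratio of NH2OH to HAuCl4, so both the stock-concentration ratio and the resulting mole ratio are reported; which of the two is intended is an assumption left explicit here, and the numbers are illustrative only.

# Volumes (uL) and stock concentrations (mM) from the synthesis above.
V_AU, C_AU_STOCK = 300.0, 1.0     # 300 uL of 1 mM HAuCl4
V_BUF            = 1500.0         # 1.5 mL of 13 mM borax buffer
V_NH2OH          = 200.0          # 200 uL of NH2OH stock

def mixture(c_nh2oh_stock):
    """Final concentrations in the reaction mixture and the two possible
    readings of R_N/A (stock-concentration ratio vs. mole ratio)."""
    v_tot = V_AU + V_BUF + V_NH2OH
    c_au = C_AU_STOCK * V_AU / v_tot
    c_nh2oh = c_nh2oh_stock * V_NH2OH / v_tot
    return {
        "final Au(III) (mM)": round(c_au, 3),
        "final NH2OH (mM)": round(c_nh2oh, 3),
        "R_N/A (stock ratio)": round(c_nh2oh_stock / C_AU_STOCK, 2),
        "R_N/A (mole ratio)": round((c_nh2oh_stock * V_NH2OH) / (C_AU_STOCK * V_AU), 2),
    }

print(mixture(2.0))   # e.g. the 2 mM NH2OH stock used in the water-vs-buffer comparison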
Colorimetric detection of TC For the detection of TC, 150 µL of different concentrations of TC in 13 mM borax buffer solution (pH 8.5) was mixed with 30 µL of 1 mM HAuCl 4 in the wells of a 96-well plate. The mixtures were incubated at 25 ± 5 °C for 20 min. Then, 20 μL of 1.75 mM NH 2 OH was added to the mixture solution, after which the absorbance of the mixture at 580 nm and 800 nm was measured by using a microplate reader. Detection of TC in milk Milk samples were obtained from a local supermarket near Tianjin University. Then, 4.5 µM, 16 µM, and 32 µM TC were spiked in 1 mL of the liquid milk. The TC-spiked milk samples were pretreated according to the previous reports [31]. In short, 2 mL of 0.1 M Mcllvaine-EDTA buffer (pH 4) was added to 1 mL of the milk sample. After vortexing for 2 min, the mixture was centrifuged at 10,000 rpm for 10 min at 4 °C. Then, 2 mL of 0.1 M Mcllvaine-EDTA buffer (pH 4) was added to the precipitate again and the above process was repeated. Subsequently, the supernatant was collected and mixed with trichloroacetic acid at a final concentration of 5%. After centrifugation at 12,000 rpm for 10 min, the precipitate was removed and the supernatant was collected. The procedure of colorimetric detection of TC in pretreated milk samples was the same as described above. Detection of TC in lake water Lake water was obtained from Jingye lake on the campus of Tianjin University. Then, 10 nM, 20 nM, 53 nM, and 107 nM TC were added to 1 mL of lake water respectively, after which the solid impurities were removed by passing the lake water through a filter membrane with a pore size of 0.22 µm. Then, 25 µL of 20 × borax buffer solution (pH 8.5) was added to 475 µL of the lake water. The content of TC in the lake water was determined under the optimal conditions as described above. Formation of gold nanoflowers in borax buffer solution We found that gold nanoflowers can be generated in the HAuCl 4 -NH 2 OH redox reaction by using borax buffer solution as a reaction mediate even without the addition of small gold nanoparticles as seeds [32]. To verify the role of borax buffer solution in the formation of gold nanoflowers, the HAuCl 4 -NH 2 OH redox reaction was performed in ultrapure water and borax buffer solution (pH 9.0), respectively. As shown in Fig. 1, the reaction solution was kept colorless for the ultrapure water group. However, the borax buffer solution group in which 1 mM HAuCl 4 was mixed with 13 mM borax buffer solution (pH 9.0) and incubated at 25 ± 5 °C for 30 min exhibited a blue color immediately after the addition of 2 mM NH 2 OH. The product has a maximum absorption wavelength located at 605 nm which suggested that large gold nanoparticles were generated in the redox reaction. The TEM images (Fig. S1, Supporting information) showed that gold nanoflowers were formed in the HAuCl 4 -NH 2 OH redox reaction which took place in the borax buffer solution (pH 9.0). The above phenomenon indicated that the borax buffer solution (pH 9.0) is important for the formation of gold nanoflowers. Furthermore, the effect of incubation time on the formation of gold nanoflowers was investigated. With the prolongation of incubation time, the UV-vis absorption intensity of the formed gold nanoflowers increased. The incubation of HAuCl 4 with borax buffer solution benefited the production of gold nanoflowers (Fig. S2d, Supporting information). We speculated that gold nuclei may be formed during the incubation of HAuCl 4 with the alkaline borax buffer solution. 
These gold nuclei further catalyzed the reduction of Au 3+ ions by NH 2 OH to form Au atoms which were deposited on these gold nuclei forming gold nanoflowers. The influence of other experimental conditions including the pH and concentration of borax buffer solution and the concentration ratio of NH 2 OH to HAuCl 4 on the formation of gold nanoflowers was studied. When the pH was lower than 7.5, the reaction solution had no color change indicating that there were no gold nanoflowers produced in the redox reaction (Fig. S2a, Supporting information). Under pH 8.0, irregularly shaped gold nanoparticles were generated and the products showed a pink color with a maximum SPR absorption peak at 630 nm. Higher pH conditions (pH 8.5 and pH 9.0) resulted in gold nanoparticles with a flower-like shape the color of which was blue. The particle size of gold nanoflowers formed at pH 8.0 and 9.0 were 46.88 ± 9.38 nm and 69.69 ± 8.12 nm, respectively ( Fig. S3a and S3b, Supporting information). By fixing the pH of the borax buffer at 9.0, the effect of the concentration of the borax buffer solution was investigated (Fig. S2b, Supporting information). When the Fig. 1 The UV-vis absorption spectra of reaction products obtained from the NH 2 OH-HAuCl 4 redox reaction taking place in water and borax buffer solution concentration of the borax buffer solution was 267 mM, the products were unstable and aggregated finally. Stable gold nanoflowers could be formed with lower concentrations of borax buffer solution. The concentration of the borax buffer solution affected the size of the gold nanoflowers. Gold nanoflowers with sizes of 68.56 ± 6.79 nm, 86.98 ± 3.35 nm, 74.37 ± 4.93 nm, and 55.18 ± 1.93 nm (Fig. S4, Supporting information) were produced using 13 mM, 27 mM, 67 mM, and 133 mM borax buffer solution, respectively. Then, by fixing the concentration of HAuCl 4 at 1 mM, the influence of the concentration ratio of NH 2 OH to HAuCl 4 (R N/A ) was investigated (Fig. S2c, Supporting information). The concentration ratio affected both the size and morphology of the products. When the R N/A was set at 0.5:1, the reaction solution changed from colorless to blue-gray. The SPR absorption band was low indicating a small number of gold nanoflowers were formed. Branched gold nanoparticles with a size of 77.73 ± 10.06 nm were formed when the R N/A was 1:1. Higher R N/A resulted in shorter branches and flower-like gold nanoparticles. Spherical gold nanoparticles could be obtained using a high concentration of NH 2 OH. When the R N/A was 30:1, the spherical gold nanoparticles with a size of 48.31 ± 6.44 nm (Fig. S5, Supporting information) were formed in the redox reaction. Effect of TC on the formation of gold nanoflowers Gold nanoflowers can be formed in borate borax buffer through the HAuCl 4 -NH 2 OH reduction reaction. Interestingly, we found that the addition of TC to the HAuCl 4 -NH 2 OH reduction reaction in borate borax buffer solution affected the morphology of the formed gold nanoflowers. In our preliminary experiments, 150 μL of different concentrations of TC was mixed and incubated with 30 μL of 1 mM HAuCl 4 at 25 ± 5 °C for 20 min. After the addition of 20 μL of 1.75 mM NH 2 OH to the mixture solution, the color of the reaction solution changed immediately. As the concentration of TC increased from 53 to 533 nM, the color of the reaction solution gradually changed from blue to purple, then to red. Their UV-vis absorption spectra were recorded on a microplate reader. As shown in Fig. 
2, without the addition of TC in the reaction solution, a weak and broad absorption band around 700 nm was observed. When 53 nM of TC was added to the reduction reaction, the product was blue with a strong SPR absorption band located at 615 nm. When the concentration of TC increased to 107 nM and 533 nM, the maximum absorption wavelength moved to 590 nm and 555 nm, respectively. The blueshift of the maximum SPR absorption of the product with increasing concentration of TC suggested that TC may influence on the size (or morphology) of gold nanoflowers produced in the NH 2 OH reduction reaction. To verify this hypothesis, the TEM images of these products and their DLS sizes were determined. As shown in Fig. S6 (Supporting information), the results were consistent with their UV-vis spectra. Large gold nanoflowers with a DLS size of 137 ± 5.35 nm were formed in the reduction reaction without the addition of TC. Smaller gold nanoflowers with a DLS size of 77.81 ± 6.24 nm were formed when 53 nM TC was added. With a higher concentration of TC, the morphology of the products changed. The DLS sizes were 46.24 ± 8.47 nm and 23.34 ± 3.32 nm for the products with addition of 107 nM and 533 nM TC, respectively. TC was assumed to act as both the reducing agent to reduce HAuCl 4 to form gold nuclei and the capping agent binding on the surface of gold nanoflowers (Scheme 1). According to previous report [33], molecules with polyphenolic structures can reduce HAuCl 4 to gold atoms forming gold nanoparticles while the polyphenolic groups are oxidized to quinones. We assumed that TC with phenol group may reduce HAuCl 4 to form gold nuclei. By comparing the UV-vis spectra of gold nanoflowers generated with and without the addition of TC (Fig. 2), we found that the TC-generated gold nanoflowers exhibited a shorter maximum SPR absorption wavelength with a higher absorbance. The shorter maximum SPR absorption wavelength suggested that the TC-generated gold nanoflowers had a smaller size which was consistent with their TEM images (Fig. S6, Supporting information). Smaller gold nanoparticles have a lower absorption coefficient. However, the TCgenerated gold nanoflowers exhibited a higher absorbance at the maximum absorption wavelength. This result demonstrated that a higher concentration of gold nanoflowers was generated in the reaction solution with the addition of TC. Based on this fact, TC was considered a reductant to reduce HAuCl 4 to form gold nuclei which were the seeds Fig. 2 The UV-vis spectra of reaction solutions with different concentrations of TC in the HAuCl 4 -NH 2 OH reaction, thus, producing more gold nanoflowers. Also, TC was considered as the capping agent on the generated gold nanoflowers. The binding of TC on the surface of generated gold nanoparticles was verified by determining their UV-vis spectra. Briefly, 67 μM TC was added to the NH 2 OH-HAuCl 4 redox reaction under the optimal reaction condition. The product was obtained by centrifugation at 12,000 rpm for 10 min and washed once with ultrapure water. The precipitate was then redispesed in ultrapure water. The UV-vis absorption spectra of the product and 67 μM TC were recorded on the multifunctional microplate reader, respectively. As shown in Fig. S7, the generated gold nanoflowers had an SRP absorption ranging from 500 to 600 nm and absorption peaks at 272 nm and 370 nm which were consistent with the absorption of TC. The UV-vis absorption spectra verified the binding of TC on the nanoflowers as a capping agent. 
Optimization of conditions for colorimetric detection of TCs To establish the colorimetric detection system for TCs, the experimental conditions including the pH, concentration of HAuCl 4 and NH 2 OH, and incubation time of HAuCl 4 and TCs were optimized by using TC as a model analyte (Fig. 3). Figure 3a showed the effect of pH on the colorimetric detection of TC. ΔA 580/800 refers to the difference between the absorbance ratio at 580 nm to 800 nm (A 580/800 ) of the sample test and A 580/800 of the blank test. ΔA 580/800 was employed as the criteria to optimize the experimental conditions for the quantitative detection of TC. The optimal conditions for the colorimetric detection of TC were 13 mM borate borax buffer (pH 8.5), 1 mM HAuCl 4 , 1.75 mM hydroxylamine, and an incubation time of 20 min. Figure 4a showed the colorimetric responses of different concentrations of TC. With the increase of TC concentration, the color of the reaction solution changed from light blue to deep blue and then to purple, and finally to red. The maximum absorption peak in the UV-vis absorption spectra gradually shifted to a shorter wavelength. Figure 4b showed that ΔA 580/800 increased with increasing concentration of TC ranging from 7 to 267 nM, then leveled off when the concentration of TC was higher than 533 nM. When the concentration of TC was in the range of 7 nM to 133 nM, ΔA 580/800 has a good linear relationship with the concentration of TC. The regression equation was I = 0.0247C-0.1865, and the correlation coefficient was 0.9908. According to the rule of 3σ/S, the LOD value was calculated to be 2.23 nM. Analytical performance In addition to TC, other tetracycline antibiotics, such as OTC and DC were also determined with the established colorimetric assay. As shown in Fig. 4c, when the concentration of OTC was between 7 to 107 nM, a good linear range was obtained. The regression equation was I = 0.0141C-0.1171 and the correlation coefficient was 0.9898. The LOD value for OTC was calculated to be 1.19 nM. For the colorimetric detection of DC, the detection linear range was from 7 to 133 nM as shown in Fig. 4d, and the regression equation was I = 0.0094C-0.0279 with a correlation coefficient of 0.9984. The LOD value for the colorimetric detection of DC was 5.81 nM. Compared with reported methods, the established colorimetric method which is based on in-suit generation of gold nanoflowers featured high sensitivity, short detection time, and conventional operation process (Table 1). To evaluate the reproducibility of the established colorimetric system, a set of TC, OTC, and DC samples at a final concentration of 107 nM were measured. The relative Scheme 1 Graphic illustration of the effect of TC on the generation of gold nanoflowers standard deviations (RSD) for TC, OTC, and DC were 0.34, 0.13, and 0.14%, respectively. The intra-day error of the proposed assay was evaluated by determining the colorimetric signal of 107 nM TC for 6 consecutive days. The RSD was calculated to be 4.0% indicating good inter-day precision. Specificity and selectivity To assess the specificity of the colorimetric detection system, 16 kinds of interfering substances including common ions (Na + , K + , Ca 2+ , Mg 2+ , HPO 4 2− , H 2 PO 4 − ), amino acids (Pro, Val, Ile, Arg, His, Phe, Cys), proteins (BSA) and other antibiotics (GM, DOX) were detected. 107 nM TC was set as a positive control. 
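The calibration and LOD arithmetic above can be reproduced in a few lines of Python. The data points below are simulated around the reported regression line rather than taken from the measurements, and the blank standard deviation is a placeholder; in practice σ would come from replicate blank readings.

import numpy as np

# Hypothetical calibration points inside the reported linear range (7-133 nM);
# responses are simulated around the reported line I = 0.0247*C - 0.1865.
conc_nM  = np.array([7.0, 13.0, 27.0, 53.0, 80.0, 107.0, 133.0])
response = 0.0247 * conc_nM - 0.1865 + np.random.default_rng(0).normal(0, 0.02, conc_nM.size)

slope, intercept = np.polyfit(conc_nM, response, 1)
r = np.corrcoef(conc_nM, response)[0, 1]

sigma_blank = 0.018                     # placeholder std. dev. of blank replicates
lod_nM = 3 * sigma_blank / slope        # 3*sigma/S rule

print(f"dA580/800 = {slope:.4f}*C {intercept:+.4f}  (r = {r:.4f})")
print(f"LOD = {lod_nM:.2f} nM")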
The tolerable concentrations of these interfering substances which were determined by 20% deviation of the colorimetric signal were summarized in Table S1 (Supporting information). Fig. S8 (Supporting information) showed their colorimetric responses in the HAuCl 4 -NH 2 OH redox reaction. These interfering substances with a concentration lower than the tolerance concentration would not interfere with the detection of TC. Real sample testing The proposed colorimetric assay was applied to the detection of TC in liquid milk samples and water samples respectively. The applicability of the assay in detecting TC in milk samples was investigated first. The effect of the matrix in milk samples was studied. The milk samples without the addition of TC were treated with Mcllvaine-EDTA solution followed by trichloroacetic acid treatment to remove proteins in the milk. Then, different concentrations (27 nM, 53 nM, 80 nM, and 107 nM) of TC were spiked in 100-fold diluted pretreated milk samples, and determined under the optimal reaction conditions. The recoveries for 27 nM, 53 nM, 80 nM, and 107 nM were 82.9% ± 0.7%, 103.4% ± 8.8%, 112.0% ± 3.6%, and 97.3% ± 2.8%, respectively. The above results suggested that the matrix in a 100-fold diluted pretreated milk sample didn't affect the detection of TC. To evaluate the accuracy of the proposed colorimetric assay for the detection of TC in milk, 4.5 µM, 16 µM, and 32 µM TC was spiked in 1 mL of liquid milk samples. After removing proteins, the obtained supernatant was diluted 100-fold with 13 mM borax buffer solution (pH 8.5). The recovery This work for 4.5 µM, 16 µM, and 32 µM TC was 99.3% ± 2.8%, 103.1% ± 6.0%, and 103.3% ± 0.5%, respectively. Also, the applicability of the proposed colorimetric assay for the detection of TC in water samples was investigated. Lake water was obtained from Jingye lake on the campus of Tianjin University. Then, 10 nM, 20 nM, 53 nM, and 107 nM TC were added to 1 mL of lake water. After removing the solid impurities, the content of TC in the lake water was determined. The recoveries for 10 nM, 20 nM, 53 nM, and 107 nM TC were 113.3% ± 0.7%, 99.1 ± 2.8% 109.3% ± 1.7%, 112.7% ± 4.9%indicating that the proposed colorimetric assay can be used in the routine detection of TC in water samples. Conclusion In summary, a colorimetric method for the detection of TCs was developed based on the in-situ generation of gold nanoflowers in the borax buffer solution. With the addition of TC, the shape and size of the generated gold nanoflowers were different, based on which a colorimetric method for the detection of TC was established. The method could detect TC, OTC, and DC with detection limits of 2.23 nM, 1.19 nM, and 5.81 nM, respectively. With the employment of borax buffer solution with pH 8.5, the colorimetric assay for the detection of TCs can be performed at 25 ± 5 °C which facilitates the on-site detection of TCs. Compared with previously reported methods, this method has the advantages of simplicity, rapidity, and significantly improved detection sensitivity. We believe it will be a convenient route for the routine detection of TCs.
2023-04-06T06:16:31.312Z
2023-04-05T00:00:00.000
{ "year": 2023, "sha1": "7bf4a60631e5d7e6c072534d1a6cb1060ab4df4a", "oa_license": null, "oa_url": "https://doi.org/10.1007/s44211-023-00332-6", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "c59a7607e723e864c8d3663ca5913e4a4a47c10f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
256697215
pes2o/s2orc
v3-fos-license
Balanced Allocations in Batches: The Tower of Two Choices In the balanced allocation framework, the goal is to allocate m balls into n bins, so as to minimize the gap (difference of maximum to average load). The One-Choice process allocates each ball to a bin sampled independently and uniformly at random. The Two-Choice process allocates balls sequentially, and each ball is placed in the least loaded of two sampled bins. Finally, the (1+β)-process mixes these processes, meaning each ball is allocated using Two-Choice with probability β in (0,1), and using One-Choice otherwise. Despite Two-Choice being optimal in the sequential setting, it has been observed in practice that it does not perform well in a parallel environment, where load information may be outdated. Following [BCEFN12], we study such a parallel setting where balls are allocated in batches of size b, and balls within the same batch are allocated with the same strategy and based on the same load information. For small batch sizes b in [n, n log n], it was shown in [LS22c] that Two-Choice achieves an asymptotically optimal gap among all allocation processes with two (or any constant number of) samples. In this work, we focus on larger batch sizes b in [n log n, n³]. It was proved in [LS22a] that Two-Choice leads to a gap of Θ(b/n). As our main result, we prove that the gap reduces to O(√((b/n) log n)), if one runs the (1+β)-process with an appropriately chosen β (in fact this result holds for a larger class of processes). This not only proves the phenomenon that Two-Choice is not the best (leading to the formation of "towers" over previously light bins), but also that mixing two processes (One-Choice and Two-Choice) leads to a process which achieves a gap that is asymptotically smaller than both. We also derive a matching lower bound of Ω(√((b/n) log n)) for any allocation process, which demonstrates that the above (1+β)-process is asymptotically optimal. Our analysis also works in the presence of randomly weighted balls, and also implies exponential tails for the number of bins above a certain load value. Introduction Sequential balanced allocations.In the sequential balanced allocations framework, there are m tasks (balls) to be allocated into n servers (bins).It is well-known that allocating the balls into bins sampled uniformly at random (a.k.a.One-Choice) leads w.h.p. 1 to a maximum load of Θ(log n/ log log n) for m = n and a gap (maximum load minus average load) of Θ (m/n) • log n for m n log n. An improvement over One-Choice is the d-Choice process [3,6,15], where each ball is allocated to the least loaded of d bins sampled uniformly at random.For any m n, this process achieves w.h.p. an log d log n + Θ(1) gap, i.e., a gap that does not depend on m.For d = 2, this great improvement is known as "power-of-two-choices" (see also surveys [28,38] for more details).Despite the simplistic nature of the balanced allocation framework, the Two-Choice process has had a significant impact on practical applications such as load balancing and distributed storage systems, which was also acknowledged by the "ACM Paris Kanellakis Theory and Practice Award 2020 " [2] (see also Applications below). 
Several variants of Two-Choice have been studied.Of particular importance to this work is the (1+β)-process, where each ball is allocated using Two-Choice with probability β ∈ (0, 1] and One-Choice otherwise.Mitzenmacher [26,Section 4.4.1]introduced this process as a model of Two-Choice with erroneous comparisons.Peres, Talwar and Wieder [33] showed that for β := β(n) 1, it achieves w.h.p. a Θ((log n)/β) gap (see also [20]), which becomes worse for smaller β, but still remains independent of m.The (1 + β)-process has also been applied to the analysis of Two-Choice in the popular graphical setting [4,16,33], where bins are organized as vertices in a graph, and each ball is allocated to the lesser loaded of two adjacent vertices of an edge sampled uniformly at random. Another variant of Two-Choice that has received some attention recently is the family of Two-Thinning processes [12,13], where the ball is allocated to the second sample only if the first one does not meet a certain criterion, e.g., based on a threshold on its load or a quantile on its rank. It should be noted that the analyses of all these processes strongly rely on the fact that the load information of each bin is updated after each allocation.In effect this means balls can only be allocated sequentially, which is a downside in distributed and parallel environments. Outdated information settings.In this work, we demonstrate that in outdated information settings by choosing an appropriately small β, (1 + β) achieves the asymptotically optimal gap among a large class of processes, including not only Two-Choice (and One-Choice), but even adaptive processes that may allocate with a different scheme after each batch.This confirms earlier empirical observations that the performance of the Two-Choice process deteriorates under outdated information and delays [8,14,27,31,37]. Berenbrink, Czumaj, Englert, Friedetzky and Nagel [5] introduced the b-Batched setting where balls are allocated in batches of size b.That means, in every batch the b balls are allocated in parallel, as the decision where to allocate the ball only depends on the load configuration before that batch of balls arrived.For b = n, they proved that Two-Choice achieves w.h.p. an O(log n) gap.This bound was recently improved to Θ(log n/ log log n) in [22], and in the same work, it was shown that Two-Choice has a gap that matches the maximum load of One-Choice for b balls, for any batch size b ∈ [n • e − log Θ(1) n , n log n], and so it is asymptotically optimal.In contrast, for b n log n, Two-Choice (and a family of other processes) have w.h.p. a Θ(b/n) gap [20], a bound which was shown to hold even in the presence of weights and on some graphs.This analysis also demonstrates that increasing d in the d-Choice process, does not always improve the gap, which is in sharp contrast to the sequential setting.In [22], a more powerful setting, τ -Delay was studied for the Two-Choice process, where an adversary can choose to report for each of the bins any load from the last τ steps.For b = τ , b-Batched is a special instance of τ -Delay and for any τ n log n, the same asymptotic bounds where shown to hold. 
Outdated information settings have also been studied in the queuing setting [1,14,18,27,37].In particular, Mitzenmacher [27] studied the corresponding version of the b-Batched setting, called the bulletin board model with periodic updates, showing that some processes requiring centralized coordination can outperform Two-Choice, but no explicit rigorous bounds were proven.This shortcoming of Two-Choice was characterized as herd behavior, meaning that some of the initially lighter bins receive disproportionately many balls, turning them into heavy bins.In another empirically study, Dahlin [8] also observed the herd behavior and suggested similar centralized strategies to improve upon d-Choice.Regarding identifying optimal processes, Whitt [37] remarks: We have shown that several natural selection rules are not optimal in various situations, but we have not identified any optimal rules.Identifying optimal rules in these situations would obviously be interesting, but appears to be difficult.Moreover, knowing an optimal rule might not be so useful because the optimal rule may be very complicated. Applications.Recently, several distributed low-latency schedulers, including Sparrow [31], Eagle [9], Hawk [10], Peacock [17], Pigeon [36] and Tarcil [11], have used variants of the Two-Choice process.In [31], with regards to the implementation of Sparrow, the authors state: The power of two choices suffers from two remaining performance problems: first, server queue length is a poor indicator of wait time, and second, due to messaging delays, multiple schedulers sampling in parallel may experience race conditions. Similar observations have been made in the context of distributed stream processing [29,30] and load balancers [24].These studies support that batch sizes b = Ω(n log n) for which Two-Choice is no longer optimal are relevant to real-world applications. Weighted settings.Several works study balanced allocation processes with weights [7,20,33,35].We will be focusing on weights sampled independently from probability distributions with bounded moment generating functions as in [20] and [33], which includes the geometric, exponential and Poisson distributions. Our results.In this work, we prove that a family of processes satisfying a mild technical condition achieve the asymptotically optimal gap2 of O (b/n) • log n in the weighted b-Batched setting for b ∈ [2n log n, n 3 ], leading to roughly a quadratic improvement over the gap of the Two-Choice process.This family of processes includes the (1 + β)-process, which is a process that can be easily implemented in a decentralized manner, and demonstrates that by setting β = (n/b) • log n we attain this asymptotically optimal gap. We also provide lower bounds establishing the tightness of our upper bounds.Interestingly, the lower bound of Ω( (b/n) • log n) applies to a much more powerful class of allocation processes, where the allocation rule is arbitrarily tailored at the beginning of the batch. 
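To give a concrete feel for these bounds (with all constants omitted), the snippet below evaluates the suggested mixing factor β ≈ sqrt((n/b)·log n) and the resulting gap scale sqrt((b/n)·log n) next to the Θ(b/n) gap of Two-Choice for a few batch sizes; the numbers are order-of-magnitude illustrations only.

import math

n = 10_000
for b in (2 * n * math.ceil(math.log(n)), n ** 2, n ** 3):
    beta      = math.sqrt(n / b * math.log(n))   # suggested mixing factor (constants omitted)
    gap_mixed = math.sqrt(b / n * math.log(n))   # Theta(sqrt((b/n) log n)) for (1+beta)
    gap_two   = b / n                            # Theta(b/n) for Two-Choice
    print(f"b = {b:>14,}:  beta ~ {beta:.4f},  "
          f"(1+beta) gap ~ {gap_mixed:,.0f},  Two-Choice gap ~ {gap_two:,.0f}")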
The intuition for these optimal processes relates to the herd behavior observed in [27] and [8].For the d-Choice process, the maximum probability of allocating to a bin is max i∈[n] p i ≈ d/n.This means that, for example, in Two-Choice in a batch of b balls there are some bins that receive ≈ 2b/n balls and so a gap of ≈ b/n arises.This becomes worse as d grows.To avoid this, Two-Choice (1 + β)-process we will investigate processes where max i∈[n] p i = (1 + o(1))/n, which means that in expectation no bin receives too many balls in any particular batch.For example, the (1 + β)-process has max i∈[n] p i ≈ (1+β)/n, which means that this mixing of One-Choice steps with Two-Choice steps circumvents the herd behavior.See Fig. 1.1 for a visualization of how (1 + β) achieves a more balanced distribution than Two-Choice over one batch, and Fig. 1.2 for how the gaps of different processes are getting worse with larger max i∈[n] p i .The asymptotic gap bounds of the One-Choice, Two-Choice and (1 + β) processes in the b-Batched setting are summarized in Table 1.3.Our results also imply bounds for the shape of the load vector (see Remark 4.3).Our analysis also applies in the presence of randomly weighted balls, and also implies exponential tails for the number of bins above a certain load value. Our techniques.Our techniques build on and refine those in [20], making use of the hyperbolic cosine potential function [33] and variants.More specifically, a slightly weaker version of our tight upper bound is based on [20,Theorem 3.1] and a refinement of [20,Lemma 4.1].For our tight gap bound, our approach uses an interplay between two hyperbolic cosine potential functions to prove concentration and then an exponential potential with a larger smoothing parameter to deduce the refined gap.A similar method was used in [20, Section 5], but one crucial novelty here is that we consider allocation processes whose probability allocation vector have a small ∞ distance from the uniform distribution.We believe that relating and comparing different allocation processes based on their ∞ distance (or other metrics) could be a promising avenue for future work.This can be also seen as a natural relaxation of the majorization technique, which has been the dominant tool to relate different allocation processes [21,33]. Organization.In Section 2, we introduce the basic notation for balanced allocations, and define the processes and settings that we will be working with.In particular, in Section 2.3 we define general conditions on the probability allocation vector used by the processes, under which our upper bounds on the gap apply.In Section 3, we prove the O b/n • log n bound on the gap for a family of processes in the weighted b-Batched setting.In Section 4, we perform a refined analysis and improve this bound to O (b/n) • log n .In Section 5, we show that this achieved gap is asymptotically optimal, and in Section 6, we present some empirical results on the gap of some specific processes.Finally, in Section 7, we summarize the results and conclude with some open problems. 
Process Gap in Sequential Setting Gap in b-Batched Setting Batch Size For the sake of simplicity, we focus on the setting with unit weights and only list results for (1 + β).Among all these processes, One-Choice produces the worst gap in both settings, even though the gap does not change between the b-Batched and sequential setting.For Two-Choice, the gap becomes b/n in the b-Batched setting with b = Ω(n log n), whereas for (1+β) the gap is improved to (b/n) • log n (for a suitable β). Notation, Processes and Settings In this section, we introduce notation, processes and settings used throughout this work. Basic Notation We consider the allocation of m balls into n bins, which are labeled [n] := {1, 2, . . ., n}.For the moment, the m balls are unweighted (or equivalently, all balls have weight 1).For any step t 0, x t is the n-dimensional load vector, where x t i is the number of balls allocated to bin i in the first t allocations.In particular, x 0 i = 0 for every i ∈ [n].Finally, the gap is defined as x t i − t n . It will also be convenient to sort the load vector x.To this end, let x t := x t − t n .Then, relabel the bins such that y t is a permutation of x t and y t 1 We call a bin i ∈ [n] overloaded, if y t i 0 and underloaded otherwise.A probability vector p ∈ R n is any vector satisfying . Following [33], many allocation processes can be described by a time-invariant probability allocation vector p t , which is the probability vector with p t i being probability of allocating a ball to the i-th heaviest bin. By F t we denote the filtration of the process until step t, which in particular reveals the load vector x t . Processes We start with a formal description of the One-Choice process. One-Choice Process: Iteration: For each t 0, sample one bin i, independently and uniformly at random.Then update: We continue with a formal description of the Two-Choice process. Two-Choice Process: Iteration: For each t 0, sample two bins i 1 and i 2 , independently and uniformly at random.Let i ∈ {i 1 , i 2 } be such that x t i = min{x t i 1 , x t i 2 }, breaking ties randomly.Then update: It is immediate that the probability allocation vector of Two-Choice is Following [33], we recall the definition of the (1 + β)-process which interpolates between One-Choice and Two-Choice: (1 + β) Process: Parameter: A mixing factor β ∈ (0, 1].Iteration: For each t 0, sample two bins i 1 and i 2 , independently and uniformly at random.Let i ∈ {i 1 , i 2 } be such that x t i = min x t i 1 , x t i 2 , breaking ties randomly.Then update: In other words at each step, the (1+β)-process allocates the ball following the Two-Choice rule with probability β, and otherwise allocates the ball following the One-Choice rule.Therefore, its probability allocation vector is given by Recall that in [33] (and [20]), it was shown that Gap(m) = O log n β for any m n and β ∈ (0, 1]; so in particular, this gap (bound) does not grow with m. The next process is another relaxation of Two-Choice. Quantile(δ) Process: Parameter: A quantile δ ∈ {1/n, 2/n, . . 
., 1}.Iteration: For each t 0, sample two bins i 1 and i 2 , independently and uniformly at random.Then update: Note that the Quantile(δ) processes can be implemented as a two-phase procedure: First probe the bin i 1 and place the ball there if i 1 is not among the δn heaviest bins.Otherwise, take a second sample i 2 and place the ball there.Since we only need to know whether a bin's rank is above or below a value, the response by a bin can be encoded as a single bit (at the cost of knowing the rank of each bin).The probability allocation vector of Quantile(δ) is given by: Conditions on Probability Vectors In [20], the weighted b-Batched setting was analyzed for probability allocation vectors satisfying the following two conditions.The first condition says that the process has a small ε/n bias to place away from overloaded and towards underloaded bins; and the second condition says that no bin has too high probability of being allocated. • Condition C 1 : There exist constant quantile3 δ ∈ (0, 1) and (not necessarily constant) ε ∈ (0, 1), such that for any 1 k δn, and similarly for any δn + 1 k n, In the same paper [20,Proposition 7.4] it was shown that any process with max i∈[n] p i 1+ε n for ε = Ω(1) also has Gap(m) = Ω(b/n) for any b = Ω(n log n).Therefore, to improve on this asymptotic gao bound, we have to consider processes with max i∈[n] p i = 1+o (1) n .In our analysis in Sections 3 and 4 we will make use of the following condition based on the ∞ -distance between the probability allocation vector p and the uniform distribution (i.e., One-Choice): Note that this condition implies condition C 2 for the same C > 1, but unlike C 2 it imposes both an upper and a lower bound on the p i 's.It is easy to verify that (1 + β)-process satisfies all three conditions.Lemma 2.1.For any β ∈ (0, 1], the Proof.Recall that for the (1 + β)-process, the probability allocation vector satisfies We will first show that C 1 holds with δ = 1/4 and ε = β/2.For any 1 k δn, since p is non-decreasing the prefix sums satisfy Similarly, for any δn + 1 k n, the suffix sums satisfy Note that in contrast to Two-Choice which satisfies C 3 for C = 2 − 1 n , by choosing β small enough we can make the probability allocation vector arbitrarily close to uniform. 
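To make the allocation vectors concrete, the sketch below builds the sorted probability allocation vectors of One-Choice, Two-Choice, (1 + β) and Quantile(δ) for a small n and reports the smallest constant compatible with condition C3, read here as |p_i − 1/n| ≤ (C − 1)/n for all i. The displayed statements of C3 and of the Quantile(δ) vector did not survive extraction, so both forms are assumptions based on the surrounding text; this is an illustrative check, not part of any proof.

import numpy as np

def one_choice(n):
    return np.full(n, 1.0 / n)

def two_choice(n):
    # p_i = (2i - 1) / n^2 for the i-th heaviest bin (standard Two-Choice vector).
    return (2 * np.arange(1, n + 1) - 1) / n**2

def one_plus_beta(n, beta):
    return beta * two_choice(n) + (1 - beta) * one_choice(n)

def quantile(n, delta):
    # Assumed form: delta/n for the delta*n heaviest bins, (1+delta)/n for the rest.
    k = int(delta * n)
    return np.concatenate([np.full(k, delta / n), np.full(n - k, (1 + delta) / n)])

n, beta, delta = 100, 0.2, 0.5
for name, p in [("One-Choice", one_choice(n)), ("Two-Choice", two_choice(n)),
                (f"(1+beta), beta={beta}", one_plus_beta(n, beta)),
                (f"Quantile({delta})", quantile(n, delta))]:
    assert abs(p.sum() - 1.0) < 1e-12            # valid probability vector
    c = n * np.max(np.abs(p - 1.0 / n)) + 1      # smallest C with |p_i - 1/n| <= (C-1)/n
    print(f"{name:22s}  max p_i = {p.max():.5f}   C3 constant ~ {c:.3f}")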
We also note that for any process P satisfying condition C 3 for some C > 1, we can define a process P satisfying condition C 3 for C ∈ (1, C) by mixing the probability allocation vector of P with that of One-Choice with probability η = C −1 C−1 .For instance, the Quantile(1/2) process satisfies condition C 3 for any ).Therefore, mixing Quantile(1/2) with One-Choice with probability η ∈ [0, 1], gives the following probability allocation vector satisfying condition Observation 2.2.The process obtained by mixing Quantile(1/2) with One-Choice satisfies condition Weighted and Batched Settings As in [20], we now extend the definitions of Section 2.1 and Section 2.2 to weighted balls and later to the batched setting.To this end, let w t 0 be the weight of the t-th ball to be allocated for t 1.By W t we denote the total weights of all balls allocated after the first t 0 allocations, so W t := n i=1 x t i = t s=1 w s .The normalized loads are x t i := x t i − W t n , and with y t i being again the decreasingly sorted, normalized load vector, we have Gap(t) = y t 1 .The weight of each ball will be drawn independently from a fixed distribution W over [0, ∞).Following [33], we assume that the distribution W satisfies: Specific examples of distributions satisfying above conditions (after scaling) are the geometric, exponential, binomial and Poisson distributions. In the analysis we will be using the following property (see also [33]) and refer to these distributions as Finite-MGF(ζ) (or Finite-MGF(S)): Lemma 2.3 ([20, Lemma 2.4]). There exists S := S(ζ) max{1, 1/ζ}, such that for any γ ∈ (0, min{ζ/2, 1}) and any κ ∈ We will now describe the allocation of weighted balls into bins using a batch size of b n.For the sake of concreteness, let us first describe the b-Batched setting if the allocation is done using Two-Choice.For a given batch size consisting of b consecutive balls, each ball of the batch performs the following.First, it samples two bins i 1 and i 2 independently and uniformly at random, and compares the load the two bins had at the beginning of the batch (let us denote the bin which has less load by i min ).Secondly, a weight is sampled from the distribution W. Then a weighted ball is added to bin i min .Recall that since the load information is only updated at the beginning of the batch, all allocations of the b balls within the same batch can be performed in parallel. In the following, we will use a more general framework, where the process of sampling (one or more) bins and then deciding where to allocate the ball to is described by a probability allocation vector p over the n bins (Section 2.1).Also for the analysis, it will be convenient to focus on the normalized and sorted load vector y, which is why the definition below is based on y rather than the actual load vector x.b-Batched Setting with Weights Parameters: Batch size b n, probability allocation vector p, weight distribution W. Iteration: 2. Sample b weights w t+1 , w t+2 , . . ., w t+b from W. Update for each bin 4. Let y t+b be the vector z t+b , sorted decreasingly. 
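A minimal Python simulation of the b-Batched setting described above is sketched below. It allocates each batch according to a rank-based probability allocation vector evaluated at the start of the batch (so load information is not refreshed within a batch), breaks ties by the sorting order rather than the random tie-breaking discussed next, and defaults to unit weights.

import numpy as np

def batched_gap(n, b, num_batches, alloc_vector, weight_sampler=None, seed=0):
    """Run the b-Batched setting with a rank-based probability allocation vector.

    alloc_vector[k] is the probability of allocating to the (k+1)-th heaviest bin,
    as measured at the start of the batch.  Returns the final gap, i.e. the
    maximum load minus the average load.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(num_batches):
        ranks = np.argsort(-x)                       # heaviest bin first
        counts = rng.multinomial(b, alloc_vector)    # balls per rank in this batch
        if weight_sampler is None:
            x[ranks] += counts                       # unit weights
        else:
            for r, c in enumerate(counts):
                x[ranks[r]] += weight_sampler(rng, c).sum()
    return x.max() - x.mean()

# Example: (1+beta)-process with beta = sqrt((n/b) * log n), unit weights.
n, b = 1000, 50_000
beta = np.sqrt(n / b * np.log(n))
p = beta * (2 * np.arange(1, n + 1) - 1) / n**2 + (1 - beta) / n
print(batched_gap(n, b, num_batches=20, alloc_vector=p))

Passing, for instance, weight_sampler=lambda rng, c: rng.exponential(1.0, c) gives the same simulation with exponentially distributed weights.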
We also look at the version of the processes that perform random tie-breaking between bins of the same load.For b = 1, this makes no observable difference to the process, but for multiple steps, this effectively averages out the probability over (possibly) multiple bins that have the same load.This would, for instance, correspond to Two-Choice, randomly deciding between the two bins if they have the same load.In particular, if p is the original probability allocation vector, then the one with random tie-breaking is p(y t ) (for t being the beginning of the batch), where 1.Let p := p(y t ) be the probability allocation vector accounting for random tie-breaking. Update for each bin i ∈ [n] , 5. Let y t+b be the vector z t+b , sorted decreasingly. 3 Warm-up: In this section, we will refine the analysis of [20,Section 4] to prove an O( b/n • log n) bound on the gap for a family of processes.This will also be used as a starting point for the analysis in Section 4 to obtain the tighter bound.The main theorem that we prove is the following. In particular, by choosing β = Θ n/b we get a process that is asymptotically better than Two-Choice and which is within just a √ log n multiplicative factor from the optimal gap bound proven for unit weights in Section 5. The analysis is based on the hyperbolic cosine potential which is defined for smoothing parameter γ > 0 as We also decompose Γ t by defining Further, we use the following shorthands to denote the changes in the potentials over one step We will make use of the following drift theorem shown in [20].Note that in statement of the theorem, rounds could consist of multiple single-step allocations and in that case p t is not necessarily the probability allocation vector, but it could be a probability vector giving an estimate for the "average number of balls" allocated to a bin.Theorem 3.3 (cf.[20,Theorem 3.1]).Consider any allocation process P and a probability vector p t satisfying condition C 1 for some constant δ ∈ (0, 1) and some ε ∈ (0, 1) at every round t 0. Further assume that there exist K > 0, γ ∈ 0, min 1, εδ 8K and R > 0, such that for any round t 0, process P satisfies for potentials Φ := Φ(γ) and Ψ := Ψ(γ) that, and Then, there exists a constant c := c(δ) > 0, such that for Γ := Γ(γ) and any round t 0, Now we will show that any process satisfying condition C 3 , also satisfies the preconditions of Theorem 3.3 for the expected change of the potential functions Φ and Ψ over one batch.Lemma 3.4.Consider any allocation process with probability allocation vector p t satisfying condition C 3 for some C ∈ (1, 1.9) at every step t 0. 
Further, consider the weighted b-Batched setting with weights from a Finite-MGF(S) distribution with constant S 1 and a batch size b and Consider an arbitrary bin i ∈ [n].Define the binary vector Z ∈ {0, 1} b , where Z j indicates whether the j-th ball was allocated to bin i.The expected change for the overload potential Φ t i of the bin is given by, In the following, let us upper bound the factor of Φ t i : using in (a) that the weights are independent given F t , in (b) Lemma 2.3 twice with κ = 1 − 1 n and with κ = − 1 n respectively (and that (1 − 1/n) 2 1), in (c) the binomial theorem and in (d) that p i 1 n 2 by condition C 3 for C ∈ (1, 1.9).Let us define We will now show that 1, which holds indeed since using in (a) using in (a) that 1 + v e v for any v, in (b) that e v 1 + v + v 2 for v 1.75 and (3.8), and in Similarly, for the underloaded potential Ψ t , for any bin i ∈ [n], As before, we will upper bound the factor of Ψ t i : Similarly, to (3.8), we get that So, using in (a) that 1 + v e v for any v, in (b) that e v 1 + v + v 2 for v 1.75 and (3.12), and in (c) Having verified the preconditions for Theorem 3.3, we are now ready to prove the bound on the gap for this family of processes. Remark 3.5.The same upper bound in Theorem 3.1 also holds for processes with random tie breaking.The reason for this is that (i) averaging probabilities in (2.1) can only reduce the maximum entry (and increase the minimum) in the allocation vector p t , i.e. max i∈[n] p t i (x t ) max i∈[n] p i , so it still satisfies condition C 3 and (ii) moving probability between bins i, j with x t i = x t j (and thus Φ t i = Φ t j and Ψ t i = Ψ t j ), implies that the aggregate upper bounds Hence, by Theorem 3.3, there exists a constant c := c(δ) > 0 such that for any step m 0 which is a multiple of b, Therefore, by Markov's inequality To conclude the claim, note that when Γ m 8c δ • n 3 holds, then also, Consider any process with probability allocation vector p t satisfying at every step t 0, condition C 1 for constant δ ∈ (0, 1) and ε, as well as condition C 3 for C = 1 + ε.Then, there exists a constant κ := κ(δ, S) > 0, such that for any step m 0 being a multiple of b, There are two key steps in the proof: Step 1: Similarly to the analysis in [21], we will use two instances of the hyperbolic cosine potential (defined in (3.1)), in order to show that it is concentrated at O(n).More specifically, we will be using Γ 1 := Γ 1 (γ 1 ) with the smoothing parameter γ 1 := δ 40S • n/(b log n) and Γ 2 := Γ 2 (γ 2 ) with γ 2 := γ 1 8•30 , i.e., with a smoothing parameter which is a large constant factor smaller than γ 1 .So, in particular Γ t 2 Γ t 1 at any step t 0. In the following lemma, proven in Section 4.1, we show that w.h.p.Γ 2 = O(n) for any log 3 n consecutive batches. The proof follows the interplay between the two hyperbolic cosine potentials, in that conditioning on Γ t 1 = poly(n) (which follows w.h.p. by the analysis in Section 3) implies that ∆Γ t+1 2 n 1/4 • (n/b) • log n (Lemma 4.5 (ii)).This in turn allows us to apply a bounded difference inequality to prove concentration for Γ 2 .In contrast to [21] and [22], here we need a slightly different concentration inequality Theorem A.6 (also used in [20]), as in a single batch the load of a bin may change by a large amount (with small probability).The complete proof is given in Section 4.1. 
Step 2: Consider an arbitrary step s = t + j • b where {Γ s 2 cn} holds.Then, the number of bins i with load y s i at least z := With this in mind, we define the following potential function for any step t 0, which only takes into account bins that are overloaded by at least z balls: where λ := ε 4CS = Θ( (n/b) • log n) and we define Λ t i = 0 for the rest of the bins i.This means that when {Γ s 2 cn} holds, the probability of allocating to one of these bins is p s i 1−ε n , because of the condition C 1 .Hence, the potential drops in expectation over one batch (Lemma 4.9) and this means that w.h.p.Λ m = poly(n), which implies that Gap(m Step 1: Concentration of the Γ Potential Recall that in Theorem 4.1, we considered the weighted b-Batched setting with any b ∈ [2n log n, n 3 ] and weights sampled independently from a Finite-MGF(S) distribution with constant S 1, for any allocation process with probability allocation vector p t satisfying condition C 1 for constant δ ∈ (0, 1) and ε ∈ (0, 1) as well as condition C 3 for some C > 1, at every step t 0. The proof of this lemma is similar to the proofs in [20, Section 5] and [21, Section 5], in that we use the interplay between two instances of the hyperbolic cosine potential Γ 1 := Γ 1 (γ 1 ) and Γ 2 := Γ 2 (γ 2 ) with smoothing parameter γ 2 being a large constant factor smaller than γ 1 .More specifically, we will be working with γ 1 := δ 40S • n/(b log n) and γ 2 := γ 1 8•30 .The rest of this section is organized as follows.In Section 4.1.1,we establish some basic properties for the potentials Γ 1 and Γ 2 and in Section 4.1.2we use these to show that w.h.p.Γ t 2 = O(n) for at least log 3 n batches, and complete the proof of Lemma 4.2.Then, in Section 4.2, we complete the proof of Theorem 4.1. Preliminaries We define the following event, for any step t 0 Further, let x t be the load vector obtained by moving any ball of the load vector x t to some other bin, then using that γ 2 := γ 1 8•30 .By aggregating, we get the first claim . Second statement.Let bin j ∈ [n] be the bin where the j-th ball was allocated.We consider the following cases for the contribution of a bin i to Γ t 2i : Case 1 [i = j and y t j 0]: Since j ∈ [n] is overloaded, we have that Case 2 [i = j and y t j < 0]: Similarly, if j is underloaded, we have that Case 3 [i = j and y t i 0]: The contribution of the rest of the bins is due to the change in the average load.More specifically, for any overloaded bin i ∈ Case 4 [i = j and y t i < 0]: Similarly, for any underloaded bin i ∈ [n] \ {j}, Hence, aggregating over all bins for sufficiently large n.Third statement.Let i, j ∈ [n] be the differing bins between x t and x t .Then since H t holds, it follows that w t 15 ζ • log n, so for bin i, Hence, Next, we will show that E[ Γ 2 ] = O(n) and that when Γ 2 is sufficiently large, it drops in expectation over the next batch.Lemma 4.6.Consider any process satisfying the conditions in Lemma 4.2.Then, there exists a constant c := c(δ) such that for any step t 0 being a multiple of b, Further, Hence, by Theorem 3.3 we get the conclusion by setting c := 16c/δ, for some constant c := c(δ) > 0. 
Similarly for the potential Third statement.Furthermore, by Lemma 3.4 and Theorem 3.3, we also get that for any t 0, We define the constant When Γ t 2 cn holds, then (4.3) yields, Fourth statement.Similarly, when Γ t 1 < cn, (4.3) yields, In the next lemma, we show that w.h.p.Γ 1 is poly(n) for every step in an interval of length 2b log 3 n.Proof.We will start by bounding Γ s 1 at steps s being a multiple of b.Using Lemma 4.6 (i), Markov's inequality and the union bound over 2 log 3 n + 1 steps, we have for any t 0, Similarly, using (3.9) in Lemma 3.4, Hence, combining and aggregating over the bins, Applying Markov's inequality, for any r ∈ [0, b), Hence, by a union bound over the 2b log 3 n 2n 3 log 3 n possible steps (since b n 3 ) for s ∈ [0, 2 log 3 n] and r ∈ [0, b), Finally, taking the union bound of (4.4) and (4.5), we conclude We will now show that w.h.p. there is a step every b log 3 n steps, such that the exponential potential Γ 2 becomes O(n).We call this the recovery phase.δ be the constant defined in Lemma 4.6.For any step t 0 being a multiple of b, Proof.By Lemma 4.6 (ii), using Markov's inequality at step t being a multiple of b, we have We will be assuming Γ t 2 cn 9 .By Lemma 4.6 (iii), for any step r 0, then In order to prove that Γ t+s•b 2 is small for some s ∈ [0, b log 3 n], we define the "killed" potential function for any r ∈ [0, log 3 n], . Hence, the Γ potential satisfies unconditionally the drop inequality of Lemma 4.6 (iii), that is, Inductively applying this for log 3 n batches and using that Γ t So by Markov's inequality, By combining with (4.6), Due to the definition of Γ 2 , at any step t 0, deterministically Γ t 2 2n.So, we conclude that w.p. at least 1 − 2n −8 , we have that Γ holds, which implies the conclusion. Completing the Proof of Lemma 4.2 We are now ready to prove Lemma 4.2, using a method of bounded differences with a bad event Theorem A.6 ([19, Theorem 3.3]). Proof of Lemma 4.2.Our starting point is to apply Lemma 4.8, which proves that there is at least one step t Note that if t < b log 3 n, then deterministically Γ 0 2 = 2n cn (which corresponds to ρ = −t/b).We are now going to apply the concentration inequality Theorem A.6 to each of the batches starting at t + ρ • b, . . ., t + (log 3 n) • b and show that the potential remains cn at the last step of each batch.More specifically, we will show that for any r ∈ [ρ, log 3 n], for r = t + b • r, Within a single batch all allocations are independent, so we apply Theorem A.6, choosing γ k := 1 b and N := b, which states that for any T > 0 and µ : By Lemma 4.6 (iv), we have µ Hence, for T := n/ log 2 n, since 2n log n b n 3 , we have By union bound of ( and Then, where in the last inequality we have used (4.9) and the fact ρ − log 3 n.So, Note that for any ρ ∈ [− log 3 n, 0], we have that A ρ ∩ K log 3 n ρ ⊆ A. Hence we conclude by the union bound of (4.10) and (4.11), that 4.2 Step 2: Completing the Proof of Theorem 4.1 We will now show that when Γ t 2 = O(n), the stronger potential function Λ t drops in expectation over the next batch.This will allow us to prove that Λ m = poly(n) and deduce that w.h.p. 
Proof.Consider an arbitrary step t 0 being a multiple of b and consider a labeling of the bins so that they are sorted by load.Assuming that {Γ t 2 cn} holds, the number of bins with load For any bin i ∈ [n] with y t i z, we get as in (3.5) (using that λ 1 and that p satisfies C 3 for C ∈ (1, 1.9)), Since there are at most δn such bins (i.e., i δn), p satisfies condition C 1 and the normalized vector y t is sorted, by Lemma A.2 the upper bound on and in (c) that 1 + v e v for any v.For the rest of the bins with i > δn, Aggregating the contributions over all bins, We define the killed potential Λ, with Λ t 0 := Λ t 0 and for j > 0, Since Λ t Λ t , we have that by Lemma 4.9 for t = t 0 + j • b, we have that When E t 0 +j•b does not hold, then deterministically Λ t 0 +(j+1)•b = Λ t 0 +j•b = 0. Hence, we have the following unconditional drop inequality Assuming E t 0 holds, we have for sufficiently large n.Recalling that γ 2 = Θ(λ • log n), there exists a constant κ 1 > 0 such that Applying Lemma A.1 to (4.13) with a := e − λε 2n •b and b := n 2 for log 3 n steps, Hence, by (4.12), Combining with (4.12), we have Finally, {Λ m 2n 5 } implies that ), so the claim follows.For the case when m < b • log 3 n, it deterministically holds that Λ t 0 n, which is a stronger starting point in (4.14) to prove that E[ Λ m ] 2n 5 , which in turn implies the gap bound. Lower Bounds on the Gap In this section, we prove two lower bounds of Ω( (b/n) • log n) on the gap.Both lower bounds hold even in the unit weights case. Observation 5.1.Consider the b-Batched setting with any b n log n, and assume all balls have unit weights.Then, for any process which uses the same probability allocation vector within each batch with random tie breaking, Proof.Any such process behaves exactly like One-Choice in the first batch and so the lower bound follows from that of One-Choice for b balls into n bins (cf.[34] and [23,Lemma A.2]). The next lower bound is more involved.This bound also applies to processes which are allowed to adjust the probability allocation vector from one batch to another arbitrarily; e.g., the probability for a heavily underloaded bin might be set close to (or even equal to) 1, and similarly, the probability for a heavily overloaded bin might be set close to (or equal to) 0. Additionally, the lower bound below applies to any two consecutive batches, and not only to the end of the first batch as in Observation 5.1. Theorem 5.2.Consider the b-Batched setting with any b = Ω(n log n) in the unit weights case.Furthermore, consider an allocation process which may adaptively change the probability allocation vector for each batch.Then there is a constant κ > 0 such that for any allocation process (which may adaptively change the probability for each batch) it holds that for every t 0 being a multiple of b, Proof.In the proof, we shall prove a slightly stronger statement: That is, there is no load configuration and no probability allocation vector (depending on F t ) such that the gap is small, both before and at the end of an arbitrary batch. 
For notational convenience, we will prove this statement by assuming that t = 0, and x 0 is an arbitrary load vector satisfying i∈[n] x 0 i = 0 (in other words, we shift time backwards by t steps) and p = p 0 is the probability allocation vector used by the process.Consider one arbitrary bin j ∈ [n].Then, For a sufficiently large constant C > 0, let us now assume max j∈[n] z j C/2 • (b/n) • log n; clearly, if this is not the case, we already have a large gap already before the next batch. Next consider a bin j ∈ [n] with We will now apply a Chernoff bound (Lemma A.4) for x b j ∼ Bin(b, p j ), with δ : and thus bin j will not contribute to the gap at step b. Hence in the remainder of the proof, we would like to assume that for all bins j ∈ [n], and ( x b i ) i∈[n] be a load vector where these locations are sampled according to p. Clearly, there is a coupling so that for every j ∈ [n] \ J , x b j x b j (since p b j p b j ).Further, for any j ∈ J , by a union bound, Hence it follows that, for any threshold T > 0, Therefore, in the remainder of the proof, we will lower bound Pr max j∈[n]\J x b j T for a suitable value of T = Ω( b/n • log n).We will also use the definition Finally, we define ξ = 0.1 as a (sufficiently) small constant. Case 1: We have at least n − n ξ bins for which ϕ j −C b/n.Since i∈[n] ϕ i = 0, this implies that there must be at least one bin with j ∈ Further, using that the median of a Bin(N, q) r.v. is either N q or N q , then Pr it follows that with probability at least 1/2 we will have a large gap.Case 2: We have at least n ξ bins with ϕ j −C b/n; call this set B. We further know that, due to the definition of p, we have for all bins j ∈ [n] that p j Experimental Results In this section, we complement our theoretical analysis with some experimental results for the b-Batched setting.In Fig. 6.1, we plot the gap of the (1 + β)-process for various batch sizes and different values of β ∈ (0, 1] (Two-Choice corresponding to β = 1).The plot strongly suggests the existence of an optimal β, which seems to increase as the batch size b grows.In Fig. 6.2, we present the corresponding empirical results of Fig. 6.1 for the Quantile process (mixed with One-Choice).As with the (1 + β)-process, the optimal mixing factor η tends to increase as the batch size grows.The Quantile with the optimized mixing factor seems to perform slightly worse than the optimized (1 + β)-process.In Fig. 6.3, we plot the gap of Two-Choice, Three-Choice and (1 + β) versus the batch size.For small values of b, the gap of Two-Choice and Three-Choice is small, but soon grows rapidly, diverging from the asymptotically optimal (1 + β)-processes as predicted by the theoretical analysis.Similar, results are observed for weights sampled from an exponential distribution Fig. 6.4.Finally, in Table 6.5, we show the gap of the (1 + β) and Quantile compared to Two-Choice and One-Choice with b balls (which is the theoretically optimal attainable value), for slightly larger values of n ∈ {10 4 , 10 5 }.The for large b, the (1 + β) has roughly half the gap of Two-Choice and is close to the theoretically optimal value of One-Choice for m = b balls. 
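The batched processes compared in these experiments are straightforward to simulate. Below is a minimal sketch (not the authors' code) of the b-Batched setting with unit weights: within a batch every decision uses the load vector frozen at the start of the batch, Two-Choice places each ball in the lesser loaded of two uniformly sampled bins with random tie-breaking, and (1 + β) follows a Two-Choice step with probability β and a One-Choice step otherwise. The function names, the values of n, b and the number of batches, and the particular mixing factor β are illustrative placeholders, not the parameters used for the figures above.

import numpy as np

rng = np.random.default_rng(0)

def run_batched(n, b, num_batches, beta):
    # Simulate the b-Batched setting with unit weights. Within each batch,
    # allocation decisions use the stale load vector observed at the start of
    # the batch; beta = 1.0 recovers Two-Choice, beta = 0.0 recovers One-Choice.
    loads = np.zeros(n, dtype=np.int64)
    for _ in range(num_batches):
        stale = loads.copy()                      # load information available to the process
        new_balls = np.zeros(n, dtype=np.int64)
        for _ in range(b):
            i, j = rng.integers(0, n, size=2)     # two uniform bin samples
            if rng.random() < beta:
                # Two-Choice step on the stale loads, ties broken uniformly at random
                if stale[i] < stale[j] or (stale[i] == stale[j] and rng.random() < 0.5):
                    chosen = i
                else:
                    chosen = j
            else:
                chosen = i                        # One-Choice step
            new_balls[chosen] += 1
        loads += new_balls                        # new loads become visible only after the batch
    return loads

def gap(loads):
    # Gap = maximum load minus average load.
    return loads.max() - loads.mean()

if __name__ == "__main__":
    n, b, num_batches = 500, 10_000, 20
    beta_mixed = min(1.0, np.sqrt((n / b) * np.log(n)))  # illustrative mixing factor of order sqrt((n/b) log n)
    for name, beta in [("One-Choice", 0.0), ("Two-Choice", 1.0), ("(1+beta)", beta_mixed)]:
        print(name, "gap:", gap(run_batched(n, b, num_batches, beta)))

Running the sketch with a large batch size exhibits the qualitative behavior described above: Two-Choice over-allocates to the bins that were lightly loaded at the start of each batch, while a suitably small β keeps the allocations closer to One-Choice within a batch and yields a smaller gap.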
Conclusions In this work, we revisited the outdated information setting of [5], where balls are allocated to bins in batches of size b, using the load information available at the beginning of the batch.We established that by defining the mixing factor β carefully as a function of the batch size b, (1 + β) achieves the asymptotically optimal gap for any b n log n.That is, by having β chosen appropriately small, (1 + β) circumvents the "herd behavior" (as called in [27]), where some of the previously underloaded bins are chosen too frequently, turning them into heavily overloaded bins in the next batch.Similarly, β should also not be too small, as otherwise the process would be too close to One-Choice. There are several directions for future work.First, recall that our lower bounds apply to a large class of processes which allocate all balls within the same batch independently.However, there are processes which allocate multiple balls in a coordinated way.For example, the process of Park [32] draws d samples, and then places into each of the k least loaded bins one ball.It would be interesting to explore the gap of this type of processes in the b-Batched setting.A second avenue is to analyze Two-Thinning processes (and in particular processes that use a fixed load threshold relative to the average) in outdated information settings.An experimental study of threshold processes with outdated information was already conducted in 1989 [25, Figure 8], but no rigorous bounds were proven.A third possibility is to investigate whether the (1 + β) and related processes are superior to Two-Choice in other settings, like the τ -Delay or random noise settings studied in [22].Finally, one could study settings where the load information of bins is updated at different rates, depending on the specific bin.In such a setting, when deciding between sampled bins, both their reported load estimates and update rates should be taken into account. A.1 Auxiliary Probabilistic Claims For convenience, we add the following well-known inequality for a sequence of random variables, whose expectations are related through a recurrence inequality. Lemma A.1.Consider a sequence of random variables (X i ) i∈N such that there exist a ∈ (0, 1) and b > 0 such that every i 1, Then, for every i 1, Proof.We will prove by induction that for every i ∈ N, For i = 0, it trivially holds that E X 0 | X 0 X 0 .Assuming the induction hypothesis holds for some i 0, then since a > 0, The claims follows using that for a ∈ (0, 1), ∞ j=0 a j = 1 1−a .For the next lemma, we define for two n-dimensional vectors x, y, x, y := n i=1 x i • y i . Lemma A.2 ([20, Lemma A.7]).Let (p k ) n k=1 , (q k ) n k=1 be two probability vectors and (c k ) n k=1 be non-negative and non-increasing.Then if p majorizes q, i.e., for all 1 k n, k i=1 p i k i=1 q i holds, then p, c q, c . We continue with an "anti-concentration" result, i.e., a lower bound on the probability that a binomial random variable is significantly larger than its expectation. Lemma A.3.Let m, n be integers such that m n log n.Further, let p be a probability satisfying p ∈ [1/(2n), 1/2] and let X ∼ Bin(m, p).Then for any constant ξ ∈ (0, 1), there exists a constant κ 0, such that Proof.Since X ∼ Bin(m, p), we know that A.2 Concentration Inequalities We now proceed by stating a standard Chernoff bound. Figure 1 . 
Figure 1.1: The b = 750 balls of the latest batch (shown in red) allocated over the n = 35 bins, (left) for Two-Choice and (right) for (1 + β) with β = 1/2. Observe that Two-Choice allocates more aggressively on the bins that are lightly loaded at the beginning of the batch, while (1 + β) spreads the allocations more evenly.
Figure 1.2: In the b-Batched setting for large batch size b, the gaps achieved by the processes are ordered by their maximum entry in the probability allocation vector p: Three-Choice with max_{i ∈ [n]} p_i ≈ 3/n, Two-Choice with max_{i ∈ [n]} p_i ≈ 2/n, and (1 + β) with max_{i ∈ [n]} p_i ≈ (1 + β)/n, for β = 0.5 and for β chosen as a function of (n/b) · log n. See Fig. 6.3 for full details of the experiment.
b-Batched setting with weights and random tie-breaking — Parameters: batch size b, probability allocation vector p, weight distribution W. Iteration: for each t = 0 · b, 1 · b, 2 · b, . . .
Lemma 4.4 (cf. [20, Lemma 5.4]): Consider any Finite-MGF(ζ) distribution W with constant ζ > 0. Then, for any steps t_0 ≥ 0 and t_1 ∈ [t_0, t_0 + n^3 log^3 n], with high probability every ball sampled in this interval has weight O(log n).
Table 1.3: Overview of the gap bounds in previous works (rows in gray) and the gap bounds derived in this work (rows in green). All gap bounds hold with probability at least 1 − o(1); lower bounds hold for sufficiently large m.
2023-02-10T06:42:35.274Z
2023-02-09T00:00:00.000
{ "year": 2023, "sha1": "202cfb9269568560aae0958203e1190c66d6765e", "oa_license": "CCBY", "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3558481.3591088", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "202cfb9269568560aae0958203e1190c66d6765e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
5786565
pes2o/s2orc
v3-fos-license
Are We Really Getting Conservation So Badly Wrong? In October 2010, some 18,000 delegates from 193 nations met in Nagoya, Japan, for the 10th Conference of the Parties (COP) to the Convention on Biological Diversity, the international treaty to protect biodiversity. In Nagoya, governments reaffirmed their concern over the continuing loss of biodiversity and set new targets to address the crisis. Among the national delegations, indigenous people, and diverse stakeholders present, lobbying by international nongovernmental organizations (NGOs) contributed to a COP decision to expand protected areas to 17% of terrestrial ecosystems (up from the current 12.9% coverage globally) and marine areas to 10% (currently at just over 1%). 
Against this background, Rosaleen Duffy argues that mainstream conservation efforts are failing in her new book, Nature Crime: How We're Getting Conservation Wrong. Duffy, a professor of international politics at Manchester University, questions the importance of international NGOs in setting global and national conservation agendas, the development of alliances between NGOs and the private sector, and the alienation of local communities by conservation practices (which constitute, in Duffy's words, ''the darker side of conservation''). With chapters that cover the international wildlife trade, global markets, the local costs of conservation, poaching, ivory trade bans, and the role that conflicts play in habitat and resource loss, Duffy addresses many controversial topics in this thought-provoking book. She questions the need for biodiversity conservation to be linked to the concept of wilderness and the exclusion of local people, illustrating her arguments with studies based on her own field work and other publications. The book challenges the idea that poverty is a primary driver of habitat and wildlife loss; many conservationists would agree. The global trade in wildlife, for instance, is big business and so profitable that it is often run by organized crime syndicates that also specialize in drug running and human trafficking. The trade is global, dramatically impacting rare and endangered wildlife in developing countries but also targeting countries such as the United States and United Kingdom as well. It is horrifying to realize that bears are being killed in US national parks so that their gall bladders can be shipped to the Far East for the ''traditional medicine trade'' and that illegal immigrants lost their lives harvesting cockles in the treacherous sands of Morecombe Bay to provide gourmet dishes for Western Europe. Because of the influence of global markets on resource use-everything from rhino horn to sapphires and coltan, a metallic ore used in mobile phones-Duffy believes that conservationists, especially the large international NGOs like WWF, Wildlife Conservation Society, Conservation International, and The Nature Conservancy, are taking the wrong approach to saving wild nature. She feels that conservation initiatives focus too little on the real drivers of biodiversity loss-the resource demands of the rich world-and too much on local problems, so that efforts to conserve wildlife criminalize local communities and even promote violence against them. While most conservation funding is targeted to protected areas and supplementing national efforts to protect unique habitats and wildlife, NGOs such as WWF, Conservation International, and Flora and Fauna International are all also involved in awareness campaigns to change resource use patterns in the richer nations. TRAFFIC and Wildlife Conservation Society are working with national agencies to address cross-border illegal wildlife trade. Nevertheless, protected areas are the cornerstones of biodiversity conservation, and for some large-ranging species will be the only places where they can survive. Not all reserves need to be managed by state agencies, however; there is good evidence that reserves managed by indigenous and local communities can be equally or more effective in protecting habitats and species. Conservationists in general are very aware that establishing new protected areas may reduce access to resources for poor communities. 
The World Bank and other development agencies even have specific operational policies to mitigate such impacts, and many projects have tried to reconcile the legitimate needs of conservation and local people. Unfortunately, there are no silver bullets in conservation, and most successes involve ''trade-offs'' and an integrated menu of enforcement, incentives, and champions. Sadly, many integrated conservation and development projects (ICDPs), though well-intentioned, have failed to meet either their conservation or development objectives. New livelihood options tend to be supplementary rather than alternative and often less profitable than the more damaging activities they seek to replace. Why stop illegal logging or clearing protected forest for high-value crops such as cinnamon or coffee if there is no danger of arrest and wealthy and highplaced officials are backing the venture? Moreover, it is totally unrealistic to expect under-resourced conservation organizations to take on responsibility for poverty alleviation and good governance in situations where policy failures, weak government, and poor law enforcement enable illegal logging, wildlife trade, and overexploitation of natural resources. Indeed, the lessons to be drawn from past ICDPs will be highly relevant to implementation of the Reducing Emissions from Deforestation and Degradation (REDD) agenda, which aims to link efforts to reverse climate change with better forest management and conservation. As human populations continue to grow and natural habitats and species are lost to agricultural expansion and over-harvesting, humankind faces some difficult choices. Biodiversity loss, water shortages, and food security, already serious problems, will become even more urgent environmental issues in the coming decades, and will only worsen with climate change. Protected areas can help to mitigate some of these impacts by storing and sequestering carbon and safeguarding critical ecosystem services-such as water flows and water quality, coastal and flood protection, fisheries production, and pollination-on which all human societies depend. Greater appreciation and protection of these values could help people cope better with the impacts of climate change, especially the poorest and most vulnerable communities for whom Duffy has special concern. Overall, this book is an interesting, but sometimes infuriating, read. The author raises valid concerns about important issues, but there is much to challenge. For instance, most conservation NGOs (and the World Bank) understand very well the limited potential of ecotourism, which may produce substantial local benefits at some popular sites but is certainly not a universal remedy. While acknowledging the social, economic, and political complexity of many conservation problems, Duffy herself falls into the trap of oversimplification. Her sympathies for local people lead to exaggerated and dramatic statements that conservation promotes ''shoot to kill'' policies and portrays Africans as ''black/bad/poachers/rebels'' while conservationists are seen as ''white/ good/saviors of wildlife.'' This may make good copy, but is far from the truth. Conservation is not just a Western agenda. Protected areas are national commitments and international and local NGOs work with local field staff. It is hard to overstate the dedication and pride of rangers in the Virunga and Garamba national parks in Democratic Republic of Congo who work to protect endangered wildlife in a region wracked by civil war. 
Or the empowerment of tribal communities and village women in India who benefit from ecodevelopment projects. From South America to the Pacific islands and from Africa to East Asia, conservation dollars are making a positive difference for wildlife and local peoples. So read the book and think carefully about the arguments. But don't cancel your subscription to WWF just yet.
2016-05-12T22:15:10.714Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "9d7bb16559af3c2c07f388e7c1f905ad8560de58", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.1001010&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9d7bb16559af3c2c07f388e7c1f905ad8560de58", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
73421613
pes2o/s2orc
v3-fos-license
Nonadditivity of the Adsorption Energies of Linear Acenes on Au(111): Molecular Anisotropy and Many-Body Effects Adsorption energies of chemisorbed molecules on inorganic solids usually scale linearly with molecular size and are well described by additive scaling laws. However, much less is known about scaling laws for physisorbed molecules. Our temperature-programmed desorption experiments demonstrate that the adsorption energy of acenes (benzene to pentacene) on the Au(111) surface in the limit of low coverage is highly nonadditive with respect to the molecular size. For pentacene, the deviation from an additive scaling of the adsorption energy amounts to as much as 0.7 eV. Our first-principles calculations explain the observed nonadditive behavior in terms of anisotropy of molecular polarization stemming from many-body electronic correlations. The observed nonadditivity of the adsorption energy has implications for surface-mediated intermolecular interactions and the ensuing on-surface self-assembly. Thus, future coverage-dependent studies should aim to gain insights into the impact of these complex interactions on the selfassembly of π-conjugated organic molecules on metal surfaces. F π-conjugated organic molecules adsorbed on metallic substrates, the binding mainly arises from the π-electrons interacting with the metal surface and accordingly the binding strength strongly depends on the size of the π-electron system. For chemisorbed molecules, e.g., benzene and naphthalene on Pt(111), it is widely accepted that the adsorption energy is additive and scales linearly with the size of the π-electron system. However, for physisorbed molecules no clear picture exists about the scaling of the binding energy with the extent of the electronic system. Consequently, investigating the effects of the size of the π-electron system on the binding strength of physisorbed compounds on a metal surface allows the much needed benchmarks for metal−π interactions to be obtained. The benzene/Au(111) system has been investigated in detail, and recently we precisely established the binding energy of benzene on Au(111) and also on the other (111)oriented coinage metal surfaces by a combined experimental and theoretical study. In contrast, for the larger acenes adsorbed on (111)-coinage metals only for some molecular systems binding energies have been determined from rather unprecise methods, such as the Redhead approximation. In addition, van der Waals (vdW) interactions have been included at a rather low level in available DFT+vdW calculations; thus, reliable adsorption energy values for acenes on metals are not yet available. Here, we systematically study the binding properties of naphthalene, anthracene, tetracene, and pentacene on Au(111) by means of detailed coverage dependent temperatureprogrammed desorption (TPD) measurements and by applying the so-called complete analysis in combination with recently developed first-principles calculations that account for the nonlocality of electron correlation. Both our experiments and calculations yield a strongly nonadditive adsorption energy as a function of the acene length. We attribute this nonadditive effect to the anisotropic polarization of the acenes stemming from electronic many-body correlations. Figure 1 shows prototypical TPD spectra for acenes of varying size on Au(111). One can observe a shift toward higher desorption temperatures with rising number of phenyl rings (π-electrons), in agreement with ref 10. 
Specifically, for all acenes a low-temperature desorption peak is observed, which can be associated with desorption from the second or higher layers (multilayers). The desorption features at higher temperatures can be assigned to desorption from the monolayer (ML), i.e., molecules in direct contact with the metal substrate, which are more strongly bound compared to molecules above the monolayer. In the case of benzene, naphthalene, and pentacene, a third desorption peak is found. We attribute this to desorption from a more densely packed compressed phase, as reported for other aromatic molecules on noble metal surfaces. 20−27 Note that the temperature range in which monolayer desorption occurs increases with increasing number of phenyl rings. To precisely analyze the adsorption properties of acenes on Au(111), we determined the binding energies of naphthalene, anthracene, tetracene, and pentacene as a function of coverage by using the so-called complete analysis method. 17 This method has the advantage that no guess for the pre-exponential factor (ν) has to be made. This is of particular importance because the value of this factor depends on a number of system parameters such as the size of the molecule and its vibrational degrees of freedom, among other contributions. 28 Furthermore, it is the only method which allows analyzing measurements to determine the coverage dependency of ν and the desorption energy (E Des ). 18 We already successfully used this method to elucidate the binding energies of benzene on the coinage metal surfaces Au(111), Ag(111), and Cu(111) 8,9 as well as azobenzenes on noble metals. 21,29,30 To apply the complete analysis, TPD measurements over a wide range of initial coverages in the submonolayer regime have to be carried out, as exemplarily displayed in Figure 2a for pentacene/Au(111). We defined a monolayer to the desorption spectrum shown in black. The integral of this spectrum is used as a reference to determine the coverage of all other TPD spectra shown in Figure 2a. While the falling edges at the high temperature cutoff of the monolayer to submonolayer desorption peaks lie on top of each other, the peak maxima strongly shift to lower temperatures with increasing coverage. This is a clear indication for repulsive lateral and substrate-mediated interactions. 8,9,21,31 By applying the complete analysis evaluation routine, the binding energy as a function of coverage can be determined, which is displayed in Figure 2b. The data are fitted with a second-order polynomial fit resulting in a desorption energy in the limit of vanishing coverage (the intercept with the y-axis) of E Des (θ→ 0 ML) = 1.80 ± 0.15 eV. For comparison with theoretical data, this experimental value measured at finite temperatures has to be corrected to E Des at 0 K by adding 3/2k B T Des (≈ 0.03 eV), yielding a value of 1.83 ± 0.15 eV. For the other acenes, we also analyzed the binding properties as a function of coverage using the complete analysis (see Supporting Information (SI)); the binding energy values for vanishing coverage are summarized in Figure 3 and Table 1. As can be clearly seen, the binding energy scales nonadditively with the number of πelectrons. To rationalize these experimental findings, in the following we present and discuss our first-principles calcu-lations that account for nonlocality of electronic correlation effects. We have carried out DFT+vdW calculations for the adsorption of all studied acenes (from benzene to pentacene) 3. 
Desorption energy in the limit of vanishing coverage for the acenes, i.e., benzene, naphthalene, anthracene, tetracene, and pentacene on Au(111). The value for benzene (Bz)/Au(111) determined via the complete analysis of TPD data has been adopted from ref. 9 Computed binding energies for the acenes on Au(111) at the most preferable sites using PBE+vdW surf and PBE+MBD calculations. The dashed line assumes a simple additive behavior of the adsorption energy per π-electron based on the measured E Des for benzene. The Journal of Physical Chemistry Letters Letter on Au(111). We employ a relatively large (9 × 5) supercell of Au(111) that allows comparison to low-coverage experiments, since lateral interactions in the (9 × 5) cell are negligible. In order to explore the infuence of different treatments of vdW interactions, we have employed two DFT+vdW methods, namely PBE+vdW surf32 and PBE+MBD. 19,33 The comparison between these two sets of calculations allows us to assess the potential relevance of many-body correlations (MBD) beyond the pairwise approximation (vdW surf ). As can be observed in Figure 3, the PBE+vdW surf calculations predict that the adsorption energy of acenes increases in an additive linear fashion, as expected from a simple atom-pairwise approximation for the vdW interactions, but in strong contrast to experimental observations. We stress that any pairwise approximation to vdW interactions would yield a simple additive scaling of the adsorption energy. Conversely, PBE +MBD calculations yield a pronounced nonadditive behavior of the adsorption energy as a function of acene size, and the PBE+MBD adsorption energies turn out to be in excellent agreement with experiment (see Figure 3 and Table 1). The fraction of the adsorption energy induced by many-body electronic correlations strongly increases with molecular size, growing from 0.2 eV for benzene to almost 0.7 eV for pentacene. The observed nonadditive behavior of the adsorption energy can be understood in terms of anisotropy of molecular polarization which stems from many-body electronic correlations. Already in the gas phase, the polarizability of acenes is strongly anisotropic. For example, the perpendicular component of the polarizability of benzene is twice smaller than the in-plane component. 8 The anisotropy keeps growing with molecular size. 34 Upon adsorption on a metal surface, the polarizability of the combined molecule/surface system can strongly deviate from the sum of polarizabilities of the isolated molecule and pristine surface. We already found in ref 8 that for the benzene/Au(111) system the perpendicular component of the polarizability grows, while the in-plane component decreases, upon adsorption. This change in polarizability components results in less effective coupling between the inplane fluctuations of the electron density of the benzene molecule and the surface plasmons, leading to a concomitantly smaller vdW attraction in PBE+MBD calculations compared to the PBE+vdW surf method. A similar mechanism is in play for larger acene molecules, with even larger (de)polarization than that found for benzene/Au(111). For the pentacene molecule on Au(111), we observe that the perpendicular (zz) component of the polarizability grows by 11 bohr 3 , while the in-plane (xx and yy) components are reduced by 7.2 and 4.0 bohr 3 , respectively, compared to the separated molecule and surface. 
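The size of the nonadditive effect described here can be illustrated with a short back-of-the-envelope estimate: under strictly additive per-π-electron scaling, the benzene value would simply be multiplied by the ratio of π-electron counts. In the sketch below, only the pentacene value of 1.83 eV is taken from the text; the benzene reference value of 0.68 eV and the per-π-electron scaling rule are illustrative assumptions, so the printed numbers indicate the form of the estimate rather than the paper's exact figures.

# Back-of-the-envelope estimate of the nonadditivity of acene adsorption
# energies on Au(111). The benzene reference value is an assumed placeholder;
# only the pentacene value (1.83 eV, 0 K, zero-coverage limit) comes from the text.

PI_ELECTRONS = {"benzene": 6, "naphthalene": 10, "anthracene": 14,
                "tetracene": 18, "pentacene": 22}

E_BENZENE = 0.68              # eV, assumed reference desorption energy for benzene/Au(111)
E_MEASURED_PENTACENE = 1.83   # eV, measured desorption energy of pentacene (0 K, zero coverage)

def additive_prediction(molecule):
    # Adsorption energy under strictly additive per-pi-electron scaling.
    return E_BENZENE * PI_ELECTRONS[molecule] / PI_ELECTRONS["benzene"]

predicted = additive_prediction("pentacene")
print(f"additive prediction for pentacene: {predicted:.2f} eV")
print(f"measured value:                    {E_MEASURED_PENTACENE:.2f} eV")
print(f"nonadditive reduction:             {predicted - E_MEASURED_PENTACENE:.2f} eV")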
The decrease of the in-plane polarizability, which increases with acene size, explains the reduction and nonadditivity of acene adsorption energies compared to a simple additive scaling predicted by pairwise additive vdW models. The presented calculations rely on several approximations whose accuracy we will briefly discuss here (see Methods section for more information). First, we used static unrelaxed structures for the acene molecules adsorbed at an equilibrium vertical position. For benzene/Au(111), the change in the adsorption energy due to full geometry relaxation was found to be 0.02 eV; 35 hence, it is completely negligible compared to the energies in Table 1. Second, at the desorption temperatures in the low coverage limit the molecules are mobile and hence there is no unique adsorption site. Thus, the cleanest comparison of experiments and calculations at low coverage is done by assuming an unreconstructed Au(111) surface. When the experimental and theoretical results are merged, both elucidate the appreciable nonadditivity of the binding energy with the size of the π-electron system. As can be seen in Figure 3 and from Table 1, our calculated binding energies using the first-principles PBE+MBD method are in excellent agreement with the experimentally determined values. Thus, we conclude that the nonadditive behavior can be explained by the anisotropy of the molecular polarization originated from many-body electronic correlations. The nonadditivity identified for the single molecules may also affect neighboring adsorption sites and thus influence the binding energies at higher coverages in the submonolayer regime. The TPD data clearly reveal the existence of repulsive lateral and substratemediated interactions in the submonolayer regime. Moreover the temperature range in which monolayer desorption occurs rises significantly with increasing molecular size, providing strong motivation for further studies at higher coverages. In summary, we employed temperature-programmed desorption and first-principles calculations to precisely determine the binding strength of the acenes (benzene to pentacene) on the Au(111) surface. Contrary to chemisorbed molecules on metal surfaces, for which the adsorption energy scales additively with the size of the π-electron system, we found a strong nonadditivity for physisorbed molecules in the limit of vanishing coverage (single molecule) by both theory and experiment. Our PBE+MBD calculations that account for many-body correlations resulted in excellent agreement with the experimentally determined binding energies. The nonadditive behavior of the adsorption energy was attributed to the anisotropy of the acenes molecular polarization stemming from electronic many-body correlations. We anticipate that the identified nonadditive behavior of the binding energies will also have an impact on substrate-mediated intermolecular interactions and accordingly on self-assembly processes at surfaces at higher coverages in the submonolayer regime. Hence, future work should focus on coverage-dependent studies to elucidate substrate-mediated intermolecular interactions. All experimental values are corrected to E Des at 0 K and correspond to the limit of zero coverage (see text). b The binding energy value has been adopted from ref 9. The Journal of Physical Chemistry Letters Letter ■ EXPERIMENTAL AND COMPUTATIONAL METHODS Experimental Methods. All experiments were performed under ultrahigh vacuum conditions at a base pressure of 1 × 10 −10 mbar. 
The crystals were mounted onto a liquid nitrogen cooled cryostat, and together with resistive heating a temperature range (measured directly at the substrate via a thermocouple) between 100 and 800 K was achievable and precisely controllable. Crystals were prepared by a standard cleaning procedure including Ar + sputtering and subsequent annealing to 750 K. The respective molecules were evaporated from an effusion cell (anthracene, tetracene, and pentacene) or via a leak valve (naphthalene) and deposited on a Au(111) single crystal (the evaporation and sample temperatures for all investigated molecules can be found in the SI). To record TPD spectra, the samples were heated with a constant heating rate of β = 1 K/s and the desorbing acences were monitored with a quadrupole mass spectrometer. The complete analysis method, which has been applied to analyze the coverage dependent TPD data, has been described in detail in ref 9. Computational Methods. For all DFT calculations, we used the all-electron/full-potential electronic-structure code FHIaims. 36,37 The PBE 38 exchange-correlation (XC) functional and "light" computational settings were used for the calculations. The electron density was converged to 10 −5 e, and the total energy, to 10 −6 eV. Relativistic effects were included via the atomic scalar zeroth-order regular approximation. 39 The electronic levels around the conduction level were broadened with a Gaussian function with a width of 0.1 eV. The total energies were calculated for zero-broadening based on the entropy of the electron gas. We have calculated the binding energies for acene molecules adsorbed on the Au(111) surface using the single-point calculations with both the PBE+vdW surf32, 40 and PBE+MBD 19 methods. The surface slab was modeled with a 9 × 5 supercell having four metallic layers, and a vacuum space of 40 Å was included perpendicular to the surface to avoid the interaction between periodic images. The acene molecules were placed 3.08 Å above the topmost Au layer of the slab. The difference in adsorption energetics was studied locating the acene molecules on the fcc site and bridge site of the Au slab. We used a Monkhorst−Pack grid of 2 × 4 × 1 k-points. The adsorption energies were calculated using where E acene/Au(111) is the total energy of the molecule/surface system, E Au(111) is the energy of the bare Au(111) slab, and E acene is the energy of the isolated acene molecule in the gas phase. Several approximations have been employed in our calculations to understand the energy trends for adsorption of acenes on Au(111). First, we used static unrelaxed molecular geometries for acenes. This approximation was assessed for the benzene/Au(111) previously, 35 finding a negligible impact of 0.02 eV in the adsorption energy. Second, at the desorption temperatures in the low coverage limit the molecules are mobile and hence there is no unique adsorption site. Thus, the cleanest comparison of experiments and calculations at low coverage is done by assuming an unreconstructed Au(111) surface. * S Supporting Information The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.jpclett.9b00265. Sample preparation conditions, temperature-programmed desorption data, coverage dependent binding energies and pre-exponential factors of naphthaline, anthracene, and tetracene adsorbed on Au (111)
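The displayed adsorption-energy expression referenced in the Computational Methods above was lost in extraction; from the quantities defined there it presumably takes the standard total-energy-difference form

E_ads = E_{acene/Au(111)} − ( E_{Au(111)} + E_{acene} ),

where, under this convention, a negative E_ads corresponds to a bound state. Whether the paper reports the quantity with the opposite sign, so that binding energies appear as positive values as in the tables, cannot be recovered from the extracted text and is left here as an assumption.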
2019-03-08T14:19:04.278Z
2019-02-15T00:00:00.000
{ "year": 2019, "sha1": "c0f5ea28a545074bd0046c4889f505c4fb216428", "oa_license": "CCBYNCSA", "oa_url": "https://orbilu.uni.lu/bitstream/10993/40825/1/142-acenes-Au111-nonadditivity-JPCL-2019.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "6cc284f9ca638202cd8b180c17d6c239e27a9a68", "s2fieldsofstudy": [ "Chemistry", "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
260621251
pes2o/s2orc
v3-fos-license
Evaluation of baseline optic disc pit and optic disc coloboma maculopathy features by spectral domain optical coherence tomography Purpose The aim of this study is to describe and compare the baseline demographic, ocular, and imaging characteristics of a cohort of patients with optic disc pit (ODP) or optic disc coloboma (ODC) maculopathy. Methods This retrospective study included patients diagnosed with ODP or ODC on clinical examination between June 2017 and December 2022. These patients’ baseline demographics, ocular characteristics, and optical coherence tomography (OCT) imaging characteristics were analyzed. Results Fundus examination revealed 11 eyes of 11 patients with ODP and 14 eyes of 9 patients with ODC, respectively. On OCT, maculopathy was observed more frequently in ODP (n = 10) than in ODC (n = 4) [p = 0.004] cases. Eyes with ODP were more likely to exhibit retinoschisis and/or serous macular detachment [SMD] (n = 7, 70%), communication of the retinoschisis with the optic disc (p = 0.015), whereas the SMD did not communicate with the optic disc (p = 0.005), and significant outer retinal layer thinning (p = 0.015). In contrast, eyes with ODC exhibited only SMD (p = 0.005) and no retinoschisis on the non-colobomatous retina. SMD in ODC communicated with the margin of the optic disc. In both clinical entities, hyperreflective foci were observed in the SMD. Conclusion In summary, baseline maculopathy characteristics on OCT, including its type, location, and relationship to the optic disc, are among the most distinguishing characteristics between an ODP and an ODC. Trial Registration Number Not applicable. Supplementary Information The online version contains supplementary material available at 10.1186/s40942-023-00484-7. Introduction Excavated congenital optic disc anomalies include megalopapilla, peripapillary staphyloma, optic disc coloboma (ODC), optic disc pit (ODP), and morning glory disk anomaly (MGDA) [1].ODC, ODP and MGDA share a common embryological origin, exhibit similar optic disc morphology, remain stationary, and are frequently associated with serous retinal detachments [2,3].The most common explanation regarding the formation of congenital cavitary optic disc entities proposes that they arise from either the incomplete closure of the fetal fissure during embryogenesis or an impaired differentiation of the peripapillary sclera from the primary mesenchyme [4].Despite sharing a similar embryonic origin, these entities exhibit distinct visual characteristics when observed at the optic disc.An ODP refers to a central depression resembling a crater in the optic disc, where the typical tissue of the optic nerve is absent.This condition is predominantly found on the temporal aspect of the optic disc [5].An ODC is distinguished by a concave excavation, typically with well-defined boundaries, located inferiorly.It lacks a central glial tuft and typically exhibits normal vasculature within the optic disc [5].In contrast, the primary ophthalmoscopic characteristics of a MGDA consist of a conical depression with a tuft of glial tissue located at the center of the depression, accompanied by a pronounced and increased retinal vascularity extending outward from the outer margins of the optic disc [5,6].Congenital ODPs are seen alone or occasionally in combination with ODCs [7,8].These observations suggest that congenital ODPs may be pathologically related to ODCs. 
In terms of vision, patients with ODPs and ODCs typically exhibit excellent vision, unless further complicated by retinal schisis (splitting) and serous macular detachment (SMD) [9].On the other hand, MGDA is associated with an increased risk of retinal detachment and poor visual acuity [9].The only reason to consider treatment/ intervention in cases of ODP or ODC is the development of SMD [10].Treatment/intervention is generally considered for cases demonstrating new macular serous detachment with recent onset visual symptoms or cases with progressive increase in the SMDs [10][11][12].A SMD affects approximately two-thirds of patients with congenital ODPs [13].Optical coherence tomography (OCT) has revealed that retinoschisis-like retinal separations occur frequently during the development of SMDs in eyes with congenital ODPs [14].In eyes with ODC, similar SMDs with retinoschisis-like separations have been reported [15,16].The exact etiology of macular detachments caused by ODPs or ODCs is still unknown.Furthermore, the pathogenesis for developing maculopathy in these embryologically similar entities could be significantly different. In order to conduct a more in-depth investigation, we aimed to compare and report the baseline OCT findings in eyes with ODP and ODC maculopathy, as well as provide a plausible theory for its development in these cases. Methods Between June 2017 and December 2022, cases diagnosed with congenital ODP and ODC at a tertiary eye hospital were included in this retrospective observational study.A typical ODP is a solitary, ovoid, grey-white crater-like excavation of the optic disc, usually at its temporal margin.The ODC is distinguished by a bowl-shaped excavation that is often inferiorly located and has sharp borders.According to Ida-Mann's classification, this is a type 4 coloboma [17].Other congenital optic disc cavitary anomalies such as MGDA, Pedlar's coloboma, megalopapilla and peripapillary staphyloma and other acquired excavated optic disc pathologies such as avulsed optic nerve following trauma were excluded from the study. All of these cases' medical records were reviewed, and demographic and ophthalmic data were compiled.Age, gender, laterality of involvement, visual acuity, spherical equivalent, intraocular pressure of both the study and the fellow eye, and the presence of additional choroidal coloboma were all recorded.The visual acuity was initially noted in Snellen's format and was later converted to logarithm of minimum angle of resolution (logMAR) for statistical purpose.On OCT scans obtained with the Spectralis (Heidelberg Engineering, Germany) machine, the presence of maculopathy following an ODP or ODC was confirmed.The horizontal line raster OCT scans were obtained using the enhanced depth imaging mode passing through the optic nerve head and macular region.On OCT, maculopathy secondary to ODP or ODC was identified by the presence of retinoschisis and/or SMD.Other features of maculopathy that were observed included communication of the retinoschisis and/or SMD with the ODP or ODC, the presence of outer retinal layer thinning with increased visibility of the underlying choroidal structures, and hyperreflective clumps within the SMD. 
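The Snellen-to-logMAR conversion used for the acuity data follows the standard relation logMAR = log10(denominator/numerator) of the Snellen fraction, so that 6/6 (or 20/20) maps to 0.0 and larger values indicate worse acuity. The sketch below is a minimal illustration of that conversion; the function name and example fractions are illustrative, not patient data.

import math

def snellen_to_logmar(numerator, denominator):
    # Convert a Snellen fraction (e.g. 6/18 or 20/60) to logMAR:
    # logMAR = log10(denominator / numerator); 6/6 maps to 0.0.
    return math.log10(denominator / numerator)

# Illustrative examples only
for num, den in [(6, 6), (6, 18), (20, 60)]:
    print(f"{num}/{den} -> logMAR {snellen_to_logmar(num, den):.2f}")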
Statistical tests All data were analysed using GraphPad Prism version 9.5.0 (730) for Windows, GraphPad Software, San Diego, California USA, www.graphpad.com.The vision data at presentation was documented as Snellen's vision data and was converted to logarithm for minimum angle of resolution for analytical purpose.Only statistical tests related to the analysis of non-parametric data were used in this study.Quantitative data between the 2 groups of cases were analysed using the Mann-Whitney U test.Chi-square test was used to compare the categorical data between 2 groups.P values < 0.05 were considered statistically significant. Results Eleven eyes of eleven patients with ODP and fourteen eyes of nine patients with ODC were included in this study.These anomalies were identified in 15 patients who presented with diminished vision while in the remaining 5 cases, the ODPs and ODCs were identified incidentally during ocular examination screening.In Table 1, comparisons between demographic and ocular characteristics are described in detail.Patients with ODC presented to the clinic at a younger age than those with ODP (p = 0.02).ODC eyes exhibited bilateral involvement (n = 6, 67%), and both the study eyes and fellow eyes exhibited the presence of additional choroidal coloboma.The spherical equivalents of the study eye and fellow eye were comparable between ODP and ODC cases.The visual acuity of the fellow eye was significantly lower in the ODC group (p < 0.05), whereas the visual acuity of the study eye was comparable between the two groups.In the ODP (n = 10) group, maculopathy was more prevalent than in the ODC (n = 4) group (p < 0.05) [Figs. 1 and 2].Table 2 details maculopathy-related findings in both groups.70% (n = 7) of eyes with ODP-maculopathy exhibited retinoschisis and SMD.Communication of the retinoschisis was observed in 8 (80%) eyes and thinning of the outer retinal layer was more pronounced in 8 (80%) cases of ODP.One eye with ODP showed the presence of SMD alone.In contrast, SMD was the most common maculopathy finding in eyes with ODC.No patient presented with retinoschisis outside the coloboma.Communication between the SMD and ODC was observed in each of the four eyes with ODC maculopathy.In both groups, hyperreflective clumps were observed within the SMD (p > 0.05). Discussion This study showed significant differences between ODP and ODC maculopathy features on OCT.Greater prevalence of maculopathy, presence of retinoschisis and SMD, and continuation of retinoschisis but not SMD with the optic disc were observed in eyes with ODP, whereas eyes with ODC exhibited a lower prevalence of maculopathy, presence of SMD without retinoschisis outside the coloboma, and communication of the SMD with the ODC.In addition, there were differences between the two groups in terms of demographics and ocular characteristics. The optic nerve head is composed of retinal ganglion cell axons, blood vessels, glia, and connective tissue.The optic disc is widely regarded as the central location of impairment in congenital cavitary defects such as ODP, ODC, MGDA and megalopapilla and in acquired conditions like glaucoma and peripapillary staphyloma [5,18,19].The observed disparities in our study's findings can be attributed to different gross and histological anatomical characteristics, variations in lesion margins, and differing mechanisms underlying the onset of maculopathy in these two entities. 
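The comparisons described under Statistical tests were run in GraphPad Prism; for readers working in Python, the sketch below shows analogous calls from scipy. The 2 × 2 table follows the maculopathy counts reported above (10 of 11 ODP eyes and 4 of 14 ODC eyes with maculopathy), while the age lists are made-up placeholders rather than the study's measurements, so the printed p-values are illustrative only.

from scipy.stats import mannwhitneyu, chi2_contingency

# Placeholder age data (years); not the study's measurements
age_odp = [24, 31, 28, 40, 35, 29, 33, 45, 38, 27, 30]
age_odc = [12, 18, 9, 21, 15, 11, 19, 14, 16]

u_stat, p_age = mannwhitneyu(age_odp, age_odc, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_age:.3f}")

# 2 x 2 table: rows = ODP / ODC eyes, columns = maculopathy present / absent
table = [[10, 1],
         [4, 10]]
chi2, p_mac, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_mac:.3f}")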
Congenital optic pits commonly affect the temporal optic disc, although they have the potential to occur in various other areas.Histologically, optic pits are distinguished by the presence of dysplastic retina herniations into the subarachnoid space, often occurring through a major defect in the lamina cribrosa [20].On the other hand, an ODC is a distinct, bright white, concave depression located in the lower portion of an enlarged optic disc, with a size that is notably larger than that of an ODP [21].Both of these papillary defects have the potential for communication between various spaces, including the vitreous space, subarachnoid space, intraretinal space, subretinal space, and orbital spaces, either directly or indirectly.Therefore, an ODP is often classified as an "unusual and small coloboma" located at the optic nerve head. Our study yielded several interesting and highly significant findings when examining the demographic and ocular characteristics of these two clinical entities.The patients diagnosed with ODP exhibited a higher mean age and a greater prevalence of visual symptoms upon their initial presentation at the clinic, in comparison to patients diagnosed with ODC.Patients with ODP commonly experience visual difficulties when maculopathy develops.Maculopathy typically manifests during the period spanning the second to fourth decades of an individual's life within the context of ODP [22].Maculopathy that arises as a consequence of ODC also manifests at a similar time.However, owing to the large size of the defect and its greater likelihood of being associated with choroidal coloboma, the visual symptoms of ODC become apparent at a notably earlier stage, leading to an earlier manifestation in clinical settings [7].In addition, it was observed that the visual acuity of the fellow eyes in individuals with ODC was notably inferior compared to the visual acuity of the eyes with ODP.This phenomenon may also be attributed to the presence of choroidal colobomas in both eyes among individuals with ODC. Patients with ODC tend to present at a relatively early age to the clinic due to the manifestation of poor vision in the fellow eye and the accidental discovery of ODC in the affected eye.Ohno-Matsui et al. reported similar findings pertaining to the occurrence of choroidal coloboma in cases of ODC [23]. 
Both of these clinical conditions commonly exhibit maculopathy, which is typically characterized by the presence of intraretinal fluid/retinoschisis or SMD.A higher prevalence of maculopathy symptoms was observed in the group of individuals with ODP.One potential factor contributing to the observed phenomenon in the current study is the comparatively earlier onset of ODC cases, which may result in the gradual development of maculopathy over a period of time.The structure of the optic disc's margin may also serve as a potential barrier against the development of maculopathy during the initial phases.The classification system proposed by Ida-Mann categorizes an ODC as a choroidal coloboma of type IV [17].Hence, the anatomical characteristics of the margin in an ODC will exhibit similarities to the margin anatomy observed in choroidal colobomas located in other regions of the fundus.The intercalary membrane (ICM) is a continuation of the inner retina from the non-colobomatous area that extends over the colobomatous area.This conversion of the inner retina into an ICM may be transient or abrupt at the coloboma margin [24].Several studies utilizing swept source OCT have demonstrated that the occurrence of subclinical or clinical retinal detachments (RDs) is influenced upon the presence of microbreaks at the marginal ICM, which are caused by the continuous traction exerted by the vitreous at the coloboma margin [25,26].The development of these marginal ICM breaks may require a significant amount of time, which could explain the lack of visibility of maculopathy characteristics in ODC cases in our study. In our study, we observed that the maculopathy characteristics of SMD or inner retinal schisis presence, location, communication with the papillary defect and focal outer retinal layer defects were distinct between the two entities.We observed that eyes with ODP were more likely to exhibit simultaneous inner retinal schisis and SMD.In contrast, eyes with ODC exhibited only SMD in every case, and no patient exhibited retinoschisis outside the coloboma.In addition, we observed that the inner retinal schisis communicated with the optic disc defect in eyes with ODP, whereas the SMD failed to communicate with the ODP in most cases.However, the SMD communicated with ODC in all four eyes.These intriguing differences could be explained by the pathophysiology underlying the development of maculopathy in both of these entities.In an ODP maculopathy, regardless of the source of fluid, Lincoff et al. 
have described a generally accepted sequence of retinal fluid accumulation and progression of its formation [11,27,28].The fluid originating from the ODP initially induces an inner retinal separation resembling schisis, subsequently leading to the development of an outer layer macular hole beneath the inner layer.Subsequently, the fluid proceeds to dissect the subretinal region, resulting in the detachment of the outer retina.Additionally, it has been reported that nearly all cases of ODP maculopathy exhibit intraretinal fluid within the outer nuclear layer, while none exhibit solely subretinal fluid.This observation provides support to the hypothesis that the fluid initially infiltrates the inner retinal layers before progressing to the subretinal area.[29].It has been hypothesized that, as fluid accumulates intraretinally in eyes with ODP maculopathy, a pressure gradient forms, directing the fluid towards the outer retinal layers and then into the subretinal space [30].In one eye, which showed the presence of SMD alone in a case of ODP, the possible explanation could be a breakdown in communication between the ODP and inner retinal space as a result of a previous laser barrage to the optic disc margin.In an ODC, the vitreous' persistent traction on the ICM at the coloboma margin causes schisis-like defects in both the marginal and central portions of the ICM.This persistent traction eventually causes micro or mini breaks at the marginal ICM, allowing fluid to enter the sub retinal space outside the coloboma, resulting in subclinical or clinical retinal detachment.Retinoschisis outside the coloboma without retinal detachment is uncommon in eyes with ODC, with the exception of eyes with long-standing retinal detachment, in which the retinal detachment itself causes degenerative splitting of the inner retinal layers.This best explains the glaring differences between ODP and ODC maculopathy characteristics noted in our study.In instances of ODP and ODC, hyperreflective dots are observed in the SMD.These are comparable to the shaggy photoreceptors observed in eyes with long-standing subretinal fluid, such as in chronic CSCR, autosomal recessive bestrophinopathy, and choroidal melanoma [31][32][33][34][35].This finding also suggests that the subretinal fluid is thick, viscid and slowly resorbing, which explains why SMD absorption requires more time. ODP and ODC share a similar embryological origin, resulting from incomplete closure of the fetal fissure's proximal portion.The only difference is the size of the defect: the size of ODC defects is greater in comparison to ODP defects, with the latter being significantly smaller.As part of the choroidal coloboma spectrum, the ODC is frequently associated with other systemic syndromes, such as the CHARGE and COACH syndromes, whereas the ODP is typically sporadic and rarely associated with other systemic syndromes [36]. 
This study has clinical significance because the OCT characteristics and visual symptoms in ODP and ODC may serve as a guide when considering intervention in such cases. Observation, laser retinopexy, pneumatic retinopexy, and pars plana vitrectomy with internal limiting membrane peeling are among the various treatment modalities for ODC or ODP cases. We recommend that clinicians take a step-by-step approach when considering interventional treatment for ODP or ODC cases, based on the presence of maculopathy, progression of maculopathy, and recent onset of visual symptoms. In addition, we stress the importance of analysing the entire cube scan at the optic disc margin in order to evaluate the communication between the subretinal fluid (SRF) and the ODP or ODC when planning intervention in these cases. Moreover, the role of swept-source OCT imaging cannot be overlooked in such clinical settings, particularly in eyes with ODC.

A few drawbacks exist in the present study. Despite the utilization of the enhanced depth imaging mode on the spectral domain Spectralis OCT machine, the acquisition of a comprehensive image encompassing the entirety of the subarachnoid space and optic nerve sheath remained challenging. The optimal approach for imaging and computation in this scenario would involve the utilization of swept-source OCT imaging. Obtaining clear OCT images proved to be a difficult task, especially in cases where eyes had ODC. Follow-up changes and treatment outcomes were not addressed. The primary objective of this study was to investigate the initial imaging disparities between eyes affected by ODP and ODC maculopathy. The aim was to offer plausible interpretations for these findings and to furnish clinicians with a helpful tool for devising treatment strategies.

In summary, there are obvious disparities in the baseline maculopathy characteristics observed on OCT between patients diagnosed with ODP and ODC. The primary distinguishing characteristics between the two clinical entities are the presence of retinoschisis and SMD, along with their specific location and relationship to the optic disc.
Fig. 1 A case of optic disc pit with maculopathy. A: This cropped colour fundus image (Optos, Daytona, UK) belongs to a 26-year-old female and shows a grey-white translucent defect at the temporal portion of the optic disc, suggestive of an optic disc pit (white arrow). Her presenting visual acuity was 20/80, N12 in the affected left eye. B: The optical coherence tomography scan section through the ODP and macula shows the temporal defect suggestive of an ODP, with schisis at the nerve fibre layer and at the junction of the inner nuclear and outer plexiform layers (white arrow), schitic fluid further dissecting the outer nuclear layer of the retina (red arrow), and a focal outer retinal defect (yellow arrow) causing serous macular detachment (SMD). There is no communication of the SMD with the optic disc pit. C: The fundus autofluorescence image shows the hypoautofluorescent ODP (white arrow) at the temporal portion of the optic disc with a number of hyperautofluorescent spots (red arrow) at the posterior pole, indicative of shaggy photoreceptors and a long-standing SMD.

Table 1 Demographic data comparisons between optic disc pit and optic disc coloboma. Abbreviations: RE - right eye; LE - left eye; logMAR - logarithm of minimum angle of resolution; IOP - intraocular pressure; SD - standard deviation; D - diopters.

Table 2 OCT findings in patients with optic disc pit and optic disc coloboma. Abbreviations: RS - retinoschisis; SMD - serous macular detachment.
Number fields with prescribed norms We study the distribution of extensions of a number field $k$ with fixed abelian Galois group $G$, from which a given finite set of elements of $k$ are norms. In particular, we show the existence of such extensions. Along the way, we show that the Hasse norm principle holds for $100\%$ of $G$-extensions of $k$, when ordered by conductor. The appendix contains an alternative purely geometric proof of our existence result. Introduction Let k be a number field.In this paper we are interested in the images of the norm maps N K/k : K * → k * for finite field extensions K/k.Specifically, given an element α ∈ k * and a finite group G, does there exist an extension K/k with Galois group G such that α is a norm from K? We are able to answer this question positively if one restricts to abelian extensions of k.Furthermore, in the abelian setting, we prove the existence of such an extension from which a given finite set of elements of k * are norms.Theorem 1.1.Let k be a number field, G a finite abelian group and A ⊂ k * a finitely generated subgroup.Then there exists an abelian extension K/k with Galois group G such that every element of A is a norm from K. As an application, we obtain the following corollary.Corollary 1.2.Let k be a number field, G a finite abelian group and S a finite set of places of k.Then there exists an abelian extension K/k with Galois group G such that every S-unit of k is a norm from K. We prove Theorem 1.1 by counting the collection of abelian extensions under consideration; we obtain an asymptotic formula for the number of such extensions of bounded conductor, and show explicitly that the leading constant in this formula is non-zero.In particular, we prove the existence of infinitely many extensions with the desired properties.The strategy of proving existence via counting is widely used in analytic number theory, for example in the context of the Hardy-Littlewood circle method.Our proof of Theorem 1.1 seems to be the first case where it is implemented for number fields.Our methods even allow us to prove existence of such an extension K/k which satisfies any finite collection of admissible local conditions (Corollary 4.12). Before we can explain these more general results, we must introduce some notation.Fix a choice of algebraic closure k of k and let G be a finite abelian group.By a G-extension of k, we mean a surjective continuous homomorphism ϕ : Gal(k/k) → G.This corresponds to choosing an extension k ⊂ K ⊂ k together with an isomorphism Gal(K/k) ∼ = G.Keeping track of the isomorphism with G simplifies the set-up and the counting.It has no qualitative effect on the results; forgetting the choice of isomorphism merely scales all the counting results by | Aut(G)|.We write G-ext(k) for the set of all G-extensions of k.Given ϕ ∈ Gext(k), we write K ϕ for the corresponding number field, and Φ(ϕ) for the norm of the conductor of K ϕ (viewed as an ideal of k).Moreover, we write A * Kϕ for the ideles of the number field K ϕ .We are interested in the counting functions ) The first counts all G-extensions ϕ of k of bounded conductor, the second counts only those for which every element of A is everywhere locally a norm, the third only those for which every element of A is a global norm. 
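The displayed definitions of the three counting functions appear to have dropped out of the text above. Judging from the verbal descriptions that follow them (and from the role of the idele group A*_{K_ϕ} just introduced), they presumably take the following shape; this is a reconstruction offered for readability, not a quotation of the original display.

```latex
\begin{align*}
  N(k,G,B) &= \#\{\phi \in G\text{-}\mathrm{ext}(k) : \Phi(\phi) \le B\},\\
  N_{\mathrm{loc}}(k,G,A,B) &= \#\{\phi \in G\text{-}\mathrm{ext}(k) : \Phi(\phi) \le B,\
      A \subset k^* \cap N_{K_\phi/k}(\mathbf{A}_{K_\phi}^*)\},\\
  N_{\mathrm{glob}}(k,G,A,B) &= \#\{\phi \in G\text{-}\mathrm{ext}(k) : \Phi(\phi) \le B,\
      A \subset N_{K_\phi/k}(K_\phi^*)\}.
\end{align*}
```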
An asymptotic formula for N(k, G, B) was first obtained by Wood in [49], building on numerous special cases.In this paper we obtain asymptotic formulae for the other counting functions.Our formulae are stated in terms of the invariant ̟(k, G, A) which we now define.Definition 1.3.Let k be a number field, G a finite abelian group, and A ⊂ k * a finitely generated subgroup.For d ∈ Z ≥1 , let k d = k(µ d , d √ A).We define where |g| denotes the order of g in G and id G ∈ G is the identity element. Theorem 1.4.Let k be a number field, G a non-trivial finite abelian group, and A ⊂ k * a finitely generated subgroup.Then as B → ∞, for some c k,G,A > 0. This theorem gives an asymptotic formula for the number of G-extensions from which every element of A is a global norm.It is natural to ask how the number of such extensions compares with the total number N(k, G, B) of G-extensions of k of conductor bounded by B. We observe that N(k, G, B) = N glob (k, G, {1}, B) and note that in this case the formula of Theorem 1.4 agrees with [49,Thm. 3.1]. The next theorem generalises this observation.It says that, unless we are in a very special case, for 100% of G-extensions of k not all elements of A are norms. Theorem 1.6.Let k be a number field, G a non-trivial finite abelian group of exponent e, and A ⊂ k * a finitely generated subgroup.Then the following are equivalent: (1) v for all but finitely many places v of k.There is a nice cohomological way to interpret the condition (3) in Theorem 1.6 via certain Tate-Shafarevich groups (see §4. 6).Together with some class field theory, this will allow us to deduce the following result. Corollary 1.7.Let A ⊂ k * be a finitely generated subgroup and let e be the exponent of G. Then the limit (i) only depends on the image Ak * e of A in k * /k * e ; (ii) equals one if A ⊂ k * e ; (iii) is zero for all but finitely many finite subgroups Ak * e ⊂ k * /k * e ; (iv) is zero for all finitely generated subgroups A ⊂ k * e if and only if the extension k(µ 2 r )/k is cyclic, where 2 r is the largest power of 2 dividing e. Condition (iv) holds for example if 8 ∤ e or µ e ⊂ k * .Our next result shows that if G is cyclic then in order to have for some choice of A, the field k must have more than one prime lying above 2. Theorem 1.8.Let k be a number field, let A ⊂ k * be a finitely generated subgroup, and let G be a finite cyclic group.Suppose that k has only one prime lying above 2. Then the following are equivalent: (1) > 0; (2) every element of A is a global norm from every G-extension of k. A necessary condition for an element of k to be a global norm is that it is a norm everywhere locally.However, this is not a sufficient condition in general due to possible failures of the Hasse norm principle (HNP).Nevertheless, to prove Theorem 1.4, we reduce to the case of everywhere local norms via the following theorem, which shows that, when ordered by conductor, "most" abelian extensions satisfy the Hasse norm principle.Theorem 1.9.Let k be a number field, G a finite abelian group, and A ⊂ k * a finitely generated subgroup.Then In particular Theorem 1.9 implies that lim Theorem 1.4 can thus be proved via an asymptotic formula for N loc (k, G, A, B), which we obtain in Theorem 4.1.We prove Theorem 1.9 using a purely local criterion for failure of the Hasse norm principle (Proposition 4.2).Taking A = {1} in Theorem 1.9, we obtain the following result. 
Corollary 1.10.Let k be a number field and G a finite abelian group.Then 100% of G-extensions of k, ordered by conductor, satisfy the Hasse norm principle. Corollary 1.10 stands in stark contrast to the results of [20], where a dichotomy occurs when counting by discriminant: in op.cit.we showed that for certain finite abelian groups G a positive proportion of G-extensions can fail the Hasse norm principle, when ordered by discriminant.This contrasting behaviour illustrates the fact, already observed by Wood in [49], that counting by conductor often leads to more natural statements than counting by discriminant.In fact, after seeing the results we obtained in [20] when counting extensions ordered by discriminant, Wood remarked that the dichotomy we had observed should disappear when ordering by conductor, and conjectured the statement of Corollary 1.10. There are two reasons why it seems quite difficult to prove Theorem 1.1 when counting by discriminant, rather than conductor.Firstly, the condition that every element of A is a norm everywhere locally may be only rarely satisfied and, in the setting of [20,Thm. 1.4] where a positive proportion of G-extensions fail the Hasse norm principle, it becomes challenging to show the existence of a Gextension for which every element of A is a norm everywhere locally and the Hasse norm principle holds.Secondly, the leading constant obtained when counting by discriminant is very complicated, with potential for further cancellation, so it is difficult to prove its positivity, whereas when counting by conductor we have a simple criterion for positivity of the leading constant (see Theorem 3.1). The counting techniques employed in this paper are fairly robust and enable us to prove a strengthening of Theorem 1.1 in which we impose local conditions at finitely many places.See Theorem 3.1 and Corollary 4.11 for precise statements. Our work on the statistical behaviour of the Hasse norm principle brings together two major areas of modern number theory: namely, counting within families of number fields, and the quantitative study of the failure of local-global principles.Notable recent papers on the statistics of number fields include [1], [2], [4], [5], [18], [21], [27], [33] and [50].Some significant contributions to the study of local-global principles in families include [3], [6], [7], [8], [19], [29] and [30].For a summary of recent progress on counting failures of the Hasse principle, see [9].More specifically, the statistical behaviour of the Hasse norm principle is examined in [10], [31] and [35].In particular, in [35] Rome obtains an asymptotic formula for the number of biquadratic extensions of Q (ordered by discriminant) which fail the Hasse norm principle.Obtaining asymptotic formulae for the number of such failures for other classes of field extensions would seem to be an interesting problem. Below, we give some examples illustrating our results in a variety of settings to demonstrate the wide range of phenomena manifested by norms in extensions of number fields. 
(1) Take G = Z/nZ with 8 ∤ n and α ∈ k * not an nth power.Then Corollary 1.7 implies that for 100% of all Z/nZ-extensions of k ordered by conductor, α is not a norm.In the special case n = 2 of quadratic extensions, this result can be proved using standard techniques in analytic number theory; all other cases are new.(2) Take k = Q, α = 16 and G = Z/8Z.As is well known, 16 is an 8th power in Q * p for all odd primes p and in R * .It therefore follows from Theorems 1.6 and 1.8 that 16 is a norm from every Z/8Z-extension K/Q, despite not being an 8th power in Q. ), α = 16 and G = Z/8Z.Then, as above, we see that 16 is locally an 8th power at all places v such that v ∤ 2. Hence 16 is a local norm from all Z/8Z-extensions of k at all places v ∤ 2. However, let p, q be the two primes of k above 2.By [32, Thm.9.2.8] there exists a Z/8Zextension F/k such that F p /k p is unramified of degree 8. Therefore, 16 is not a local norm from F p /k p , and consequently not a global norm from F/k.Given the existence of one such an extension, an application of [49,Cor. 1.7] (or Theorem 3.1) yields the existence of a positive proportion of Z/8Z-extensions K/k which are unramified of degree 8 over p, thus the limit (1.2) is positive but not equal to 1 in this case. Let us explain in more detail why [32, Thm.9.2.8] applies here but not in the previous example.Recall that a place v of a number field L is said to split (or decompose) in an extension M/L if there exist at least two distinct places of M above v.All places of Q apart from 2 split in the non-cyclic extension Q(µ 8 )/Q, so that (Q, 8, Ω Q \ {2}) is a so-called special case and [32, Thm.9.2.8] does not apply in example (2).However, in example (3), q is non-split in k(µ 8 )/k: both p and q are totally ramified in k(µ 8 )/k, since 2 is split in k/Q and totally ramified in Q(µ 8 )/Q.Therefore, (k, 8, Ω k \ {p}) is not a special case and [32, Thm.9.2.8] can be applied in example (3).(4) Take k = Q, α = 5 2 and G = (Z/2Z) 2 .A simple argument (cf.Lemma 4.4) shows that 5 2 is a norm everywhere locally from every biquadratic extension of Q.By Theorem 1.9, it is thus a global norm from 100% of biquadratic extensions of Q ordered by conductor.However, 5 2 is not a global norm from ) (failure of the Hasse norm principle [11,p. 360,Exercise 5.3]).Therefore, it is not true that 5 2 is a global norm from every biquadratic extension of Q. Remark 1.12.A simple application of local class field theory (Lemma 4.4) shows that every element of k * e is everywhere locally a norm from every Gextension of k, where e denotes the exponent of G. Using this, one can show that in our results, the assumption that A is a finitely generated subgroup of k * can be replaced by the weaker assumption that the image of A in k * /k * e is finite.We have chosen to make the stronger assumption as it simplifies the exposition and some technical aspects of the proofs. We finish with a simple example which solves the problem analogous to Theorem 1.1 for field extensions of degree n with maximal Galois group. Example 1.13.Let α ∈ Q * and n ≥ 3. Then the polynomial has Galois group S n over Q(t) for all but finitely many c ∈ Q (see [26,Satz 1]).Therefore, Hilbert's irreducibility theorem implies that for infinitely many specialisations t ∈ Q, the Galois group is S n , and α is clearly a norm from such an extension, being the product of the roots of the defining polynomial. 
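Example (2) above rests on the classical fact that 16 is an 8th power in R and in Q_p for every odd prime p, even though it is not an 8th power in Q (or in Q_2). For odd p, any root of x^8 - 16 modulo p is simple (the derivative 8x^7 does not vanish there), so a root modulo p lifts to Z_p by Hensel's lemma. The following quick numerical check of the mod-p statement is an illustration, not part of the paper's argument.

```python
from sympy import primerange

def sixteen_is_eighth_power_mod(p: int) -> bool:
    """Check whether x^8 ≡ 16 (mod p) has a solution."""
    return any(pow(x, 8, p) == 16 % p for x in range(p))

# Any root mod an odd prime p is simple (8x^7 ≠ 0 there), so it lifts to Z_p
# by Hensel's lemma; hence this verifies that 16 is an 8th power in Q_p for
# every odd prime p in the tested range.
failures = [p for p in primerange(3, 2000) if not sixteen_is_eighth_power_mod(p)]
print("odd primes p < 2000 with no 8th root of 16 mod p:", failures)  # expect []
```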
1.1.Methodology and structure of the paper.In §2 we recall some of the theory of frobenian functions from Serre's book [41, §3.3], in order to help analyse the Dirichlet series which arise in this paper. In §3 we prove our main technical result, Theorem 3.1.This is a general theorem for counting abelian extensions with local conditions imposed.To prove this we study the analytic properties of the Dirichlet series corresponding to our counting functions.We achieve this with the help of the harmonic analysis techniques developed in our earlier paper [20].In our case, however, the analysis is more difficult as the singularities of our Dirichlet series will be branch point singularities, rather than poles, in general; this is reflected in the fact that ̟(k, G, A) in Theorem 1.4 can be a non-integral rational number.This section is the technical heart of the paper and is dedicated to the proof of Theorem 3.1. Let us emphasise once more that we prove Theorem 1.1 by first counting the extensions of interest and then showing that the leading constant obtained is positive.Our situation presents an interesting difficulty, however: the leading constant we obtain is not an Euler product but a sum of Euler products and, in general, cancellation within these sums may occur for some choices of local conditions.For example, a famous theorem of Wang [47] says that there is no Z/8Z-extension of Q which realises the unramified extension of Q 2 of degree 8; in this case Wright observed in [51, p. 48] that the Euler products appearing in the leading constant cancel out.We have to carefully analyse these sums of Euler products and explicitly show that no cancellation occurs in our case. In §4, we prove the major results stated in the introduction via suitable applications of Theorem 3.1 combined with Galois-cohomological techniques.At the end of §4 we also give a generalisation of Theorem 1.1 which allows one to impose local conditions on the abelian extension K/k at finitely many places. The appendix (by Yonatan Harpaz and Olivier Wittenberg) contains a purely geometric proof of Theorem 1.1.It uses descent and a version of the fibration method developed in [24] to show that the Brauer-Manin obstruction controls the failure of weak approximation on a certain auxiliary variety.The existence of the required abelian extension is then shown using a version of Hilbert's irreducibility theorem due to Ekedahl [17] (see also [42, § §3.5-3.6]). 1.2.Notation and conventions.We fix a number field k throughout the paper and use the following notation: the residue field at a finite place v q v the cardinality of the residue field at a finite place v ζ k (s) the Dedekind zeta function of k. For locally compact abelian groups A and B, we use the following notation: All finite groups are viewed as topological groups with the discrete topology. For a place v of k, a finite abelian group G, and χ ∈ Hom(k * v , G), we denote by Φ v (χ v ) the reciprocal of the v-adic norm of the conductor of Ker χ v .For every χ ∈ Hom(A * /k * , G), we let Φ(χ) be the reciprocal of the idelic norm of the conductor of the kernel of χ; this equals the norm Φ(ϕ) of the conductor of the sub-G-extension ϕ corresponding to χ via the global Artin map. 
Let K/k be an extension of number fields and α ∈ k * .We say that α is a Frobenian functions For the proofs of our main results, we will require some of the theory of frobenian functions, as can be found in Serre's book [41, §3.3].Recall that a class function on a group is a function which is constant on conjugacy classes.Definition 2.1.Let k be a number field and ρ : Ω k → C a function on the set of places of k.Let S be a finite set of places of k.We say that ρ is S-frobenian if there exist (a) a finite Galois extension K/k, with Galois group Γ, such that S contains all places which ramify in K/k, and (b) a class function ϕ : Γ → C, such that for all v ∈ S we have where Frob v ∈ Γ denotes a Frobenius element of v.We say that ρ is frobenian if it is S-frobenian for some S. A subset of Ω k is called (S-)frobenian if its indicator function is (S-)frobenian. In Definition 2.1, we adopt a common abuse of notation (see [41, §3.2.1]), and denote by Frob v ∈ Γ the choice of some element of the Frobenius conjugacy class at v; note that ϕ(Frob v ) is well defined as ϕ is a class function. We define the mean of ρ to be ] be a (not necessarily irreducible) polynomial.Then the set {v ∈ Ω k : f (x) has a root in k v } is frobenian.Indeed, take K to be the splitting field of f .Then for a place v which is unramified in K, the polynomial f has a root in k v if and only if Frob v acts with a fixed point on the roots of f over k; the set of such elements is a conjugacy invariant subset of the Galois group Γ. We require the following result on the zeta function of a frobenian function.Throughout the paper, we write q v for the size of the residue field at a finite place v.Moreover, for any place v, let ζ k,v (s) be the Euler factor of ζ k (s) at v if v is non-archimedean, and ζ k,v (s) = 1 otherwise. Proposition 2.3. Let S be a finite set of places of k containing all archimedean places and let ρ be an S-frobenian function. Assume that |ρ(v)| < q v holds for all v / ∈ S. Then the Euler product has the form for some c = c ρ > 0, and satisfies in this region the bound and the limit in (2.5) is non-zero. Proof.First, note that the Euler factors 1 + ρ(v)q −s v are holomorphic on C and non-zero for Re s ≥ 1, as |ρ(v)| < q v by assumption.Next, recall that the irreducible characters of a finite group Γ form a basis for the space of complex class functions of Γ [22,Prop. 2.30].In particular, if ϕ : Γ → C is the class function associated to ρ, then we may write where λ χ ∈ C and the sum runs over the irreducible characters of Γ.For Re s > 1, we find that where L(χ, s) denotes the Artin L-function of χ and G 1 (s) is a holomorphic function with absolutely convergent Euler product on Re s > 1/2, which is nonzero on Re s ≥ 1. For the trivial character χ = ½, we have By the Brauer induction theorem [11, Thm.VIII.7, p. 225], we may decompose each remaining L(χ, s) as a product of Z-powers of Hecke L-functions of nontrivial Hecke characters of subfields of K. Hence, we assume from now on that each L(χ, s) is an entire Hecke L-function (for some possibly different number field).By [28,Thm. 
5.35], L(χ, s) respects a zero-free region of the form (2.3), for some c < 1/4 that may depend on χ.Since there are only finitely many characters to consider, we can find a constant c that works for all of them.Decreasing c further, we obtain a bound valid in the region (2.3) (cf.[38, p.230]).Using this bound and the fact that |G 1 (s)| ≪ ρ 1 in Re s ≥ 3/4 due to the absolute convergence of its Euler product, it is simple to verify that G(s) satisfies (2.4). To verify (2.5), we start with the following fact, which is well known at least in the classical case of Dirichlet L-functions: for non-trivial χ, the Euler product of L(χ, s) converges for s = 1 and takes the value L(χ, 1).To see this, observe that log L(χ, s) can be defined for Re s > 1 as a Dirichlet series, use the prime number theorem for L(χ, s) (see [28,Thm. 5.13]) and partial summation to verify that this Dirichlet series converges for s = 1, and apply Abel's theorem. Since G 1 (s) has an absolutely convergent Euler product for Re s > 1/2, this shows that the Euler product of ζ k (s) −m(ρ) F (s) = G(s) does indeed converge at s = 1 and takes the value Recalling our assumption that |ρ(v)| < q v , it is clear that the right-hand side of (2.5) is non-zero. Remark 2.4. (1) Note that frobenian functions are bounded; thus the condition |ρ(v)| < q v in Proposition 2.3 is always satisfied for all but finitely many v. Counting with local conditions All of the main counting results in this paper are obtained from a more general counting result, which we present in this section.To state this result we require some notation. Statement of the result. Let G be a finite abelian group, let F be a field and F a separable closure of F .We define a sub-G-extension of F to be a continuous homomorphism Gal( F /F ) → G.A sub-G-extension corresponds to a pair (L/F, ψ), where L/F is a Galois extension inside F and ψ is an injective homomorphism Gal(L/F ) → G. For each place v of the number field k, we fix an algebraic closure kv and compatible embeddings k ֒→ k ֒→ kv and k ֒→ k v ֒→ kv . Hence, a sub-G-extension ϕ of k induces a sub-G-extension ϕ v of k v at every place v.For each place v of k, let Λ v be a set of sub-G-extensions of k v .For Λ := (Λ v ) v∈Ω k we are interested in the function which counts those G-extensions of k of bounded conductor which satisfy the local conditions imposed by Λ at all places v. (Here Φ is as in §1.2.) In general, it is difficult to say anything about the counting function given in (3.1), especially when there are infinitely many local conditions imposed.Even in the case when one imposes finitely many conditions, the set being counted may be empty, as explained in §1.1.Our main technical result imposes arbitrary conditions at finitely many places, but at the remaining places we only impose those conditions which force every element of A to be a local norm. there exists a sub-G-extension of k which realises the given local conditions for all places v. 
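As an illustration of Example 2.2 and of the mean of a frobenian function: for f = x^3 - 2 over Q, the splitting field has Galois group S_3 acting on the three roots, and 4 of the 6 elements fix a root, so the set of primes p for which f has a root mod p should have mean 4/6 = 2/3. The sketch below checks this numerically; the shortcut used (an Euler-type criterion for when 2 is a cube mod p) is standard and is chosen here purely for speed.

```python
from sympy import primerange

def two_is_cube_mod(p: int) -> bool:
    """Is 2 a cube modulo the odd prime p (p != 3)?"""
    if p % 3 == 2:
        return True                      # cubing is a bijection on F_p^* here
    return pow(2, (p - 1) // 3, p) == 1  # criterion for p ≡ 1 (mod 3)

primes = list(primerange(5, 200000))     # skip the ramified primes 2 and 3
density = sum(two_is_cube_mod(p) for p in primes) / len(primes)
print(f"observed density {density:.4f}; frobenian mean = 4/6 ≈ 0.6667")
```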
The leading constant c k,G,Λ in this theorem is given by a finite sum of Euler products (see Theorem 3.22 for an explicit expression).Our condition for positivity is only the existence of some sub-G-extension of k which realises the given local conditions; we do not require the existence of a genuine G-extension of k, so we do not need to assume that the set of G-extensions being counted is non-empty to deduce the positivity of the constant.This means that one need only look for an extension with possibly smaller Galois group to prove positivity of the constant; we use this trick to great effect when proving Theorem 1.1. We illustrate how one applies Theorem 3.1 in some simple cases.Firstly, one counts the total number of G-extensions of k by applying Theorem 3.1 with A = {1} and no local conditions, i.e. taking Λ v to be the set of all sub-Gextensions of k v for all places v.These local conditions are realised by the sub-G-extension given by the trivial extension k/k.For a more interesting example, consider the case A = {1} and the trivial local conditions Λ v = {½} for v ∈ S, which are again realised by the trivial extension k/k.This gives the following corollary.(Note that we do not need to avoid the places above 2.) Corollary 3.2. Let S be a finite set of places. Then a positive proportion of G-extensions of k, ordered by conductor, are completely split at all places in S. The rest of this section is dedicated to the proof of Theorem 3.1.All implied constants in the O and ≪ notation are allowed to depend on k, G, A and Λ. The set of places S. To prove Theorem 3.1, we are free to increase the size of S if we wish.Henceforth, we will assume that S contains all archimedean places of k and all places of k lying above the primes p ≤ |G|, that A ⊂ O * S , and that O S has trivial class group. The reader should note that many of the formulae which follow are only valid for finite sets of places S which satisfy these conditions.For example, in the case where k = Q, G = Z/8Z, A = {1}, S = ∅, the expression for the leading constant in Theorem 3.22 does not hold.To compute c k,G,Λ in this instance, we may take S = {∞, 2, 3, 5} instead. Dirichlet series. To prove Theorem 3.1 we study the associated Dirichlet series and G 2 have coprime order, and µ((Z/pZ) n ) = (−1) n p n(n−1)/2 for a prime p and n ∈ Z ≥0 .Let f be a function on the subgroups of G.For subgroups H ⊂ G, we consider the function where the sum runs over all subgroups J ⊂ H.The Möbius inversion formula for finite abelian groups [16] states that Proof.Sorting the sub-H-extensions ϕ : Gal( k/k) → H by their images, we get Call the right-hand side g(H) and apply Möbius inversion (3.3). We now consider the contribution to F Λ (s) of each subgroup H in turn.The contribution from H = {1} is either 0 or 1.From now on we focus on the contributions of the non-trivial subgroups H. Hence, in our analysis of F Λ (s) we can now focus on the inner sums Our counting problem fits very well within the class-field-theoretic framework. 
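As a small sanity check on the Möbius function recalled in Lemma 3.4 (µ((Z/pZ)^n) = (-1)^n p^{n(n-1)/2}): the defining identity of Möbius inversion, namely that the sum of µ(H/J) over all subgroups J ⊆ H vanishes for every non-trivial H, can be verified directly for H = (Z/pZ)^2, which has exactly p+1 subgroups of order p. This is an illustration only, not a computation used in the paper.

```python
def mu_elementary(n: int, p: int) -> int:
    """Möbius function of the elementary abelian group (Z/pZ)^n."""
    return (-1) ** n * p ** (n * (n - 1) // 2)

# For H = (Z/pZ)^2 the subgroups are: the trivial group (quotient (Z/pZ)^2),
# the p+1 subgroups of order p (quotient Z/pZ each), and H itself (trivial
# quotient).  The sum of mu over these quotients should vanish.
for p in [2, 3, 5, 7, 11]:
    total = mu_elementary(2, p) + (p + 1) * mu_elementary(1, p) + mu_elementary(0, p)
    print(p, total)   # expect 0 for every p
```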
For each place v ∈ Ω k , we use local class field theory (specifically, the local Artin ).Thus, we consider Λ v as a subset of Hom(k * v , H).By the compatibility of local and global class field theory, we still have Proof.Let ϕ v be the sub-G-extension of k v associated to χ v .By local class field theory we have Ker χ v = N Kϕ v /kv K * ϕv , where K ϕv is the extension field of k v associated to ϕ v .However, as v / ∈ S, by assumption in Theorem 3.1 we have f Λv (χ v ) = 1 if and only if every element of A is a local norm from K ϕv ; the result follows. Harmonic analysis. To deal with the sums we shall use a version of the Poisson summation formula from harmonic analysis.The theory relevant to us was worked out in detail in [20, §3] when counting by discriminant.The same theory transfers almost verbatim to show the validity of the Poisson summation formula for counting by conductor. However, for the purposes of Theorem 3.1, our case is special enough that we merely require a simplified version of the Poisson summation formula that can be proved using only character orthogonality for finite abelian groups.We may therefore forego some of the general theory from [20, §3] and proceed in a more explicit manner.We first recall the set-up for the harmonic analysis. ∈ S, these local functions take only the value 1 on the unramified elements by our choice of S and Lemma 3.5, and thus f Λ /Φ s extends to a well-defined and continuous function on Hom(A * , H).We define its Fourier transform to be For Re s ≫ 1, the global Fourier transform exists and defines a holomorphic function in this domain, and there is an Euler product decomposition Proof.We prove the result when v is non-archimedean, the case of archimedean v being analogous.By our choice of measures, we have This finite sum clearly defines a holomorphic function on C. If Re s ≥ 0 then the sum is ≪ k,H 1, since every summand is bounded absolutely and the number of summands is ≪ k,H 1.For the last part, we have For v ∈ S the set Λ v is non-empty by assumption.For v / ∈ S the set Λ v is again non-empty, as it always contains the trivial homomorphism k * v → H by Lemma 3.5.For s ∈ R, we therefore obtain a finite non-empty sum of positive real numbers, which is positive.Now let v be non-archimedean.Choosing a uniformiser of with Z and gives a splitting of the exact sequence This implies that the sequence since ψ v is unramified and hence Φ( We use the criterion from Lemma 3.5.We have this also shows the second assertion. In the statement of the following lemma, note that the natural map v ⊗ H ∧ is injective, as the sequence (3.7) is split exact.Therefore, we may naturally view Proof.From (3.8) and Lemma 3.7 we have Now character orthogonality gives Note that the group O * S ⊗ H ∧ is finite by Dirichlet's S-unit theorem; in particular the right-hand sum is finite. Recall that we have normalised our Haar measures on Hom(k * v , H) to be |H| −1 times the counting measure for non-archimedean v, and equal to the counting measure for archimedean v.We let S f be the set of non-archimedean places in S. 
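To make the local character groups entering these Fourier transforms concrete: for a non-archimedean place v with v ∤ |G| and residue field of size q_v, the splitting k_v^* ≅ π^Z × O_v^*, together with the fact that the wild part 1 + p_v is pro-p with p coprime to |G|, reduces everything to the residue field. The following sketch counts Hom(k_v^*, Z/nZ) under these assumptions; it is an illustration of the split sequence discussed above, not code from the paper.

```python
from math import gcd

def local_character_count(n: int, q: int) -> int:
    """Number of continuous homomorphisms k_v^* -> Z/nZ for a non-archimedean
    local field with residue field of size q, assuming gcd(n, q) = 1 (tame
    case).  Then k_v^* ≅ Z × µ_{q-1} × (1 + p_v), the wild part contributes
    nothing, and
        #Hom(k_v^*, Z/nZ) = n * gcd(n, q - 1).
    Exactly n of these characters are unramified; every other character is
    tamely ramified, with conductor q_v.
    """
    return n * gcd(n, q - 1)

for q in [3, 5, 7, 9, 11, 13]:
    print(q, local_character_count(8, q))
```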
Now Lemma 3.8 and (3.5) give where We now change the order of summation in the right-hand sum of (3.10) to obtain As H).Thus, we may apply character orthogonality to find that We therefore obtain Moreover Proof.An element χ v ∈ Hom(O * v , H) is unramified if and only if it is trivial.Furthermore, since v / ∈ S and our assumptions on S in §3.2, the ramification is tame and hence for non-trivial characters χ v ∈ Hom(O * v , H), we have Φ v (χ v ) = q v .Therefore, by Lemmas 3.5 and 3.8 we have We claim that the natural map is an isomorphism.To see this, recall that Hensel's lemma yields a split short exact sequence The kernel of a continuous homomorphism 1 + p v → H contains 1 + p n v for some n ∈ N, and the successive quotients in the filtration Inputting this into (3.11), the result follows. To study the analytic behaviour of the global Fourier transforms f Λ,H (x; s), we use the theory of frobenian functions from §2. (2) Let m be the largest divisor of Lemma 3.11. Let e be the exponent of H. Consider a function d Note that the group A/A e is finite.Moreover, our slight abuse of notation is harmless, as whether or not the polynomial appearing in (3.13) has a root is independent of the choice of representative of each element of A/A e . Proof.To prove this we choose a presentation of H ∧ .We then work coordinatewise on H ∧ , using the fact that the intersection of finitely many frobenian sets is frobenian.Thus, we reduce to the case H ∧ = Z/eZ.Here we have O * S ⊗ H ∧ = O * S /O * e S .For x ∈ O * S , we have to show that the set is S-frobenian.However, we have v for some α v ∈ A v (depending on v).We find that the set in question is the set of places v such that the equation α∈A/A e (t e − xα) = 0 has a solution in k v ; this set is frobenian (see Example 2.2).As x is an S-unit, it is easily seen that this is S-frobenian for our choice of S in §3.2. Proof.The product or sum of two S-frobenian functions is clearly S-frobenian (in Definition 2.1 one takes the compositum of the relevant field extensions).Moreover, the complement of a S-frobenian set is S-frobenian.The result therefore follows from Lemmas 3.11 and 3.12. Definition 3.14.We denote by ̟(k, H, A, x) the mean of the S-frobenian function described in Corollary 3.13. We now compare ̟(k, H, A, x) with ̟(k, H, A), as defined in Definition 1.3.̟(k, H, A, x) ≤ ̟(k, H, A, 1), the first assertion follows immediately from the second.So let us prove the second assertion.By Corollary 3.13 and Lemma 3.11, we see that ̟(k, H, A, 1) is the mean of an S-frobenian function ρ with ρ Proof. As clearly With the notation of the proof of Lemma 3.11, the corresponding class function on Gal(k e /k) is given by σ where G(x; s) is holomorphic in the region (2.3), for some c > 0, and satisfies (2.4).Moreover, we have In the case x = 1, this limit is non-zero. Proof.We consider the Euler product expansion of f Λ,H (x; s) from (3.5), where Euler factors at v / ∈ S were determined in Lemma 3.10.By Corollary 3.13 and our assumptions on S, we may apply Proposition 2.3 to obtain with a function H(x; s) that is holomorphic in a region (2.3) and satisfies the bound (2.4).By Lemma 3.6, we may multiply H(x; s) by the Euler factors f Λv,H (x v ; s) for v ∈ S while still preserving these properties (possibly for a smaller c > 0 in (2.3)).Finally, the explicit form of the limit follows from (2.5) which, together with Lemma 3.6, also shows that the limit is non-zero if x = 1. 
3.6.The asymptotic formula in Theorem 3.1.We now bring all our tools together to prove the first part of Theorem 3.1.Recall from Lemma 3.4 that we performed a Möbius inversion to obtain a sum over the subgroups H of G.Moreover, in Proposition 3.9 we used Poisson summation to understand the inner sums from Lemma 3.4.In summary, where ̟(k, H, A) < ̟(k, G, A). Proof.Follows immediately from Definition 1.3. We are now finally in the position to prove the required asymptotic formula. This proves the asymptotic formula in Theorem 3.1.Next, we study the leading constant. 3.7. Formula for the leading constant.To calculate the leading constant, we first need to understand exactly which elements of O * S ⊗ H ∧ give rise to the leading singularity in the Poisson sum (Proposition 3.9). for all but finitely many v}. Then X (k, G, A) is finite and Proof.It is enough to prove the result for G ∧ a cyclic group of prime power order.Henceforth, let G ∧ = Z/qZ, where q = p r is a prime power.We view X (k, G, A) as a subgroup of k * /k * q .First, we claim that the image of X (k, G, A) in k(µ q ) * /k(µ q ) * q is equal to the image of A. One containment is clear, as A ⊗ G ∧ ⊂ X (k, G, A).For the other, let K = k(µ q , q √ A), so K = k q in the notation of Definition 1.3.Let K v be the completion of k at a choice of place of K above v.The image of X (k, G, A) in k(µ q ) * /k(µ q ) * q is contained in the following set: {x ∈ k(µ q ) * /k(µ q ) * q : x v ∈ K * q v for all but finitely many v} .As µ q ⊂ K, an application of the Chebotarev density theorem shows that this set equals (k(µ q ) * ∩ K * q )/k(µ q ) * q (this also follows from Lemma 4.9).On the other hand, Kummer theory shows that (k(µ q ) * ∩ K * q )/k(µ q ) * q is equal to the image of A in k(µ q ) * /k(µ q ) * q , and the claim is proved.In particular, the image X (k, G, A) in k(µ q ) * /k(µ q ) * q is finite, as A is finitely generated. We now show that X (k, G, A) ⊂ O * S ⊗ G ∧ ; the rest follows from the fact that our condition is S-frobenian (see Lemma 3.12).Let x ∈ k * be such that its image in k * /k * q is in X (k, G, A).By the argument above, the image of x in k(µ q ) * /k(µ q ) * q is in (k(µ q ) * ∩ K * q )/k(µ q ) * q .In particular, x = y q for some y ∈ K * .By our assumptions in §3.2 that A ⊂ O * S and that S includes all primes dividing |G|, the extension K/k is unramified at all v / ∈ S. Therefore, for all v / ∈ S, the valuation ord v (x) = ord v (y q ) is divisible by q.Consequently, the fractional ideal xO S is the qth power of some fractional ideal I of O S .By our assumption in §3.2 that O S has trivial class group, I = zO S for some z ∈ k * .Therefore, x = uz q for some u ∈ O * S .This completes the proof.Lemma 3.21. | − 1 as this group always contains the trivial homomorphism.The result now follows from the fact that the function in Corollary 3.13 is S-frobenian. These lemmas show that the leading singularity comes from finitely many terms which are independent of S and our choice of local conditions for v ∈ S.This makes applications much easier when one is varying S (we require such applications for the proof of Theorem 1.9).Theorem 3.22.Retain the assumptions of Theorem 3.1 and the additional assumptions on the finite set of places S from §3.2.Let X (k, G, A) be as in Lemma 3.20 and let S f be the set of non-archimedean places in S. , where the product over v / ∈ S is non-zero. 
Proof.From Proposition 3.16, Proposition 3.19, and Lemma 3.21, we get the leading constant We have f Λv,G (x v ; 1) = f Λv,G (1; 1) for x ∈ X (k, G, A) and v / ∈ S by Lemma 3.21, and these factors are non-zero by Lemma 3.6.The explicit expressions for v / ∈ S follow from Lemma 3.8.For v ∈ S, we simply apply directly the definition of the local Fourier transforms from §3.4.1 (see (3.6) for a formula in the non-archimedean case) and change the order of summation. Note that the expression for c k,G,Λ is independent of S, for any S which satisfies the assumptions of §3.2.Remark 3.23.In the special case A = {1}, our constant agrees with the constant which Wood obtains in [49,Thm. 3.1], up to the factor (Res s=1 ζ k (s)) ̟(k,G,A) .This factor is missing from Wood's paper: in the proof of [49,Thm. 3.1], she mistakenly uses the equality lim s→1 (s − 1)ζ K (s) = 1, which holds for K = Q but does not hold in general (the residue is given by the analytic class number formula).Thus the right-hand side of [49,Thm. 3.1] should contain an additional factor of (Res ).An examination of the proof of Lemma 3.20 gives the bounds The following examples show that either bound can be sharp. An example where both bounds coincide is given by taking A = {1} and G ∧ = Z/2Z.One easily sees that in this case X (k, G, {1}) is trivial. Positivity of the leading constant. To finish the proof of Theorem 3.1, we need to show that c k,G,Λ > 0 if there exists some sub-G-extension which realises all the given local conditions.It suffices to consider the contributions from v ∈ S to the explicit expression given in Theorem 3.22, as the factors at v / ∈ S are clearly non-zero.By character orthogonality we have In particular, this sum is non-negative for all χ ∈ Hom( v∈S k * v , G).Hence, it suffices to show the existence of some χ such that this sum is non-zero.However, we have assumed the existence of a sub-G-extension ϕ which realises all the local conditions.Let ψ : A * /k * → G be the associated homomorphism coming from class field theory.Note It therefore suffices to show that However, for x ∈ X (k, G, A) we have x v ∈ A v ⊗G ∧ for all v / ∈ S, by Lemma 3.20.Moreover, by assumption every element of A is a local norm from K ϕ for all v / ∈ S, thus A v ⊂ Ker ψ v for all v / ∈ S by Lemma 3.5.The claim (3.16) follows, which completes the proof of Theorem 3.1. Proof of results We now apply Theorem 3.1 in various ways to prove the results from the introduction. 4.1.Asymptotic formula for everywhere local norms.We first derive an asymptotic formula for N loc (k, G, A, B) (see (1.1)) using Theorem 3.1.Theorem 4.1.We have Proof.For all v ∈ Ω k , let Λ v be the set of sub-G-extensions of k v corresponding to those extensions L/k v for which every element of A is a local norm from L/k v .Thus, in this setting Λ = (Λ v ) v∈Ω k is determined by A. 
We clearly have k, G, Λ, B).It therefore suffices to show that the leading constant in Theorem 3.1 is positive.To do so, we need to exhibit some sub-G-extension of k for which every element of A is everywhere locally a norm.However, the trivial extension k/k is such an extension.4.2.Proof of Theorem 1.9.As cyclic extensions always satisfy the Hasse norm principle, we may assume that G is non-cyclic.We use the following criterion for failure of the Hasse norm principle in the abelian setting, which was originally pointed out to us by Melanie Matchett Wood.(We use the notation from §3.1.)Proposition 4.2.Let ϕ be a G-extension of k.Then ϕ fails the Hasse norm principle if and only if there exists a proper subgroup Υ ⊂ ∧ 2 (G) that contains the image of the natural map Proof.Let K be the number field determined by ϕ.Recall that the failure of the Hasse norm principle is measured by the Tate-Shafarevich group where R 1 K/k G m denotes the associated norm 1 torus, see [34, §6.3].This group is finite by [34, Prop.6.9].As K/k is Galois, a theorem of Tate [34,Thm. 6.11] (see also [36,Ex. 5.6]) implies that there is an exact sequence However, as G is abelian, we have a well-known canonical isomorphism (see e.g.[20,Lem. 6.4]).Using this and applying Hom(•, Q/Z), we therefore obtain the exact sequence Thus, failure of the Hasse norm principle is equivalent to the first map in (4.1) failing to be surjective. Therefore, to prove Theorem 1.9, it suffices to show the following. Theorem 4.3.Let Υ ⊂ ∧ 2 (G) be a proper subgroup.Then Note that in Theorem 4.3, and henceforth, we abuse notation by writing ∧ 2 (Im ϕ v ) ⊂ Υ to mean that the image of the natural map ∧ 2 (Im ϕ v ) → ∧ 2 (G) is contained in Υ, despite the fact that this map is not injective in general. We prove Theorem 4.3 via an application of Theorem 3.1.Note, however, that one cannot apply Theorem 3.1 directly, as the local conditions imposed at the infinitely many places will not be with the assumptions of Theorem 3.1.We therefore apply Theorem 3.1 to a suitable finite set of places, which we then allow to increase.4.2.1.Proof of Theorem 4.3.Let S 0 be a finite set of places of k satisfying the conditions of §3.2, which we consider as being fixed.Let T be a finite set of places of k which is disjoint from S 0 .Eventually, we will consider what happens as T increases.Let S = S 0 ∪ T . We consider the local conditions Λ v given by We denote the collection of such conditions by Λ T .Note that we clearly have for all B. Applying Theorem 3.1 gives where c k,G,A,loc > 0 by Theorem 4.1.To prove Theorem 4.3 it therefore suffices to show that lim where as explained we consider S 0 as fixed and T as increasing and disjoint from S 0 .We do this using the explicit expression for the leading constant given in Theorem 3.22.We let e be the exponent of G.We require the following elementary observation.Proof.Let K be an extension of k with Galois group isomorphic to a subgroup of G and v a place of k such that α ∈ k * e v .Let K v be the completion of k at a choice of place of K above v.Then local class field theory yields Now G has exponent e, whereby the group k * v / N Kv/kv K * v has exponent dividing e.It follows that an eth power in k * v is a local norm. We now obtain the following bounds. Proof.The factors in Theorem 3.22 cancel out in the quotient c k,G,Λ T /c k,G,A,loc , except those at places v ∈ S. 
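For G = (Z/2Z)^2, where ∧^2(G) ≅ Z/2Z and every proper subgroup is cyclic, Proposition 4.2 specialises to the classical criterion that a biquadratic extension fails the Hasse norm principle precisely when every decomposition group is cyclic. For K = Q(√a, √b) the decomposition group at a place v is cyclic exactly when at least one of a, b, ab is a square in Q_v, and it can be all of G only at places ramified in K/Q. The sketch below applies this to the standard biquadratic failure Q(√13, √17) (compare example (4) in the introduction); the helper functions are ad hoc illustrations, not taken from the paper.

```python
from sympy import legendre_symbol, primefactors

def is_square_in_Qp(m: int, v) -> bool:
    """Crude test of whether the nonzero integer m is a square in Q_v,
    where v is an odd prime, the prime 2, or 'oo' for the real place."""
    if v == 'oo':
        return m > 0
    k = 0
    while m % v == 0:
        m //= v
        k += 1
    if k % 2 == 1:
        return False
    if v == 2:
        return m % 8 == 1
    return legendre_symbol(m, v) == 1

def biquadratic_hnp_fails(a: int, b: int) -> bool:
    """Tate's criterion for K = Q(sqrt(a), sqrt(b)) (a, b, ab non-squares):
    the Hasse norm principle fails iff every decomposition group is cyclic.
    The decomposition group at v is cyclic iff one of a, b, ab is a square
    in Q_v; at places unramified in K/Q it is generated by Frobenius and
    hence cyclic, so only v | 2ab and the real place need checking."""
    places = ['oo', 2] + [p for p in primefactors(a * b) if p != 2]
    return all(any(is_square_in_Qp(m, v) for m in (a, b, a * b)) for v in places)

print(biquadratic_hnp_fails(13, 17))  # True: the classical failure Q(sqrt(13), sqrt(17))
print(biquadratic_hnp_fails(3, 5))    # False: the decomposition group at 2 is all of G
```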
By Lemma 3.20, we have (this statement holds for any set of places satisfying the assumptions of §3.2).Moreover, for v ∈ T any element of A v is a local norm at v by our choice of Λ v ; it follows that χ v , x v = 1 for χ v ∈ Λ v as in Theorem 3.22, hence Therefore, we can split off Euler factors for all v ∈ T from the term involving S, while the remaining sum over Hom( v∈S 0 k * v , G) is the same in c k,G,Λ T and c k,G,A,loc .We have obtained the equality . The quotient of each local factor is at most 1, so to obtain an upper bound we may just consider those places v ∈ T which are completely split in k e /k.For such places every element of A is an eth power in k * v , hence the condition that they are local norms is automatic by Lemma 4.4.The result follows. We will make use of the following fact from [20,Lem. 6.9].Here, we use the term bicyclic for a non-cyclic group that is a direct sum of two cyclic groups.Lemma 4.6.Let G be a finite abelian non-cyclic group.Then there exists a finite collection of bicyclic subgroups As Υ ⊂ ∧ 2 (G) is a proper subgroup, there exists some i such that ∧ 2 (G i ) ⊂ Υ. Fix this i and write G i ∼ = Z/nZ × Z/mZ where n, m | e.Let v ∈ T be a place of k which is completely split in the extension k(µ e , e √ A).There exists a G i -extension of k v : simply adjoin an nth root of a uniformiser to the unique unramified extension of k v of degree m.Thus, by local class field theory, there Recall that tamely ramified χ v have conductor q v and there are |G| unramified G-characters.Using Lemma 4.5, it follows that However this diverges to 0 as diverges by the Chebotarev density theorem.This proves (4.2) and completes the proof of Theorem 4.3, hence the proof of Theorem 1.9. Unfortunately the second part of the statement [20, Lem.6.9] is false (this claims that if the exponent of ∧ 2 (G) divides a prime p, then all the G i may chosen isomorphic to (Z/pZ) 2 ).A counterexample is given by the group G = Z/2Z × Z/4Z and the subgroup G 1 = Z/2Z × Z/2Z; here the induced map is trivial.This mistake in [20,Lem. 6.9] has various consequences for [20] which will be addressed in a forthcoming corrigendum. v and α is a unit, its image in the residue field lies in F * d v .However, as d = gcd(e, q v − 1), we have The result therefore follows from Hensel's lemma. Hence, the remaining implication (2)⇒(3) in Theorem 1.6 follows immediately from (4.3) and Lemma 4.8.4.6.Proof of Corollary 1.7.Let α ∈ A and consider αβ e for some β ∈ k * .By Lemma 4.4, we see that β e is a norm everywhere locally from all G-extensions of k.It follows that αβ e is a norm everywhere locally from a given G-extension if and only if α is a norm everywhere locally.Part (i) now follows from Theorem 1.9 and Theorem 4.1.Part (ii) also follows from Lemma 4.4 and Theorem 1.9. For (iii) and (iv), we use the ω-version of Tate-Shafarevich groups.Namely, for a finite abelian group scheme M over k we let ) for all but finitely many v}. By Kummer theory we have H 1 (L, µ e ) = L * /L * e for any field L of characteristic 0. 
Therefore Part (iii) of Theorem 1.6 is equivalent to The key observation is now the following.Lemma 4.9.Let k be a number field, let e ∈ Z ≥1 and let 2 r be the largest power of 2 dividing e.Then X ω (k, µ e ) = 0, unless the extension k(µ 2 r )/k is non-cyclic, where we have X ω (k, µ e ) ∼ = Z/2Z.As Br(V ⊗ K L) = 0 and Br(L) ։ Br(V ⊗ K L), the Hochschild-Serre spectral sequence provides an embedding of Coker(Br(K) → Br(V )) into the kernel of the restriction map H On the other hand, the exact sequence (see [44, p. 130]) embeds this group into H 2 (Gal(L/K), T ). As f is smooth and surjective, we have f * Br(k(B)) ∩ Br(X ′ ) ⊆ f * Br(B) as subgroups of Br(k(X)).It follows that the pull-back map is injective as well.Thanks to this injectivity, we now see that in order to prove Proposition A.6, we may assume that k is algebraically closed. The generic fibre of the natural map X → (T /G when k is algebraically closed, as we have seen in the proof of Proposition A.5.In addition, the unramified Brauer group of (T 1 × • • • × T 1 )/G vanishes when k is algebraically closed, by Saltman's formula [15,Thm. 8.7] and by the next lemma.Hence Br(X ′ ) = 0 in this case. ) vanish, this follows from the injectivity of the product of restriction maps It is a general fact, valid for an arbitrary finite group G, that the kernel of (A.This completes the proof of Proposition A.6. A.5. Nonabelian Galois groups.The descent-fibration argument described in §A.2 and §A.3 is modelled after a similar argument appearing in [24], profiting in addition from the favourable circumstance of G being abelian.In general, the inductive argument of [24] is constructed to handle also nonabelian groups, as long as they admit a suitable filtration into normal subgroups whose successive quotients are cyclic; such groups are also known as supersolvable.Though the variety X considered here is more complicated than the one considered in [24], the argument of loc.cit.can be adapted to yield the statement of Theorem 1.1 for any supersolvable G, see [25].Interestingly enough, though, it turns out that the stronger claim appearing in Corollary 4.12 does not hold for a general nonabelian group G, even when G is supersolvable (indeed, even when G is a 2-group).This is due to the fact that the variety X may contain unramified Brauer classes which are not vertical with respect to the projection f : X → B, and which can obstruct the weak approximation of local points on X, even when those local points lie over a rational point of B. (Such Brauer classes do not exist in the abelian case; see Proposition A.6.) Let us now illustrate how one can construct a nonabelian example where exactly this happens. We shall say that a group H is weakly bicyclic if it is an extension of a cyclic group by a cyclic group.We note that if K/k is a Galois extension with Galois group G then the decomposition subgroups H v ⊆ G are weakly bicyclic at every finite place v which does not divide the order of G. Given a group G, we shall denote by B G the set of weakly bicyclic subgroups of G (a notation compatible with Lemma A.8 when G is abelian).Proposition A.9. Let G be a finite 2-group satisfying the following properties: (i) G has exponent ≤ 16. (ii) The abelianization G ab has exponent 2 and is generated by images of elements of G of order 2. 
(iii) There exists an element ϕ ∈ H 2 (G, Z/2Z) whose restriction to every cyclic subgroup of G of order 16 vanishes, whose restriction to at least one cyclic subgroup of G of order 8 does not vanish, and whose image by the natural map δ : H 2 (G, Z/2Z) → H 2 (G, Q/Z) belongs to, and spans, the kernel of the product of restriction maps H 2 (G, Q/Z) → H∈B G H 2 (H, Q/Z).Let H ⊆ G be a cyclic subgroup of order 8 on which ϕ does not vanish.Then: (1) There exist G-extensions K/Q which are unramified at 2 and whose decomposition groups at 2 are conjugate to H. (2) For every G-extension K/Q as in (1), the element 256 ∈ Q * is a local norm from K at every place of Q, but not a global norm from K. In particular, the statement of Corollary 4.12 does not hold for G with k = Q, S = {2} and A ⊂ k * the subgroup generated by 256. The proof of Proposition A.9 requires a bit of preparation.In the next lemma, we denote by Br nr (B), Br 1 (B), Br 1,nr (B), Br 0 (B) the subgroups of Br(B) consisting, respectively, of unramified, algebraic, algebraic unramified, constant classes.Condition (iii) implies that Br nr (B) = Br 1,nr (B).For every subgroup H ⊆ G, the Hochschild-Serre spectral sequences for the H-coverings π H : SL n → SL n /H and SL n, Q → SL n, Q/H , together with the inclusion of roots of unity µ ∞ ⊆ Q * , give rise to a commutative diagram Ker Br(SL n /H) → Br(SL n ) where Γ Q = Gal( Q/Q) is the absolute Galois group of Q.The horizontal arrows between the first two columns are isomorphisms since Pic(SL n ) = Pic(SL n, Q) = 0, and the bottom right horizontal map is an isomorphism since Q * /µ ∞ is uniquely divisible.In addition, the rightmost vertical map is surjective: indeed, this map fits in the middle of the commutative diagram with exact rows 0 / / Ext determined by the universal coefficient theorem, where Ext 1 (H 1 (H), µ ∞ ) = 0 since µ ∞ is a divisible group.We now fix a β ∈ Br nr (B) and aim to show that β is algebraic.By adding to β a constant class, we may assume that β(π G (1)) = 0.As SL n is rational over Q, we have Br nr (SL n ) = Br 0 (SL n ), and so π * G β = 0. Considering the diagram (A.7) for G = H and using the surjectivity of its right vertical map, we find β G ∈ H 2 (G, µ 2 ) whose eventual image in Br(BQ) is the same as the image of β.Now by Bogomolov's formula (see, e.g., [15,Thm. 7.1]), the group Br nr (SL n, Q/H ) vanishes whenever H is weakly bicyclic, and so by the naturality of (A.7), the image of β G in H 2 (H, µ ∞ ) vanishes for every H ∈ B G .Since µ ∞ ∼ = Q/Z as abelian groups via a choice of a compatible system of roots of unity, Condition (iii) implies that the image of β G in H 2 (G, µ ∞ ) is either 0 or the image of ϕ ∈ H 2 (G, Z/2) = H 2 (G, µ 2 ) under the natural map H 2 (G, µ 2 ) → H 2 (G, µ ∞ ).By possibly amending the choice of β G , we may assume that β G ∈ {0, ϕ}.We then write β 1 ∈ Br(B) for the image of β G , and set β 2 := β −β 1 .By construction, β 1 (and hence also β 2 ) vanishes when pulled back to SL n , and β 2 also vanishes when pulled back to BQ.In particular, β 2 ∈ Ker(Br 1 (B) → Br 1 (SL n )). 
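The continuation of this argument below relies on choosing a prime p for which −1 is a square but not a fourth power modulo p, so that Q_p admits a cyclic quartic extension that does not extend to a cyclic octic one. As an illustrative aside (not from the original text), such primes are precisely those congruent to 5 modulo 8; the hedged Python sketch below verifies this equivalence numerically for small primes, using the standard power-residue criterion a^((p−1)/gcd(k, p−1)) ≡ 1 (mod p) for a coprime to p.

```python
from math import gcd

def is_kth_power_residue(a, k, p):
    # a (coprime to p) is a k-th power mod p iff a^((p-1)/d) == 1 mod p, where d = gcd(k, p-1)
    d = gcd(k, p - 1)
    return pow(a % p, (p - 1) // d, p) == 1

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

for p in primes_up_to(200):
    if p == 2:
        continue
    square = is_kth_power_residue(-1, 2, p)
    fourth = is_kth_power_residue(-1, 4, p)
    assert (square and not fourth) == (p % 8 == 5)
print("checked: '-1 is a square but not a 4th power mod p' holds exactly when p = 5 (mod 8), for p < 200")
```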
Let now H ⊆ G be a cyclic subgroup of order 8 on which ϕ does not vanish.As β is unramified, there exists a prime p 0 such that β evaluates trivially on B(Q p ) for all p > p 0 .Choose p > p 0 such that there exists a cyclic extension L/Q p of degree 4 that does not extend to a cyclic extension of degree 8 (any p such that −1 is a square but not a 4th power modulo p will do).Embed Gal(L/Q p ) into H.The image of the class of L/Q p by the resulting map H 1 (Q p , Gal(L/Q p )) → H 1 (Q p , G) is the class of the torsor π −1 G (b) for some point b ∈ B(Q p ) (see [23, §1.2]), which we fix. For any place v of Q, let K v denote the completion of K at a place of K dividing v.The corresponding decomposition group D v ⊆ G is weakly bicyclic since G is a 2-group and K 2 /Q 2 is unramified.Letting Q v ⊆ K ϕ Dv ⊆ K v denote the intermediate cyclic extension determined by ϕ Dv ∈ H 1 (D v , Q/Z), a direct computation now reveals that x * v P = (16, K ϕ Dv /Q v ) ∈ Br(Q v ).Since ϕ is assumed to vanish on every cyclic subgroup of order 16, the class ϕ Dv becomes divisible by 2 when restricted to every such subgroup.Since the exponent of G divides 16, it follows that 8 ϕ Dv ∈ H 1 (D v , Q/Z) = Hom(D v , Q/Z) vanishes when restricted to any cyclic subgroup of D v , and hence vanishes; in other words, the degree of the extension K ϕ Dv /Q v divides 8. On the other hand, as D 2 is cyclic of order 8 and ϕ does not vanish when restricted to D 2 we have that ϕ 2 ∈ H 1 (D 2 , Q/Z) ∼ = Z/8Z is not divisible by 2 and so K ϕ D 2 = K 2 .We conclude that inv v (x * v P) = 0 for all v = 2 (recall that 16 is an 8th power at such v) while inv 2 (x * 2 P) = 1/2 ∈ Q/Z.We shall now construct a 2-group G satisfying the conditions of Proposition A.9. Let N be the group generated by 4 generators x, y, z + , z − under the following relations: (1) x 16 = y 16 = z 8 + = z 8 − = 1; (2) each of z + , z − commutes with each of x, y, z + , z − ; (3) [x, y] = z + z − .In particular, N is a central extension of the bicyclic group Z/16Z x, y by the bicyclic group Z/8Z z + , z − .Let σ : N → N be the involution given by σ(x) = x −1 , σ(y) = y −1 , σ(z + ) = z − and σ(z − ) = z + .We define G := N ⋊ Z/2Z σ to be the associated semi-direct product and view σ as an element of G. It is straightforward that G satisfies Conditions (i) and (ii) of Proposition A.9. Let us now construct an element ϕ ∈ H 2 (G, Z/2Z) satisfying Condition (iii).The homomorphism ρ : N → Z/8Z which sends x, y to 0, z + to 1 and z − to −1 intertwines the action of σ with the action of −1 : Z/8Z → Z/8Z.Consequently, it induces a homomorphism ρ ′ : G = N ⋊ Z/2Z → Z/8Z ⋊ Z/2Z =: D 8 to the dihedral group of order 16.Consider the short exact sequence where D 16 := Z/16Z ⋊ Z/2Z is the dihedral group of order 32 and the map q is induced by the surjective map Z/16Z → Z/8Z.Let ϕ D 8 ∈ H 2 (D 8 , Z/2Z) be the element classifying the central extension (A.8) and let ϕ := (ρ ′ ) * ϕ D 8 ∈ H 2 (G, Z/2Z).We leave it to the reader to verify that ϕ has the desired properties. Theorem 3 . 1 . 
Let k be a number field, G a non-trivial finite abelian group, and A ⊂ k * a finitely generated subgroup.Let S be a finite set of places of k and for is a union of conjugacy classes, since each Gal(k e /k d ) is normal in Gal(k e /k).The sets Σ d for d | e form a partition of Gal(k e /k).Let ϕ : Gal(k e /k) → C be the class function that takes the constant value d on Σ d , for all d | e.We claim that d A,H (v) = ϕ(Frob v ) for all v / ∈ S, so in particular it is S-frobenian.Note that Frob v ∈ Σ d if and only if d is the largest divisor of e such that v splits completely in k d /k.Equivalently, d is the largest divisor of e such that d | q v − 1 and x d − α has a root in k v for all α ∈ A. By Hensel's lemma, this is equivalent to d = d A,H (v), and thus ϕ(Frob v ) = d A,H (v), as desired. Lemma 4 . 4 . Let α ∈ k * .If v is such that α ∈ k * e v ,then α is a local norm at v from every sub-G-extension of k. Lemma 4 . 8 . Let e ∈ Z ≥1 , let α ∈ k * , and let v be a place of k such that e, α ∈ O * v .Let d = gcd(e, Lemma A. 8 . Let B G denote the set of subgroups of G generated by two elements.For any finite abelian group G and for M a field which contains d distinct dth roots of unity and A ⊂ F * is a finitely generated subgroup, then we denote by F ( d √ A) the splitting field of the polynomials x d − α, where α runs over a set of generators of A.For a subgroup A ⊂ k * and a place v of k, we denote by A v the image of A in this Dirichlet series defines a holomorphic function on Re s > 1. (This follows from [49, Lem.2.10], but also from the analysis later in this paper.) 3.3.1.Möbius inversion.Recall that a G-extension of k is a surjective continuous homomorphism ϕ : Gal( k/k) → G.The condition that ϕ be surjective is difficult to deal with, hence we perform a Möbius inversion to remove it.Let µ be the Möbius function on isomorphism classes of finite abelian groups.That is, µ(G) = 0 if G has a cyclic subgroup of order p n with p a prime and n 3.3.2.Class field theory.Via global class field theory, we make the identification Hom(Gal( k/k), H) = Hom(A * /k * , H).(3.4)The canonical isomorphism (3.4) is induced by the global Artin map A , H), let Φ(χ) be the reciprocal of the idelic norm of the conductor of the kernel of χ, which is precisely the norm of the conductor of the sub-H-extension corresponding to χ. Together with Lemma 3.3, this discussion shows the following: * /k * → Gal(k ab /k).Using this isomorphism, we consider f Λ now as a function on Hom(A * /k * , H).For every χ ∈ Hom(A * /k * 3.4.1.Fourier transforms.The group Hom(A * /k * , H) is locally compact.Its Pontryagin dual is naturally identified with A * /k * ⊗ H ∧ (see [20, §3.1]).We denote the associated pairing by •, • : Hom(A * /k * , H) × (A * /k * ⊗ H ∧ ) → S 1 .Similarly, the Pontryagin dual of Hom(k * v , H) is naturally identified with k * v ⊗H ∧ , and we also denote the relevant Pontryagin pairing by •, • .For each place v, we equip the finite group Hom(k * v , H) with the unique Haar measure dχ v such that vol(Hom(k * v /O * v , H)) = 1.If v is non-archimedean, this is |H| −1 times the counting measure; for archimedean v, recalling our convention that O v = k v , we obtain the counting measure.The product of these measures yields a well-defined measure dχ on Hom(A * , H).We say that an element of Hom(k * v , H) is unramified if it lies in the subgroup Hom(k * v /O * v , H), i.e. 
if it is trivial on O * v , and that it is tamely ramified if it is ramified and trivial on 1 Poisson summation.We now prove the version of Poisson summation that we will require.In the statement, we view O * S ⊗ H ∧ as a subgroup of k * ⊗ H ∧ as follows: we have the exact sequence 0 → O * S → k * → P (O S ) → 0 (3.9)where P (O S ) denotes the group of non-zero principal fractional ideals of O S .Since P (O S ) is a free abelian group, we have Tor(P (O S ), H ∧ ) = 0. Therefore applying (•) ⊗ H ∧ to (3.9) we find that the map O * S ⊗ H ∧ → k * ⊗ H ∧ is injective, as required.For Re s > 1 the Fourier transform f Λ,H (•; s) exists and defines a holomorphic function on this domain.Moreover, we have the Poisson formula naturally identified with the Pontryagin dual of Hom(k * v /O * v , H).The result now follows on noting that k * v /O * v ∼ = Z and hence | Hom(k * v /O * v , H)| = |H|.3.4.3.χ∈Hom(A * /k * ,H) A * S and A * S /O * S are locally compact groups and their subgroups of nth powers are closed, an application of [20, Lem.3.2] gives canonical isomorphisms of abelian groups Hom(A * S , H) ∼ = (A * S ⊗ H ∧ ) ∧ and Hom(A * * S ⊗ H ∧ ) ∧ .Therefore, we can view an element χ ∈ Hom(A * S , H) as a character of A * S ⊗ H ∧ .It is easily seen that χ induces the trivial character on O * S ⊗ H ∧ if and only if χ ∈ Hom(A * S /O * S , , as O S has trivial class group, the natural map A * S /O * S → A * /k * is an isomorphism [49, Lem.2.8].The result now easily follows.3.5.Analytic continuation of the Fourier transforms.We now use the Poisson formula to study the analytic behaviour of the Dirichlet series under consideration.To do so, we shall calculate explicitly the local Fourier transforms for v / ∈ S. Fix some subgroup H of G.By a slight abuse of notation, for Therefore, any continuous homomorphism 1 + p v → H is trivial.It follows that Hom(F * v , H) = Hom(O * v , H).Moreover, A mod v lies in the kernel of a homomorphism F * v → H if and only if A v lies in the kernel of the induced homomorphism O * [39,39, Prop.IV.2.6]).Consequently, the quotient (1 + p v )/(1 + p n v ) has order a power of q v .Now recall that we assumed in §3.2 that gcd(q v , H) = 1. Let H ⊂ G be a subgroup, let x ∈ O * S ⊗ H ∧ , and let a n (H, x) be the Dirichlet coefficient from(3.15).Then n (H, x) ∈ C. Lemma 3.17. [32, by Theorem 1.6 there exists a cofinite set of places T ⊂ Ω k such that A ⊂ k * e Then at all places v = p, the cyclic algebra (χ, α) over k has local invariant zero, because α is a local norm at v byLemma 4.4.Now the Albert-Brauer-Hasse-Noether Theorem[32, Thm.8.1.17]showsthat(χ, α) has local invariant zero at p, meaning that α is also a local norm at p. Therefore, all elements of A are everywhere local norms from all G-extensions of k.But G is cyclic, hence every G-extension satisfies the Hasse norm principle; (2) now follows.Let T be a torus over a field K, with character group T , split by a finite Galois extension L/K.For any smooth and proper variety V over K containing a torsor under T as a dense open subset, there is a canonical embedding Coker(Br(K) → Br(V )) ֒→ H 2 (Gal(L/K), T ).Proof.Let L denote a separable closure of L and V 0 the open subset in question. [32,r all v ∈ T .By[32, Thm.9.1.11],Kerk*/k* e → v∈T k * v /k * e v = Ker k * /k * e → vso we may assume that T contains all v ∤ 2. Let p be the unique prime of k lying above 2. 
Let χ : Gal(k̄/k) → G be a G-extension and let α ∈ A. 4.8. Variants of Theorems 1.1 and 1.4. We finish with some variants of our results, which allow one to impose local conditions at finitely many places. Our first result is a variant of Theorem 1.4, and follows immediately from Theorem 3.1 and Theorem 1.9. Lemma A.7.
2018-10-14T12:31:55.000Z
2018-10-14T00:00:00.000
{ "year": 2018, "sha1": "6b7e858d36b2d726970be0b46c2645ea5855c234", "oa_license": "CCBY", "oa_url": "https://ems.press/content/serial-article-files/18239", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "6b7e858d36b2d726970be0b46c2645ea5855c234", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
3560403
pes2o/s2orc
v3-fos-license
Fabrication of Hemin-Doped Serum Albumin-Based Fibrous Scaffolds for Neural Tissue Engineering Applications Neural tissue engineering (TE) represents a promising new avenue of therapy to support nerve recovery and regeneration. To recreate the complex environment in which neurons develop and mature, the ideal biomaterials for neural TE require a number of properties and capabilities including the appropriate biochemical and physical cues to adsorb and release specific growth factors. Here, we present neural TE constructs based on electrospun serum albumin (SA) fibrous scaffolds. We doped our SA scaffolds with an iron-containing porphyrin, hemin, to confer conductivity, and then functionalized them with different recombinant proteins and growth factors to ensure cell attachment and proliferation. We demonstrated the potential for these constructs combining topographical, biochemical, and electrical stimuli by testing them with clinically relevant neural populations derived from human induced pluripotent stem cells (hiPSCs). Our scaffolds could support the attachment, proliferation, and neuronal differentiation of hiPSC-derived neural stem cells (NSCs), and were also able to incorporate active growth factors and release them over time, which modified the behavior of cultured cells and substituted the need for growth factor supplementation by media change. Electrical stimulation on the doped SA scaffold positively influenced the maturation of neuronal populations, with neurons exhibiting more branched neurites compared to controls. Through promotion of cell proliferation, differentiation, and neurite branching of hiPSC-derived NSCs, these conductive SA fibrous scaffolds are of broad application in nerve regeneration strategies. INTRODUCTION Nerve injuries in either the central nervous system (CNS) or peripheral nervous system (PNS) can cause severe neurological deficits, resulting in the diminished physical and psychological well-being of patients. 1,2 As the regenerative ability of the human nervous system is limited, these injuries can be permanent due also to the relative shortage of therapeutic options. 2 Although nerve repair in the PNS can be achieved by autologous transfer of a normal nerve from an uninjured site, its application is restricted by limited tissue supply and the potential undesirable effects at the donor site. 3 Given these considerations, tissue engineering strategies incorporating both biomaterials and cellular therapies represent a promising new avenue for therapeutic nerve repair and neuroregeneration. In order to successfully recreate intricate and functional neural tissue in vitro, several different components and properties are necessary. First, building a bioengineered construct that mimics neural tissue requires the presence of a scaffold that can provide housing for a supportive extracellular environment along with the physical guidance necessary for nerve repair and neural regeneration. 4,5 A widely used method to construct scaffolds for neural tissue engineering (TE) is electrospinning: this is a simple, potentially large-scale fabrication process capable of generating nano/microscale fibers for 3D scaffold architecture. 6,7 While artificial polymeric scaffolds are widely used, the generation and use of self-derived biomaterials from adults remains to be explored. 
8 Serum albumin (SA), which is abundant and can be rapidly replenished in humans or animals, has been widely used in biomedical research for cell culture and storage, in vitro fertilization, and transplantation. 9 As a natural carrier protein with multiple ligand binding sites and the ability to bind different cellular receptors, SA has also been exploited as a potential delivery platform for drugs and biomolecules. 10 With its ease of isolation from clinical samples and lowest cost compared to other commercially available proteins, SA has become an attractive autogenic biomaterial for TE with optimal cell compatibility. 8,11,12 In addition to a suitable scaffold supporting cellular growth and differentiation, it is also desirable to integrate multiple different cues into any tissue-engineered construct to recapitulate the tissue's natural microenvironment. A variety of different factors have already been used in tissue engineering scaffolds to promote nerve regeneration. For example, nerve growth factor (NGF), 13 brain-derived neurotrophic factor (BDNF), 14,15 and glial-derived neurotrophic factor (GDNF) 16 successfully encapsulated into different electrospun scaffolds showed that the synergistic effects of nanofiber topography and sustained growth factor delivery could promote cellular proliferation and differentiation in targeted cells. As a wellcharacterized neurogenic factor affecting neural stem cell (NSC) proliferation and differentiation, fibroblast growth factor-2 (FGF2) (basic FGF) 17 has also been encapsulated into fibrous biomaterials for TE purposes. 13,18 An ideal construct for neural TE also needs to take into account the inherent electroresponsive properties of neurons and the effect of electrical stimulation on developing neuronal networks. Several studies have suggested an important role of external electrical stimulation on enhancing neuronal differentiation, neurite sprouting, neurite outgrowth, and neurite orientation. 19−22 In recent years, fibrous scaffolds with electrically conductive properties have been used in neural TE to actively modulate cell responses like differentiation and neurite guidance following application of external electric stimuli. 23 For example, various conducting polymers, such as polypyrrole (PPy) 24,25 and polyaniline (PANI), 26,27 graphene, 28 and gold nanoparticles, 29 have individually been blended with other polymers and successfully electrospun into fibrous materials. Other studies also achieved conductive fibrous scaffolds by depositing a layer of conducting polymers or metallic nanoparticles onto the template fibers. 30−34 In this study, we sought to combine these complex stimuli topography, growth factor release, and electrical stimulation into a single construct designed specifically for neural TE applications. The scaffold construct is based on our recent study of a new type of conductive freestanding hybrid material based on the bovine SA protein. 35 After electrospinning, we doped the SA mat with a hemin dopant, which resulted in a very high macroscopic conductance. Hemin, the oxidized form of iron protoporphyrin IX (Fe 3+ ), is critical to cellular homeostasis and gene regulation, and is also one of the main electron mediators in nature. 36 This facile approach using electrospinning and doping in hemin solution eliminates the need for a complicated fabrication process. The large affinity of hemin to the SA mat also avoids the leaching of dopants out of the mat in an aqueous environment. 
35 The 3D electrospun fibrous structure, the biocompatibility of the raw materials, and the strength of electrical conductivity make hemin-doped SA mats a promising material for bioelectronic devices and tissue engineered constructs. To test the potential application of the SA constructs for neural TE, we utilized human induced pluripotent stem cell (hiPSC)-derived NSCs, which represent an attractive cell source for TE and regenerative medicine. 37 These cells are generated by reprogramming somatic cells such as fibroblasts into an undifferentiated state. 38 The generated cells are capable of self-renewal, providing a stable source of pluripotent cells; unlike embryonic stem cells (ESCs), hiPSCs can bypass certain ethical issues and can also be used for the production of patient-specific cells, reducing the risk of immune rejection. 37 While many studies done within the field of neural TE often use immortalized cell lines such as SH-SY5Y and PC12 cells, or primary cultures from animal models, 32,39,40 the hiPSC-derived neural populations provide a more clinically and biologically relevant platform by which to test the function of designed biomaterials. We demonstrated the potential of this proteinbased material that can be readily produced from an autologous origin, as a source for growth factor signaling by incorporating a human recombinant protein, FGF2, into the SA fibrous scaffolds. Finally, the conductive nature of the construct enabled us to explore the effect of electrical stimulation on clinically relevant human NSCs. The feasibility of using the hemin-doped SA fibrous scaffold for neural TE is concluded with the functional enhancement of neuronal cell behaviors. MATERIALS AND METHODS 2.1. Fabrication of Electrospun SA Fibrous Scaffolds. SA scaffolds were fabricated as previously described by Amdursky et al. 35 Briefly, bovine SA lyophilized powder, ≥96% (agarose gel electrophoresis), (Sigma-Aldrich, U.K.) was dissolved in a 90 v/v % 2,2,2trifluoroethanol (Sigma-Aldrich) solution. We premixed the polymer solution (14 w/v % bovine SA) on a tube roller overnight, and 5 v/v % of 2-mercaptoethanol (Sigma-Aldrich) was added 30 min before electrospinning. The polymer solution was electrospun using a syringe equipped with an 18 gauge steel needle, a 10 kV potential, a throw distance of 10 cm, and a syringe flow rate of 0.8 mL/h. Electrospun SA mats were obtained on an Al-foil-wrapped rotating drum with 10 cm diameter at an average speed of approximately 1000 rpm at a relative humidity (RH) of 35−55%. 2.2. Preparation of Hemin-Doped SA Fibrous Scaffolds. The hemin dopant (porcine; Sigma-Aldrich) was first dissolved in dimethyl sulfoxide (DMSO; Sigma-Aldrich) to make an 11 mM stock hemin solution. We then made the final doping solution of 130 μM hemin by diluting the stock solution with phosphate buffer solution (PBS). Electrospun SA mats were cut into smaller samples (10 mm wide, 30 mm long) and doped in the solution with shaking at room temperature overnight. Prior to use, the doped SA samples were immersed in PBS at least overnight to wash away the residual unincorporated dopants. 2.3. Electrochemical Properties of Hemin-Doped SA Fibrous Scaffolds. Nondoped and hemin-doped SA fibrous scaffolds were immersed in PBS in the cell culture constructs (described in section 2.6). Cyclic voltammetry (CV) was performed using an eDAQ 410 System (eDAQ Pty Ltd., Australia) by applying cyclic potential in the ±0.75 V bias range at a scan rate of 40 mV/s. 2.4. Scanning Electron Microscopy (SEM). 
SA mats were dehydrated by incubation for at least 30 min in progressively higher concentrations of ethanol (Sigma-Aldrich) in water (30,50,70,80,90, and 100 v/v %) under gentle shaking. SA mats were then incubated in 100 v/v % EtOH for 1 h, with refreshing of the solution three times, followed by one wash in hexamethyldisilazane (Sigma-Aldrich) for 5 min, and finally air drying overnight under a chemical hood. A 10 nm thin film of Cr was deposited on the sample by sputter coating to prevent charging. The sample was analyzed at 5 keV with a Sigma 300 SEM instrument (ZEISS, Germany). 2.5. Cell Culture. The human episomal iPSC line (Epi-hiPSC) (Thermo Fisher Scientific, U.K.) was maintained on Matrigel-coated culture plates in feeder-free culture conditions with the use of chemically defined Essential 8 media (Thermo Fisher Scientific). Colonies of Epi-hiPSCs were passaged by dissociation with 0.5 M EDTA (pH 8.0; Thermo Fisher Scientific) diluted 1:1000 in sterile PBS when they reached 80−90% confluence. Neural differentiation was based on a published protocol with some modifications. 41 Briefly, Epi-hiPSC cultures were used for neural conversion when they reached confluence. The cells were differentiated into neuroectoderm by dual-SMAD signaling inhibition 42 2.6. Design of Cell Culture Device. We assembled the electrical stimulation device for Epi-hiPSC-derived NSCs on glass slides and hemin-doped fibrous scaffolds based on a conventional six-well tissue culture plate ( Figure S1A). Each scaffold was placed on a glass coverslip in a well. Two Au mylar (Vaculayer, Canada) electrodes were placed on top of the two ends of the scaffold with the conductive side (10 mm × 10 mm) facing down and the rest of the electrodes tightly folded alongside the culture well. An ∼50 mm thick poly-(dimethylsiloxane) (PDMS, Dow Corning, U.K.) ring fitted to the well with 10 mm inner diameter was placed and pressed on the stack of cover glass, SA fibrous scaffold, and mylar electrodes. The seam between the scaffold and the mylar electrodes was sealed by pressing the PDMS ring tightly to the attached cover glass. The culture devices of electrical stimulation were sterilized by one wash with 70 v/v % ethanol, three washes of sterile PBS, and exposure to UV light for an hour. 2.7. Laminin Coating of Hemin-Doped SA Fibrous Scaffolds. The scaffolds were assembled into a well device as described in section 2.6 without placing Au mylar electrodes. The mats were incubated overnight in 500 μL of 0.1 mg/mL poly-D-lysine (PDL; Sigma-Aldrich) solution, followed by three washes with PBS and then 500 μL of 10 μg/mL laminin (Sigma-Aldrich) overnight. The coating of laminin was evaluated with the amount of the remaining laminin in the coating solution after incorporation. Samples were analyzed using a Mouse Laminin ELISA Kit (Abcam, U.K.) according to manufacturer instructions. Absorbance values from ELISA plates were measured at 450 nm with a multimode microplate reader (SpectraMax M5; Molecular Devices, USA) and were normalized to the glass control. For the time-lapse laminin adsorption assay, 20 PDL-coated and 20 PDL-laminin-coated nondoped, hemin-doped, and glass substrates were prepared as mentioned above. Four PDL-coated and 4 PDLlaminin-coated substrates were stained at different time points (day 0, day 2, week 1, week 2, and week 3), as described in section 2.12. 
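As a brief aside on the solution preparation described in sections 2.2 and 2.7 above, the dilution factor for the hemin doping solution and the amounts of PDL and laminin offered per well follow directly from the stated concentrations and volumes. The Python sketch below is illustrative back-of-the-envelope arithmetic only; the concentrations and 500 μL per-well volume are taken from the text, everything else is derived.

```python
# Hemin doping solution (section 2.2): 11 mM stock in DMSO diluted to 130 uM in PBS
stock_uM, final_uM = 11_000.0, 130.0
dilution_factor = stock_uM / final_uM
print(f"hemin dilution: ~1:{dilution_factor:.1f} "
      f"(~{1000.0 / dilution_factor:.1f} uL of stock per mL of doping solution)")

# Coating solutions (section 2.7): 500 uL of each solution per well
well_volume_mL = 0.5
pdl_ug_per_mL = 100.0      # 0.1 mg/mL PDL
laminin_ug_per_mL = 10.0   # 10 ug/mL laminin
print(f"PDL offered per well:     {pdl_ug_per_mL * well_volume_mL:.0f} ug")
print(f"laminin offered per well: {laminin_ug_per_mL * well_volume_mL:.0f} ug")
```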
The time-lapse laminin adsorption was determined by subtracting the background mean fluorescence intensity of PDL-coated substrates from the mean fluorescence intensity of the PDL-laminin-coated substrates to eliminate the effect of autofluorescence of SA and the fluorescence quenching caused by the hemin dopant (10 fields were analyzed per batch of sample, and a total of 40 fields were analyzed). The stability of the laminin coating was evaluated by comparing the background-subtracted mean fluorescence intensity at different time points to day 0 within the substrate. 2.8. Incorporation and Release of FGF2 of Hemin-Doped SA Fibrous Scaffolds. The scaffolds were assembled into a well device as described in section 2.6 without placing Au mylar electrodes. For the incorporation assay, the device was incubated overnight in 500 μL of 0.1 mg/mL PDL (Sigma-Aldrich) solution followed by three washes with PBS and then 500 μL of 10 μg/mL laminin (Sigma-Aldrich) with 0.1 μg/mL FGF2 (PeproTech, U.K.) overnight. The incorporation of FGF2 was evaluated by the amount of the remaining FGF2 in the coating solution after incorporation (day 0). The release of FGF2 was examined by replacing the previous solution into fresh PBS at day 0 and day 2 and collecting the solution at day 2 and day 5, respectively. The time points were chosen in accordance with the frequency of the media exchange. FGF2 was examined by measuring the FGF2 released in the collected solution. Samples were analyzed using an FGF2 Human ELISA Kit (Thermo Fisher Scientific) according to manufacturer instructions with five different batches of scaffolds analyzed. Absorbance values from ELISA plates were measured at 450 nm with a multimode microplate reader (SpectraMax M5; Molecular Devices) and were normalized to the initial FGF2 solution. 2.9. Viability and Neuronal Differentiation of hiPSC-Derived NSCs on Hemin-Doped SA Fibrous Scaffolds. Before cells were seeded, the cell culture device was assembled and precoated with PDL and laminin, as described in section 2.7. hiPSC-Derived NSCs were seeded at a concentration of 200 000 cells in 300 μL of NSCR base medium in the inner well of the PDMS ring (d = 10 mm). After 30 min of cell adhesion, the constructs were topped up with an extra 3 mL of medium, and cultured at 37°C in a humid, 5% CO 2 incubator. After 24 h, the viability of NSCs on the scaffolds was evaluated using a LIVE/DEAD Viability/Cytotoxicity Kit for mammalian cells (Thermo Fisher Scientific), which determines cell viability based on the membrane integrity of cells. Viable cells were stained with green fluorescence through the reaction of calcein AM with intracellular esterase, while dead cells were stained with red fluorescence, indicating lost or damaged cell membranes. To test if the scaffolds were biocompatible for neuronal differentiation of hiPSC-derived NSCs, the cells were seeded at a concentration of 200 000 cells in NSCR neuron medium [NSCR base medium supplemented with 10 ng/mL BDNF (R&D Systems) and 10 ng/mL GDNF (R&D Systems)] for 7 d, with medium exchanged every 2−3 d. The cells were fixed after 7 d of neuronal differentiation and stained for cell observation. hiPSC-Derived NSCs on FGF2-Incorporated Hemin-Doped SA Fibrous Scaffolds and for Electrical Stimulation Studies. The cell culture constructs were assembled as described in section 2.6 and then prepared with or without 0.1 μg/mL FGF2 (PeproTech) incorporation, as described in section 2.8. 
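A similar aside for the quantities in sections 2.8 and 2.9 above: the total FGF2 offered to each scaffold and the approximate seeding density over the inner well follow from the stated concentrations, volumes, and well diameter. In the hedged Python sketch below, the per-area seeding density is a derived figure, not a value reported by the authors.

```python
import math

# FGF2 incorporation solution (section 2.8): 500 uL of 0.1 ug/mL FGF2 per scaffold
fgf2_ng_offered = 0.1 * 1000 * 0.5   # ug/mL -> ng/mL, times 0.5 mL
print(f"FGF2 offered per scaffold: {fgf2_ng_offered:.0f} ng")

# Seeding (section 2.9): 200,000 cells in 300 uL over a 10 mm diameter inner well
cells, diameter_cm = 200_000, 1.0
area_cm2 = math.pi * (diameter_cm / 2) ** 2
print(f"approx. seeding density: {cells / area_cm2:,.0f} cells/cm^2")
```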
NSCR base medium was used for electrical stimulation group and blank controls for FGF2 incorporation experiments, while F20 medium was used for positive controls for FGF2 incorporation experiments. Confluent Epi-hiPSC-derived NSCs were dissociated with Accutase (Sigma-Aldrich) and seeded on SA fibrous scaffolds in the inner well of the PDMS ring (d = 10 mm) with 62 500 cells in 300 μL of medium. After 30 min of cell adhesion, the constructs were topped up with an extra 3 mL of medium, and cultured at 37°C in a humid, 5% CO 2 incubator. 2.11. Electrical Stimulation of hiPSC-Derived NSCs on Hemin-Doped SA Fibrous Scaffolds. Previous studies have shown that the effects of electrical stimulation on cell behavior vary depending on parameters such as electrical stimuli, cell types, material interfaces, and experimental setups. 32,43−45 In our experiment, after 48 h of cell seeding for cell attachment and spreading, trains of 50 ms electrical pulses of 50 mV/cm at 2 Hz for a period of 2 h were applied at day 2 and day 3 with a 24 h interval between each stimulus via a 33500 Series Trueform waveform generator (Agilent, USA). The constructs were replaced with fresh media immediately after the electrical stimulation to avoid undesirable effects of electrical stimulation on the media. After the final stimulation, Epi-hiPSCderived NSCs were further cultured on the scaffolds for 48 h and then fixed and stained for cell observation. The schematic of the experimental scheme and the stimulation parameters are shown in Figure S1B. ACS Applied Materials & Interfaces Research Article cell experiments were acquired with a SP5MP/FLIM inverted confocal microscope (Leica, Germany) by sequential scanning. The thickness of the acquired sample sections was about 40 μm, and z stacks of typically 20 2 μm slices were imaged. 2.13. Imaging Analysis and Statistical Analysis. Image analysis was performed with ImageJ 64 (version 2). To quantify fiber diameter, measurements were made from 300 fibers taken randomly in the SEM images. The cell viability on the scaffolds was evaluated by the total coverage area of live cells (green) and the number of dead cells (red) after 24 h of cell seeding, where a total of 35 images in each group were analyzed. NSC proliferation and differentiation for biocompatibility were analyzed on five different batches of scaffolds with cell coverage using βIII-tubulin, a neuron-specific marker, and nestin, a neural stem cell marker. NSC proliferation, differentiation, and neurite branching were analyzed with the proliferation marker, Ki67, and βIIItubulin, using the "Cell Counter" plugin. Cell proliferation and differentiation were evaluated with the percentage of the Ki67 + cells and ßIII-tubulin + cells over the total number of cells within a field of 40×, respectively. Neurite outgrowth was evaluated using the "Neurite Tracings" plugin. For statistical analysis, all experiments were conducted three times (with two biological replicates and three technical replicates in each experiment). One-way ANOVA with post hoc Tukey's test was used throughout the study unless specified otherwise. A p-value <0.05 was considered statistically significant and all results represent means ± s.e.m. (In the diagrams, * represents p < 0.05, ** represents p ≤ 0.01, and *** represents p ≤ 0.001.) Morphology and Characterization of Hemin- Doped SA Fibrous Scaffolds. We fabricated SA scaffolds as previously described by Amdursky et al. 
35 using an electrospinning process ( Figure 1A) and examined the morphology and topography of the SA mats with SEM imaging (Figure 1). The electrospinning of the SA solution produced fibrous mats (∼110 μm thick) with an average fiber diameter of 0.95 ± 0.13 μm ( Figure 1B, panel 1). Doping the SA mats with hemin resulted in a comparatively rough surface compared to the smooth and uniform surface of the nondoped SA mats ( Figure 1C); however, there was no significant difference in the average fiber diameters (1.04 ± 0.08 μm) ( Figure 1B, panel 3). Research Article To enhance cell attachment and promote neuronal differentiation, we further coated a layer of PDL and laminin using physical adsorption. After coating, the hemin-doped mats ( Figure 1B, panel 4) exhibited an increase in their fiber diameters (1.71 ± 0.23 μm) that were significantly larger than the nondoped SA mats coated with PDL and laminin ( Figure 1B, panel 2; 0.68 ± 0.06 μm). Both of the laminin-coated SA mats exhibited some aggregates resulting from the adsorption of the laminin proteins. We next sought to investigate the ability of the scaffolds to adsorb and retain a laminin coating, in order to assess the biofunctionalization. We first coated the scaffold for 24 h in a laminin containing solution with a known concentration, and then collected the coating solution and evaluated the laminin adsorption using ELISA to determine the amount of remaining laminin in the coating solution after incorporation (Figure 2A). The results showed a significantly higher amount of remaining laminin in the nondoped SA scaffolds, indicating the hemindoped SA scaffolds and the PDL-coated glass slides exhibited more laminin adsorption compared to the nondoped SA scaffolds. While initial laminin adsorption is critical for cell attachment, the maintenance of the adsorbed laminin during the culture period can further support cell adhesion, proliferation, and differentiation. To understand if different substrates exhibited different capabilities for laminin maintenance, we coated the laminin on the nondoped, hemin-doped SA scaffolds and the PDL-coated glass slides, and examined the immunofluorescent staining of the laminin coating at different time points ( Figure 2B). The results showed a significant decrease in fluorescence intensity of the laminin protein on both the nondoped SA scaffolds and glass controls after 3 weeks of being immersed in cell culture medium, with medium exchange every 2−3 days. However, the hemin-doped SA scaffolds were able to maintain the laminin coating over the time period tested. 3.2. Cell Viability, Proliferation, and Neuronal Differentiation on Hemin-Doped SA Fibrous Scaffolds. To test the potential of our hemin-doped SA mats for neural TE applications, we cultured hiPSC-derived NSCs on our constructs, and investigated stem cell proliferation and induction of neuronal differentiation. We seeded the hiPSCderived NSCs on the mats in the assembled cell constructs ( Figure 3A) and examined the cell viability with the LIVE/ DEAD Viability assay 24 h after cell seeding ( Figure 3B). The staining showed no significant differences in the percentage of To examine the effect of the nondoped and hemin-doped SA fibrous scaffolds on the proliferation and differentiation of hiPSC-derived NSCs, we stained the cells with βIII-tubulin, a neuronal marker, and nestin, a neural stem cell marker, after 7 days of differentiation ( Figure 3D). 
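To illustrate the statistical treatment described in section 2.13 (means ± s.e.m. and one-way ANOVA) as applied to the fiber-diameter comparison of section 3.1 above, the following Python sketch shows how such a comparison could be computed from raw per-fiber measurements. The arrays here are synthetic placeholders centred on the reported group means, not the study's data, and the spreads and sample sizes are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic per-group fiber-diameter samples (um); sizes and spreads are placeholders
groups = {
    "nondoped":              rng.normal(0.95, 0.2, 300),
    "nondoped + laminin":    rng.normal(0.68, 0.2, 300),
    "hemin-doped":           rng.normal(1.04, 0.2, 300),
    "hemin-doped + laminin": rng.normal(1.71, 0.2, 300),
}

for name, d in groups.items():
    sem = d.std(ddof=1) / np.sqrt(d.size)
    print(f"{name:24s} mean = {d.mean():.2f} um, s.e.m. = {sem:.2f} um")

# One-way ANOVA across the four conditions (post hoc Tukey comparisons would follow in practice)
f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")
```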
The immunostaining revealed that hiPSC-derived NSCs on the nondoped SA scaffolds clumped together and formed sphere-like structures, while the cells on the hemin-doped SA scaffolds and the glass control were widely spread on the substrates. The total cell coverage on the nondoped SA scaffolds (13.81 ± 4.05%) was significantly lower than the hemin-doped SA scaffolds (30.90 ± 3.18%) and glass control (32.09 ± 4.30%) ( Figure 3E). We further examined the percentage of cells expressing βIII-tubulin and nestin over the total cell coverage. While there were many immature neurons coexpressing both βIII-tubulin and nestin markers at day 7, there was no significant difference in the percentage of βIII-tubulin + cells and nestin + cells over the total cell coverage between the substrates ( Figure 3F). Overall, even though the cellular coverage of the nondoped mats was lower compared to other groups, the SA scaffolds were biocompatible to the cell system and did not hinder cell proliferation and neuronal differentiation of the hiPSC-derived NSCs. 3.3. Effect of Growth Factor Release with Hemin-Doped SA Fibrous Scaffolds. Next, we evaluated the ability of our SA scaffolds to incorporate and release signaling factors. We chose to work with FGF2 as an example of recombinant protein with a clear effect on NSC populations. For incorporation of FGF2, we took advantage of the ability of SA to noncovalently bind a variety of small molecules and peptides, similarly to the hemin doping procedure. We placed the SA scaffold into an FGF2 solution and, using ELISA as a measure of the quantity of recombinant protein bound to our material, evaluated the amount of remaining FGF2 in the ACS Applied Materials & Interfaces Research Article coating solution following overnight incubation (Figure 4). We observed a significant binding of FGF2 to the SA scaffold, while 94.80 ± 2.27% and 99.57 ± 0.12% of the initial FGF2 in the solution went inside the nondoped and hemin-doped SA scaffolds, respectively ( Figure 4A). After ensuring that FGF2 could be incorporated into our scaffolds, we further examined its release by measuring the FGF2 in solution after 2 and 5 days using ELISA ( Figure 4B). Our results indicated that the incorporation of FGF2 into the SA scaffolds induced a slow release profile (days time scale). We found that the release of FGF2 from the nondoped SA scaffolds was 0.12 ± 0.05% and 0.18 ± 0.02% of the initial FGF2 in the solution (corresponding to a release of 0.13% and 0.19%, respectively, of the initial loaded FGF2 in the nondoped SA scaffold) for days 2 and 5, respectively. From the hemin-doped SA scaffolds, the release of FGF2 was 0.34 ± 0.12% and 0.65 ± 0.50% of the initial FGF2 in the solution (corresponding to a release of 0.34% and 0.65%, respectively, of the initial loaded FGF2 in the hemin-doped SA scaffold) for days 2 and 5, respectively. Following the successful incorporation of FGF2 into our scaffolds, we examined the cellular responses of our hiPSCderived NSCs for proliferation and neurogenesis by focusing on the effects of FGF2 incorporated nondoped and hemin-doped SA mats on the cells ( Figure 5A). We found that the FGF2- ACS Applied Materials & Interfaces Research Article incorporated nondoped SA mats were sufficient to maintain a proliferative (Ki67 + ) cell population of 33.75 ± 2.52% over 5 days of being cultured in basal medium, similar to the degree of regular exchange of FGF2-containing medium with nonincorporated nondoped SA mats (31.48 ± 3.79%). 
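The conversion used in section 3.3 above between "percent of the initial FGF2 in the solution" and "percent of the FGF2 actually loaded into the scaffold" is a one-line calculation; the short Python sketch below (illustrative only) reproduces the figures quoted in the text from the reported incorporation and release percentages.

```python
# Section 3.3: uptake of the initial FGF2 by each scaffold type, and release at days 2 and 5
# expressed as a percentage of the initial FGF2 in solution.
incorporated = {"nondoped": 94.80, "hemin-doped": 99.57}               # % of initial
released = {"nondoped": (0.12, 0.18), "hemin-doped": (0.34, 0.65)}     # % of initial

for scaffold, uptake in incorporated.items():
    for day, rel in zip((2, 5), released[scaffold]):
        pct_of_loaded = 100.0 * rel / uptake
        print(f"{scaffold:11s} day {day}: {rel:.2f}% of initial "
              f"= {pct_of_loaded:.2f}% of the loaded FGF2")
```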
The mats supplied with soluble FGF2, FGF2-incoporated mats, and the combination of both had a significantly higher proliferative cell population compared to the control nondoped mats without FGF2 (17.86 ± 3.22%). For hemin-doped SA mats, the results also demonstrated a higher percentage of proliferative cells with soluble FGF2, FGF2-incorporated mats, and the combination of both compared to the control hemin-doped mats without significance ( Figure 5B and Table S1). We examined neuronal differentiation of the hiPSC-derived NSCs by measuring the percentage of βIII-tubulin + cells. On both nondoped and hemin-doped SA mats, the NSCs in the control group without any FGF2 exhibited higher neuronal differentiation compared to other groups with FGF2 ( Figure 5C). This result was consistent with the predicted effect of FGF2 in maintaining the proliferating stem state of the NSCs. These results also demonstrated that the hemin-doped SA mats overall had a higher percentage of differentiated cells compared to the nondoped SA mats, which hinted at a preference toward neuronal differentiation on the hemin-doped mats. The highest neuronal differentiation occurred on the hemin-doped SA mats without soluble FGF2 and FGF2 incorporation (38.88 ± 7.34%) compared to the other groups (Table S2). 3.4. Effect of Electrical Stimulation on Hemin-Doped SA Fibrous Scaffolds. The conductive properties of hemindoped SA mats (∼2 mS/cm) have been detailed previously by us in Amdursky et al. 35 To use the hemin-doped SA scaffolds for in vitro electrical stimulation in our current study, we developed the cell culture construct and optimized the stimulation protocol. The electrical characterization (current− voltage behavior) of the scaffolds assembled in our constructs showed that, when a voltage was applied, a higher current passed through the hemin-doped SA scaffolds compared to the nondoped SA scaffolds and PBS control ( Figure S2 and text within). Due to the cells exhibiting different attachment patterns on the nondoped and hemin-doped SA mats, we chose glass slides as the nonconductive control in our electrical stimulation experiments, since this would decouple the effect of electrical stimulation through the conductive material and the effect of material properties on the cells. We first examined the effects of electrical stimulation on cell proliferation and differentiation ( Figure 6A). Our results showed that there were significantly more Ki67 + cells on the glass control (38.57 ± 5.25%) compared to the hemin-doped SA scaffolds with and without electrical stimulation (11.05 ± 3.04% and 15.10 ± 4.08%, respectively). Although the number of Ki67 + cells decreased following the application of electrical stimulation to the glass control (23.90 ± 6.06%; p = 0.149), the cell percentage remained similar on the hemin-doped SA mats with and without electrical stimulation ( Figure 6B). For neuronal differentiation (Figure 6C), the glass slides with electrical stimulation (28.27 ± 4.26%) exhibited higher neuronal differentiation compared to the unstimulated control (p = 0.309), which suggested the effectiveness of the applied stimuli. Both hemin-doped SA scaffolds with and without electrical stimulation exhibited enhanced neuronal differentiation with a significantly higher percentage of βIII-tubulin + cells (40.73 ± 7.64% and 38.91 ± 5.63%) compared to the glass control (14.93 ± 2.51%). 
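As an aside on the stimulation protocol of section 2.11 used for these experiments, the pulse-train settings (50 ms pulses at 2 Hz for 2 h, field strength 50 mV/cm) translate into a duty cycle, pulse count, and applied amplitude as summarized by the hedged Python sketch below. The 1 cm electrode gap, matching the inner-well diameter, is an assumption made for illustration and is not a value stated by the authors.

```python
# Stimulation protocol (section 2.11): trains of 50 ms pulses at 2 Hz for 2 h, field 50 mV/cm
pulse_width_s = 0.050
frequency_hz = 2.0
duration_s = 2 * 3600
field_mV_per_cm = 50.0
gap_cm = 1.0   # assumed electrode spacing (~inner-well diameter); not stated in the paper

duty_cycle = pulse_width_s * frequency_hz      # fraction of time the field is on
n_pulses = int(duration_s * frequency_hz)      # pulses delivered per 2 h session
amplitude_mV = field_mV_per_cm * gap_cm        # voltage needed across the assumed gap

print(f"duty cycle: {duty_cycle:.0%}, pulses per session: {n_pulses}, "
      f"amplitude for {gap_cm} cm gap: {amplitude_mV:.0f} mV")
```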
To examine the effects of electrical stimulation on neuronal maturation and network formation as applied through the hemin-doped SA scaffolds, we examined neurite outgrowth and branching in the hiPSC-derived neurons (Figure 7). With electrical stimulation, we observed a nonsignificant increase in neurite outgrowth on both the glass slides and hemin-doped SA scaffolds compared to the unstimulated groups (Table S3). However, the neurons exhibited the longest neurite outgrowth on the stimulated hemin-doped SA scaffolds (78.14 ± 6.40 μm) among all groups examined. The cells on the hemin-doped SA mats with stimulation also demonstrated significantly more neurite branching compared to all other groups (3.76 ± 0.12 branches). The amount of neurite branching of cells was as follows: on the unstimulated hemin-doped SA mats, 2.92 ± 0.15; on the glass slides with electrical stimulation, 2.60 ± 0.30; and on the glass slides without electrical stimulation, 2.43 ± 0.17. DISCUSSION The restoration of functional nerve tissue after injury is an intricate process requiring multiple stimuli from the microenvironment. 5 Here, we present the first report of a hemin- ACS Applied Materials & Interfaces Research Article doped SA scaffold in neural TE, and demonstrate its ability to synergistically provide topographical, biochemical, and electrical stimuli to actively enhance cellular responses. Our initial characterization of the biointerface with SEM imaging revealed that, while the nondoped and hemin-doped SA scaffolds exhibited a similar fiber diameter, the fiber diameter increased significantly on the hemin-doped scaffolds compared to the nondoped SA scaffolds after coating with PDL-laminin. This also correlated with the presence of putative protein aggregates and a general increase of surface roughness along the fibers. We also observed significantly more laminin adsorption on the hemin-doped SA mats compared to the nondoped SA mats, and a more stable laminin coating on the hemin-doped SA mats. Together, these results would suggest that the difference of the morphology and diameter between the nondoped and hemin-doped mats after laminin coating could possibly be related to the difference in their ability to adsorb extracellular matrix protein such as laminin. The hemin dopant could be a key regulator in this process, where the electrostatic interactions between hemin and SA affect substrate-dependent differences in peptide and protein adsorption, which offers additional TE advantages. Previously, to improve cell−material interaction, studies have shown that an increased surface roughness in an optimum range and a large surface area can increase cell attachment and cell−material integration advancing bioelectronic interfaces. 46,47 In addition, ACS Applied Materials & Interfaces Research Article extracellular matrix proteins can also dynamically regulate cell behaviors, with laminin being especially shown to guide and promote neuronal differentiation and neurite outgrowth. 48 By examining cell viability, proliferation, and differentiation, we found that, on the nondoped SA mats, hiPSC-derived NSCs tended to group in clusters. By contrast, on the hemin-doped SA mats, the cells exhibited better cell attachment and performance across the whole mat. 
In summary, the properties of the laminin-coated hemin-doped SA scaffolds could provide surface roughness, high surface area, interconnected porosity, and higher protein adsorption propensity, as well as the ability to support cellular attachment, growth, and differentiation. Together, these findings demonstrate the potential use of our scaffolds as an attractive biomaterial for neural interfaces. Since the addition of bioactive factors into TE constructs has been known to improve cell−tissue interactions, we further examined the potential of our hemin-doped SA scaffolds for bioactive molecule release. Previous studies have successfully delivered bioactive factors, such as growth factors and neurotrophic factors, through TE substrates via physical incorporation, chemical conjugation, and polymeric microsphere delivery. 49−51 Numerous studies have demonstrated the incorporation of nerve growth factor into 2D conductive substrates, and recently also into 3D conductive scaf-folds. 23,52−55 For example, Lee et al. fabricated PPy-coated electrospun poly(lactic acid-co-glycolic acid) (PLGA) nanofibers and chemically immobilized NGF onto their surface. 56 Zeng et al. also synthesized conductive NGF-conjugated PPypoly(L-lactic acid) (PLLA) fibers through oxidation polymerization and EDC chemistry. 57 Because the stability and functionality of growth factors is critical but difficult to maintain during chemical incorporation, 51 our SA system with its innate property as a natural transport proteincould be an advantageous platform for delivering biomolecule stimuli. In the study, we showed that our SA-based hybrid system was able to physically incorporate the model growth factor FGF2, and eliminate relatively complex chemical reactions and polymeric microsphere preparation. Our results also showed a functional outcome of increased proliferative cells on the FGF2-incorporated SA scaffolds compared to nonincorporated mats, and demonstrated for the first time that an electrospun SA scaffold could be used for the incorporation and release of bioactive molecules. It was also interesting to find a trend of higher incorporation and higher release of FGF2 in the hemindoped SA scaffolds similarly to what was observed with the laminin incorporation. Although the specific means by which hemin regulates protein incorporation remains unclear, we speculate it could be due to a combination of the following: (1) ACS Applied Materials & Interfaces Research Article the electrostatic effects of hemin to the SA substrate, (2) hemin's effects on SA's FGF2 binding sites, and (3) the effects of the increased laminin adsorption on both electrostatic incorporation and the binding affinity of FGF2. This would suggest that hemin-doping of the SA scaffold, besides conferring electroactive properties to the constructs, can also enhance its bioactive applications. In our study, we also tested the potential of our hemin-doped SA scaffolds for in vitro electrical stimulation application. Previously, Schmidt et al. reported that extracellular electrical fields of 100 mV for 2 h applied with an oxidized PPy film on PC12 cells could increase neurite outgrowth. 58 Recent studies also reported that, with electrical stimuli of 100 mV/cm for 2 h, PC12 cells on PPy-coated PLGA nanofibers and NGFconjugated PLLA fibers showed increases in neurite outgrowth and extension compared to the unstimulated controls. 
32,57 In our study, we decided to work with even lower electrical fields of 50 mV/cm at trains of 50 ms, 2 Hz electrical pulses, since this electric stimulation protocol did not adversely affect cell viability in our system and could potentially recapitulate the endogenous bursting of human pluripotent stem cell-derived neurons. 59 It is generally recommended to work with the lowest electric fields possible to avoid undesirable electrical phenomena next to the electrode, such as water splitting or the reduction/oxidation of ions. 43 We found that increasing the electric field to 100 mV/cm resulted in unwanted cell death (data not shown), which might have been related to the tolerance of our human clinically relevant cells to high electric fields. Following electrical stimulation, our glass control exhibited an increase in neuronal differentiation compared to the unstimulated glass control, in line with previous studies which showed that electrical stimulation increased neuronal differentiation in human stem cells. 60−62 The effects of electrical stimulation are known to vary according to cell type, substrate condition, and the exerted intensity. 32,43−45 In particular, comparing the effect of electrical stimulation on the differentiation potential between immortalized cell lines and iPSC-derived neural progenitors has proven especially difficult, since iPSC-derived cultures are inherently more sensitive to change in culture conditions. However, in our experiments, the overall viability of our cells and a trend to increased neuronal differentiation after electrical stimulation suggested that our applied stimuli are biocompatible and sufficient to modulate cellular behavior. On the other hand, cells on the hemin-doped SA scaffolds exhibited a significantly higher neuronal differentiation, and there was no significant difference between the unstimulated and electrically stimulated groups. This observation could have been the result of the intrinsic properties of the hemin-doped SA scaffolds inducing NSC differentiation under basal conditions; the electrical stimulation could thereby not exert any additive effects, since the population was uniformly differentiated. Hemin has previously been reported to have neurotrophic effects that promote survival and induce neurite outgrowth in both neuroblastoma cell lines and neurons derived from neural crests. 63,64 Other studies have shown that hemin is potentially neurotoxic via various oxidative and nonoxidative mechanisms. 65,66 The precise biochemical mechanism by which the hemin acts in the SA scaffold to ACS Applied Materials & Interfaces Research Article preferentially give neuronal differentiation will require further elucidation in future studies. Beyond its effects on neuronal differentiation, electrical stimulation on the conductive SA constructs proved to be a very effective means by which to modulate neuronal maturation responses. Indeed, we observed significant morphological changes of the hiPSC-derived neurons, and especially when it came to neurite branching. Previous studies reported that electrical stimulation enhances neurite outgrowth and neurite branching in human neuroblastoma cell lines and animal cells. 41,53,58,67 While the effects of electrical stimulation have been widely studied, the mechanisms are not yet fully understood. 
19,22 Some important mechanisms have been proposed for the mediation of electric signals including (1) membrane proteins, which undergo conformational change and induce integrin-dependent signaling; (2) the modulation of voltage-sensitive Ca 2+ channels and voltage-sensitive smallmolecule transporters (i.e., serotonin) inducing ion and small molecule influx, and further triggering downstream signaling; (3) voltage-sensitive phosphatase activity, which affects phosphoinositide-sensitive signaling; (4) changes in the cytoplasmic content of H + , K + , and other ions; (5) electrical stimulation reorganization of membrane receptor distribution, which affects actin filaments and microtubules and further amplifies the gradient of intracellular Ca 2+ ; and (6) electrophoresis of morphogens through the cytoplasm. 20,22,68 It has also been shown that electrical stimulation induces gradients of ions and molecules within tissue fluid, culture medium, and cell culture substrates, and affects both protein adsorption and the macroscopic protein structure in the extracellular environment. 58,67,69,70 Our use of a conductive scaffold added an additional dimension of complexity, since it introduced an electronic/ionic current within the scaffold itself in addition to the ionic current in the solution. 35 Using a very low electric field in our study allowed us to try and pinpoint the effect of electrical stimulation on the scaffold by avoiding additional effects on electrophoresis and conformational changes of proteins, along with the redox effects in the cell culture media and extracellular environment. As shown above, the main difference found for the hiPSC-derived neurons on the hemindoped SA scaffolds (with or without electrical stimulation) was in the neuronal structures associated with maturation, such as neurite branching. We propose that the electrical stimuli applied through the hemin-doped fibrous mats simulate physiological neuronal activity and subsequently induce large neurite branching. CONCLUSION In this study, we present a neural TE platform based on the hemin-doped SA scaffold. This scaffold can actively provide a supportive microenvironment and present topographical guidance, bioactive molecule incorporation, and electrical stimulation to promote cell engraftment, proliferation, and differentiation. Our scaffold is biocompatible and supports the culture and differentiation of clinically relevant iPSC-derived populations, and is capable of incorporating and releasing growth factors to modulate cell behavior over long periods of time. With optimized electrical stimulation parameters, we have also successfully achieved structural maturation with enhanced neurite branching. Our hemin-doped SA-based constructs represent a valuable new platform by which to satisfy the major essential needs in neural TE with clinical application, namely, the combination of autogenic cells with a feasible artificial fabricated autogenic tissue engineered construct. * S Supporting Information The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsami.7b18179. Further experimental details and results, including the schematic of the electrical stimulation setup and waveform in Figure S1, conductivity measurements of the SA-based scaffolds in Figure S2, and more detailed results for cell proliferation, neuronal differentiation, and neurite outgrowth in Tables S1−S3 (PDF)
Benchmarking Android malware analysis tools

Today, malware is arguably one of the biggest challenges organizations face from a cybersecurity standpoint, regardless of the types of devices used in the organization. One of the most heavily attacked mobile operating systems today is Android. In response to this threat, this paper presents research on the functionalities and performance of different analysis tools for malicious Android application packages, including one that uses machine learning techniques. In addition, it investigates how the use of these tools streamlines the process of detection, classification, and analysis of malicious APKs for Android operating system devices. The tools that use artificial intelligence techniques prove more efficient than current tools that do not use them. In this way, new approaches can be suggested for the specification, design, and development of new tools that help to analyze, from a cybersecurity point of view, the code of applications developed for this environment.

Introduction

In recent years, the amount of malware on smartphones running Android operating systems has increased rapidly, mainly due to the complexity of developing and maintaining the modern operating systems that manage these devices. Today, this type of threat has become one of the biggest security problems facing any organisation. Because of current advancements in programming, the creative ways in which developers hide malicious code (malware obfuscation), and the added factor of hyper-availability and hyperconnectivity in today's world, malware investigation, analysis, identification, and classification are becoming a real and increasingly difficult problem to deal with. Android is an open-source operating system with more than 1 billion users, covering many devices such as smartphones, tablets, Internet of Things (IoT) devices, gadgets, and so on. Cybercriminals are well aware of the weaknesses of a large percentage of ordinary users, who are unaware of the importance of the data that are exposed every day and every minute on the network, waiting to be "stolen" for fraudulent use.

The amount of sensitive data currently processed and stored on these devices is increasing the number of attacks [1], which is a problem of concern to society. It is a priority for organisations to use tools to analyse, detect, and classify malware on devices using the Android operating system. The malicious payload carried by these malware executables can be defined as "any code added, changed or removed from a software system to intentionally cause harm or subvert the intended function of the system", the definition used by McGraw and Morrisett in [2].

In the last decade, many methods based on machine learning and data mining have been applied to detect and classify intrusions and malware, with many clustering and classification techniques involving the cataloguing of malware into known families or the identification of new families of malicious code. This problem, commonly addressed by manual procedures, has taken on additional dimensions involving the use of new tools capable of automating the process for large numbers of suspicious Android Application Packages (APKs). Among these, machine learning (ML) techniques represent a promising approach.
The use of ML techniques for the specific task of malware analysis is largely due to the idea that artificial intelligence (AI) can automatically learn from the study of data, identify patterns, and make decisions with little human interference, and thus automate the building of analytical models.In other words, this technique allows data to be taken and broken down and then converted into predictions. ML significantly reduces effort, saves time, and is a cost-effective tool that replaces multiple teams working on analysing, processing, and performing regression tests on data.It provides accurate results and helps organisations build statistical models based on real-time data.It has positioned itself as a powerful mechanism for solving diverse, vast, and complex distinct challenges.This concept is classified as a subfield of artificial intelligence (AI), which is a fundamental part of many Data Mining processes, which are concerned with extracting knowledge from enormous volumes of data (datasets).To define the term "machine learning", Kevin P. Murphy's precise definition is used, included in his book "Machine Learning: A Probabilistic Perspective" [3], comprising "a set of methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data or to perform other types of decision making under conditions of unpredictability". In this article, firstly, an analysis of malware targeting this platform, existing malware analysis techniques, and related work are presented.Next, a specific research method (Section 3.1 Methodological design) is designed and developed to carry out the research that aims to evaluate the effectiveness of tools that use or do not use ML techniques, to address the detection and classification of malware on Android devices, using different adapted benchmarks. Then, a series of experiments are performed against the two datasets of malware and goodware (benign applications) APKs (a dataset of 7003 APKs) and a dataset of 106 APKs.We use a method based on several selected metrics to obtain different rankings of the tools, according to different criticality objectives according to the desired weight of TPs and FPs.After analysing the tools, practitioners will be able to choose the most appropriate tools to protect Android-based devices and their malware scanning needs. In short, this article mainly makes the following contributions: • Design of a method for carrying out the research presented in this paper. • A functional analysis of the tools is based on the work in which a comparison of various malware analysis tools available on the Internet is performed. • A comparison based on defined metrics of different tools for detecting malicious code in Android applications. • Based on the comparison made in the previous point, it is determined whether the tools that use ML in the malicious code detection method present better results, advantages, or disadvantages. In conclusion, this research attempts to demonstrate the benefits of using machine learning tools as a method for detecting known families of malicious code for Android applications.The need for the use of more complete tools is justified, providing the essential foundations for establishing a systematic process of malware analysis for Android applications. 
The article is structured as follows.Section 2 surveys our current knowledge of the types of malware analysis techniques and available tools for Android operating systems.Then, Section 3 includes the methodological process followed in carrying out the research, the functional analysis of the selected tools, the process of carrying out the experiments, and the detailed results obtained from the application of the aforementioned method.Finally, conclusions and future research guidelines are included in Section 4. Android Malware Android is one of the most important operating systems for mobile devices today, being used in many devices such as smartphones, IoT, smart TV gadgets, and many others.The first version was released in November 2007 [4], although it was commercialised at the end of 2008.Since then, it has experienced extraordinary growth that has led it to become the most widely used mobile operating system in the world.It is recognised for its open-source code, architecture, and its multiplatform approach as well as its kernel from the Linux operating system. It stands out from the rest of the competitors with a market share [5] of 85% according to official sources, and will reach 87% market share in 2022.Android is not only found in mobile terminals but also various environments such as critical industrial systems, servers, network nodes, telephony, IoT devices, gadgets, tablets, etc. [6].Therefore, cybercriminals have focused their efforts on this operating system, making it the most targeted platform by cybercriminals [7], with over 900 million devices, and over 1 million applications, and increasing its growth year after year. The characteristics of mobile devices stand out for the presence of sensors (GPS, gyroscopes, microphones), open connections (Bluetooth, Wi-Fi), and hosting of third-party applications.But these are not all advantages; all the above-mentioned aspects present security problems.Apart from storing sensitive data, the sensors that incorporate these technologies have been shown to collect information without the user being aware of it.It is for all these reasons that the proportion of malicious software or "malware" has shot up at the same rate as its use. Detecting malware is a difficult task, due not only to the numbers but also to the variety of families available to attackers.In addition, cybercriminals have a variety of techniques at their disposal to bypass the controls of malicious applications, such as hiding malware in code (obfuscation), targeted permission elevation attacks, or API calls [8]. Today, there are two ways of detecting whether the software is malicious or benign.One is "signature-based" (reactive) analysis, which involves rules in detection systems, and antivirus software, which recognises the characteristic patterns of known threats.The other method available to classify and detect whether the software is suspicious or malicious is heuristics (proactive), which comprises observing behaviour or determining whether or not it is benign, using a machine learning system or machine learning [9]. Given that malware expands rapidly, machine learning offers a way to handle such threats, using the collection of known malware and automatically looking for patterns of behaviour, to detect new malware from families not yet classified [10], and thus constantly improve malware detection, without the need to update signatures. 
Malware develops rapidly on any of the known platforms.The method of automatic learning or machine learning offers a way to handle these threats, using the collection of known malware families and looking for behaviour patterns automatically, to detect new malware from families not yet classified, and thus constantly improve in malware detection, with no need to update signatures.The steps that have led to the use of these techniques to analyse and predict malware behaviour are described below [11]. • Descriptive analysis: Knowledge of the past.It reports organisations about "what has happened" and how they can learn from their past actions to settle on better decisions later. • Predictive analytics: It uses different static models and AI calculations to examine past information and anticipates future outcomes. • Prescriptive analysis: Results-based solutions.It uses simulation and optimisation algorithms to guide organisations on a secure path by recommending useful solutions. Android was attacked and threatened by malware in 2010.Not long after this date, the first malware designed specifically for this platform was found, particularly a Trojan (SMS.AndroidOS.FakePlayer [12]).From that time onwards, attackers have repeatedly targeted this platform as the main target of their attacks, mainly due to various reasons, such as its large market share. Based on the Malware Bytes [13] threat catalogue, the different categories of malware most commonly discovered today are given as follows: • Pre-installed.It is a type of built-in malware that can be found mostly in low-budget manufacturers.The case of the UMX mobile phone, financed by the United States, that was manufactured with pre-installed and immovable Trojans is known [13]. • HiddenAds.The second most identified and detected malware is an enormous group of Android Trojans that is classified as Android/Trojan.HiddenAds.It is based on a silent installation in which the only symptoms of HiddenAds are displaying ads aggressively, by any means necessary.This includes but is not limited to ads in notifications, full-screen pop-ups, and on the lock screen.It does not inform users who install HiddenAds applications in advance about advertising behaviour. • Stalkware (Monitoring).The term can apply to any application that potentially allows it to be used to track the user or track others.It incorporates the gathering of the following data and information from others' devices without their consent: GPS location data, call logs, photos, emails, contact lists, text messages, non-public activities on social networks, and other personal information. Malware Analysis There are different methods used to perform malware analysis: dynamic analysis, static analysis, hybrid analysis, and memory analysis.Static analysis includes analysis of a given malware sample without executing it, while dynamic analysis is carried out systematically in a controlled environment [14], and hybrid analysis is a combination of both. 
Methods that fall within the scope of static analysis refer to the extraction of useful data from the executable and do not involve running the specimen in question. This permits the building of efficient and effective patterns to detect malware. However, obfuscation techniques represent a major impediment to the success of this approach [15]. Static analysis incorporates the use of reverse engineering methods to analyse the instruction set that characterises the functioning of the application [16]. In addition, and with a focus on the Android platform, a wide range of features can be discovered through this type of analysis. Data collected from the Android manifest or assets fall into this class.

Figure 1 shows the diagram of the various files and folders obtained once the APK has been decompressed. Because several of these files are stored in a binary encoding, it is necessary to use specific tools to extract human-readable versions of them. The files included in the several folders provide various data that can be employed to categorise sample behaviour. For example, /META-INF/ includes certificates, developer data, and data needed to run the jar file. The resources.arsc file and the res folder are linked to distinct methods for importing resources. The lib folder stores the compiled libraries. Finally, the two files that provide the most useful elements for a malware investigation task are presented in Figure 1: classes.dex specifies the application code in the form of Dalvik bytecode, from which a catalogue of system commands, API calls, or collectors can be recovered. The other significant file is AndroidManifest.xml, which states the list of permissions, package names, and intent-filter relationships [17]. (A short sketch of how this structure can be enumerated programmatically is given a few paragraphs below.)

Dynamic analysis involves a method in which the specimen is run in a monitored environment, where the supervising service records any events or actions that occur during execution [8]. This kind of investigation can provide insights not discovered by the static analysis workflow (mostly because of the use of dynamic code loading). It is, however, significantly more costly and less efficient [18]. A further downside reported in the literature is that malicious code in an application may behave differently once it detects that it is running on a virtual platform.

Another approach, combining both dynamic and static analysis techniques, is hybrid analysis. The advantages offered by each type of analysis can yield more robust classification, detection, and analysis models compared to approaches that take a single point of view. Hybrid analysis [19] is often the most effective approach to use, although its computational cost can be high. In most instances, a two-phase analysis process is the most suitable solution: the first level deals with the static properties that define the nature of the specimen, and where categorisation does not reach a sufficient level of accuracy, a dynamic analysis is then performed.

The article presented by Chakkaravarthy et al. [20] proposed a hybrid analysis method to identify Advanced Persistent Threats (APTs). The suggested technique is called "Behavior-based Sandboxing (BbS)" and uses a mixture of memory, dynamic, static, and system state analysis procedures. In the conference article presented by Aslan and Samet [21], a method is proposed in which dynamic and static analysis tools are used together to determine whether a sample is known malware; using different tools results in an increased malware detection rate.
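To make the APK structure above concrete, the short Python sketch below opens an APK as the ZIP archive it is and enumerates the artefacts just described; the file name sample.apk is a placeholder, and the snippet only lists entries without decoding the binary AndroidManifest.xml (for that, dedicated tools such as Apktool are needed, as discussed later).

import zipfile

APK_PATH = "sample.apk"  # placeholder path to the APK under analysis

# An APK is an ordinary ZIP archive, so the standard library is enough to
# enumerate the files discussed above (classes.dex, AndroidManifest.xml,
# META-INF/, lib/, res/, resources.arsc).
with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()

    has_dex = any(n.endswith(".dex") for n in names)   # Dalvik bytecode
    has_manifest = "AndroidManifest.xml" in names      # permissions, package, intent filters
    print(f"{len(names)} entries | classes.dex: {has_dex} | manifest: {has_manifest}")

    # Count entries per top-level folder to get a quick picture of the layout.
    top_level = {}
    for n in names:
        top = n.split("/", 1)[0]
        top_level[top] = top_level.get(top, 0) + 1
    for folder, count in sorted(top_level.items()):
        print(f"  {folder}: {count} entries")

Note that AndroidManifest.xml and resources.arsc are stored in a binary encoding, so reading their contents requires dedicated decoders rather than this simple listing.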
Finally, one type of analysis that provides very good results is that of the memory of the infected machine.As stated by Montes et al. in the reference article [22], "any process or object in an Operating System will have to pass through its RAM at some point.Some researchers have considered the RAM as an ideal place to perform their malware analysis".This analysis comprises analysing the capture of the computer's physical memory to analyse, identify, and obtain evidence of the malicious activity performed by the malware.In the article by Tien et al. [23], a sandbox solution is presented to observe live memory data and analyse system behaviour using memory forensics methods. This technique is especially useful for the analysis of threats known as "fileless malware" or "memory-based attacks".In the article published by Gadgil et al. [24], they give an insight into how certain types of malware do not install files on the target's hard drive to execute malicious activities.Malware lives directly in memory and can take advantage of system tools to inject code into trusted and safe processes such as javaw.exeor iexplorer.exe. Related Work In preparing this work, an investigation has been carried out on existing methodologies that combine various ML techniques to develop malware classification tools for Android applications. Malware analysis describes a set of methods and procedures that aims to discover the collection of actions that a suspicious specimen file can perform [25].The above permits to us obtain important data to recognise malicious and corrupt payloads.The two different methods in which malware analysis methods can be organised are static and dynamic analysis, but it is also possible to combine the two.Then there is talk of hybrid analysis, which is also possible.Every one of these methods shows various methods aiming to gather important data capable of describing the behaviour of the malicious code obtained from the dataset. DroidMat [26] is a tool in which API calls have been utilised depending on the element they are associated with in runtime.The date associated with permissions, intention actions, or inter-component communications (ICCs) is contemplated.Clustering algorithms permit improved malware behaviour modelling, though Naive Bayes and k-NN run the learning procedure. DroidMiner [27] and DroidAPIMiner [28] are additional instances of work carried out, suggesting API calls as the most important illustrative feature to train malware classifiers. The initially generated Component Behaviour Charts (CBGs) correspond to the present links that connect the API resources and permissions with the actions made.Next, the algorithms Support Vector Machines (SVM), Bayesian Networks (Naive Bayes), Random Forest, and Decision Tree are trained.In DroidAPIMiner, special interest is given to threatening calls during training of the C4.5, ID5, SVM, and k-NN algorithms. There is varied literature that focuses on the usage of ML methods to develop malware detection and classification methods [29].The simple removal of static features beyond the complete description they deliver about application behaviour and intent is the reason for the significant quantity of research conducted, based on the following selflearning algorithms: Naive Bayes, SVM, Decision Tree, and Stochastic Gradient Descent (SGD) [30]. 
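As a hedged illustration of how such self-learning algorithms are typically applied to static features, the sketch below trains a Random Forest from scikit-learn on a toy binary matrix in which each column records whether an APK requests a given permission; the permission columns, labels, and sample counts are invented for the example and do not correspond to any dataset or classifier from the works cited above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy feature matrix: one row per APK, one binary column per requested permission
# (illustrative columns: SEND_SMS, READ_CONTACTS, INTERNET, ACCESS_FINE_LOCATION).
X = np.array([
    [1, 1, 1, 1],   # malware-like permission profile
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 0],   # goodware-like permission profile
    [0, 0, 1, 1],
    [0, 0, 0, 0],
] * 20)
y = np.array([1, 1, 1, 0, 0, 0] * 20)   # 1 = malware, 0 = goodware

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))

The same pattern applies to the other algorithms listed above (Naive Bayes, SVM, Decision Tree, SGD); only the estimator class changes.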
In DroidSIFT [31], API call dependency graphs and likeness metrics to classify and detect zero-day malware allow the training of a Bayes network classifier.Combined with permissions and other system events and calls, this provides an alternate forest model [32].The creators demonstrate that this classifier, founded on the decision tree algorithm, offers better results compared to SVMs.A particular technique known as MOCDroid makes a malware classifier with a transformative method [33]. Another approach already studied previously involves the use of the MosBF framework [34] for analysing malware from a dataset as APK files for both benign or goodware applications or malicious Android applications.Another work like the past one carried out by Jianlin Xu et al. [35] proposes a mechanism for the security evaluation of mobile application (APK) applications using a prototype of a tool called MobSafe that combines static and dynamic analysis techniques to systematically evaluate an Android application. In the work of Asaf Shabtai [36], an anomaly detection system is described that monitors the device frequently for suspicious events and performs machine learning to classify the results as benign or malicious based on the behaviour of the malware.However, this technique damages the device's battery by making multiple requests.In the TaintDriod framework, the device is monitored in real time, and the user is alerted when suspicious activity is presented by an application running on the device. Similar work to the one presented in this article is performed by Agrawal and Trivedi [37], in which they analyse the various types of malware scanning tools for the Android operating system.The paper provides a comparison of the tools, revealing their advantages and disadvantages.It also concludes that most of the tools only perform static malware scanning and do not support bulk scanning of files. In [38], a study of deep learning techniques, one of the groups of devices that typically use the Android operating system, which allows for the detection of malware in the IoT world, is carried out.Finally, Ashawa and Morris [39] conducted a systematic review of various papers on the different techniques for detecting malware on Android.Their main conclusion is that most detection techniques are not very effective in detecting obfuscated and zero-day malware. After carrying out the above study and looking at the result of the comparison of existing techniques and studies carried out to date, it can be seen that there is no standardised use of tools for analysing malware in Android applications using frameworks or security frameworks [40] with a self-learning techniques engine.This is where it is intended to contribute feasible research in this area, which is not yet covered or not with the necessary clarity and specification required by the technical community.The techniques or tools studied have limitations that must be considered when choosing the tool that best suits the needs of malware analysis. Experimental Research This paper explains the benefits of using ML tools as an analysis method to detect known families of malicious code for Android applications.It justifies the need for the use of more complete tools, which offer the indispensable basis to establish the realisation of a systematic process of malware analysis for Android applications. 
The purpose of this article is to explain the benefits of using artificial intelligence. It can automatically learn from studying data, identify patterns, and make decisions with little human intervention, thus automating the construction of analytical models for detecting and analysing malware in applications for Android environments. As can be seen from the state of the art, there are various techniques for doing this. However, there is no standardised way to bring all these techniques together into a procedure or working strategy that efficiently covers all the steps to be followed in the event of a malicious event generated by these applications.

Methodological Design

The following paragraphs describe how the experimental pilot was conducted, how data were obtained from the analyses produced by the chosen tools, and how these data were used to draw conclusions and identify possible future work.

The following research hypothesis is established: "Are tools that use existing machine learning techniques more effective than tools that do not use Artificial Intelligence engines?" This implies that one of the most important objectives of this research work is to establish the benefits of using machine learning tools as an analysis method for detecting known malicious code families for Android applications. Another purpose of this work is to evaluate and test how the use of malware analysis tools for Android devices speeds up obtaining results and analysing and classifying malicious applications. A study of various tools will be carried out, and they will be compared from a functional and performance point of view based on a series of defined metrics. For this experimental pilot, a work program has been developed with the following steps:

1. Choice of the different tools to be analysed. At least one of them must use machine learning techniques.
2. Implementation and configuration of the virtual analysis environment.
3. Dataset construction. Depending on the characteristics of the tools to be analysed, different datasets will be constructed.
4. Selection and definition of metrics to analyse the performance of the tools.
5. Functional analysis of the selected tools.
6. Running the different experiments: execution of the different tools against the different datasets and assessment of the performance of the tools based on the established metrics.
• Experiment 1: Analysis of the AndroPyTool application.
• Experiment 2: Analysis of the MobSf application.
• Experiment 3: Analysis of the online applications.
7. Discussion and lessons learned.

The above procedure is shown in Figure 2. The method used to carry out the analysis of the APKs in the different tools selected for the experimental pilot is shown in Figure 3.

Different Tools to Be Analysed

Research has been carried out on the different malware analysis tools that currently exist for Android operating systems, considering the research hypothesis set out above. For this purpose, a tool that uses machine learning techniques has been selected, AndroPyTool, whose performance will be compared to all the others selected.

A multitude of APK analysis tools can be found on the internet. Many of them impose limits to prevent abuse of the platform or indiscriminate use of it. Others require prior registration to allow more extensive use.

As a general feature, all the tools have an input interface that allows the loading of the malware to be analysed; the big difference is that some of them have an API that allows the automatic loading of the files to be analysed through a script, and others do not. This automatic upload determines whether bulk scanning is possible. Some are available for online use, while others can be installed locally in a laboratory.
Based on the above, a specific set of tools was selected for this research based on their functionality, user-friendly approach, use of the different types of analysis methods, whether they are free to use or not, and their available online option.The selection of online tools was based on the work of Agrawal and Trivedi [37].The tools selected are: Once this work has been completed, an attempt will be made to adapt the possible test scenarios of the selected Android malware scanning tools to their characteristics.In addition, the operation of the tool using machine learning techniques (AndroPyTool), on which this work is based, will be described. In the following sections, the above tools are compared and analysed.To carry out this experimental pilot and to ratify the use of tools with self-learning mechanisms, the results obtained have been compared with the other selected tools. AndroPyTool The AndroPyTool [41] security framework will be used as a reference for this entire study.This tool was designed and applied to automate the method of analysing APKs by extracting representative behaviour, using static and dynamic analysis, to distinguish between infected and benign applications.Using this tool makes it possible to effectively gain diverse behavioural data that would otherwise require a considerable investment of time and personnel resources. AndroPyTool is an open-source Python tool where several scripts are executed sequentially, using machine learning techniques.The data collected during this procedure might be categorised into three distinct types: pre-static, static, and dynamic characteristics [42]. The last phase of the analysis performed by this tool consists of the extraction and processing of features using machine learning techniques, as shown in Figure 4, such as random forest and bagging classifier.It processes all the data collected in the previous stages to obtain the main features of the APK and proceed to a final classification [43]. MobSf MobSf (https://github.com/MobSF/Mobile-Security-Framework-MobSF)(accessed on 18/02/2024) is an open-source framework [35] that combines static and dynamic analysis methods to comprehensively evaluate an Android OS application.It also allows for cloudbased analysis and data mining in a significant amount of time.It is worth mentioning that, although it is a good framework, no system can solve all the difficulties that malware can generate. It contains a set of tools to decode, debug, review code, and comprehensively perform a penetration test, aiming to minimise analysis time with a single tool in a few steps.The most important of these tools is the Static Android Analysis Framework (SAAF) for static analysis and the Security Evaluation Framework (ASEF) for dynamic or behavioural analysis.This tool supports binary files (APK, IPA, and APPX) as well as compressed source code. It is scalable and allows for the easy addition of custom rules.YARA (Yara is a tool designed to identify and classify malware by creating rules to detect strings, instruction sequences, regular expressions, and other patterns within malicious files) rules can be added to classify malicious code based on the characteristics of each sample.From text strings, the rules can identify instruction sequences, regular expressions, or other patterns within the application, for example, if the code contains information to connect to a specific URL.This can be used to find malware variants that are spreading as a type of targeted attack. 
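To give a concrete feel for such rules, the following hedged sketch compiles a made-up YARA rule (it is not one shipped with MobSf) and runs it against a sample from Python via the yara-python package; the URL string and file path are placeholders.

import yara  # provided by the yara-python package

# Illustrative rule: flag samples that embed a specific hard-coded URL.
RULE_SOURCE = r'''
rule suspicious_hardcoded_url
{
    strings:
        $url = "http://example-c2-server.invalid/report" ascii wide
    condition:
        $url
}
'''

rules = yara.compile(source=RULE_SOURCE)
matches = rules.match("sample_decoded.bin")  # placeholder path to a decoded sample
for match in matches:
    print("matched rule:", match.rule)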
Virustotal

This is a well-known open-access online tool (https://www.virustotal.com/gui/home/upload) (accessed on 18/02/2024) that permits checking suspicious files, hashes, APKs, URLs, etc. It includes over 70 antivirus engines, and for every one of them, if the result is positive, a label identifying the type or family of malware found is generated. It allows for rapid detection of many categories of malware. Uploaded files can be up to 150 MB in size.

Intezer Analyzer

This is a malware analysis tool (https://analyze.intezer.com/) (accessed on 18/02/2024) available online [44] with different licensing options, in which it is possible to analyse malicious files in multiple formats (.exe, .dll, .sys, ELF, Zip, RAR, TAR, 7-Zip, APK, .msi, .doc, .xls, .ppt, PDF, PowerShell scripts, .vbs, and .js). It uses a technique called "genetic analysis of malware"; the basic premise is that "all software, whether legitimate or malicious, is composed of previously written code", which allows new types of malware to be identified by comparing their code with previously found threats.

Hybrid Analysis

This is an advanced security tool (https://hybrid-analysis.com/) (accessed on 18/02/2024) developed by Payload Security that classifies, detects, and analyses unidentified threats using a distinctive hybrid scanning technology. It scans suspicious files and URLs and also provides an in-depth analysis of code and programs for Windows systems. It performs both a hybrid analysis using Falcon Sandbox, which combines runtime data, static analysis, and a memory dump, and a multi-scan analysis that uploads the sample to Metadefender and Virustotal. The tool gives a threat score that can be taken as a metric. In addition, incident response and risk assessment reports are provided.

Joe Sandbox

Another available online sandbox (https://www.joesandbox.com/#windows) (accessed on 18/02/2024) is Joe Sandbox [45], which requires registration with a professional email address. It has a Web API, but only for those who have a Cloud Pro account, which comes at a cost. The cloud sandbox offered by Joe Sandbox detects and analyses malicious files and URLs on Windows OS, as well as hash values on different platforms such as macOS, Android, Linux, and iOS, looking for suspicious events. It carries out in-depth malware analysis and creates exhaustive, point-by-point investigation reports. It only allows a maximum of 15 scans/month and 5 scans/day on Linux, Windows, and Android, with limited scan results. The tool has an upload limit of 25 MB, making it ineffective if the purpose is to analyse a dataset with thousands of files (APKs).

Metadefender Cloud

This is an online malware-scanning utility (https://metadefender.opswat.com/) (accessed on 18/02/2024) that provides the ability to upload and scan files up to 140 MB in size. It performs two types of analysis: a static analysis, in which a multiscan is performed with up to 35 different antivirus engines (including McAfee, Kaspersky, AVG, etc.) together with an analysis of the metadata of the APK, mainly the dangerousness of its permissions, and a dynamic analysis with a sandbox, which does not work in the case of APKs.
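Several of the online services described in this section also expose a REST API for scripted submissions. As a minimal sketch based on VirusTotal's publicly documented v3 endpoint (an illustration only, not necessarily the exact mechanism used later in the experiments; the API key and file path are placeholders), a single APK could be submitted as follows:

import requests

API_KEY = "YOUR_VT_API_KEY"   # personal key; the public API enforces daily quotas
APK_PATH = "sample.apk"       # placeholder file to submit

with open(APK_PATH, "rb") as f:
    response = requests.post(
        "https://www.virustotal.com/api/v3/files",
        headers={"x-apikey": API_KEY},
        files={"file": (APK_PATH, f)},
        timeout=300,
    )
response.raise_for_status()

# The v3 API returns an analysis identifier that can be polled later for the
# per-engine verdicts.
print(response.json()["data"]["id"])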
Jotti Jotti's malware scan (https://virusscan.jotti.org/es-ES/scan-file)(accessed on 18/02/2024)is a free service that scans a file against more than 13 antivirus engines, including Avast, F-Secure, Sophos, etc.It permits the upload of up to 5 files simultaneously, up to a limit of 250 MB for the 5 files and 25 MB for each one.It also permits the download and use of a client to upload files without using the browser.Finally, it is worth mentioning that it has an API for bulk file scanning. 3.2.9.Pithus Pithus (https://beta.pithus.org/)(accessed on 18/02/2024) is a free and exclusive opensource malware analysis platform specially developed for the analysis of APKs.It has been recently developed and its current version is in beta.It performs several types of analysis, such as fingerprint, control flow analysis, and threat intelligence; basically, it submits the sample to Virustotal, code analysis, behaviour analysis, and network analysis.It also has a fuzzy tool to verify if the sample belongs to a known malicious family. Implementation and Configuration of the Virtual Analysis Environment The experiment with the AndroPyTool and MobSafe tools has required the implementation of a virtual analysis environment.For the online tools, this was unnecessary.For implementing the virtual analysis environment of the AndroPyToll tool, the Docker tool [46] has been used, as it does not require installing dependencies, downloading the several necessary repositories, or configuring the Android emulator for the dynamic analysis phase. On the Ubuntu machine mentioned above, the MobSf tool is also installed.The aforementioned lab machine will allow the server to run together with the MobSf application to perform static analysis of the APK file, e.g., analysing the source code and the permissions that the application has on the device, along with the dangers that each of these can generate. To perform dynamic analysis, the MobSf tool provides several ways to emulate an Android system, either using a virtual machine in Virtual Box, an ARM emulator, or finally by a physical device; the latter option is the least ideal, as it will infect the device to be tested, so the first option is implemented. In this phase, it is also necessary to guarantee that the analysis can be carried out in its entirety so that the state of the virtual machine can be returned with a snapshot, or that the emulator can return to its previous state so that it does not interfere in each analysis that is carried out.In this sense, the MobSf tool always takes the main snapshot to be able to return to that state so that no problem arises when the analysis of an application is restarted. Once the lab machine has been configured with the application server on which MobSf will run, the applications are uploaded for analysis. Dataset Construction Given the different malware analysis tools selected in this research, different datasets have been constructed according to their capabilities for automated bulk file scanning of the tools through the execution of scripts. 
For the construction of the datasets, we started from the one provided by AndroZoo [47] for testing. This dataset contains over 17,000 different APKs in total, of which 7002 APKs have been used to carry out this work. The way to obtain this dataset is described on a GitHub page, which documents its use and installation [48]. Once the required API key provided by the University of Luxembourg (AndroZoo) is available, the download must be driven from a CSV file with as many entries as there are APKs in the dataset, for example:

curl -O --remote-header-name -G -d apikey=${APIKEY} -d sha256=${SHA256} \
https://androzoo.uni.lu/api/download

To avoid performing this task manually, a script has been designed based on a plain text file (the AndroZoo CSV), from which all fields that are not relevant have been discarded, redirecting the output to this text file, from where the extraction and download have been carried out in a fully automated way (a sketch of such a download loop is given after the experiment list below). The CSV file contains the sha256 key of each APK, which must be available before downloading.

Once enough APKs have been downloaded to carry out a reliable study, it is necessary to make sure that benign files are present in all the experiments to be carried out. A specific dataset of the Canadian Institute for Cybersecurity has been used [49], from which 1602 non-malicious application files have been selected. With all these files and the malicious ones downloaded, a total of 7002 APKs are obtained to carry out the different experiments of this research.

Once the base dataset was obtained, specific datasets were designed for the different experiments to be carried out in this research:

• Experiment 1: Analysis of the AndroPyTool application. The dataset built for the analysis of this tool is composed of 7002 APKs, 1602 of them benign (goodware) and 5400 malicious (malware). This experiment includes many APKs due to the tool's ability to perform mass scans using a script.
• Experiment 2: Analysis with AndroPyTool, MobSf, and the online application tools. The dataset built for the analysis of these tools consists of 53 goodware and 53 malware applications. This dataset is smaller than the previous one given that some of the tools do not allow automated bulk file scans, the high degree of manual interaction they require, and the high time consumption of other tools.
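The download loop referenced above can be sketched as follows; the endpoint and the apikey/sha256 parameters are those quoted in the curl example, while the CSV file name, the column name, and the output folder are assumptions made for the illustration.

import csv
import os
import requests

API_KEY = os.environ["ANDROZOO_APIKEY"]     # key issued by the University of Luxembourg
CSV_PATH = "androzoo_selection.csv"         # assumed: one row per APK with a "sha256" column
OUT_DIR = "apks"
os.makedirs(OUT_DIR, exist_ok=True)

with open(CSV_PATH, newline="") as f:
    for row in csv.DictReader(f):
        sha256 = row["sha256"]
        out_path = os.path.join(OUT_DIR, f"{sha256}.apk")
        if os.path.exists(out_path):
            continue  # skip APKs that were already downloaded
        r = requests.get(
            "https://androzoo.uni.lu/api/download",
            params={"apikey": API_KEY, "sha256": sha256},
            timeout=600,
        )
        r.raise_for_status()
        with open(out_path, "wb") as out:
            out.write(r.content)
        print("downloaded", sha256)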
Regarding the detection and classification of malicious APKs, the tools can be considered binary classifiers, as they usually classify the target APK into one of the following two classes: "clean or trusted" and "malicious". In such a case, the best-performing tools are those with a maximum of True Positives (TP), as they detect the most malicious APKs, and a minimum of False Negatives (FN) and False Positives (FP). It should be noted that it is more important to have a minimum of FN than a minimum of FP, because a malicious APK that has not been detected as malicious will give the user a false sense of security. After all, if the user believes that the APK is not malicious and uses it, when in fact it is, this could cause problems.

The confusion matrix used in the experiment and the meaning of the abbreviations TP, TN, FP, and FN are presented in Table 1.

Table 1. Types of diagnosis.
Negative diagnostic test:
- TN (True Negative): files (APKs) that the tool has correctly classified as negative or goodware.
- FN (False Negative): files (APKs) that come from the malicious dataset but have been classified as non-malicious by the tool.
Positive diagnostic test:
- FP (False Positive): files (APKs) that the tool classifies as infected but that are benign applications.
- TP (True Positive): files (APKs) classified by the tool as infected or malicious that are indeed malware.

Based on the above reasoning, Accuracy (ACC) was chosen as the main metric for this experiment, since it considers TP, FN, and FP. As secondary metrics we use Recall (RE), since it tracks the proportion of correctly classified TP; the False Negative Rate (FNR), or failure rate, for the proportion of FN; and finally the False Positive Rate (FPR), for the proportion of goodware APKs that are erroneously classified as positive. (These definitions are restated in code form in the short helper sketched at the end of this subsection.)

• Accuracy (ACC): the ratio of correctly identified APKs divided by the total number of files analysed. This metric allows us to evaluate the total number of correct predictions over the total number of test cases.

ACC = (TP + TN) / (TP + TN + FP + FN)

• Recall (RE): also known as the True Positive Rate, it determines the quality of the detection capacity and shows the proportion of infected APKs that are detected. In terms of our research, this indicator determines the ability of the tool to predict malware within the group of infected APKs. It measures the proportion of True Positives that are correctly identified.

RE = TP / (TP + FN)

• False Negative Rate (FNR): also known as the failure rate, it indicates the proportion of all malicious APKs incorrectly classified as negative or trusted.

FNR = FN / (FN + TP)

• False Positive Rate (FPR): represents the proportion of goodware APKs that are incorrectly classified as positive, i.e., malicious. It is also called "fall-out".

FPR = FP / (FP + TN)

Some of the online analysis tools do not perform a binary classification but add further cases, such as "suspicious" or "unknown". In these cases, the reported classification is aggregated into one of the two main classes, "clean or trusted" or "malicious".
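As announced above, the four metrics follow directly from the confusion-matrix counts; the helper below simply restates the formulas in code (the example counts at the end are made up and are not results from any of the experiments):

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, Recall, False Negative Rate, and False Positive Rate
    computed from raw confusion-matrix counts, as defined above."""
    total = tp + tn + fp + fn
    return {
        "ACC": (tp + tn) / total,   # correct predictions over all test cases
        "RE":  tp / (tp + fn),      # proportion of malware APKs that are detected
        "FNR": fn / (fn + tp),      # malware APKs wrongly labelled as trusted
        "FPR": fp / (fp + tn),      # goodware APKs wrongly labelled as malicious
    }

print(classification_metrics(tp=50, tn=48, fp=5, fn=3))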
Functional Analysis

The functional analysis of the tools is based on the work carried out in the reference article [37], in which a comparison of several malware analysis tools available on the Internet is performed. Some tools that are no longer available for APK analysis, such as AVC Android, NVISO, and VirSCAN, have been discarded, and other new tools are added to the comparison, including AndroPytool, MobSf, Joe Sandbox, Metadefender, Jotti, and Pithus. In addition, other comparison parameters are added, such as type of application, limitations, options, advantages, and disadvantages. The result is shown in Tables 2 and 3. As a conclusion of the functional analysis, the most important features that a malware analysis tool for Android operating systems should have are the ability to perform bulk file scanning for automatic scans of multiple applications, adequate processing times of between 1 and 5 min, different analysis techniques including machine learning to improve the results, and the possibility of providing several output formats, such as JSON, CSV, etc.

Experiment 1 with AndroPytool

To study AI techniques such as ML in tasks of analysis, classification, and detection of files (APKs) possibly infected with malicious code, an experiment has been carried out with AndroPyTool against the two datasets constructed: one based on 7002 APKs, 1602 of them goodware and 5400 malware, and the other based on 106 APKs, 53 of them goodware and 53 malware. Many APKs are included in this experiment due to the tool's ability to perform bulk scans via scripting. The execution of the experiment is described below. The first step is to run the tool against the dataset described above, using the following command:

$ docker run --volume=</PATH/TO/FOLDER/WITH/APKS/>:/apks alexmyg/andropytool -s /apks/ <ARGUMENTS> --single --filter -vt (VirusTotal API key) -cl -csv EXPORTCSV -colour -all

where "volume=" is the path where the APK files to be analysed are stored. The argument chosen for this analysis is the "-all" parameter, which makes use of all the analyses.

According to the analysis phases of this tool, explained in Section 3.2.1, it first filters the files found, depending on whether or not they are considered valid for further analysis. To do so, it renames the folder containing the files to be analysed and creates two other folders to separate infected and benign applications (malware and benignware). The next step is to analyse the applications with the available VirusTotal reports (Figure 5). The third step is an internal classification to discriminate malicious APKs from benign ones. After this classification, the program runs the built-in FlowDroid tool, explained in Section 3.2.1 as part of the compendium of tools offered by this hybrid analysis system (Figure 6).

In this phase, the tool installs the application in an internal sandbox to observe its live operation and discover its dynamic behaviour. This whole process can take several days. The most time-consuming step in this experiment was the analysis with FlowDroid (Figure 7). If the time required to download the dataset is included, it took about two months, running 24/7, to extract all the information necessary for this study. The data obtained need external analysis with the help of tools that facilitate their visualisation, for which the data have been exported to a CSV file and subsequently transformed into the data displayed through Google DataStudio.
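As a hedged sketch of this export step, the snippet below walks a folder of per-APK JSON reports and flattens the fields used for classification into a single CSV; the folder name, the output file, and the thresholds that separate goodware, suspicious, and malware are illustrative assumptions, while the field names (sha256, total, positives) follow the report excerpts reproduced in the next paragraphs.

import csv
import json
from pathlib import Path

REPORT_DIR = Path("reports")       # assumed folder holding one JSON report per APK
OUT_CSV = "classification.csv"

def classify(positives: int) -> str:
    """Illustrative thresholds; the exact scale used in the study may differ."""
    if positives == 0:
        return "goodware"
    if positives < 5:
        return "suspicious"
    return "malware"

with open(OUT_CSV, "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["sha256", "total_engines", "positives", "classification"])
    for report_path in sorted(REPORT_DIR.glob("*.json")):
        data = json.loads(report_path.read_text())
        positives = int(data.get("positives", 0))
        writer.writerow([
            data.get("sha256", ""),
            data.get("total", 0),
            positives,
            classify(positives),
        ])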
Once all the data provided by this tool have been obtained, the JSON files have to be formatted for further analysis. The roughly 7000 JSON files (Figure 8) have been converted to Excel format and given a more intuitive layout from a user's point of view for later handling (Figure 9). After dissecting each JSON file individually, a scale is applied to determine which files are infected, suspicious, or goodware, or, conversely, whether they have not been analysed because they remain in an unknown status. Two examples of the underlying reports are shown below:

{"verbose_msg": "Scan finished, information embedded", "total": 61, "positives": 0, "sha256": "001f91177291bb5fe2b23d43674c85b76f56de677da13ab9a73eb996662e705b", "md5": "5cb3d50e80f74a526d9d59de7db26113"}

{"verbose_msg": "Scan finished, information embedded", "total": 64, "positives": 23, "sha256": "000a69d61dc389579b9b931c3c04bbe287b37e471f1c97c4326143665f34c3a6", "md5": "5a322ac4862e8521ae844dd95327c705"}

After filtering all the data and keeping only those that are conclusive for the present work, the results are shown graphically with the Google DataStudio tool. This representation has two different views, a more general one, as can be seen in Figure 10, and a more detailed one, as can be seen in Figure 11. It is also worth noting that the data can be filtered by different fields, thus offering an interactive way of visualising them.

Figure 10 shows the process performed for the transformation of the JSON files, obtained from the analysis performed by AndroPytool, into a classification between goodware, suspicious, and malware. Figures 11 and 12, respectively, show the data in graphical format, where they can be filtered by various fields, such as scan_id, classification, or the status of the analysed files. In addition, the result of the analysis can be compared with the source dataset, imported as "is_Benign". To address this, pre-processing has been necessary, in which the data have been filtered by the fields determined for this output.

Figure 13 shows the process and transformation of the data before obtaining the data required for this work. As can be seen, the filtering and classification of the data, together with the understanding of the classification produced by the tool itself, represent a substantial contribution. We present a sample catalogue that is faithful to the data captured. The results are available in the Google DataStudio Dashboard at the URL https://datastudio.google.com/u/0/reporting/c45b67e1-8797-4f05-a10c-6aff59db6827/page/2531B given in the footnote.

Later, the same analysis process is performed against the 106 APK dataset. The experiment against the first dataset provides a more precise measurement of the selected metrics, while the second one provides a set of measurements for comparison against other tools. Finally, the calculation of the metrics defined in Section 3.5 is carried out and the results are presented in Table 4.

Experiment 2 with MobSf

As stated in Section 3.4, an analysis of the MobSF tool with a dataset of 53 benign and 53 malicious applications is performed in this experiment. The reason for using a dataset with a smaller number of APKs than the one used in experiment 1 is that, with this tool, certain manual operations must be performed for the dynamic analysis and the file loading.
A specific methodology has been designed for conducting this experiment so as to cover all possible attack surfaces on a mobile device running the Android operating system. Figure 14 shows the process diagram of the proposed methodology. As shown in this diagram, the first task is to load the APK under analysis into the Mobile-Security-Framework (MobSf) tool, which has been installed in an isolated environment.

As a second step, the hash of the application under analysis must be compared against a database that records other applications already analysed and tested, in order to classify it and define it as malicious or not. If the hash of the application exists in the application database, it will provide information from previous analyses to determine and classify the malware without having to perform another analysis after this step.

If the hash of the application does not exist in the database, a static analysis (SAAF) can be performed, using the MobSf tool to obtain the application's permissions from the global manifest configuration file, or another Android tool called Apktool, which extracts the manifest.xml file from any application. From this file, it is possible to see the permissions that the application has and to categorise the risk of each permission, as many permissions grant access to sensitive information that should not be accessible. This step gives an insight into the possible exploitation points of the application.

After performing the static analysis, a dynamic or behavioural analysis (ASEF) is performed, in which the MobSf tool runs the application in an Android virtual machine, or on a device configured with the tool, to detect runtime problems. Within this type of analysis, captured network packet logs are analysed by decrypting HTTPS traffic, along with log reports, error logs, debugging information, and memory stack traces.

After these analyses, the information obtained is used to classify the malicious application, and the hash of the application is stored in the MobSf tool database so that, in a subsequent analysis, this application can be confirmed as malicious at the beginning of its analysis.

The following is an example of the analysis of one application infected with malware, to illustrate the proposed methodology in practice.

DroidKungFu. This application was the first Android malware to bypass antivirus software and take control of the phone by creating a backdoor. This malware is considered an evolution of DroidDream, the first large-scale Android virus, with the difference that DroidKungFu can avoid detection by security or antivirus software. The first step of the methodology is to collect and search the hashes of the application, to classify the sample in case previous research already exists. Although this application is known, in this case it is analysed with MobSf. From the manifest file, it was possible to retrieve the list of permissions used by this application. When analysing the application code, the MobSf framework shows, as a summary, that the application has many classes with unsafe random codes used to make recursive calls to instances across the entire application. It also shows that the application has a method to obtain the location of the device by GPS and network. It then makes an HTTP connection to the following URL: http://app.waps.cn/action/account/offerlist (accessed on 18/02/2024) to send information about the device and its location. This application interacts directly with the user, making use of the activities found in the analysis, since it does not have any service running in the background.
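Before moving on to the behavioural phase, the permission-oriented part of the static step just described can also be scripted outside MobSf. The hedged sketch below shells out to the Android SDK's aapt utility (assumed to be installed and on the PATH) and flags permissions against a small, illustrative list of dangerous ones; the sample path and the risk list are assumptions for the example and do not reproduce MobSf's own categorisation.

import subprocess

APK_PATH = "droidkungfu_sample.apk"   # placeholder sample path

# Illustrative subset of permissions commonly treated as high risk.
HIGH_RISK = {
    "android.permission.SEND_SMS",
    "android.permission.READ_SMS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECEIVE_BOOT_COMPLETED",
}

# "aapt dump permissions" prints one line per declared permission; the exact
# line format varies slightly between aapt versions, so both variants are handled.
output = subprocess.run(
    ["aapt", "dump", "permissions", APK_PATH],
    capture_output=True, text=True, check=True,
).stdout

permissions = set()
for line in output.splitlines():
    line = line.strip()
    if line.startswith("uses-permission:"):
        perm = line.split(":", 1)[1].strip().replace("name=", "").strip("'")
        permissions.add(perm)

print(f"{len(permissions)} permissions declared")
for perm in sorted(permissions):
    print(f"  {perm:55s} {'HIGH RISK' if perm in HIGH_RISK else 'ok'}")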
In the next phase, a behavioural analysis of the sample is performed: the MobSf framework allows running an automatic interactive analysis to obtain relevant information, in which information is sent from the device to the URL described in the previous phase and, as a result, commands are returned to perform certain actions, as shown in Figure 15.

When performing the static analysis of the permissions found in the application samples of the dataset, it was found that infected applications access more permissions than healthy ones. Malicious applications access over 30 different permissions of eight types, while benign applications access around 16 permissions. Finally, the calculation of the metrics defined in Section 3.5 is carried out and the results are presented in Table 5.

Experiment 3 with the Online Tools

A study of different online analysis tools for APKs of Android systems has also been carried out. The dataset of 53 benign and 53 malicious APKs has been used for this experiment, since manual operations are also required to load each file under analysis. Specifically, the following online tools have been analysed: VirusTotal, Hybrid Analysis, Joe Sandbox, Intezer Analyzer, Metadefender, Jotti, and Pithus. The experiment involves loading the 106 APKs from the dataset and classifying each result into one of four categories: TP, FP, TN, or FN. The results are shown in Table 6. As outlined in Section 3.5, some of the online analysis tools (Intezer Analyzer, Hybrid Analysis, and Joe Sandbox) do not perform a binary classification but add further cases, such as "suspicious" or "unknown"; to obtain a binary classification, these additional verdicts were aggregated into the two main classes. It has also been considered, for all tools using a static multiscan antivirus engine (Virustotal, Jotti, and Pithus), that a single positive detection by any of the engines in the group counts as a positive result, i.e., a TP in the case of malware APKs and a false alarm (FP) in the case of goodware APKs.

A weakness of the online tools is the daily and monthly scan limits, along with the need to create a paid account if the results of the analysis are not to be made public. To carry out a more extensive analysis with more capabilities and eliminate any type of limitation, apart from the financial outlay, the tool administrator must approve that the intentions are legitimate in order to give access to all the tools and reports available. This approval is a process that can be delayed in time, because of poor maintenance and nonexistent communication with the support staff.
Discussion and Lessons Learned The following table shows the results of all the experiments carried out.An important aspect to consider when analysing the results of all the tools shown in Table 7 is that the results gathered with the AndroyPytool tool are more accurate than those obtained with the other tools, since the dataset used is much larger: 7002 versus 106 APKs.As already explained in previous sections, this was possible because of the automatic bulk loading of the APKs to be analysed.However, to be able to make a comparison against the other tools under investigation, the tool has also been run against the 106 APK dataset.The results shown in Tables 7 and 8 and Figures 16 and 17 show that the AndroPytool tool obtained the best performance in all the metrics.The value of these was as follows.Although the AndroPytool tool presented the best results, it is worth mentioning the results of the MobSF tool, which has the second-best prediction and the same detection capability (Recall) as the mentioned tool.Furthermore, in some circumstances, the tool cannot replace the analysis and observations to be performed by malware analysts.In this sense, MobSF is a tool that presents a complete framework for analysts to perform APK analysis manually. Table 9 shows a comparison of the results obtained in this research with the AndroPytool concerning other tools of similar works obtained from the review of the state of the art.The metric included for comparison is the one that all have used in common, namely Accuracy. Concerning online tools, it should be noted that the results of the metrics for this type of tool are not very good, leading sometimes to confusion with states such as unknown or suspicious.Virustotal and Pithus obtained the best results in terms of accuracy and detection capability (Virustotal with an ACC = 0.943 and RE = 0.981 and Pithus with an ACC = 0.925 y RE = 0.962).Virustotal has the same detection capacity as the two best tools.In the specific case of Pithus, the tool is a recent creation (beta version), so it is considered that the margin for improvement of this tool is wide, so it can be assumed that in the future, its results will be better. As stated in the previous paragraph, Metadefender and Jotti have the lowest fall-out or false positive rates.The tools that use techniques such as hybrid and behavioural or dynamic analysis (Joe Sandbox, Hybrid Analysis, and Intezer analyzer) generate many false positives within the group of goodware APKs so they do not obtain good results, unlike tools that perform mainly Static Multiscan Antivirus analysis, if they obtain them.On the other hand, Jotti, Metadefender, and Inteze Analyzer have the worst false negative rate, which means they are the worst at detecting and classifying malware APKs. Regarding the hypothesis established for this research, "Are tools that use existing machine learning techniques more effective than tools that do not use Artificial Intelligence engines?" it can be affirmed that the tool that uses machine learning techniques, AndroPytool, is more effective than the others analysed that do not use artificial intelligence techniques. 
Finally, the main limitation of the research was that several of the tools available online could not perform bulk scanning of files.As a consequence, three types of experiments (experiment 1: Analysis of the AndroPyTool application, experiment 2: Analysis of the MobSf application and experiment 3: Analysis of the online applications) had to be carried out with different smaller datasets in experiments 2 and 3, when the ideal would have been to carry it out with only one, experiment 1, which had 7200 APKs.Another concern has been the amount of time it has taken to perform scans on applications that did not allow bulk uploading of files. Conclusions Proper malware detection is a very important aspect of today's mobile technology.With the increase in malware daily, there is also a need for a suitable malware detection scheme with a robust malware detection scanning tool.Based on the limitations of existing Android malware scanning tools, it can be concluded that most of the tools only perform static malware scanning. Most of the tools only provide file upload as input and support small file upload to perform malware scanning.The tools also do not support bulk scanning of files.The time taken by the tools to scan a single file is also high. There is a need for a robust malware scanning tool that overcomes all the limitations of existing scanning tools, performs hybrid scanning, and can be deployed as a service.In addition, the results of existing scanning tools can be combined and provide a more detailed and appropriate summary report. Throughout this paper, the problems associated with Android malware classification and detection have been shown and developed from several viewpoints, all of them within the framework of using ML techniques. Because of the present research, it has been demonstrated how the use of ML tools, such as AndroPytool, improves the detection and classification of malicious APKs compared to those obtained using other types of tools already discussed during this work.Furthermore, as shown in Table 9, it is the tool that obtains the best results compared to others analysed in various studies in the literature. The detection and classification of malicious APKs (malware or malicious software) is an important technique that allows the assignment of a given specimen of malware to its corresponding family.This allows improved tracking of the several current families, improved detection of zero-day specimens, and detection of different variations of known malware families.These techniques are fundamental in the fight against malicious software, given the different types of economic damage and data confidentiality that can occur if a device is successfully infected.Knowing the malware family, therefore, facilitates the adoption of practical measures to prevent its spread and minimise its impact. Although the results achieved have revealed a high-level rate of accuracy and detection capability, it is considered that there is still some room for improvement, so it is necessary to analyse new functionalities, representations, learning algorithms, and data processing techniques.Further research of other features should be carried out, building with more files, obtaining data from compiled libraries, and in the presence of functions that load dynamic code.Other research that can be carried out is the improvement of existing malware classifiers. 
Finally, it is also worth highlighting the functionalities and capabilities of the MobSf framework, since it allows the analyst to reduce detection time by having multiple tools in one, which would otherwise require separate work: decoding, debugging, code review, and penetration testing. Therefore, the framework also allows the automation of repetitive tasks.
Future Work
From the results obtained in this research, it is considered that the application and integration in AndroPytool of new ML algorithms and tools would allow a broader set of features to be extracted, which could improve the classification and detection capabilities of the tool. Other improvements to be highlighted include the following:
• Growing the number of both malicious and goodware APKs.
• Improving the data characterising the classification of the different malware families.
• Performing a detailed study of the features to be extracted with the different analysis methods to improve the results obtained in the detection and classification of malware.
• Improving the processing of the data obtained.
Another area for improvement could be to take advantage of the JSON files obtained during the analysis to display an HTML-based graph of the results. Another plausible improvement could be a real-time estimation that, according to the number of APKs and their size, gives an approximation of the waiting time that the tool would need to finish the whole process.
Figure 1. File and folder structure after unzipping an APK.
Figure 3. Method used to carry out the analysis of the APKs.
Figure 5. Comparison with the VirusTotal database.
Figure 9. Conversion of the JSON file into CSV to be treated.
Figure 10. Filtering of JSON files for conversion to CSV to be processed.
Figure 15. Response when sending device data from the DroidKungfu application.
Table 1. Types of diagnosis.
Table 4. Results obtained with the AndroPytool.
Table 6. Results obtained with the online tools.
5. Functional analysis of selected tools. 6. Running the different experiments: execution of the different tools against the different datasets and carrying out the different analyses and assessments of the performance of the tools based on the established metrics (Experiment 1: analysis of the AndroPyTool application; Experiment 2: analysis of the MobSf application; Experiment 3: analysis of the online applications). 7. Discussion and lessons learned.
The first step of the methodology is to collect and perform a search of the hashes of the application, to classify the sample in case previous research already exists. Although this application is known, in this case the application is analysed with MobSf. From the manifest file, it was possible to retrieve the permissions used by this application.
• Intezer Analyze: made up according to the classification produced by the tool in the group of malware APKs (seven trusted, thirteen unknown, thirty-one malicious, and two suspicious) and in the group of goodware APKs (seventeen trusted, thirty-one unknown, zero malicious, and five suspicious). Those classified as malicious and suspicious are added together, as are those classified as trusted and unknown, thus obtaining 32 TP, 20 FN, 48 TN, and 5 FP.
• Hybrid Analysis: taken according to the classification produced by the tool in the group of malware APKs (three non-specific threats, forty-two malicious, and seven suspicious) and in the group of goodware APKs (nineteen non-specific threats, six malicious, and twenty-eight suspicious). Those classified as malicious and suspicious are added together, thus obtaining 49 TP, 3 FN, 19 TN, and 34 FP.
• Joe Sandbox: according to the classification produced by the tool in the group of malware APKs, six were clean, five suspicious, and thirty-two malicious, and in the group of goodware APKs, seventeen were clean, twelve malicious, and fourteen suspicious. Those classified as malicious and suspicious total 47 TP, 6 FN, 27 TN, and 26 FP.
Table 7. Total results sorted by tool accuracy.
Table 8. Total results sorted by the detection capability of the tool.
Table 9. Comparison of the results obtained with the AndroPytool against other state-of-the-art works.
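The aggregation described above can be expressed compactly as a mapping from each tool's multi-class verdicts onto a binary malware/benign decision before tallying the confusion-matrix categories. The sketch below is illustrative only (not the authors' code); the verdict sets mirror the groupings described above and would need to be adjusted per tool, and the usage example is hypothetical.

```python
# Minimal sketch (not the authors' code): collapsing a tool's multi-class
# verdicts into a binary malware/benign decision and tallying TP/FN/TN/FP.

MALWARE_VERDICTS = {"malicious", "suspicious"}                    # counted as "detected"
BENIGN_VERDICTS = {"trusted", "unknown", "clean", "no specific threat"}

def tally(samples):
    """samples: iterable of (verdict, is_really_malware) pairs."""
    tp = fn = tn = fp = 0
    for verdict, is_malware in samples:
        detected = verdict in MALWARE_VERDICTS
        if is_malware:
            tp, fn = (tp + 1, fn) if detected else (tp, fn + 1)
        else:
            fp, tn = (fp + 1, tn) if detected else (fp, tn + 1)
    return {"TP": tp, "FN": fn, "TN": tn, "FP": fp}

# Hypothetical usage:
print(tally([("malicious", True), ("unknown", True), ("trusted", False), ("suspicious", False)]))
```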
2024-05-30T15:09:05.911Z
2024-05-28T00:00:00.000
{ "year": 2024, "sha1": "db2c13e1d1bfb1ccf5095a6c18c91b9534ce573e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/electronics13112103", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0ee62f9eebeb89ae7e2498cb4e8e38aeffa1ad11", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
12211477
pes2o/s2orc
v3-fos-license
Simulated mussel mortality thresholds as a function of mussel biomass and nutrient loading A freshwater “mussel mortality threshold” was explored as a function of porewater ammonium (NH4+) concentration, mussel biomass, and total nitrogen (N) utilizing a numerical model calibrated with data from mesocosms with and without mussels. A mortality threshold of 2 mg-N L−1 porewater NH4+ was selected based on a study that estimated 100% mortality of juvenile Lampsilis mussels exposed to 1.9 mg-N L−1 NH4+ in equilibrium with 0.18 mg-N L−1 NH3. At the highest simulated mussel biomass (560 g m−2) and the lowest simulated influent water “food” concentration (0.1 mg-N L−1), the porewater NH4+ concentration after a 2,160 h timespan without mussels was 0.5 mg-N L−1 compared to 2.25 mg-N L−1 with mussels. Continuing these simulations while varying mussel biomass and N content yielded a mortality threshold contour that was essentially linear which contradicted the non-linear and non-monotonic relationship suggested by Strayer (2014). Our model suggests that mussels spatially focus nutrients from the overlying water to the sediments as evidenced by elevated porewater NH4+ in mesocosms with mussels. However, our previous work and the model utilized here show elevated concentrations of nitrite and nitrate in overlying waters as an indirect consequence of mussel activity. Even when the simulated overlying water food availability was quite low, the mortality threshold was reached at a mussel biomass of about 480 g m−2. At a food concentration of 10 mg-N L−1, the mortality threshold was reached at a biomass of about 250 g m−2. Our model suggests the mortality threshold for juvenile Lampsilis species could be exceeded at low mussel biomass if exposed for even a short time to the highly elevated total N loadings endemic to the agricultural Midwest. INTRODUCTION Native freshwater mussels are large (25-200+ mm in length), long-lived (>25 y) invertebrates that transfer nutrients from the overlying water to sediments through filter feeding (Christian et al., 2005). These benthic, burrowing, and suspension-feeding bivalves stimulate production across multiple trophic levels (Vaughn, Nichols & Spooner, 2008); the biomass of healthy mussel beds can exceed the biomass of all benthic organisms by an order of magnitude (Negus, 1966;Layzer Gordon & Anderson, 1993). There are billions of mussels within the Upper Mississippi River (UMR) and the filtration capacity in a 480 km segment (about 13% of the river length), as a percentage of river discharge, is estimated to be up to 1.4% at high flows, up to 4.4% at moderate flows, and up to 12.2% during low flows (Newton et al., 2011). Collectively, these mussels filter over 14 billion gallons of water, remove tons of particulate organic matter from the overlying water, and deposit tons of ammonium (NH + 4 ), associated ammonia (NH 3 ), and carbon at the sediment-water interface each day. Our previous work showed that native freshwater mussels directly elevate NH + 4 and indirectly elevate nitrate (NO − 3 ) and nitrite (NO − 2 ) concentrations in lab-based mesocosms (Bril et al., 2014). The increase in NH + 4 concentrations by mussels has been associated with ingestion of food (e.g., algae, phytoplankton, bacteria, and fungi), digestion, and subsequent NH + 4 excretion (Thorp et al., 1998;Vaughn, Nichols & Spooner, 2008). 
However, the dynamics among food, mussels, NH4+, and, more broadly, the nitrogen (N) cycle, especially given increasing anthropogenic releases of nutrients to mussel habitats, remain poorly understood (Strayer, 2014). The negative aspects of increased nutrient loading are most frequently reported, but an increase in nutrients to some level may favor growth and fecundity and may increase populations of host fish (Strayer, 2014). However, there is likely a threshold, such that extreme eutrophication may have negative consequences for mussels, perhaps by decreasing the fatty acid content of food (Muller-Navarra et al., 2004; Basen, Martin-Creuzburg & Rothhaupt, 2011) and/or by increasing levels of toxic Microcystis algae (Bontes et al., 2007). These realities led us to examine where the biogeochemical boundaries and thresholds are that indicate healthy versus unhealthy outcomes for freshwater mussels as a function of variable nutrient loadings and mussel biomass. A hypothetical relationship between mussel abundance and nutrient loading has been proposed by Strayer (2014) (Fig. 1), which postulates thresholds for minimum food, NH3 toxicity, interstitial hypoxia, and toxic or poor algae quality.
Figure 1. Hypothetical relationship between nutrient loading and mussel abundance. Concepts of minimum food threshold, ammonia toxicity, etc. are postulated to define the displayed curve. Adapted from Strayer (2014).
Strayer concluded that "it would be useful to identify early warning signs that the 'death threshold' is about to be crossed." Thus, the objective of our study was to develop a numerical model to conceptualize this "mortality threshold" as governed by mussel biomass and nutrient loading. Little is known about minimum food thresholds (let alone food quality guidelines) for mussels and, in the current era of increasing nutrient loadings, this concept will likely become less relevant over time (Bergstrom & Jansson, 2006; Strayer, 2014). Therefore, we chose elevated porewater NH4+ concentration as an easily measured indicator of potential mortality thresholds for mussels. This is biologically relevant because native freshwater mussels have been shown to be some of the most sensitive organisms tested for NH3 toxicity associated with equilibrium concentrations of NH4+ (Augspurger et al., 2003; Newton & Bartsch, 2007). A fraction of the toxic biological response, regardless of species, is almost certainly caused by NH3 in equilibrium with NH4+. Therefore, NH4+ concentration is an acceptable surrogate for total ammonia nitrogen only when the temperature and pH of the aquatic habitat are known. The deposition of NH4+ and other reduced N species by mussels comes mostly in the form of feces and pseudofeces (Vaughn, Gido & Spooner, 2004; Lauringson et al., 2007; Christian, Crump & Berg, 2008; Gergs, Rinke & Rothhaupt, 2009). About 90% of the food taken in by mussels is excreted (Christian, Crump & Berg, 2008), which emphasizes the importance of knowing food concentrations, especially as a function of N content, when predicting associated porewater NH4+ concentrations. This study focuses on an intensively sampled 10-d data set that was used to evaluate the ability of our numerical model to simulate food, NH4+, NO2−, NO3−, organic N (org N), and total N concentrations in the overlying water and porewater of continuous-flow laboratory mesocosms.
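The dependence of NH3 on pH and temperature mentioned above can be illustrated with the commonly used Emerson et al. (1975) pKa approximation; this relation is not taken from the present paper, and the exact partitioning reported in toxicity studies may differ slightly depending on the equilibrium constants used, so the sketch below is only an illustration of why NH4+ is interpretable as a surrogate for total ammonia nitrogen only when pH and temperature are known.

```python
# Minimal sketch (not from the paper): estimating the un-ionized ammonia (NH3)
# fraction of total ammonia nitrogen from pH and temperature, using the widely
# cited Emerson et al. (1975) pKa approximation.

def nh3_fraction(ph: float, temp_c: float) -> float:
    pka = 0.09018 + 2729.92 / (273.15 + temp_c)   # Emerson et al. (1975)
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

tan = 2.08                      # mg-N/L total ammonia nitrogen (example value)
frac = nh3_fraction(ph=8.2, temp_c=20.0)
print(f"NH3-N ~ {tan * frac:.2f} mg-N/L, NH4+-N ~ {tan * (1 - frac):.2f} mg-N/L")
```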
The model was calibrated using literature values and water chemistry measurements from a separate, 7-d mesocosm sampling period reported in our previous work (Bril et al., 2014). The mussel species Amblema plicata and Lampsilis cardium were selected due to their abundance in the Iowa River (Zohrer, 2006) and throughout the UMR Basin (Newton et al., 2011), where N runoff from industrial agriculture severely impacts the aquatic N cycle. This research is novel in that a multi-rate nitrification/denitrification model was developed, calibrated, and evaluated with sensor-based, highly time-resolved data from mesocosms containing mussels. To our knowledge, this is the first use of such a model to simulate various "mortality threshold" scenarios for mussels.
Mesocosm setup
Four 140 L, flow-through mesocosms (Fig. 2) continuously received untreated Iowa River water during the 107-d experiment, which culminated in an intensive 10-d water chemistry sampling period.
Figure 2. Schematic diagram of the flow-through, 4-mesocosm system, which was continuously fed Iowa River water (monitored with a multisensor device), contained a sand and river-sediment bottom layer, and was irradiated with simulated sunlight (12 h daily). Each mesocosm was equipped with a constant head inlet, a flow measurement device, a recirculating pump, photosynthetically active radiation (PAR) sensors, and a multisensor water-chemistry device. Two mesocosms contained mussels, and 2 contained no mussels.
Two mesocosms contained mussels collected from the Iowa River and two were without mussels (control). Twelve adult A. plicata and 13 adult L. cardium were placed in one mesocosm and 13 A. plicata and 12 L. cardium were placed in another mesocosm. This approximates a density of 70 mussels m−2, which, although high, is still a realistic density in some reaches of the UMR (Newton et al., 2011). Across both mesocosms, shell length (±1 standard deviation) was 95 ± 20 mm in A. plicata and 120 ± 25 mm in L. cardium. Initially, all mesocosms contained 8 cm of purchased sand substrate, but particulate deposition from the river water altered this composition over time. A gravity-fed, constant head system provided a controllable flow rate between 9 and 55 L h−1. The flow rate during the 10-d intensive sampling period was 8.5 L h−1 (16 h hydraulic residence time). Complete mixing in each mesocosm was provided by 1,500 L h−1 submersible pumps, and two 1,000-watt solar simulators provided illumination on a 12:12 h light-dark cycle. Additional details regarding the mussel mesocosm system are available elsewhere (Bril et al., 2014).
Mesocosm sampling and analyses
Data from a 10-d intensive sampling period (days 97-107 of the 107-d experiment) were used for model evaluation. We intentionally delayed the start of the intensive sampling by 97 days so that the mussels could acclimate and bacteria responsible for nitrification and denitrification could establish. Electronic water chemistry sensors (model DS5; Hach Chemical Company, Loveland, CO, USA) were used to measure highly time-resolved (30-min) water chemistry data in the overlying water of each mesocosm and in the influent head tank. The sensors measured chlorophyll a (chl-a), NH4+, NO3−, pH, and temperature. Custom-made flow measurement devices with magnetic reed switches were used to quantify influent flow.
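The hydraulic residence time quoted above follows directly from the mesocosm volume and the influent flow rate, as the short sketch below shows; both values are taken from the text.

```python
# Minimal sketch: hydraulic residence time from mesocosm volume and flow rate.

volume_l = 140.0      # mesocosm volume, L
flow_l_per_h = 8.5    # influent flow during the intensive sampling period, L/h

hrt_h = volume_l / flow_l_per_h
print(f"Hydraulic residence time ~ {hrt_h:.1f} h")  # ~16.5 h, reported as ~16 h
```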
Photosynthetically active radiation (PAR) sensors (model SQ-120; Apogee Instruments, Logan, Utah) were used to measure solar irradiance at the substrate and water surface of each mesocosm. All measurements obtained by the sensors were collected and stored using two data loggers. The model inputs for influent river temperature, food, NH + 4 , NO − 2 , NO − 3 and org N ( Fig. 3) were measured values from within the river water head tank during the 10-d sampling period. Discrete water chemistry samples were collected and analyzed at five time points during the 10-d sampling period from the overlying water and porewater of each mesocosm and from the influent head tank. The discrete samples were analyzed for chl-a, NH + 4 , NO − 2 , NO − 3 , org N, and total N. Chl-a was measured by fluorescence. Measured chl-a concentrations (µg L −1 ) were converted to ''food'' biomass (mg L −1 ) based on literature values for phytoplankton chl-a content (Kasprzak et al., 2008). The fraction of nitrogen in food biomass (mg-N L −1 ) was calculated using the empirical formula C 106 H 263 O 110 N 16 P (Chapra, 1997). NH + 4 was determined using the Salicylate Method, and NO − 3 was determined using the Dimethylphenol Method (APHA, 1996). NO − 2 was measured using the Diazotization Method, and total N was measured using the Persulfate Digestion Method (APHA, 1996). Sample measurements for org N were estimated by subtracting the sum of NH + 4 , NO − 3 , and NO − 2 from the total N measurements. A more detailed description of the mesocosm sampling and analysis setup is available (Bril et al., 2014). Model calibration and sensitivity analyses Seven days of the 107-d experiment were intensively sampled and previously reported (Bril et al., 2014) for food, NH + 4 , NO − 2 , NO − 3 , org N, and temperature; these values were used as model calibration inputs. Linear interpolation between discrete samples was used where 30-min measurements were unavailable (org N, NO − 2 , and total N), and ranges for unmeasured model variables (e.g., nitrification rate, denitrification rate) were obtained from the literature ( Table 1). The model, created in Stella (version 8.0, ISEE Systems, Inc., Lebanon, New Hampshire), was initially calibrated using the no-mussel control data, then refined using data from mesocosms containing mussels to properly parameterize clearance and excretion rates (Bayne, Hawkins & Navarro, 1987;Englund & Heino, 1994;Haag, 2012). The optimized values used in the model calibration are given in Table 1. The optimized calibration values were determined by comparing model outputs to sensor and discrete sample measurements and then minimizing normalized mean error and maximizing R 2 values (Table 2). Sensitivity analyses were conducted to identify the most important variables contributing to net system dynamic concentration response. A single variable sensitivity analysis was completed by adjusting the model variables based on a range of literature values (Table 1). When such information was unavailable, the value of the variable used in model calibration was adjusted by ±50%. Ten sensitivity model runs were completed for each variable using values obtained by sampling the range of literature values (or ±50% adjustments) at 10 equal intervals. 
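The conversion of measured chlorophyll-a to "food" biomass and then to food nitrogen, described earlier in this section, can be sketched as below. The biomass-to-chl-a ratio used here is a placeholder assumption (the paper takes its value from Kasprzak et al., 2008, which is not reproduced here); the nitrogen fraction follows from the empirical formula C106H263O110N16P.

```python
# Minimal sketch (illustrative only): chl-a -> food biomass -> food nitrogen.
# The biomass-to-chl-a ratio is an ASSUMED placeholder, not the paper's value.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007, "P": 30.974}
COMPOSITION = {"C": 106, "H": 263, "O": 110, "N": 16, "P": 1}   # C106H263O110N16P

total_mass = sum(ATOMIC_MASS[el] * n for el, n in COMPOSITION.items())
n_fraction = ATOMIC_MASS["N"] * COMPOSITION["N"] / total_mass    # ~0.063

chl_a_ug_per_l = 10.0              # example measurement, ug/L
biomass_per_chla = 50.0            # ASSUMED mg dry biomass per mg chl-a (placeholder)

food_mg_per_l = chl_a_ug_per_l / 1000.0 * biomass_per_chla       # mg/L
food_n_mg_per_l = food_mg_per_l * n_fraction                     # mg-N/L
print(f"N fraction of biomass ~ {n_fraction:.3f}; food-N ~ {food_n_mg_per_l:.3f} mg-N/L")
```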
The sensitivity analysis was assessed using the normalized sensitivity coefficient (NSC) (Fasham, Ducklow & McKelvie, 1990):
NSC = [(ϕ − ϕo)/ϕo] / [(P − Po)/Po]
where ϕ = mean value of a parameter (e.g., NH4+, NO3−) over the simulation period for the sensitivity run (mg-N L−1), ϕo = mean value of a parameter over the simulation period for the calibrated model (mg-N L−1), P = value of the model variable in the sensitivity run, and Po = value of the model variable in the calibrated model. The NSC values for each sensitivity run were averaged to determine a net NSC for each model variable.
Mussel mortality threshold simulations
Based on 28-day laboratory toxicity tests with juvenile fat mucket mussels (Lampsilis siliquoidea), Wang et al. (2011) reported that 100% mortality occurred at 2.08 mg L−1 total ammonia nitrogen (TAN). Given the pH (8.2) and temperature (20 °C) of that study, of the 2.08 mg L−1 TAN, about 1.9 mg-N L−1 would be in the NH4+ form and about 0.18 mg-N L−1 would be in the NH3 form. Given that our models were developed at a similar pH (8.2) and temperature (24 °C) to the Wang et al. (2011) study, we selected 2.0 mg-N L−1 NH4+ in porewater as a surrogate mortality threshold for Lampsilis mussels. Furthermore, the US Environmental Protection Agency (EPA) determined species mean chronic values of NH3 for Lampsilis siliquoidea and L. fasciola to calculate a geometric mean chronic NH3 value of 2.1 mg-N L−1 for the genus Lampsilis (US Environmental Protection Agency, 2013). The average measured porewater concentrations of NH4+, NO3−, NO2−, org N, and food during the 10-d evaluation period (3.9, 0.2, 0.06, 5, and 0.1 mg-N L−1, respectively) were used as initial conditions for porewater in the model. The average overlying water concentrations for the same variables were 0.05, 5, 0.05, 2.8, and 0.1 mg-N L−1, respectively, and the "river water" inputs for 90-d model simulations were initially set to these values. The mussel density in our mesocosms was converted to estimated biomass (g m−2) using the allometric function M = aL^b, where M is tissue dry mass (g) and L is length (mm), with values for "a" and "b" for A. plicata taken from the literature (Newton et al., 2011). The resulting mass of 6.0 g mussel−1 was multiplied by 35 mussels m−2 (half the population) to determine an estimated biomass of 210 g m−2 for A. plicata. In the absence of allometric data for L. cardium, the tissue dry mass was assumed to be 10 g mussel−1 (167% of A. plicata), and when multiplied by 35 mussels m−2 this resulted in a biomass of 350 g m−2. Adding these values gave a maximum biomass of 560 g m−2, which was used as the upper bound for the simulations. To simulate changes in porewater NH4+ concentration as a function of mussel biomass and food availability, mussel biomass was varied at zero, 140, 280, 420 and 560 g m−2 while the N content of food was varied at zero, 0.1, 1, 5 and 10 mg-N L−1.
Model Evaluation
For the river water head tank (pH 8.2), a combination of sensor data (temperature, NO3−, "food," and NH4+) and interpolated discrete data (org N and NO2−) were collected and used as input to the numerical model on a 30 min time step (Fig. 3). For overlying water in mesocosms, the "food" sensor data were converted to a 25-d moving average (Fig. 4A) to condition the inherently noisy signal and enable visual comparison to the model output.
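The areal biomass estimate described in the mortality-threshold simulation section above can be sketched as follows. The allometric coefficients a and b are placeholders (the paper takes the A. plicata values from Newton et al., 2011, which are not reproduced here), so the per-mussel dry masses are set directly to the values stated in the text; this is not the authors' code.

```python
# Minimal sketch (not the authors' code): areal mussel biomass from the
# allometric relation M = a * L**b described above.

mussels_per_m2_each_species = 35          # half of the 70 mussels m-2 density

def dry_mass_g(length_mm: float, a: float, b: float) -> float:
    """Allometric tissue dry mass, M = a * L**b (coefficients are assumptions)."""
    return a * length_mm ** b

# Per-mussel dry masses as stated in the text (A. plicata derived allometrically,
# L. cardium assumed by the authors):
mass_a_plicata = 6.0    # g per mussel
mass_l_cardium = 10.0   # g per mussel

biomass = (mass_a_plicata + mass_l_cardium) * mussels_per_m2_each_species
print(f"Estimated maximum biomass ~ {biomass:.0f} g m-2")   # ~560 g m-2
```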
The discrete sample results for NO − 2 concentrations in the overlying water were similar in magnitude, but did not agree closely with the model output (Fig. 4B). The model output for NH + 4 and NO − 3 concentrations (Figs. 4C and 4D) compared well with the sensor measurements. Overall, the model was capable of outputting results that accurately predicted the concentrations, and most of the dynamics, of the major N species at a 30 min time interval for the 10-d evaluation period. The model was evaluated quantitatively using the standard deviation (SD) of the measured data variable compared to the root mean square error (RMSE) of the model output. If the RMSE was less than half the SD, the model output for that variable was deemed ''accurate' ' (Singh et al., 2005;Moriasi et al., 2007). For comparative purposes, values for the mean bias, mean error, normalized mean bias, normalized mean error, and R 2 are reported along with the SD and RMSE for food, NH + 4 , NO − 2 , NO − 3 , org N, and total N for the 7-d model calibration and 10-d evaluation periods (Table 2). The RMSE to SD ratio was ≤0.5 for the sensor-measured data for food, NH + 4 and NO − 3 for the 10-d evaluation period. The model evaluation based on discrete sample data yielded mixed results with RMSE to SD ratios of 0.55, 0.60, and 0.52 for NO − 3 , NO − 2 and total N, respectively. The RMSE to SD ratios for food, NH + 4 , and org N were 4.0, 1.4, and 0.86 for the discrete sample data, respectively. The lower accuracy determinations based on discrete sample data were likely a function of the small sample sizes, as compared to sensor measurements, and the low concentrations of food and NH + 4 which challenged the analytical limits of quantitation for these variables. Figure 4 Overlying water sensor data and discrete sample results from the mesocosms containing mussels compared to model outputs for food, NH + 4 , NO − 2 , and NO − 3 for the 10-d model evaluation period. Sensitivity analysis The modeled nitrogen species were collectively most sensitive to changes in temperature, hydraulic retention time, and mussel biomass (Table 3). Temperature was expected to be an influential variable since the majority of the first-order rate expressions are temperature dependent. Hydraulic retention time was also expected to be influential since the influent river water has a major impact on mesocosm water chemistry in a continuous-flow system. Mussel biomass was an unexpectedly sensitive model variable. However, given the influence of mussels on food, NH + 4 , NO − 2 , and NO − 3 concentrations shown in our previous work (Bril et al., 2014), this result, in hindsight, should have been anticipated. Mussel mortality threshold simulations At the highest simulated mussel biomass (555 g m −2 ) and the lowest simulated influent water food concentration (0.1 mg-N L −1 ), the porewater NH + 4 concentration after a 2,160 h timespan in the absence of mussels, was 0.5 mg-N L −1 compared to 2.3 mg-N L −1 in the presence of mussels (Fig. 5). The food concentration in mesocosms without mussels was visibly higher than in mescocosms with mussels while NH + 4 and NO − 2 concentrations in overlying water were lower in the absence of mussels. Mortality threshold contours were estimated by varying mussel biomass and N concentration in the model (Fig. 6). Even when the simulated overlying water food availability was low, the mortality threshold was reached at a mussel biomass of about 480 g m −2 . 
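The threshold contours discussed above and continued below amount to locating, for each food concentration, the smallest biomass at which simulated porewater NH4+ reaches the 2 mg-N L−1 criterion. The sketch below shows that search pattern with a deliberately simple placeholder response function; it is not the calibrated Stella model, and its numerical output is not expected to match the paper's contours.

```python
# Minimal sketch (toy stand-in, not the calibrated model): scanning biomass for
# the smallest value at which simulated porewater NH4+ reaches 2 mg-N/L.

THRESHOLD = 2.0   # mg-N/L porewater NH4+

def porewater_nh4(biomass_g_m2: float, food_n_mg_l: float) -> float:
    """Placeholder response surface (NOT the paper's model)."""
    return 0.5 + 0.003 * biomass_g_m2 + 0.1 * food_n_mg_l

def threshold_biomass(food_n_mg_l: float, step: float = 10.0, max_biomass: float = 560.0):
    b = 0.0
    while b <= max_biomass:
        if porewater_nh4(b, food_n_mg_l) >= THRESHOLD:
            return b
        b += step
    return None   # threshold not reached within the simulated biomass range

for food in (0.1, 1, 5, 10):
    print(food, threshold_biomass(food))
```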
At a food concentration of 10 mg-N L−1 the mortality threshold was reached at a biomass of about 250 g m−2. In eastern Iowa, the median total N concentration in rivers and streams is commonly >10 mg-N L−1 (Kalkhoff et al., 2000), which can place juvenile freshwater mussels at particular risk of ammonia toxicity. Minnesota has a draft criterion for aquatic life of 4.9 mg-N L−1 total N, which was exceeded in 68% of samples collected in a study of Iowa waters between 2004 and 2008 (Garrett, 2012). The US EPA national recommended final acute ambient water quality criterion (AWQC) for protecting freshwater organisms from potential effects of ammonia is 17 mg-N L−1, and the final chronic AWQC for ammonia is 1.9 mg-N L−1 at pH 7.0 and 20 °C (US Environmental Protection Agency, 2013). At a total N concentration of 10 mg L−1, our model predicts the mortality threshold to be reached when mussel biomass is about 400 g m−2. However, the maximum total N concentration measured between 2004 and 2008 was 37.8 mg-N L−1 (Garrett, 2012).
Figure 6. The mussel mortality threshold, defined as a porewater NH4+ concentration of ≥2 mg-N L−1, as a function of mussel biomass, overlying water food concentration, and overlying water total N concentration.
Our model suggests the mortality threshold for juvenile Lampsilis could be exceeded at low mussel biomass if even a short exposure occurs at such a high total N concentration. Reflecting on the relationships between nutrients and freshwater mussels conceptualized by Strayer (2014), we concur that high nutrient loads (particularly N in the agricultural Midwest) are a threat to the well-being of mussels. Conversely, our model predicts a somewhat linear mortality threshold relationship as mussel biomass and total N are varied, whereas Strayer stated this relationship would probably be non-linear and non-monotonic. In agreement with Strayer, our model suggests that mussels spatially focus nutrients from the overlying water to the sediments, as evidenced by elevated porewater NH4+ in mesocosms with mussels. However, our previous work (Bril et al., 2014), and the model developed here, show elevated concentrations of NO2− and NO3− in overlying waters as an indirect consequence of mussel activity. This still represents a spatial focusing of nutrients by mussels, but the impact is not seen in the sediment alone.
CONCLUSIONS
The concept of a variable "mussel mortality threshold" as a function of mussel biomass and nutrient loading was successfully explored using a numerical model calibrated with data from mesocosms with and without mussels. With a threshold porewater NH4+ value of 2 mg-N L−1, mussel mortality was predicted to occur well within the range of documented total N concentrations in eastern Iowa rivers and streams and at biologically relevant mussel biomasses. The model could be used as a screening tool to determine when mussel populations might be at risk due to high levels of chronic and acute nutrient loadings.
2017-07-24T19:51:00.908Z
2017-01-04T00:00:00.000
{ "year": 2017, "sha1": "0c89bab4377c862e5d5653f4aca4a54e90ff08ca", "oa_license": "CC0", "oa_url": "https://doi.org/10.7717/peerj.2838", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c89bab4377c862e5d5653f4aca4a54e90ff08ca", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Medicine" ] }
2108895
pes2o/s2orc
v3-fos-license
Comparison of the Effectiveness of Embolic Agents for Bronchial Artery Embolization: Gelfoam versus Polyvinyl Alcohol
Seok Hahn, MD 1; Young Ju Kim, MD 1; Woocheol Kwon, MD 1; Seung-Whan Cha, MD 1; Won-Yeon Lee, MD 2
Objective The purpose of this study was to compare the results of different agents for bronchial artery embolization of hemoptysis. Materials and Methods From March 1992 to December 2006, bronchial artery embolization was performed on 430 patients with hemoptysis. The patients were divided into three groups. Group 1 included 74 patients treated with gelfoam particles (1×1×1 mm), while group 2 comprised 205 patients treated with polyvinyl alcohol (PVA) at 355-500 µm, and group 3 included 151 patients treated with PVA at 500-710 µm. We categorized the results as technical and clinical successes, and also included the mid-term results. Retrospectively, the technical success was compared immediately after the procedure. The clinical success and mid-term results (percentage of patients who were free of hemoptysis) were compared at 1 and 12 months after the procedure, respectively. Results Neither the technical successes (group 1, 85%; 2, 85%; 3, 90%) nor the clinical successes (group 1, 72%; 2, 74%; 3, 71%) showed a significant difference among the 3 groups (p > 0.05). However, the mid-term results (group 1, 45%; 2, 63%; 3, 62%) and mid-term results excluding the recurrence from collateral vessels in each of the groups (group 1, 1 patient; 2, 4 patients; 3, 2 patients) showed that group 1 was lower than the other two groups (p < 0.05). No significant difference was discovered for the mid-term results between groups 2 and 3. Moreover, the same results not including incidences of recurrence from collateral vessels also showed no statistical significance between the two groups (p > 0.05). Conclusion Polyvinyl alcohol appears to be the more optimal modality compared to gelfoam particles for bronchial artery embolization in order to improve the mid-term results. The material size of PVA needs to be selected to match the vascular diameter.
Hemoptysis is a manifestation of pulmonary or tracheobronchial disease (1,2). It is caused by chronic lung diseases such as pulmonary tuberculosis, chronic bronchitis, bronchiectasis, lung cancer, aspergillosis, and pneumoconiosis. In most cases, the amount of hemoptysis is small, and it subsides gradually without a need for treatment. However, massive hemoptysis, defined as 300 to 600 ml per 24 hours, is a life-threatening condition with a reported mortality rate of 50 to 60% (1). Bronchial artery embolization (BAE) has been proven to be a good treatment method for a patient for whom a surgical procedure is not an option as well as for a patient needing palliative therapy requiring hemodynamic stabilization (2). However, 36 to 44% of patients who underwent successful BAE for hemoptysis reported recurrences during long-term follow-up (1,2).
Prior to the procedure, written informed consent was obtained, in accordance with the Institutional Review Board policy. From March 1992 to December 2006, 561 patients underwent BAE to treat hemoptysis in our hospital. Of these patients, 430 patients consisting of 246 males and 184 females ranging in age from 18 to 87 years old (mean age 56.7 years) were selected for the study by exclusion criteria. The exclusion criteria were the following: (i) no angiographic finding of hemoptysis on diagnostic bronchial angiography, (ii) embolization by other than gelfoam and PVA or by a combination of more than two embolic agents, (iii) loss during follow-up or inability to confirm recurrence of hemoptysis during 12 months after BAE. Group 1 had 74 patients, of whom 44 had hemoptysis due to pulmonary tuberculosis, 18 had bronchiectasis, five had lung cancer, and seven had other diseases (3 with pneumonia, 2 with lung abscesses, and 2 with pulmonary infarction). Group 2 consisted of 205 patients, of whom 148 had pulmonary tuberculosis, 38 had bronchiectasis, 13 had lung cancer, and six had other diseases (2 with pneumonia, 3 with lung abscesses, and 1 with coagulopathy due to coumadin medication). Among 151 patients in group 3, 94 had pulmonary tuberculosis as the cause of hemoptysis, 36 had bronchiectasis, 14 had lung cancer, and seven had other diseases (4 with pneumonia, 1 with a lung abscess, 1 with coagulopathy due to coumadin medication, and 1 with pulmonary infarction) ( Table 1). To detect bleeding sites prior to BAE, all the patients with hemoptysis underwent chest CT or bronchoscopy, and a diagnostic angiography was performed to confirm the bleeding site. Using the standard Seldinger technique, selective bronchial angiography was performed using a number of different 5-Fr catheters such as G.R.B & G.L.B (JUNG SUNG MEDICAL Co., Ltd., Seongnam, Korea) or Cobra (TERUMO � , Tokyo, Japan) to localize the site. In some cases from each group, there were bleeding sites of transpleural supplies from subclavian branches which included the internal mammary artery, lateral thoracic artery or other branches, bronchial arteries originating from the intercostobronchial trunk (31 patients in group 1, 71 patients in group 2, and 59 patients in group 3) (Fig. 1), or in some cases the anterior spinal artery was seen on angiography (2 patients in group 1, 3 patients in group 2, and 2 patients in group 3). The bleeding vessels were picked using 2 or 2.5 Fr Renegade (Boston scientific, Natick, USA) or Progreat TM (TERUMO � , Tokyo, Japan) microcatheters. We guided all vessels before the embolization, and any findings of hypervascularization, arterial (18) Note.─ � Massive hemoptysis is defined as 300 to 600 ml or more of blood loss from hemoptysis over 24 hour period. enlargement, bronchial-pulmonary artery shunt, and extravasation for contrast material obtained from angiography were considered as evidence of the bleeding (Table 2). In addition, closed fluoroscopic observation was performed during the embolization to prevent other complications such as reflux into the anterior spinal artery. In each procedure, BAE was performed until the bleeding sites were no longer visible. Images from the arterial embolizations were captured by the Optimus DVI System and BV 5000 (Philips Healthcare, Best, The Netherlands). Investigations of underlying diseases, embolic agents, and follow-ups were conducted by reviewing patient medical records or by telephone interview. 
If medical records indicated recurrence of hemoptysis, follow-ups were then terminated. Also, observations were concluded when no recurrence was observed within 12 months after BAE. We defined technical success as a percentage of patients without a recurrence immediately after BAE. Clinical success was defined as a percentage of patients who hemoptysis-free for at least one month after the embolization, and the mid-term result was set as a percentage of patients who were free of hemoptysis 12 months after the procedure (6). Next, we assessed cumulative hemoptysisfree rates using the Cutler-Ederer method with SPSS version 12.0 (SPSS, Chicago, IL). P-values less than 0.05 were set as the threshold for statistical significance. RESULTS In group 1, 11 patients showed recurrence immediately after the BAE, and the technical success was 85% (63 of 74). After one month, 10 patients had recurrence in previously embolized vessels without collateral vessel origin, and the clinical success was 72% (53 of 74). Recurrence after 12 months was found in 20 patients, and the mid-term result was found to be 45% (33 of 74). Among them, 19 experienced recurrences in previously embolized vessels, while one was due to collateral bleeding. In group 2, 30 patients showed hemoptysis immediately after the procedure, and technical success was calculated to be 85% (175 of 205). After one month, 23 patients experienced recurrence due to recanalization, and the clinical success was 74% (152 of 205). There was no bleeding focus in collaterals after one month; however, after 12 months, 22 patients had experienced a recurrence, and the mid-term result was 63% (130 of 205). Recurrence in previously embolized vessel was found in 18 patients, of whom four had collateral vessels. In group 3, 15 patients showed recurrence of hemoptysis immediately after the procedure. As a result, the technical success rate was 90% (136 of 151). Recurrence occurred in 29 patients, in previously embolized vessels, and they had no collateral recurrence after one month. Hence, the clinical success was 71% (107 of 151) for group 3. After 12 months, a total of 13 patients had recurrence. Of these, 11 originated from the embolized vessels, whereas two originated from collateral vessels. The mid-term result was found to be 62% (94 of 151) ( Table 3) (Fig. 2). There were no significant differences among the three groups for the technical and clinical success rates (p > 0.05). However, group 1 showed a significantly lower midterm result than the other groups. Significant differences for the mid-term results were found between groups 1 and 2 (p = 0.02) as well as groups 1 and 3 (p = 0.03). Furthermore, for the mid-term results, when subtracting incidences of recurrence from collateral vessels in each group (group 1: 32 of 74, group 2: 126 of 205, group 3: 92 of 151), we found that group 1 also showed a significantly lower result than the other two groups (between groups 1 and 2: p = 0.03 and between groups 1 and 3: p = 0.04). There were similar rates of technical failures among the three groups (group 1: 15%, group 2: 15%, group 3: 10%), and there had been no major procedure-related complications including spinal cord ischemia, non-target organ embolization, dysphagia, and so on. DISCUSSION From the moment Remy et al. (7) first performed BAE to treat hemoptysis, BAE has been used to treat both massive and chronic intermittent hemoptysis (8). 
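The technical, clinical, and mid-term success rates reported above are simple proportions of patients free of hemoptysis at each time point, and they can be recomputed directly from the patient counts given in the text, as in the sketch below (this is purely arithmetic on the reported counts, not the authors' statistical analysis).

```python
# Minimal sketch (not the authors' analysis): recomputing the reported success
# rates from the patient counts given above.

groups = {
    # group: (n, free_immediately, free_at_1_month, free_at_12_months)
    "Group 1 (gelfoam)":        (74, 63, 53, 33),
    "Group 2 (PVA 355-500 um)": (205, 175, 152, 130),
    "Group 3 (PVA 500-710 um)": (151, 136, 107, 94),
}

for name, (n, tech, clin, mid) in groups.items():
    print(f"{name}: technical {tech/n:.0%}, clinical {clin/n:.0%}, mid-term {mid/n:.0%}")
```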
BAE has also been used as a preoperative method to improve the lung function of patients prior to a surgery, and it is an effective hemostatic treatment modality in patients for whom surgery is not an option (7)(8)(9)(10). Therapeutic benefits of BAE in hemoptysis are diverse depending on the amount of bleeding, risk of recurrent hemoptysis, and overall lung function of patients (8). Among them, the risk of recurrent hemoptysis is a more important factor, especially in long-term recurrence. Moreover, the status of underlying diseases and characteristics of embolic agents may affect the risk of recurrent hemoptysis (6,11). Many embolic agents are used to perform BAE to treat hemoptysis, which include gelfoam and PVA. Each material has its own advantages and disadvantages. Gelfoam is cost-effective, and the size can be controlled. However, recanalization can occur faster than PVA, because gelfoam is absorbed spontaneously. PVA is a permanent material and can occlude a vessel at the small arteriolar level, but it results in collateral flow (12,13). In this study, we compared the relationship between clinical outcomes and embolic agents, especially PVA and gelfoam. We found that there was no significant difference in the results found among gelfoam and two PVA groups immediately after the procedure and after one month. However, PVA had better results than gelfoam after 12 months, regardless of particle size. No difference was found among these groups immediately after the embolization, which means that there was no difference in the rate of technical failure. Causes of technical failure include overlooking other bleeding sites, incomplete embolization, and so on (14). No significant difference was noted after one month also means that there was no difference in the temporary effect of BAE among the three embolic agent groups. However, a significant difference was found after 12 months; PVA had a better outcome than gelfoam in a setting of minimal progression of underlying diseases. Many studies have dealt with the relationship between recurrent hemoptysis and embolic materials. However, they presented a controversy about which of gelfoam or PVA was the more effective embolic agent (15,16). According to the report by Chung et al. (17), if an initially successful embolization was performed, the recurrence risk by gelfoam itself in previously embolized vessels was low, and hence there was no definite difference with other results, despite gelfoam being more absorbable in theory. Some authors also reported that there was no difference in the success rates among the various embolic agents (14,18), and no difference in the results was noted among gelfoam and other non-absorbable agents (18). However in this study, under similar underlying disease conditions, technical failure, and the same follow-up period, we presented that PVA is superior for long-term outcomes in BAE, especially with the comparison among the mid-term results (except for the recurrence from collateral vessels), and provided a buttress for PVA in previous arguments about embolic efficacy. The reason to establish a follow-up period to 12 months is to minimize the recurrence from collateral vessels. According to the report by Tanaka et al. (13), causes of recurrent hemoptysis include recanalization and reperfusion. Reperfusion is achieved by newly developed collateral vessels as underlying diseases progress with time (17). 
Thus, in our study, we set a follow-up period of 12 months to have a minimal impact on the recurrence risk from collateral vessels due to underlying diseases. Many factors affected these results; because of the gelfoam particle's good absorbability, previously embolized vessels can recanalize earlier than in the case of PVA embolization. Therefore, rebleeding risk can increase with time. Also, the larger size of the gelfoam particles compared to PVA cannot occlude bleeding at the small arteriole level (16). This study has several limitations. First, the maximum follow-up period was 12 months. Second, it was a retrospective analysis of patients from a single center. Finally, we did not compare the results to newer embolic materials such as embosphere, bead block, hepasphere, and others. In conclusion, PVA is a better choice than gelfoam particle for BAE to improve the result, and it is considered that the material size of PVA needs to be selected to match the vascular diameter.
2016-05-04T20:20:58.661Z
2010-08-27T00:00:00.000
{ "year": 2010, "sha1": "1889843a12325d5bf8c917da6a0ab4a700901f1d", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc2930163?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "1889843a12325d5bf8c917da6a0ab4a700901f1d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17070048
pes2o/s2orc
v3-fos-license
The lumbosacral angle does not reflect progressive tethered cord syndrome in children with spinal dysraphism Purpose Our goal was to validate the hypothesis that the lumbosacral angle (LSA) increases in children with spinal dysraphism who present with progressive symptoms and signs of tethered cord syndrome (TCS), and if so, to determine for which different types and/or levels the LSA would be a valid indicator of progressive TCS. Moreover, we studied the influence of surgical untethering and eventual retethering on the LSA. Methods We retrospectively analyzed the data of 33 children with spinal dysraphism and 33 controls with medulloblastoma. We measured the LSA at different moments during follow-up and correlated this with progression in symptomatology. Results LSA measurements had an acceptable intra- and interobserver variability, however, some children with severe deformity of the caudal part of the spinal column, and for obvious reasons those with caudal regression syndrome were excluded. LSA measurements in children with spinal dysraphism were significantly different from the control group (mean LSA change, 21.0° and 3.1° respectively). However, both groups were not age-matched, and when dividing both groups into comparable age categories, we no longer observed a significant difference. Moreover, we did not observe a significant difference between 26 children with progressive TCS as opposed to seven children with stable TCS (mean LSA change, 20.6° and 22.4° respectively). Conclusions We did not observe significant differences in LSA measurements for children with clinically progressive TCS as opposed to clinically stable TCS. Therefore, the LSA does not help the clinician to determine if there is significant spinal cord tethering, nor if surgical untethering is needed. Introduction Spinal dysraphism is often associated with tethered cord syndrome (TCS). TCS is a diverse clinical entity supposedly due to abnormal tension on the spinal cord [1][2][3], in which typical neuroimaging features, such as caudal and/or dorsal displacement of the conus with enlargement of the ventral subarachnoid space, support the diagnosis [3,4]. Tethering usually occurs at the more caudal segments of the cord in association with a low lying conus [1]. However, tethering of the cervical or thoracic cord, as well as tethering of the caudal segments of the cord despite a normally positioned conus may also be encountered [2]. The latter condition is often termed 'occult tethered cord syndrome' [5]. The clinical presentation of TCS is quite divers, including neurologic, urologic, orthopaedic and/or gastrointestinal symptoms and signs. Clinical deterioration in one or more of these categories may be subtle or slowly progressive and therefore difficult to interpret, or even inconspicuous. Imaging studies are often inconclusive. Therefore, diagnosing a TCS may prove difficult in some cases. In this regard, Tubbs et al. [6][7][8] made the interesting observation that the lumbosacral angle (LSA) increases in children with (lipo)myelomeningocele at the time symptoms of TCS deteriorate. They suggested that the LSA as an objective tool may aid the clinician to determine if the spinal cord is symptomatically tethered, and if surgical untethering is needed. The aim of our study was to validate the findings of Tubbs et al in a consecutive cohort of children with different types of spinal dysraphism. 
Our initial goal was to validate the hypothesis that the LSA increases in children with spinal dysraphism who present with progressive symptoms and signs of TCS, and if so, to determine for which different types and/or levels of spinal dysraphism the LSA would be a valid indicator of progressive TCS. A secondary goal was to evaluate the influence of surgical untethering and eventual retethering on the LSA. Definition of TCS There clearly is some inconsistency in the literature with regard to the definition of tethered cord and TCS. Therefore, we want to clearly state our definition of TCS in the context of spinal dysraphism as used in this article. Because some symptoms and signs of TCS may already be present at birth [9], some may worsen over time, and some may newly develop, we propose the following definition: neurologic, urologic, orthopaedic and/or gastrointestinal symptoms and signs supposedly due to abnormal tension on the spinal cord, which are (a) already present at birth, and have not changed during development, growth or maturation, or (b) already present at birth, and worsen during development, growth or maturation, or (c) not present at birth, and become clinically apparent during development, growth or maturation. Importantly, we consider TCS to be clinically progressive only if criteria described under (b) or (c) are fulfilled. Study group inclusion/exclusion criteria We retrospectively analyzed the magnetic resonance imaging (MRI) studies on all children (n=115) from the Maastricht University Medical Center spina bifida database. Inclusion criteria were age between 0 and 18 years, and availability of at least two sagittal MRI studies, including one study obtained in the postnatal period, one study obtained at the time symptoms of TCS deteriorated and the decision was made to operate, and (whenever available) one study obtained 2 to 3 years postoperatively. Whenever a child demonstrated symptoms and signs of progressive TCS (category b and c) after a previous untethering operation, imaging studies from this retethering episode were analyzed as well. Finally, whenever a child remained clinically stable, the most recent imaging study was analyzed. Exclusion criteria were severe deformity of the caudal part of the spinal column (because of aberrant angles and/or the impossibility to measure the LSA), and for obvious reasons caudal regression syndrome (sacral agenesis). The former group of children included those with congenital lumbar kyphosis or kyphoscoliosis, as well as those with severe forms of spinal dysraphism who developed pronounced (kypho)scoliosis which made LSA measurements difficult or even impossible. Control group Because little is known about the LSA, we decided to analyze the MRI studies conducted on a control group as well. These were all children diagnosed with medulloblastoma, in whom the neuraxis had been screened for spinal metastases. Children with metastatic disease were included only if their spinal cord was not compressed, as cord compression may mimic a tethering mechanism. Again, at least two sagittal MRI studies had to be available. Children for this control group were included from our center as well as the University Medical Center Groningen and the Erasmus University Medical Center Rotterdam. Data collection In the Maastricht University Medical Center, children with spinal dysraphism are followed by a multidisciplinary spina bifida team. 
Most children undergo operation when their TCS is clinically progressive (category b and c); however, some children and especially those with a split cord malformation or a dermal sinus tract are operated on prophylactically. Whenever the spina bifida team suspects a progressive TCS, full spine MRI and urological studies as needed are performed. After approval from the Medical Ethical Committee and Board of Directors of our center, we reviewed the children's medical records for sex, age, type of spinal dysraphism, level of spinal dysraphism, ambulatory status, symptoms and signs indicative for TCS, surgical untethering, and symptoms and signs indicative for retethering. We divided their symptoms and signs into four categories as mentioned in the introduction (neurologic, urologic, orthopaedic and gastrointestinal). Finally, we reviewed the medical records of children with medulloblastoma for sex, age, local or metastatic disease, and spinal cord compression. LSA measurements The LSA was determined by the intersection of two straight lines drawn on a sagittal MRI of the lumbosacral region obtained in the supine position. We decided not to use plain radiographs because of another position (sitting or standing) during image acquisition, which may influence the LSA. The lines determining the LSA are the following: a line drawn perpendicular to a line tangential to the anterior surface of the body of the third lumbar vertebra, and a line drawn perpendicular to the sacral line, which is drawn by joining the middle of the anterior border of the body of the first sacral vertebra with that of the second sacral vertebra [6,10] (Fig. 1). The LSA was determined using a set triangle and/or a goniometer. Each LSA was measured twice by one of the authors (FR). Some measurements, most frequently from difficult cases with severe spinal deformity, were discussed with an orthopaedic surgeon specialized in scoliosis surgery (LvR). Importantly, to determine interobserver agreement, 46 LSA measurements were repeated by one of the coauthors (JV). Intra-and interobserver agreement were scored using kappa statistics. Finally, we propose the following terms to identify the LSA measurements at different moments in time: initial LSA (first measurement), subsequent LSA (second measurement), LSA change (subsequent LSA minus initial LSA), preoperative LSA (preoperative) and postoperative LSA (postoperative). Data analysis Data are reported as mean (±SD) and median (range). Statistical analysis was performed using SPSS software version 15.0 for Windows. Because the data contained ordinal and interval variables in small sample sizes, we tested significance by parametric tests (analysis of variance (ANOVA), paired and unpaired t-tests), and nonparametric tests (Kruskal-Wallis test and Mann-Whitney U-test). P values less then 0.05 were considered statistically significant. Results Unfortunately, as many as 82 children from the cohort of 115 children in our spina bifida database were excluded from the study (Fig. 2), the reason almost invariably being that the original MRI hardcopies were missing. The remaining 33 children (19 boys, 14 girls) were included (Table 1). Their mean age at initial LSA measurement was 4 months (SD=10 months). The underlying dysraphic disorder was a myelomeningocele in 20 patients (60%), a tight filum in six patients (18%), a lipoma in three patients, a Currarino syndrome in two patients, a split cord malformation in one patient and a meningocele in one patient. 
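As an illustration of the geometric definition given above, the angle between two lines digitised from a sagittal image can be computed from point coordinates; this is illustrative geometry only, not the authors' measurement method (they measured with a set triangle and/or goniometer), and the coordinates used below are hypothetical.

```python
# Minimal sketch (illustrative geometry only): angle between two reference
# lines defined by pairs of digitised points on a sagittal image.

import math

def angle_between(p1, p2, q1, q2):
    """Angle (degrees) between line p1-p2 and line q1-q2."""
    v = (p2[0] - p1[0], p2[1] - p1[1])
    w = (q2[0] - q1[0], q2[1] - q1[1])
    cos_a = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Hypothetical digitised points along the L3-based line and the sacral-based line:
lsa = angle_between((0.0, 0.0), (1.0, 0.1), (0.0, 0.0), (0.6, 0.8))
print(f"LSA ~ {lsa:.1f} degrees")
```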
The level of spinal dysraphism was thoracic in two, thoracolumbar in four, lumbar in 12, lumbosacral in 13, and merely sacral in the remaining two. Seven children (21%) were clinically stable with a mean interval between LSA measurements of 58 months Table 2). This group included four myelomeningoceles, one Currarino syndrome, one split cord malformation, and one lipoma. Of note, the latter two children were operated on prophylactically. Twenty-six children (79%) were clinically progressive. This group presented with their first progressive tethering episode at a mean age of 68 months (SD=44 months) ( Table 2). Finally, retethering occurred in three children, and a second retethering in two. The control group (24 boys, nine girls) included one suprasellar and 32 posterior fossa medulloblastomas. Eleven children (33%) had metastatic disease, most often spinal leptomeningeal spread. The mean age at initial LSA measurement was 95 months (SD=42 months), and the mean interval between LSA measurements was 42 months (SD=31 months) ( Table 1). LSA measurements intra-and interobserver agreement Kappa statistics for LSA measurements showed high intraobserver agreement (unweighted kappa 0.736, kappa with quadratic weighting 0.918) as well as interobserver agreement (unweighted kappa 0.588, kappa with quadratic weighting 0.872) (data not shown). LSA in different types of spinal dysraphism LSA measurements in different types of spinal dysraphism are listed in Table 6. Due to the small sample sizes (some subgroups included ≤2 individuals) we were unable to perform proper statistical analysis, however, the analyses we did perform (Kruskal-Wallis test and ANOVA without post hoc comparisons) did not show a significant difference in LSA measurements between different types of spinal dysraphism. Figure 3 illustrates LSA measurements in a girl with Currarino syndrome and a tight filum, whereas Fig. 4 illustrates LSA measurements in a girl with a myelomeningocele (both pre-and postoperatively). Table 7. Statistical analysis did not show a significant difference in LSA measurements between these different levels. LSA before and after surgical untethering Twenty-six children were clinically progressive at some point and underwent surgical untethering. Postoperative MRI studies were available in 20 of these children, with a mean interval in between measurements of 30 months. Mean preoperative LSA was 62.6°, mean postoperative LSA was 67.8°. Thus, LSA increased after untethering with a statistically significant difference (P<0.015) ( Table 8). Retethering occurred in three children and a second retethering in two; however, because of very small sample sizes, we did not perform statistical analysis in these particular subgroups. LSA in other subgroups We observed no significant difference in LSA measurements in the study group between children with a normal gait, children with an impaired gait, children who walked late, and children who never learned to walk. Also, we observed no significant difference in LSA measurements between boys and girls, neither in the study nor in the control group. Finally, we observed no significant difference in LSA measurements for children in the control group with and without metastatic disease (data not shown). Discussion In this study, we analyzed the LSA in children with spinal dysraphism (clinically progressive TCS as opposed to clinically stable TCS), and in controls with medulloblastoma. 
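The two kinds of statistics reported in this passage, rater agreement and the pre- versus postoperative comparison, can be sketched as follows on simulated numbers. How the continuous angle measurements were categorised for the kappa calculation is not stated in the paper, so the 5-degree binning below is only an assumption.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Interobserver agreement: 46 angles re-measured by a second observer (simulated).
reader1 = rng.normal(60, 10, size=46)
reader2 = reader1 + rng.normal(0, 3, size=46)
bins1 = (np.round(reader1 / 5) * 5).astype(int)      # assumed 5-degree bins
bins2 = (np.round(reader2 / 5) * 5).astype(int)
print("quadratic-weighted kappa:",
      round(cohen_kappa_score(bins1, bins2, weights="quadratic"), 3))

# Pre- vs postoperative LSA in the 20 operated children with follow-up MRI (simulated).
pre  = rng.normal(62.6, 8.0, size=20)
post = pre + rng.normal(5.2, 6.0, size=20)            # mean increase ~5 degrees
t, p = ttest_rel(post, pre)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```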
When dividing these groups into comparable age categories, we observed no significant differences in LSA measurements. Abitbol studied the LSA in 131 healthy children, including 62 boys and 69 girls [10]. The LSA (measured in subjects resting on their side) increased from an average of 20°at birth to an average of 70°at the age of 5 years, and remained stable thereafter. The LSA progressed at approximately the same rate for both sexes, and at the age of 11 months varied from 20°to 45°. Abitbol theorizes the LSA develops as a result of the acquisition of erect posture and the ontogeny of bipedal locomotion more than as a result of a generalized growth trend as manifested by increasing age, height, or weight. He observed that in children who were able to stand up and walk early, the LSA developed early, while in children who were slower to learn to stand or walk, the LSA developed and reached its final state later in life. Moreover, children who never learned to walk or had an impaired posture and gait because of a pathologic condition developed only a minimal LSA. Tubbs et al. [6,7] retrospectively analyzed the LSA in 30 children with a myelomeningocele and 25 children with a lipomyelomeningocele. They observed that the LSA was often increased for their age. Statistical analysis of symptomatic and asymptomatic children, i.e. clinically progressive and clinically stable children, demonstrated a mean LSA change of 18.85°and 6.69°(p = 0.0371) respectively for children with a myelomeningocele [6], and 13°and 5°(p=0.0202) respectively for children with a lipomyelomeningocele [7]. There is little doubt that making a decision in favour of an untethering operation may sometimes be quite difficult even in the setting of an experienced spina bifida team. Based on the study by Tubbs et al. as mentioned above, we assumed that the LSA would increase in children at the time of progressive TCS. If so, the LSA would have been an objective indicator for progressive TCS. Our findings, however, do not support the earlier findings of Tubbs et al. Although LSA measurements in the first year of life in our study group were comparable to those of Tubbs et al., we did not observe a significant difference in LSA measurements for children with clinically progressive TCS as opposed to clinically stable TCS. Moreover, although LSA measurements in our control group were comparable to those observed by Abitbol [10] in healthy children, we did not observe a significant difference in final LSA development (expressed as subsequent LSA) in children with progressive TCS as opposed to the control group with medulloblastoma (Tables 3 and 5). The reasons for this discrepancy are unclear. One possible explanation may be the simultaneous use of more than one imaging modality by Tubbs et al., more specifically X-ray and MRI studies, whereas we used MRI studies exclusively. Also, the LSA may be measured more accurately on MRI studies because of their superior resolution. Interestingly, in contrast to the findings of Abitbol, we observed no differences for children with different ambulatory status (walking versus wheelchair bound). We therefore hypothesize that the LSA develops as a result of the acquisition of erect posture and the influence of gravity rather than as a result of bipedal locomotion. The reader may have noted that the LSA change was larger in the study group than in the control group. 
He should realize, however, that initial measurements in the control group cannot be compared to initial measurements in the study group, because mean age at initial measurement was 95 months in the control group as opposed to 4 months in the study group. When dividing both groups into comparable age categories, a significant difference in LSA measurements was no longer observed. As pointed out by Abitbol, it is the natural course of the LSA to increase in the first 5 years of life. Therefore, the difference in LSA change between study and control group is explained by the different age distribution of the children, implying a different stage of LSA maturation. This also explains the observed increase in postoperative LSA (67.8°) compared to preoperative LSA (62.6°): the LSA does not change in children with progressive TCS but merely follows its natural course. Study limitations and future perspectives We used the same method used by as Abitbol [10] and Tubbs et al [6] in measuring the LSA. To the best of our knowledge, this method has never been validated; however, intra-and interobserver agreement in our study were high, suggesting the method is valid at least for this group of children. As mentioned above, this method is not suitable for severe (kypho)scoliosis that may be affecting children with severe spinal dysraphism. Three children were excluded for this reason. Unfortunately, as many as 82 children from the cohort of 115 children in our database were excluded, the reason almost invariably being that the original MRI hardcopies were missing. Some subgroups (e.g. different types of spinal dysraphism, different levels of spinal dysraphism) were too small (n≤2) to perform statistical analysis. However, we do not believe this influenced our overall conclusion. Ideally, the control group should have been obtained by random selection of healthy, non-hospitalized, age-matched children; however, we did not take this option because of ethical considerations and costs. Therefore, we chose children with medulloblastoma, excluding those with metastatic spinal cord compression which may mimic a tethering mechanism. For obvious reasons, imaging obtained in the postnatal period was unavailable in these children. Finally, conflicting findings in this study as compared to those obtained by Tubbs et al (both retrospective) may warrant a prospective study with healthy, agematched controls. Conclusion We did not observe significant differences in LSA measurements in children with clinically progressive TCS as opposed to clinically stable TCS. Therefore, the LSA does not help the clinician to determine if there is significant spinal cord tethering, nor if surgical untethering is needed. Conflict of interest The authors declare that they have no conflict of interest. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
2014-10-01T00:00:00.000Z
2010-09-21T00:00:00.000
{ "year": 2010, "sha1": "219b93378c3bf9c377add78980ae3ae41e287109", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00381-010-1281-0.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "219b93378c3bf9c377add78980ae3ae41e287109", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257599373
pes2o/s2orc
v3-fos-license
Lactiplantibacillus plantarum N-1 improves autism-like behavior and gut microbiota in mouse Introduction The gut-brain axis has been widely recognized in autism spectrum disorder (ASD), and probiotics are considered to potentially benefit the rescuing of autism-like behaviors. As a probiotic strain, Lactiplantibacillus plantarumN-1(LPN-1) was utilized to investigate its effects on gut microbiota and autism-like behaviors in ASD mice constructed by maternal immune activation (MIA). Methods Adult offspring of MIA mice were given LPN-1 at the dosage of 2  ×  109  CFU/g for 4  weeks before subject to the behavior and gut microbiota evaluation. Results The behavioral tests showed that LPN-1 intervention was able to rescue autism-like behaviors in mice, including anxiety and depression. In which the LPN-1 treatment group increased the time spent interacting with strangers in the three-chamber test, their activity time and distance in the central area increased in the open field test, and their immobility time decreased when hanging their tails. Moreover, the supplementation of LPN-1 reversed the intestinal flora structure of ASD mice by enhancing the relative abundance of the pivotal microorganisms of Allobaculum and Oscillospira, while reducing those harmful ones like Sutterella at the genus level. Discussion These results suggested that LPN-1 supplementation may improve autism-like behaviors, possibly via regulating the gut microbiota. Introduction Autism spectrum disorder (ASD) is a heterogeneous neurodevelopmental disorder consisting of three core symptoms: communication deficits, impaired sociability, and repetitive or restricted behavior (Lord et al., 2018). The incidence is higher in males than in females, with the ratio being closer to 3:1 (Loomes et al., 2017). ASD affects more than 1% of children in Western countries, while the prevalence in China is as high as 0.7% , and the rate is on the rise due to the improvements in identification, screening, clinical assessment, and diagnostic testing (Genovese and Butler, 2020). However, effective treatments for ASD remain elusive, and the etiology is also unknown. The major contributing factors that have been studied include genetics, environmental factors, and health conditions (Lyall et al., 2017). OPEN ACCESS EDITED BY It is reported that ASD patients are often afflicted with gastrointestinal (GI) problems (Kohane et al., 2012;Vuong and Hsiao, 2017), including diarrhea/constipation, abdominal pain, and gastric reflux. The studies suggest that may be caused by the presence of different intestinal flora structures in people with ASD than in healthy ones (Williams et al., 2011;Vuong and Hsiao, 2017;Coretti et al., 2018;. Recent studies have found that increased intestinal Lactobacillus and Desulfovibrio species in ASD patients are associated with the severity of ASD (Adams et al., 2011;Tomova et al., 2015). It has also been shown that Bifidobacterium, Prevotella, and butyric acid-producing bacteria are reduced and Desulfovibrio, Clostridium, and Sutterella are increased in ASD patients compared to healthy individuals (Zhang et al., 2018;. Moreover, evidence from animal models indicates that specific gut microbial changes may result in clinical symptoms resembling ASD. Probiotics and prebiotics can alleviate behavioral deficits, inflammatory responses and intestinal flora dysbiosis in a prenatal valproic acid (VPA)-induced rodent model of autism (Adıgüzel et al., 2022). What's more. 
The ecological dysbiosis of the intestinal microbiota in ASD mice was found to be driven mainly by alterations in specific operational taxonomic units (OTUs) of the bacterial classes Clostridium and Bacteroides fragilis, and treatment with B. fragilis was found to improve autism-related symptoms by improving intestinal flora and intestinal barrier function (Hsiao et al., 2013). These suggest that gut microbiota regulates normal host physiology, metabolism, nutrition, and brain function. Increasing research reveals the ability of the gut microbiota to signal across the so-called microbiota-gut-brain axis. A recent study shows that oral probiotics prevent maternal immune activation (MIA)-induced increases in IL-6 and IL-17A levels in both maternal serum and fetal brains, parvalbumin-positive (PV+) neuron loss, and the decrease in γ-aminobutyric acid levels in the prefrontal cortex of adult offspring . Clinical studies have also demonstrated that the use of probiotics and fructo-oligosaccharides can ameliorate ASD symptoms, including hyperserotonergic states and dopamine metabolism abnormalities, by altering the gut microbiota and increasing the amount of short-chain fatty acids (SCFAs) and serotonin (Wang et al., 2020). Two other studies showed that probiotics could improve social and self-grooming behaviors as well as intestinal permeability in the BTBR T + Itpr3 tf /J (BTBR) Mouse Model of ASD (Nettleton et al., 2021;Pochakom et al., 2022). Although previous studies showed that probiotics had the potential to reduce GI distress in individuals with ASD (Sanctuary et al., 2019), little was known about their effects on ASD behavior directly. The strain of Lactiplantibacillus plantarum N-1 (LPN-1; CGMCC NO. 15463), isolated from traditional cheese in Daocheng County, Sichuan Province by our laboratory before, is a probiotic strain. In vivo and in vitro experiments have shown that LPN1 has multiple probiotic functions, including acid-and bile salt-tolerant biology, the ability to modulate intestinal flora structure by producing multiple SCFAs, especially butyric acid, enhance intestinal barrier function, and reduce inflammation levels (Liu et al., 2017Wei et al., 2021;Tian et al., 2022Tian et al., , 2023. Therefore, we hypothesized that LPN-1, with broad-spectrum intestinal flora improvement effects, may improve anxiety-like behavior in ASD mice by improving their gut microbiota. Therefore, in this study LPN-1 intake was examined for its improvement of autistic-like behavior and its effect on the gut microbiota in the ASD mice model. Maternal immune activation rodent care and intervention This study was approved by the Animal Ethics Committee of West China Second University Hospital, Sichuan University (2020-035). There is a link between viral infection during pregnancy and an increased incidence of ASD in the child (Choi et al., 2016). Therefore, MIA is widely used in ASD research (Naviaux et al., 2013(Naviaux et al., , 2014Vuillermot et al., 2017;Minakova et al., 2019;Fujita et al., 2020;Xu et al., 2021;Tartaglione et al., 2022). We used a mouse model subjected to MIA, which was constructed by injecting pregnant mothers with poly (I:C; 20 mg/kg) on embryonic day 12.5, while the control group was injected with phosphate-buffered saline (PBS). Adult male offspring of MIA mice were randomized to (1) PBS; (2) ASD; (3) ASD + LPN-1 (2 × 10 9 CFU/g Also called LPN-1 group) administered through food for 4 weeks as shown in Figure 1A. Body weight and food intake were measured weekly. 
The animals [four mice/cage, and no single cage rearing for animal welfare (National Research Council Committee, 2011)] were housed in the Medical Laboratory Animal Center of West China Second University Hospital, Sichuan University, under SPF conditions, with a relative humidity of about 50%, temperature control of 22-25°C, adequate food and water, and 12/12 h fixed light cycle. Behavior tests The test mice were placed in the behavioral room 5 days in advance to acclimate to the environment. During the behavioral period, the testers tried to keep the color of their clothing the same. At about 14:00-18:00 every day, the mice were stroked on the experimenter's hand at a fixed time, 5 min each time for each mouse, to reduce nervousness and familiarize them with the experimenter. Mice were rested for 3-5 days before the next behavioral test. Three-chamber test A three-chamber device was used to test the social communication abilities of different groups of mice. The apparatus consisted of three Plexiglas chambers (60 × 40 × 22 cm), with the side chambers each connected to the middle chamber by a corridor (10 × 5 cm). The sociability of ASD mice was tested using a three-chambered device for three consecutive 10 min phases. During the first phase, mice were habituated to the three chambers for 10 min. In the second phase, two wire cages were introduced to the side chambers: one wire cage was empty, while the other was set up with unfamiliar mouse of the same sex and age which had no previous contact (stranger 1). The testing mouse was placed in the middle chamber, and the amount of time spent around each cage (stranger 1 or empty) was measured. Finally, an unfamiliar mouse (stranger 2) was placed in one of the side chambers, and a familiar mouse (stranger 1) was placed in the other side chamber. The testing mouse was free to explore the mouse from the previous sociability test (stranger 1), and the novel mouse (stranger Frontiers in Microbiology 03 frontiersin.org 2). The time spent in each chamber was recorded. Social behaviors were analyzed using a social behavioral analysis system (BW-Social LAB, Shanghai Biowill Co., Ltd.). The Plexiglas chamber was sterilized with 75% ethanol and wiped dry using paper towels between animal tests. Open-field test An open-field experiment device (40 × 40 × 40 cm) was used to detect the mice's anxious behavior. The test was performed using a method similar to a previous report (Katano et al., 2018). Before the test, the mice were placed in the device for 5 min, and then their behavior was recorded for 10 min. During the experiment, a curtain was used to completely isolate the experimental device from the external environment to avoid noise affecting the behavior of mice. Anxious behaviors were analyzed using a social behavioral analysis system (BW-Social LAB, Shanghai Biowill Co., Ltd.). The Plexiglas chamber was sterilized with 75% ethanol and wiped dry using paper towels between animal tests. Novel object recognition test The test was performed in an open field arena (40 × 40 × 40 cm). The novel object recognition test consisted of two stages. During a 10 min acquisition phase, the animals were placed at the center of the arena in the presence of two identical objects (6 × 6 × 6 cm). After 2 h, a 5 min retrieval phase was conducted, and one of the two familiar objects was replaced by a novel object (5 × 5 × 5 cm). The time spent exploring familiar and novel objects was recorded and analyzed. 
Exploration time is defined as the action of pointing the nose toward an object, at a maximum distance of 2 cm or touching it (Ennaceur and Delacour, 1988).The Plexiglas chamber was sterilized with 75% ethanol and wiped dry using a paper towel between animal tests. The "discrimination index" was calculated as follows: [(novel object time)/ (novel object time + familiar object time)]. Tail suspension test The tail suspension test is a behavioral test commonly used to detect depression in mice. We used specially manufactured tail suspension boxes made of plastic with the dimensions 55 cm height × 15 cm width × 11.5 cm depth. The mouse was suspended in the middle of this compartment, and the width and depth were sufficiently large so that the mouse could not make contact with the walls. The approximate distance between the mouse's nose and the apparatus floor was 20-25 cm. The resultant behavior was recorded by a video camera for 6 min. The behavior was later analyzed to determine the total duration of immobility; the total amount of time during which each mouse remained immobile was recorded in seconds. The Plexiglas chamber was sterilized with 75% ethanol and wiped dry using paper towels between animal tests. Histopathological examinations At the end of all behavioral experiments, the liver, kidney, and colon tissue were carefully removed and followed by phosphatebuffered saline wash. Then they were fixed in 10% phosphate-buffered formalin for 24 h. After dehydration, they were embedded in paraffin, the paraffin blocks were cut at 5 μm using a microtome, and the deparaffinized tissue slices were subjected to Masson and hematoxylin eosin (H&E) for histological examination. The 16S rRNA gene sequencing Fresh fecal samples were collected from the rectum at the end of the experiment and stored at −80°C. DNA was extracted and quantified by Nanodrop and the quality of DNA extraction was detected by 1.2% agarose gel electrophoresis (Nazhad and Solouki, 2008). The V3-V4 region of the bacterial 16S rRNA genes was amplified by polymerase chain reaction with primers 338F 5′-ACTCCTACGGGAGGCAGCA-3′ and 806R 5′-CGGACTACHVGGGTWTCTAAT-3′ (Wei et al., 2021). The PCR-amplified product was purified, quantified, and sequencing libraries were prepared using Illumina's TruSeq Nano DNA LT Library Prep Kit. The original sequences that passed the initial quality screening were subjected to the library and sample partitionin. Sequence denoising was performed according to the QIIME2 dada2 analysis process to obtain amplicon sequence variants (ASV). α-diversity and β-diversity were finally analyzed. And raw sequences have been uploaded to the NCBI database, No. PRJNA916455. Statistical analysis All data were expressed as mean ± SEM. Statistical analyses were performed using GraphPad Prism (version 8.0.2). The results were performed using two-way analysis of variance (ANOVA) or one-way ANOVA. p < 0.05 was considered statistically significant. LPN-1 improves social tests, reduce anxious and depression behavior in ASD mice We used the three-chamber social test to determine the socialbehavior abnormality (Figure 2A). We compared the time and distance spent in the chamber containing stranger 1 and the empty chambers. Mice in the PBS group (n = 8) spent more time and traveled a greater distance with stranger 1, whereas mice in the ASD group (n = 7) spent less time ( Figure 2B) and traveled a shorter distance ( Figure 2C), indicating social interaction deficits. 
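A minimal worked example of the discrimination index defined in the Methods above, using invented exploration times; values above 0.5 indicate a preference for the novel object.

```python
# Discrimination index = novel / (novel + familiar), as quoted above.
novel_time    = [34.2, 28.5, 41.0]   # seconds, hypothetical per-mouse values
familiar_time = [30.1, 27.9, 26.4]

for n, f in zip(novel_time, familiar_time):
    di = n / (n + f)
    print(f"novel = {n:5.1f} s, familiar = {f:5.1f} s, DI = {di:.2f}")
```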
In contrast, mice treated with LPN-1 spent more time with stranger 1 (p < 0.0001; Figure 2B) while spending significantly less time in empty chambers, indicating that the social preference index was significantly altered. We also found a significant increase in the number of entries in the ASD + LPN-1group (n = 8; p < 0.001; Figure 2D). There was no significant difference in the social time ( Figure 2E) among the groups or in the social distance Frontiers in Microbiology 05 frontiersin.org ( Figure 2F), but the number of entries to stranger 2 significantly increased in the ASD group (p < 0.05; Figure 2G). The results showed that LPN-1 could effectively rescue part social deficiency caused by poly (I:C) treatment during pregnancy. The novel object recognition (NOR) test is a relatively fast and efficient means of testing different phases of learning and memory in mice. In the NOR paradigm, we found a slight decrease in the time, distance, and the number of entries to novel objects explored by ASD mice. However, there were no significant differences among the three groups in the time spent around the novel object ( Figure 3A), the distance ( Figure 3B), or the number of entries ( Figure 3C) and rearing ( Figure 3D). Thus, in terms of cognitive performance, mice in the PBS (1) Novel object recognition test, phosphate-buffered saline (PBS) (n = 6), autism spectrum disorder (ASD) (n = 6), ASD + LPN-1 (n = 8). Experimental schedule showing the different phases of the NOR familiarization (10 min), and test (5 min (n = 6), ASD (n = 6), and ASD + LPN-1 (n = 8) groups did not show significant differences. However, LPN-1 intervention tended to increase the ability of ASD mice to explore new things, and may reach significant levels if the duration of LPN-1 intervention increases. We then performed the open-field test for a total duration of 10 min to detect anxious behavior in mice. In this test, the open-field trials present a conflict between the innate drive to explore a new environment and personal safety (Crawley, 2008). The longer time spent in the central area of the open field, the more distance traveled in the central area, and the more entries to the center indicate less anxious behavior in the mice (Crawley et al., 1997). As shown in Figures 3E,F, the PBS group (n = 12) spent more time and traveled longer distances in the center compared to the ASD group (n = 9). The results showed that the ASD group spent less time in the center, walked shorter distances, and entered the central area fewer times ( Figure 3G), suggesting that the ASD group had obvious anxiety behavior. However, after supplementation with LPN-1, there was no significant difference between the PBS group and the ASD + LPN-1 group (n = 12), indicating that the anxiety behavior of the mice was reduced. However, the increased rearing in the central area of the ASD + LPN-1 group indicated an increase in repetitive behavior ( Figure 3H). We used the tail suspension test to analyze depression-like behavior, as previously described (Umemura et al., 2017;Ueno et al., 2019). In the tail suspension test, the ASD group (n = 6) showed significantly increased immobility (p < 0.05; Figures 3I,J), indicating enhanced depressive-like behavior. Immobility time decreased after LPN-1 supplementation, and there was no significant difference in immobility time between the PBS group (n = 6) and the ASD + LPN-1 group (n = 11), indicating that LPN-1 may reduce the depressive behavior of mice. 
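The group comparisons behind these p-values were run as one-way ANOVA (see the statistical-analysis paragraph above); a minimal sketch with invented open-field centre times is shown below. The numbers and effect sizes are illustrative only, not the study data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical time spent in the centre of the open field (s) per mouse.
pbs      = np.array([92, 105, 88, 110, 97, 101, 95, 108, 90, 99, 103, 94])  # n = 12
asd      = np.array([55, 62, 48, 70, 58, 51, 66, 60, 53])                   # n = 9
asd_lpn1 = np.array([85, 90, 78, 102, 95, 88, 91, 99, 83, 87, 93, 96])      # n = 12

F, p = f_oneway(pbs, asd, asd_lpn1)
print(f"one-way ANOVA across PBS / ASD / ASD+LPN-1: F = {F:.2f}, p = {p:.3g}")
```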
Together, all the battery of behavior tests indicate that LPN-1 may improves social tests, reduce anxious and depression behavior in ASD mice model. LPN-1 conduce no harm to the organ tissues of ASD mice in this study We recorded the body weight and food intake of the animals on a weekly basis during the experiment (Figures 1B-D). All the mice were sacrificed at the end of the behavioral test, and their liver, kidney, and colon tissues were excised to assess the safety of LPN-1. H&E staining revealed that regular hepatic sinusoidal structure and clear hepatic lobules were observed in liver tissues, and cell edema, inflammatory cell infiltration, and severe intrahepatic hemorrhage were not observed in the three groups of mice ( Figure 4A). The morphology and organization of renal tissues in the sham group were normal; vacuolar degeneration in renal tubular epithelial cells, detachment of renal tubular epithelial cells, and infiltration of inflammatory cells were not observed ( Figure 4B). As shown in Figure 4C, the colonic structure of the three groups of mice was intact, and the intestinal glands were well arranged. Moreover, infiltration of inflammatory cells was not observed in the lamina propria mucosa and muscular layer. LPN-1 modulates the gut microbiota of ASD mice Figures 5-7 showed the results of species annotation analysis at the phylum and genus levels in three groups revealed by 16S rRNA sequencing. The alpha diversity indexes of Chao1, Pielou_e, and Shannon characterized significant differences in microbial populations among the PBS (n = 5), ASD (n = 5), and LPN-1 (n = 5) groups (p < 0.05). Multiple alpha diversity metrics of evenness, diversity and richness in ASD mice were higher than in PBS mice, but LPN-1 supplementation decreased those alpha diversity indexes ( Figure 5). According to the Non-metric Multidimensional scaling (NMDS), the apparent separation of microbial population structures between the ASD group and LPN-1 group was illustrated ( Figure 6C). Furthermore, hierarchical clustering analysis of the unweighted pair-group method with the arithmetic mean (UPGMA) showed that LPN-1 group clustered differently between ASD and PBS groups ( Figure 6D). This indicated that the three groups have different gut microbial compositions. In addition, the reduced alpha diversity of the LPN-1 group suggests that a dominant genus may have emerged and occupied the ecological niche of gut microflora. Therefore, the phylum and genus levels of gut microbiota in each group were further analyzed. Firmicutes and Bacteroidetes were the most predominant phyla in the gut bacteria of mice and abundant in all samples accounting for almost 90% ( Figure 6A). Compared to the PBS group, Bacteroidetes increased and Firmicutes decreased in the ASD group. However, the H&E staining showing histological cell morphology and inflammatory changes. H&E used to observe liver tissue (A), the cortex, including glomeruli, and renal interstitium are shown (B), and sections stained with H&E to assess the mucous membrane appearance of the colon (C). Frontiers in Microbiology 07 frontiersin.org LPN-1 supplementation reversed this appearance ( Figures 7A,B). And a significantly lower Bacteroidetes/Firmicutes ratio was shown in the LPN-1 group compared to the ASD group (p < 0.01; Figure 7C). 
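Two of the microbiome summary statistics used in this section can be sketched as follows on invented counts: a per-sample Shannon index and the phylum-level Bacteroidetes/Firmicutes ratio. In the study these were derived from the QIIME2 ASV table rather than computed by hand.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity (natural log) from a vector of ASV counts."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

sample_counts = [520, 300, 120, 60, 30, 10, 5]        # one hypothetical sample
print("Shannon index:", round(shannon(sample_counts), 3))

# Phylum-level relative abundances (fractions of total reads), invented.
bacteroidetes, firmicutes = 0.46, 0.44
print("Bacteroidetes/Firmicutes ratio:", round(bacteroidetes / firmicutes, 2))
```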
In addition, the results at the genus level showed that the relative abundance of Allobaculum was found to be more than 3-fold elevated after LPN-1 intervention, becoming the absolute dominant group of intestinal microorganisms in the treated group of mice ( Figure 6B). To further illustrate the significance of the differences, a one-way ANOVA analysis was performed on partial genera. The results showed that the intervention of LPN-1 significantly elevated the abundance of beneficial bacteria including Allobaculum and Oscillospira (p < 0.01; Figures 7D,E) in the intestinal flora of ASD mice, as well as Ruminococcus ( Figure 7F), Bifidobacterium ( Figure 7I) and Akkermansia ( Figure 7J) in spite of no significance yet. Further calculations showed that Allobaculum was elevated from 14.33 to 62.04% after LPN-1 intervention compared with the model group. In addition to increasing the variety of probiotic bacteria, we observed that LPN-1 treatment also significantly suppressed the abundance of the harmful bacterium Sutterella (p < 0.05; Figure 7H), and Desulfovibrio ( Figure 7G) showed a decreasing trend in the LPN-1 group. Discussion Neurodevelopmental diseases represented by ASD cause a huge medical burden to patients' families and the whole society. Although the etiology is still unclear, infection and inflammation during pregnancy are considered to be key causes of ASD (Modabbernia et al., 2017). In animal models, poly (I:C) injection during pregnancy results in increased release of local cytokines, including IL-17a (Choi et al., 2016), which can recapitulate the key symptoms of ASD and be used to examine the efficacy of the candidate remedies, especially Histogram of species distribution at the phylum (A) and genus (B) levels revealed by 16S rRNA sequencing. NMDS analysis based on weighted_unifrac_ distance among phosphate-buffered saline (PBS), autism spectrum disorder (ASD), and ASD + LPN-1 groups (C). Hierarchical clustering analysis (D). in MIA-associated ASD. Previous studies have found that probiotic supplementation in animal models can improve social deficits in mice with ASD (Sgritta et al., 2019) and improve anxiety-like behavior and elevate hippocampal BDNF levels in mice with low-grade intestinal inflammation (Bercik et al., 2010(Bercik et al., , 2011. Meanwhile, clinical studies have found that probiotic supplementation can reduce anxiety and depression behaviors and ameliorated the opposition and defiance behaviors of children with ASD Kong et al., 2021). These studies only found the effects of probiotics on improving social interaction and alleviating anxiety, and did not find any negative effects, nor did they perform 16 s gene sequencing. In the present study, we found that LPN-1 supplementation improved social and anxiety-like behaviors as well as depressive behavior, and that LPN-1 intervention tended to increase the ability of ASD mice to explore new things. In contrast, repetitive behaviors have increased after LPN-1 intervention. The present results showed that LPN-1 intervention significantly altered the intestinal flora structure of the ASD mice. 
The alpha diversity analysis revealed a significant decrease (p < 0.05) in the abundance, diversity, and homogeneity of the gut microbiome composition in all three groups, which may be due to the process of constructing the autism model led to an increase in the species and abundance of conditionally pathogenic bacteria in the intestine of the mice, and the LPN-1 intervention resulted in antagonism between microorganisms reduced the species and abundance of conditionally pathogenic bacteria, leading to an overall decrease. Similar results were seen in a study related to autism (Wan et al., 2021), which measured gut microbes in children with autism and showed that gut microbial abundance was significantly higher in children with autism than in age-matched normal children. Treatment with LPN-1 helped to restore the gut microbes of autistic mice to a similar structure to those of normal mice at the phylum level, including elevating the abundance of Bacteroidetes and reducing the abundance of Firmicutes. In addition, the ratio of gut microbial Bacteroidetes/ Firmicutes in autistic mice was significantly different from that of normal individuals. Several publications have demonstrated that the ratio of Bacteroidetes/Firmicutes in the gut bacteria of children with ASD was significantly increased compared to normal subjects (Kang et al., 2017;Coretti et al., 2018;Zhang et al., 2018). The results of the present study are consistent with previous reports, and the ratio of Bacteroidetes/Firmicutes was significantly reduced compared to the model group by LPN-1 treatment (p < 0.01). In addition, analysis of gut microbial 16 s sequencing results revealed that LPN-1 significantly increased the abundance of the probiotics Allobaculum and Oscillospira (p < 0.01) and decreased Sutterella (p < 0.05) at the genus level. Previous studies have shown that in ASD mice, Allobaculum abundance was significantly decreased and that GW4064 (a farnesoid X receptor agonist) restored the abundance of Allobaculum and improved autism . Moreover, it has been shown that Allobaculum is highly correlated with depression in mice, and this study showed a positive association between Allobaculum and neurotransmitter norepinephrine secretion Frontiers in Microbiology 09 frontiersin.org in mice by correlation analysis Xia et al., 2021). In conclusion, Allobaculum may be positively correlated with the treatment of various neurological diseases and showed a correlation with neurotransmitter secretion and neuronal development. Therefore, We supposed LPN-1 may affect the neurodevelopment of the organism by increasing the abundance and metabolism of the Allobaculum in the intestine to improve autism-related symptoms. The correlation between intestinal flora and clinical characteristics of children with ASD revealed that Oscillospira was negatively correlated with the Total Childhood Autism Rating Scale score and Oscillospira was significantly increased after LPN-1 intervention in our study (p < 0.01) . More surprisingly, the probiotics Bifidobacterium and Akkermansia occurred from absent to present in the intestine of ASD mice after LPN-1 intervention. As far as why it did not reach a significant increase, we speculate the time of one-month intervention is a bit short and the intestinal flora structure has not yet been achieved much well. Therefore, our subsequent animal experiments as well as clinical experiments will increase the intervention time of LPN-1 to make it reach the best condition. 
In contrast, there was no Sutterella in the LPN-1 group. Sutterella was one of the most important sources of lipopolysaccharide LPS, which could affect intestinal permeability and lead to an increase in plasma LPS concentration, triggering chronic low-grade inflammation in the organism. The relative abundance of Sutterella was higher in the intestine of children with ASD compared to normal children (Kang et al., 2017). A study shows that Sutterella was the predominant flora The relative abundance of gut microbiota at the phylum (A-C) and genus (D-J) level. *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001. Frontiers in Microbiology 10 frontiersin.org in ileal and cecum biopsies of children with autistic children with gastrointestinal dysfunction (AUT-GI) (Williams et al., 2012). In animal experiments, again with results similar to human studies, it was shown that the abundance of Sutterella in the colon of the offspring of autistic mice was significantly higher than that of normal mice (Sharon et al., 2019). Therefore, our findings suggested that the intake of probiotic LPN-1 not only increased the abundance of probiotics, but also reduced harmful bacteria, improved the structure of intestinal flora, and facilitates its healthy development. Our results suggested that probiotics may improve ASD by affecting gut flora, however it was inconsistent with the results of another study which, after correlating fecal macrogenomic and phenotypic data from children with ASD at a mean age of 8.7 years, concluded that it was not differences in gut flora that caused ASD, but the dietary preferences of children with ASD that caused the differences in gut flora (Yap et al., 2021). This discrepancy between our study and the results of that study, may be due to the fact that the data collection time of that study mostly spanned a critical period of neurodevelopment [before the age of three is an essential stage of human brain development (Cody et al., 2017)], and that the symptoms of these children were generally mild and perhaps not representative of the typical autistic population, not to mention denying the driving role of the flora. Of course, our ongoing experiments are proposed to elucidate how the probiotic LPN-1 improves autistic symptoms through the gut-brain axis (e.g., enterobacterial metabolites, intestinal permeability, blood-brain barrier, etc.), and we hope that our research can scientifically and objectively guide the public's perception of the relationship between autism and intestinal flora. However, there are some limitations in this study as well. We examined the effect of LPN-1 in ASD mice, but not in normal mice. The combination of LPN-1 with other probiotics or therapeutic drugs and the duration of effective treatment deserved further study. Therefore, much studies in the prevention of neurological diseases like ASD by combining probiotics with other drugs are needed. In addition, our study was conducted only in adult c57BL/6 male ASD mice, and female ASD mice were not included. Results may also be different in mice from other disease backgrounds, other age groups and other strains, like juvenile mice with unstable and immature microbiome structures. Probiotics act slowly and require a long-term continuous intervention to achieve a stable intervention, whereas in our study we only intervened for 4 weeks after the weaning period. 
For the sake of animal welfare, the mice in our experiments were not housed singly in a single cage and the final conclusions may need to be treated with caution. Conclusion We demonstrated that LPN-1 improved autism-like social phobic and depressive behavior in mice from a poly (I: C)-induced maternal immune activation model. The vital role of LPN-1 in increasing probiotic bacteria, including Allobaculum and Oscillospira, and decreasing the harmful ones of Sutterella in the gut microbiota was also highlighted, indicating the efficacy of LPN-1 intervention in the animal model. Further research on how LPN-1 affects neurologically related autism-like behavior via the gut-brain axis is under process. This study may provide new insight into the development of psychobiotics to ameliorate the autism-associated neurological disorders. Data availability statement The data presented in the study are deposited in the NCBI repository, accession number PRJNA916455.
2023-03-18T15:16:48.182Z
2023-03-16T00:00:00.000
{ "year": 2023, "sha1": "af657cfb5e29ec9123516daaf715cdf86740187f", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1134517/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "57c5188ed83f571a2cad69c5af93431c68e081f8", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
13693039
pes2o/s2orc
v3-fos-license
Quantised inertia from relativity and the uncertainty principle It is shown here that if we assume that what is conserved in nature is not simply mass-energy, but rather mass-energy plus the energy uncertainty of the uncertainty principle, and if we also assume that position uncertainty is reduced by the formation of relativistic horizons, then the resulting increase of energy uncertainty is close to that needed for a new model for inertial mass (MiHsC, quantised inertia) which has been shown to predict galaxy rotation without dark matter and cosmic acceleration without dark energy. The same principle can also be used to model the inverse square law of gravity, and predicts the mass of the electron. Introduction Although special relativity and quantum mechanics have been partially merged in quantum field theories, some aspects, and general relativity and quantum mechanics are still incompatible. For example, relativity is based on a smooth spacetime and demands locality, whereas quantum mechanics is modelled using discrete particles and quantum experiments seem to demand non-locality [1][2][3][4][5]. In some instances it has been possible to combine general relativity and quantum mechanics, at least partially, for example [6] proposed that the event horizons caused by the strong gravity within black holes would seperate pairs of particles produced by the quantum vacuum, leaving one to fall into the black hole and one to escape, giving rise to a new kind of radiation called Hawking radiation that originates from a combination of relativity (curved space) and quantum mechanics on a large scale. There is now some evidence that at least analogues of this process occur [7]. [8], [9] and [10] showed that when an object accelerates, say, to the left, an information horizon, very like an event horizon, forms to its right since information which is limited to the speed of light by relativity cannot now get to the object from behind that horizon. They showed that this horizon can seperate paired virtual particles in a similar way to a black hole event horizon, leading to the production of acceleration-dependent Unruh radiation. This conclusion is now generally accepted, but see [11] for remaining controversies. It is possible that Unruh radiation has already been observed [12]. An early inspiring attempt to implicate quantum mechanics and the zero point field in inertial mass was made by [13]. However, they required an arbitrary cutoff to make their scheme work. Also, [14] questioned whether Unruh radiation might account for inertial-MoND (Modified Newtonian Dynamics), but concluded that Unruh radiation was unlikely to be the cause of inertia because it was isotropic. A new model for inertia was proposed by [15,16]. It is called Modified inertia by a Hubble-scale Casimir effect, MiHsC or quantised inertia. This model assumes that the inertia of an object is due to the Unruh radiation it sees when it accelerates. The relativistic Rindler horizon that appears in the opposite direction to its acceleration damps the Unruh radiation on that side of the object producing an anisotropic radiation pressure that looks like inertial mass [16]. So inertia arises in this model from the interplay of relativity (horizons) and quantum mechanics (Unruh waves). Also, when accelerations are extremely low the Unruh waves become very long and are also damped, this time equally in all directions, by the Hubble horizon (Hubble-scale Casimir effect) [15]. This leads to a new loss of inertia as accelerations become tiny. 
So MiHsC modifies the standard inertial mass (m) to a modified one (m i ) as follows: where c is the speed of light, Θ is the diameter of the observable universe and '|a|' is the magnitude of the acceleration of the object relative to surrounding matter. Eq. 1 predicts that for terrestrial accelerations (eg: 9.8m/s 2 ) the second term in the bracket is tiny and standard inertia is recovered, but in low acceleration environments, for example at the edges of galaxies (when a is tiny) the second term in the bracket becomes larger and the inertial mass decreases in a new way so that quantised inertia (MiHsC) can explain galaxy rotation without the need for dark matter [17] and cosmic acceleration without the need for dark energy [15,18]. There are also anomalies seen in Solar system probes [19] that can be explained by this model [15,20]. Quantised inertia does not significantly affect the predictions of general relativity for high accelerations and only becomes significant for very low accelerations or upon a change in acceleration. Similarly, applying quantum mechanics on a large scale [21] derived Newtonian gravity from the uncertainty principle. The main aim of this paper is to extend [21] and show that both gravity and quantised inertia can be derived by allowing large-scale dynamics or horizons to determine the position uncertainty in the Heisenberg uncertainty principle, and allowing the resulting energy uncertainty to become real. Gravity from Uncertainty Imagine there are two Planck masses orbiting each other. With Planck masses, we are still, just, in the quantum realm, Heisenberg's uncertainty principle applies to their mutual position uncertainty (∆x) given by the distance between them, and momentum (∆p), and the total uncertainty is twice that for a single particle If a bigger mass M has N Planck masses in it, and another big mass m has n of them, then we can add up all the possible interactions (all the various uncertainties: c) between the various Planck masses The double summation on the right hand side is equal to the number of Planck masses in mass m (m/m P ) times the number in M (M/m P ), where m P is the reduced Planck mass, so Now let us imagine that the Planck masses within m and M are being buffeted from all sides by particles from the zero point field and moving at random. The net effect, forgetting horizons for a moment, will be zero. Sometimes random motion will increase the distance between the two objects, ∆x, so their uncertainty in energy, ∆E, decreases, and sometimes it will decrease ∆x, so the uncertainty in energy, ∆E, will increase. This latter event means that energy will suddenly be available that wasn't before, extracted from the decrease in position uncertainty, and if the objects continue to move together then more energy will be released in this way allowing the motion to continue. What if we assume that the sum of the kinetic energy and the energy uncertainty is conserved? Differentiating Since the right-most fraction can be written as ∆v we get Now we assume that m(∆a) = F (force) and that the uncertainty of the average position (△x) is the orbital radius r This looks like Newton's gravity law, and if we insert the value of the Planck mass, for which the value of G must be assumed, we get The force required to drive the motion only becomes available for objects moving closer together since this reduces ∆x and increases ∆E (the inevitability of attraction was not discussed in [21]). 
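The displayed equations of this section, including Eq. 1 (which in the published form of the model is usually written m_i = m(1 - 2c^2/(|a| Theta))), did not survive text extraction here. As a sanity check on the closing step of the gravity derivation, the prefactor hbar*c/m_P^2 does reproduce G numerically when m_P is taken as the standard Planck mass, which is exactly the circularity the Discussion below acknowledges; the snippet is only that check, not part of the paper.

```python
# Sanity check: F = (hbar*c / m_P**2) * m*M / r**2 reduces to Newton's law when
# m_P is the standard Planck mass, because m_P is itself defined from G.
hbar = 1.054571817e-34     # J s
c    = 2.99792458e8        # m / s
G    = 6.67430e-11         # m^3 kg^-1 s^-2 (used only to define m_P and compare)

m_P = (hbar * c / G) ** 0.5                     # ~2.18e-8 kg
print("hbar*c / m_P**2 =", hbar * c / m_P**2)   # ~6.674e-11, i.e. G
```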
In this model, gravity is a process by which quantum mechanics applies at this large scale and converts position uncertainty to energy uncertainty, which shows up as an acceleration-dependent heat (Unruh radiation) and so it satisfies the second law of thermodynamics: increasing entropy. It has therefore been shown that Newton's gravity law can be produced if a summation is made for all interactions between masses equal to the Planck mass, but this requires an assumption of the value of G [21]. Quantised Inertia from Uncertainty Again, using Heisenberg's momentum-position uncertainty principle we get ∆p∆x ∼ Since E = pc we can write The energy uncertainty is then ∆E ∼ c/∆x. The new proposal here is that if the particle in question accelerates and a relativistic Rindler horizon forms then this destroys knowledge of all positions beyond the horizon and decreases the uncertainty in position ∆x. From Eq. 12 we would then expect the uncertainty in energy to go up. Now, as above we assume that what is conserved in nature is not mass-energy, but rather mass-energy plus the energy uncertainty identified above, as follows where the m 1 and m 2 are the initial and final inertial masses and ∆x 1 and ∆x 2 are the initial and final positional uncertainties. Note that the energy uncertainty terms are usually many orders of magnitude smaller than the massenergy terms. Rewriting we get Now we can start to consider relativistic horizons. For an minimally-accelerated object (a zero acceleration cannot exist in MiHsC) the maximum uncertainty in position has to be due to the cosmic horizon, and equal to the radius of the cosmos, so ∆x 1 = Θ/2 so that If an object then is subjected to an acceleration, a, then a Rindler horizon forms at a distance d = c 2 /a away. So the new uncertainty in position is smaller Now an acceleration 'a' is associated with Unruh radiation of wavelength λ where, using Unruh's expression for the Unruh temerature T = a/2πck and Wien's law T = βhc/kλ where β = 0.2, it follows that that a = 4π 2 c 2 β/λ. Also E = hc/λ. Using these to replace the 'a' in the factor, we get So that Using E = mc 2 we get This is the same as Eq. 1, except for the initial factor of 2πβ ∼ 1.26 which could be due to the crudity of this model, which has treated the Rindler horizon as being a sphere around the object whereas it is a more complex shape. The important point is that Eqs. 1 and 20, by allowing quantum mechanics and relativity to interact in this way, can model the observed anomalous galactic rotation without dark matter [17] and the observed cosmic acceleration without dark energy [15,18]. Particle masses An electron can be regarded as a photon that has become confined to a particular orbit and so Eq. 14 can be used to predict the mass-energy of the electron as follows Initially the photon is confined to the cosmic scale so ∆x 1 = Θ/2 and it is known that for it to form an electron it must have the Compton wavelength λ C = 2.426 × 10 −12 m so Neglecting the second term, which since Θ ∼ 10 26 m is about 38 orders of magnitude smaller than the first, we get This is very close to the mass of the electron measured in experiments. Similarly we can consider the protons and neutrons which are confined to the nucleus of radius r n = 1.75 × 10 −15 m (for hydrogen) so that This is close to the observed masses of the proton and neutron which are 1.67 × 10 −27 kg. 
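The particle-mass estimates quoted here can be checked to order of magnitude as follows. With the cosmic-horizon term neglected, the confined-photon argument reduces to m ~ h/(cL) for a confinement length L (from E = hc/lambda and E = mc^2); whether the paper's equations carry additional numerical factors cannot be verified from this extraction, so the snippet below is a hedged back-of-envelope check only. It also includes the 2*pi*l_P case taken up immediately below.

```python
# m ~ h / (c * L): electron (Compton wavelength), proton (nuclear radius, order
# of magnitude only), and the 2*pi*l_P case discussed in the next passage.
import math

h = 6.62607015e-34     # J s
c = 2.99792458e8       # m / s

cases = [
    ("electron, L = Compton wavelength", 2.426e-12,               9.109e-31),
    ("proton,   L = nuclear radius",     1.75e-15,                1.673e-27),
    ("Planck,   L = 2*pi*l_P",           2 * math.pi * 1.616e-35, 2.176e-8),
]
for label, L, observed in cases:
    m = h / (c * L)
    print(f"{label}: {m:.3e} kg (observed ~ {observed:.3e} kg)")
```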
Equation 24 also predicts a small correction to the proton mass given by the second term in the bracket, which is about 41 orders of magnitude smaller than the first term in the bracket. If we use the Planck length 1.616 × 10 −35 m instead this gives This is close to the Planck mass, which is 2.2176 × 10 −8 kg. The agreement is very close if we use a scale of 2πl P Thus the assumption that what is conserved in nature is not mass-energy as previously assumed, but mass-energy plus the energy uncertainty and assuming the position uncertainty is determined by relativistic horizons, allows the calculation of some particle masses in this way as well as Newtonian gravity and quantised inertia (MiHsC). Discussion These derivations can be explained more intuitively as follows. For gravity: As the radius of an orbit decreases and so the uncertainty in position decreases, then the momentum (dp = F dx/c) or force (F ) on the orbiting body must increase, producing an inverse square law. In the above gravitational derivation, the correct value for the gravitational constant G can only be obtained when it is assumed that the gravitational interaction occurs between whole multiples of the Planck mass, but this last part of the derivation involves some circular reasoning since the Planck mass is defined using the value for G (this was not discussed in the precursor gravity paper, [21]). This paper also builds on [21] by showing how this formalism specifically implies attraction rather than repulsion (previously it could have been either). For inertia: as an object accelerates, a relativistic Rindler horizon forms in the opposite direction. This curtails the object's observable space and reduces its uncertainty in position. The uncertainty principle then implies that the uncertainty in momentum (or energy) must increase, and the energy released agrees (within the uncertainty of the calculation) with the specific energy required for quantised inertia (MiHsC) which allows the prediction of galaxy rotation without dark matter and cosmic acceleration without dark energy. Conclusion The uncertainty principle of quantum mechanics states that if the uncertainty in position reduces, then the uncertainty in momentum increases. Relativity predicts that if an object accelerates, a Rindler horizon forms, curtailing its observable space. If we combine these two principles, the formation of the Rindler horizon reduces position uncertainty, increasing energy uncertainty. It has already been shown, in a similar way, that if we accept this energy as being real, Newtonian gravity is the result, though a value for G has to be assumed. It is shown here that using the same method, the model known as quantised inertia or MiHsC can also be derived, solving the problems of galaxy rotation and cosmic acceleration, and predicting the electron mass.
2016-10-13T14:50:14.000Z
2016-10-13T00:00:00.000
{ "year": 2016, "sha1": "32731e774b42e44087f463fedbc3a1432a946b5b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.06787", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "32731e774b42e44087f463fedbc3a1432a946b5b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257426870
pes2o/s2orc
v3-fos-license
Preparation, characterisation, and in vitro cancer‐suppression function of RNA nanoparticles carrying miR‐301b‐3p Inhibitor Abstract Background Multidrug resistance is the biggest barrier on the way to chemotherapy for lung adenocarcinoma (LUAD). For some LUAD patients with cisplatin (DDP) resistance and poor prognoses, the authors put forward RNA nanoparticles (NPs) carrying miR‐301b‐3p Inhibitor. Methods The NPs were composed of miR‐301b‐3p, A549 aptamer (A549apt), and Cyanine 5 in a bottom‐up manner with a 3‐way‐junction (3WJ) structure. Diameter, assembly process, and morphology of NPs were observed by Dynamic Light Scattering, Native‐Polyacrylamide Gel Electrophoresis, and Atomic Force Microscopy. Cell internalisation, toxicity, proliferation, migration, invasion, and apoptosis were assayed by confocal laser scanning microscope, CCK8, colony formation assay, Transwell, western blot, and flow cytometry. Results 3WJ‐apt‐miR was evenly distributed, with diameter of 19.61 ± 0.49 nm and triangular branching structures. The accurate delivery of this NP in vivo was ensured by A549 aptamer featuring specific targeting, with smaller side effects than traditional chemotherapy. Such nanomaterials were effectively internalized by cancer cells, with normal cell activity intact. Cancer cell proliferation, invasion, and migration were suppressed, and DDP sensitivity was enhanced, causing DNA damage and facilitating apoptosis of DDP‐resistant cells. Conclusion Based on RNA self‐assembling, the authors researched the effect of miRNA on DDP sensitivity in LUAD regarding gene regulation. 3WJ‐apt‐miR paves the way for clinical tumour therapy. suppress tumour growth by manipulating miRNA levels. For instance, Wang et al. [8] extracted miR-141, an RNA associated with LC metastasis, from exosomes, and revealed that miR-141 promotes angiogenesis in LC by targeting growth arrest specific homeobox gene (GAX), thus affecting invasion and metastasis of cancer cells. Fan H et al. [9] mentioned that miR-301b-3p is highly expressed in gastric cancer and miR-301b-3p knockdown substantially represses cell proliferation and induces G1 phase arrest and apoptosis. Researchers also clarified that miR-301b-3p is highly expressed [10] and can be an early diagnostic biomarker for NSCLC [11]. Liu et al. [12] unveiled that miR-301b-3p restrains tumour growth through downregulating DLC1. However, the short half-life and in vivo circulation time of miRNAs and undesired off-target effects are disadvantages that limit their delivery efficiency at tumour sites. Thence, identifying a suitable vector for delivering miRNAs is an urgent need. The achievements in nanotechnology bring forth a novel delivery system, which allows siRNAs or miRNAs to be delivered to cells in the tumour microenvironment, thereby affecting cancer cells and immune-infiltrating cells [13,14]. For instance, Li et al. [15] prepared a peptide nano-delivery system using peptides as raw materials and assembled with miR-16 molecules, and it is homogeneous in particle size, stable, and able to target ovarian cancer cells and reduce cisplatin resistance. Yang et al. [16] utilised liposomes as carriers to load and encapsulate miR-214, and this material is biocompatible and able to reduce the activity of p53 pathway and reverse expression of related proteins, thereby constraining apoptosis of intestinal cancer cells. 
These delivery systems protect miRNAs from being degraded by nucleases and prolong the half-life of miRNAs in blood [17], which frees miRNAs from lysosome degradation and delivers them to the cytoplasm and nucleus. In recent years, the application of RNA nanoparticles (NPs) in tumour therapy has become increasingly sophisticated [18]. For example, Yin et al. [19] used stable three-way junction (3WJ) motifs as scaffolds to deliver RNA aptamers that bind to CD133, as well as anti-miRNA21, to improve tumour targeting affinity and therapeutic efficacy. Furthermore, nanosystems have been shown to treat cancer and induce an anti-tumour immune response by reducing systemic toxicity and eventually delivering siRNAs and miRNAs to tumour cells and immune cells [20]. These novel nanomedicines have been developed and have become a focus of research. We referred to the synthesis methods in the works that used RNA nanotechnology to specifically deliver miRNAs to effectively inhibit prostate cancer and triple-negative breast cancer [21,22]. We applied a self-assembling technology for RNA NPs, by which miR-301b-3p Inhibitor (miR-301b-3p being upregulated in LUAD cells and related to cisplatin (DDP) sensitivity), A549apt (the nucleolin aptamer AS1411, which can bind to nucleolin on the membrane surface of cancer cells with high specificity) and Cyanine 5 (Cy5) (for marking and tracing) were assembled into thermodynamically stable NPs with a 3WJ structure, and we investigated whether LUAD can be treated through repressing miR-301b-3p. Figure 1 shows the cellular mechanism of cell apoptosis and DDP sensitivity mediated by 3WJ-A549apt-miR-301b-3p Inhibitor NPs. Meanwhile, we examined the transfection efficiency of the self-assembled NPs, the cellular functions, and the DDP sensitivity of post-transfected cancer cells, shedding light on treatment for LUAD.

Design and generation of 3WJ-apt-miR nanoparticles

Generation of RNA oligonucleotides

DNA template synthesis was done based on previous literature with some modifications [23]. First, a pair of DNA primers was selected by standard polymerase chain reaction (PCR) to obtain a DNA template containing the T7 promoter (5′-TAATACGACTCACTATA-3′) at the 5′ end. The amplified DNA templates were purified with a gel extraction kit, and the obtained DNA templates were placed in agarose gels (2%) in Ethylene Diamine Tetraacetic Acid (EDTA) buffer and treated at room temperature for 40 min. The elution process of the DNA samples was observed using UV, and the purified DNA samples were collected and stored in 1× Tris-EDTA (TE) buffer (10 mM Tris-HCl and 1 mM EDTA) and set aside. RNA oligonucleotides (Strand 1, Strand 2, and Strand 3) were transcribed in vitro from the corresponding DNA templates by T7 RNA polymerase. For specific synthesis steps, refer to reference [23].

Construction of 3WJ-apt-miR nanoparticles

The synthesis steps of the 3WJ-apt-miR NPs are shown in Figure 2, and the synthesis method was adapted from previous literature with modifications [24]. The schematic structure of the 3WJ motif is presented in Figure 2a, based on which we mixed 3WJ-b and A549apt in distilled water at a ratio of 1:1 at room temperature and annealed the product from 95°C to 4°C, finally obtaining 3WJ-b-A549apt. 3WJ-c-Cy5 was prepared in the same way, and the four modular strands (3WJ-a-sph1, sph1-miR-301b-3p, 3WJ-b-A549apt, and 3WJ-c-Cy5) were mixed at a ratio of 1:1:1:1 at room temperature.
The products were annealed from 95°C to 4°C, and finally the stable 3WJ-A549apt-miR-301b-3p Inhibitor (3WJ-apt-miR) structural domain was obtained (Figure 2b). For comparison, 3WJ-miR-301b-3p Inhibitor (3WJ-miR) was synthesised following the same procedure without the addition of A549apt.

Material characterisation

The particle size distribution of 3WJ-apt-miR was tested with a Dynamic Light Scattering (DLS) particle size analyser; the interaction of RNA in the material was analysed using Native-Polyacrylamide Gel Electrophoresis (Native-PAGE). The procedure was as follows: 5 μL of 10 mM bovine serum albumin (BSA), 5 μL of Dithiothreitol (DTT), 5 μL of yeast tRNA, 10 μL of binding buffer and 50 μL of the RNA substrate to be detected were added to the tube, mixed, and incubated for 30 min at room temperature in the dark. 15 μL of 50% glycerol was added to the incubated reaction product and mixed well, and the mixture was added to the gel in a volume of 30 μL. After turning on the power, gels were run at 120 V at 4°C in the dark. Next, images were captured using an imager [25]. The surface morphology of 3WJ-apt-miR was examined using Atomic Force Microscopy (AFM).

RNA extraction and quantitative reverse transcription PCR (qRT-PCR)

Complying with the instructions, we extracted total RNA from cells using a TRIzol Kit (10,296,010, Invitrogen, Carlsbad, CA, USA). The reverse transcription of RNA was accomplished with M-MLV (Takara, Otsu, Japan). The cDNA acquired was amplified with a Synergy Brands (SYBR) Green Master Mix kit (Takara). The miR-301b-3p level was quantified by an Applied Biosystems 7300 Real-Time PCR System (Applied Biosystems, USA). This assay was repeated independently three times. U6 was the internal reference for miR-301b-3p (Table 1), with the 2^(−ΔΔCt) value used for quantification and normalisation. In the same manner, 50 nM of FITC-labelled 3WJ-apt-miR NPs (Cy5-labelled) and 3WJ-miR NPs were co-cultivated with the A549 cell line for 0, 1, 2, 4, 8, 12, and 24 h, and the system was subjected to flow cytometry at each time point.

Cell toxicity detection

A CCK-8 assay evaluated the effect of 3WJ-apt-miR NPs on the viability of the BEAS-2B and A549/DDP cell lines. The steps were as follows: 100 μL of BEAS-2B and A549/DDP cell suspension (about 5 × 10^4 cells/mL) were plated onto 96-well plates. These plates were pre-cultivated in a humidified incubator for 24 h (37°C, 5% CO2). 5 nM of miR-301b-3p Inhibitor and 50 nM of 3WJ-apt-miR NPs were used to treat these two groups of cells, with pure medium as the control group. The cultivation was then continued in an incubator (37°C, 5% CO2) for 24, 48, and 72 h. After the cells were washed with PBS, 100 μL of fresh medium (with 10% CCK-8) was introduced. Following incubation, samples were read on a microplate reader (Victor X, PerkinElmer, λ = 450 nm) to measure their optical density (OD).

Colony formation assay

A549/DDP cell suspensions treated with PBS, miR-301b-3p Inhibitor, and 3WJ-apt-miR NPs were plated onto 6-well plates at 1 × 10^3 cells per well. After 2 mL of medium was introduced, the cultivation lasted 7 days. Cells were rinsed twice with PBS, fixed with 4% paraformaldehyde (Thermo Fisher, USA), and stained with 0.1% crystal violet (Thermo Fisher, USA). Four random fields were selected under a light microscope (Leica, Wetzlar, Germany) for cell counting.
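The qRT-PCR readout described above is normalised with the 2^(−ΔΔCt) method against the U6 reference. As a minimal illustration of that calculation (not code from the authors; the Ct values below are hypothetical), the relative miR-301b-3p level can be computed as follows:

```python
# Illustrative 2^(-ΔΔCt) relative quantification of miR-301b-3p, normalised to U6
# (the internal reference named above). All Ct values below are hypothetical.

def ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target vs. a control sample by 2^(-ΔΔCt)."""
    d_ct_sample = ct_target - ct_ref              # ΔCt, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt, control sample
    dd_ct = d_ct_sample - d_ct_control            # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical example: inhibitor-treated cells vs. untreated control.
rel_expr = ddct(ct_target=26.0, ct_ref=18.0,          # treated: miR-301b-3p, U6
                ct_target_ctrl=23.0, ct_ref_ctrl=18.0)  # control: miR-301b-3p, U6
print(f"miR-301b-3p relative expression: {rel_expr:.3f}")  # 0.125, i.e. ~8-fold lower
```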
In vitro assays for migration and invasion

The effects of NPs on the migration and invasion of A549/DDP cells were assessed using a modified 6.5-mm transwell chamber with a polycarbonate membrane (pore size 8.0 μm). A549/DDP cells (1 × 10^5) were transfected for 24 h, trypsinised, and suspended in 200 μL of serum-free Roswell Park Memorial Institute-1640 (RPMI-1640) medium. Cells were then seeded in the top chamber either uncoated (for the migration assay) or precoated with 20 mg of Matrigel (for the invasion assay). Medium containing 10% FBS in the bottom chamber served as the chemoattractant. After 24 h of incubation, non-migrated or non-invaded cells were wiped off from the top chamber with a cotton swab. Cells at the bottom of the chamber were fixed with 4% paraformaldehyde, stained with 0.5% crystal violet for 1 h, and observed under a microscope.

Western blot

Cells were lysed using Radio Immuno Precipitation Assay (RIPA) lysis solution (Solarbio Technology Co., Ltd.) for total protein extraction, and total protein was quantified by bicinchoninic acid assay. Extracted proteins were resolved using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto polyvinylidene difluoride (PVDF) membranes (Sigma-Aldrich). Membranes were subsequently blocked (room temperature, 1 h) using 5% skimmed milk. At the end of blocking, the membranes were rinsed three times with PBS + Tween-20 (PBST) buffer and then incubated with antibodies including rabbit anti-γ-H2AX (Abcam, China), rabbit anti-Bax (Abcam, China), rabbit anti-Bcl-2 (Abcam, China), and rabbit anti-GAPDH (Abcam, China), followed by 3 PBST washes. Finally, HRP-labelled goat anti-rabbit secondary antibody IgG H&L (ab7090) was introduced for 1 h of incubation at room temperature. Protein bands were developed using a highly sensitive enhanced chemiluminescence kit for luminescence (Solarbio, Beijing, China) and photographed with gel imaging software.

Statistical analysis

All experimental data were acquired by repeating assays at least three times, and all statistical analyses were performed using Prism software (GraphPad Prism 6). The results are expressed as mean ± standard deviation. A t-test was used to compare two groups, and one-way analysis of variance was used to compare multiple groups. The threshold for the p-value was 0.05, and values less than 0.05 were considered statistically significant.

Synthesis and characteristics of 3WJ-apt-miR nanoparticles

We utilised a highly stable 3WJ motif from phi29-packaging RNA as the core structure for the NP, and the entire system is chemically stable after 2′-fluoro (2′F) modification. The functional modules, namely the A549 aptamer (A549apt) targeting A549 cell lines, the Cy5 fluorescent dye, and the miR-301b-3p Inhibitor, which can regulate miR-301b-3p expression, were all integrated into the pRNA-3WJ structure. Then, multifunctional RNA NPs were synthesised in a bottom-up manner by self-assembling. The DLS test (Figure 3a) revealed that the synthesised RNA NPs had a particle size of about 19.61 ± 0.49 nm (PDI: 0.22) with an even dispersion. As depicted in the Native-PAGE gel plots (Figure 3b), the bands from left to right corresponded to the 4 oligonucleotide strands, namely 3WJ-c-Cy5 (16 nt), sph1-miR-301b-3p (33 nt), 3WJ-a-sph1 (43 nt), and 3WJ-b-A549apt (59 nt), respectively. When the different oligonucleotides were assembled, the gel electrophoresis patterns also changed, indicating the successful synthesis of 3WJ-apt-miR.
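Referring back to the statistical analysis subsection above (assays repeated at least three times, two-group comparisons by t-test, multi-group comparisons by one-way ANOVA, significance at p < 0.05), a minimal SciPy sketch with hypothetical replicate values illustrates the same comparisons; the group names and numbers are invented for illustration and are not the paper's data:

```python
# Minimal sketch of the statistics described in the Methods above:
# two-group comparison by t-test and multi-group comparison by one-way ANOVA,
# with p < 0.05 taken as significant. The replicate values are hypothetical.
from scipy import stats

control    = [1.00, 0.97, 1.03]   # e.g. normalised viability, PBS group
wj_mir     = [0.78, 0.74, 0.80]   # 3WJ-miR NPs
wj_apt_mir = [0.55, 0.58, 0.52]   # 3WJ-apt-miR NPs

t_stat, p_two_groups = stats.ttest_ind(wj_mir, wj_apt_mir)
f_stat, p_anova = stats.f_oneway(control, wj_mir, wj_apt_mir)

alpha = 0.05
print(f"t-test (3WJ-miR vs 3WJ-apt-miR): p = {p_two_groups:.4f}, "
      f"significant = {p_two_groups < alpha}")
print(f"ANOVA  (all three groups):       p = {p_anova:.4f}, "
      f"significant = {p_anova < alpha}")
```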
Figure 3c illustrated AFM image of the composite RNA nanomaterial and depicted homogeneous triangular branching structures similar to the 3WJ motif. | Cellular uptake of nanoparticles RNA NPs performance is closely related to cellular uptake levels. Fluorescein isothiocyanate and Cy5 were used to colabel NPs. Then, cellular uptake of 3WJ-miR NPs and 3WJapt-miR NPs was observed by CLSM, as shown in Figure 4a. After 4 h of cultivation, the strongest fluorescence by FITC and Cy5 was observed in the 3WJ-apt-miR NPs group incubated with A549 cells, which had a strong binding ability in comparison with 3WJ-miR NPs. The reason may be the ability of A549apts to target and recognise A549 cells, greatly promoting the internalisation ability of tumour cells to NPs, and the NPs were proportional to the fluorescence intensity. The experimental results presented that 3WJ-apt-miR NPs had efficient internalisation ability. about 2-fold higher than that of 3WJ-miR NPs at 4 h, with a significant difference. This suggested that A549apts greatly enhanced the uptake of NPs. The results implied that A549apt on NPs could effectively target and recognise A549 cells, facilitate the internalisation of NPs by cells, and effectively improve the inhibitory ability of miR-301b-3p Inhibitor on cancer cells. | 3WJ-apt-miR nanoparticles regulate proliferation, invasion, migration, apoptosis, and cisplatin sensitivity of lung adenocarcinoma cells In previous work, we have verified that miR-301b-3p was upregulated in LUAD and was able to effectively repress the proliferation of LUAD by silencing this gene. LncRNA (LINC01089), an endogenous RNA against miR-301b-3p, is able to affect LC progression by competitively modulating 15-Hydroxyprostaglandin Dehydrogenase (HPGD) [28]. SLC16A1-AS1 is a lncRNA that, when overexpressed, is able to target miR-301b-3p and act as a tumour suppressor [29]. We herein successfully synthesised an RNA NP that could target A549 cells, which loaded miR-301b-3p Inhibitor. Next, we investigated the effect of 3WJ-apt-miR NPs on the biological function of LUAD cells. First, miR-301b-3p level in each transfection group was examined via qRT-PCR. As manifested in Figure 5a, miR-301b-3p level was remarkably repressed in 3WJ-apt-miR NPs, indicating the successful preparation of 3WJ-apt-miR NPs. CCK-8 revealed that DDP could repress the growth of A549/DDP cells (IC50 = 21.79 μmol/L at 48 h) and A549 cells (IC50 = 2.195 μmol/L at 48 h). The resistance of A549/DDP was about 10-fold higher, indicating a repression on the growth of A549/DDP and A549 cells by DDP and a marked DDP resistance of A549/DDP cells (Figure 5b). Next, we assessed the toxic effect of 3WJ-miR/3WJ-apt-miR NPs on BEAS-2B and A549/DDP cells using CCK-8 assay. After transfection at 24, 48, and 72 h, the cell growth inhibition rate was determined. It was found that above 90% BEAS-2B viability remained with the increase of incubation time, and there was no marked effect on BEAS-2B cells with the increase of NP concentration. Additionally, in A549/DDP cells, after transfection at 24, 48, and 72 h, the cell viability of the 3WJ-miR and 3WJ-apt-miR NPs groups gradually decreased, which was notably different from the control group, and the 3WJ-apt-miR NPs group manifested better ability in targeting, and the higher the NP concentration, the greater the effect on cell viability (Figure 5c). 
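The roughly 10-fold resistance quoted above follows directly from the two IC50 values (21.79 / 2.195 ≈ 9.9). For readers who want to see how an IC50 of this kind is typically obtained from raw CCK-8 viability data, here is a hedged sketch using a generic four-parameter logistic fit; the dose-response points are invented and the fitting approach is an assumption on our part, not the paper's stated analysis software:

```python
# Hedged sketch: estimate an IC50 from CCK-8 viability data with a four-parameter
# logistic (Hill) curve. The dose/viability points below are hypothetical; the
# paper reports IC50 = 21.79 umol/L (A549/DDP) and 2.195 umol/L (A549) at 48 h,
# i.e. a resistance index of about 21.79 / 2.195, roughly 10-fold.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

dose      = np.array([0.5, 1, 2, 5, 10, 20, 50, 100])                    # umol/L DDP
viability = np.array([0.98, 0.95, 0.90, 0.78, 0.62, 0.48, 0.30, 0.18])   # fraction

params, _ = curve_fit(four_pl, dose, viability, p0=[1.0, 0.0, 20.0, 1.0], maxfev=10000)
top, bottom, ic50, hill = params
print(f"fitted IC50 ~ {ic50:.1f} umol/L (hypothetical data)")

resistance_index = 21.79 / 2.195   # IC50 values reported in the text
print(f"reported resistance index ~ {resistance_index:.1f}-fold")
```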
To verify whether the miR-301b-3p Inhibitor and A549apt act synergistically to repress A549/DDP tumour cells, A549/DDP cells were cocultured with the different materials. The effect of 3WJ-apt-miR NPs on LUAD cell proliferation was explored by colony formation assay (Figure 5d). When NPs were not introduced, the proliferation ability of the LUAD cells was intact, and the proliferation of cancer cells was remarkably inhibited as the concentration of NPs increased. It can also be seen from the data in the histogram that 3WJ-apt-miR NPs prominently repressed cell proliferation compared with 3WJ-miR NPs at a concentration of 50 nM. Similarly, the migration and invasion of A549/DDP cells were further suppressed with increasing concentrations of NPs (Figure 5e), and 3WJ-apt-miR NPs had a more profound inhibitory effect. As the results suggested, the NPs could repress the proliferation and invasion of A549/DDP cells, and the introduction of A549apt created a better effect. Besides, treatment with DDP resulted in DNA damage and facilitated apoptosis in tumour cells. γH2AX is the most sensitive marker for DNA damage, and Bax and Bcl-2 are associated with apoptosis. Detection of γH2AX, Bax, and Bcl-2 protein levels by Western blot assay is therefore applicable to the assessment of the therapeutic effect of DDP on tumour cells.

FIGURE 4 Cellular uptake and internalisation of 3WJ-apt-miR nanoparticles (NPs). (a) Confocal laser scanning microscope (CLSM) images of cellular uptake of fluorescein isothiocyanate (FITC) and Cyanine 5 (Cy5) co-labelled 3WJ-miR and 3WJ-apt-miR NPs after being cocultured with A549 cells for 4 h (DAPI staining of A549 cells, FITC localised miR-301b-3p, Cy5 localised the oligonucleotide strand); (b) FITC and Cy5 co-labelled 3WJ-miR and 3WJ-apt-miR NPs were incubated with A549 cells for different times, with flow cytometry being used to test the entry of NPs into A549 cells over time (*p < 0.05).

As manifested in Figure 5f, low concentrations of DDP did little DNA damage in the absence of NPs, even when the concentration was increased to 20 μM. For the 3WJ-miR and 3WJ-apt-miR NPs (50 nM) groups, the treatment with DDP remarkably increased DNA damage to cancer cells, indicating that miR-301b-3p Inhibitor could improve the DDP sensitivity of LUAD cells, with the NPs containing A549apts causing higher damage. With the increase in DDP concentration, the DNA damage to tumour cells also became more serious. We further examined the apoptosis of A549/DDP LUAD cells after treatment with the different NPs by flow cytometry (Figure 5g). In line with the Western blot results, increasing concentrations of DDP had little effect on apoptosis in the absence of NPs, and 3WJ-apt-miR NPs could increase the sensitivity of LUAD cells to DDP in comparison with 3WJ-miR NPs. Lung cancer, which features a high incidence, is prone to recurrence and metastasis in clinical treatment due to the lack of sensitivity and specificity of conventional therapies [30]. DDP is able to disrupt DNA structure and function and is the most used platinum drug for LC [31]. However, prolonged use of DDP predisposes to resistance and then induces relapse, invasion, and treatment failure [32]. Numerous studies have implied that DDP resistance leads to poor prognosis in LC patients [33,34] and is the main cause of failure in treating LUAD and of LUAD-related death. There is an urgent need for a novel therapeutic modality instead of conventional treatment.
MiRNAs are about 22 nucleotides and are non-coding RNAs participating in gene regulation [35]. Recent studies have pointed out that miRNAs can be a kind of new therapeutic and diagnostic tool with mighty high therapeutic value [36,37]. miR-301b-3p can be found in varying cancers and contributes to the malignant progression of gastric cancer [9], hepatocellular carcinoma [38], and ovarian plasmacytoma [39]. Based on previous work, we found upregulated miR-301b-3p in LUAD tissue and cell lines, and miR-301b-3p facilitated LUAD cell proliferation, migration, and invasion by regulating DLC1 [12]. We, therefore, intended to improve DDP resistance by repressing the expression of miR-301b-3p. miRNA therapeutic effects are promising, but the effective delivery of miRNA to the targeted tissue is the main challenge [37]. Hence, we constructed multifunctional 3WJ-apt-miR NPs by selfassembling in a bottom-up manner. We herein explored the treatment of LUAD by delivering miR-301b-3p Inhibitor via RNA NPs. Ultra-stable pRNA-3WJ motifs were used as core scaffolds and A549apts were introduced into branches of pRNA-3WJ for specific targeting in LUAD. Multifunctional RNA NPs were designed and assembled and then characterised by DLS, Native-PAGE and AFM. The results suggested efficient assembly and determined size and structure. First, to investigate the targeting ability and specificity conferred by the A549 nucleic acid aptamer in the material, the RNA NPs with or without A549apts were compared using CLSM and flow cytometry. As the results suggested, 3WJ-apt-miR effectively propelled the cellular uptake behaviour of NPs, which is the same as the uptake of PAMAM-CPT-AS1411 with the aptamer AS1411 prepared by Liu et al. [40], confirming the higher binding affinity of RNA NPs containing A549apts. The results above were in line with our expectations. The miR-301b-3p level in each group as tested by qRT-PCR showed a reduction by using 3WJ-apt-miR NPs, which also supported the view that down-regulation of miR-301b-3p represses LC as reported by Li et al. [10]. Then we proved that the NPs had good biocompatibility by cytotoxicity experiments. Subsequent experiments about miR-301b-3p Inhibitor on DDP sensitivity in LUAD cells demonstrated that miR-301b-3p Inhibitor in the NPs was able to reduce DDP resistance in LUAD. With the help of A549apts, miR-301b-3p Inhibitor could better target cancer cells, which provides evidence in support of the finding that downregulation of miR-301b-3p increases the sensitivity of cancer cells to cisplatin as proposed by Zhu et al. [41].Western blot assay implied that γH2AX and Bax protein levels were upregulated and Bcl-2 expression was downregulated in response to NPs and DDP treatment, indicating that miR-301b-3p Inhibitor in materials was able to enhance DDP sensitivity and induce cell apoptosis. This correlates with previous findings that miR-301b-3p is involved in apoptosis in cancers [42][43][44]. MiR-301b-3p Inhibitor and A549apt synergised more prominently than those NPs without A549apts. Therefore, the RNA NPs with A549apt were constructed in this study to specifically target cancer cells to deliver miR-301b-3p Inhibitor and regulate gene expression.3WJ-apt-miR NPs enabled efficient cellular internalisation in the A549 cell line and could reduce miR-301b-3p level, as well as the cloning and proliferation abilities of LUAD. 
The NPs also improved the DDP chemosensitivity of LUAD cells and could greatly facilitate the apoptosis of LUAD cells after treatment with DDP, and remarkably improved the therapeutic effect under A549apt. To conclude, our research results implied that the 3WJ-apt-miR NPs have potential value in clinical anti-cancer treatment.
Roles of Macrophages in the Development and Treatment of Gut Inflammation Macrophages, which are functional plasticity cells, have the ability to phagocytize and digest foreign substances and acquire pro-(M1-like) or anti-inflammatory (M2-like) phenotypes according to their microenvironment. The large number of macrophages in the intestinal tract, play a significant role in maintaining the homeostasis of microorganisms on the surface of the intestinal mucosa and in the continuous renewal of intestinal epithelial cells. They are not only responsible for innate immunity, but also participate in the development of intestinal inflammation. A clear understanding of the function of macrophages, as well as their role in pathogens and inflammatory response, will delineate the next steps in the treatment of intestinal inflammatory diseases. In this review, we discuss the origin and development of macrophages and their role in the intestinal inflammatory response or infection. In addition, the effects of macrophages in the occurrence and development of inflammatory bowel disease (IBD), and their role in inducing fibrosis, activating T cells, reducing colitis, and treating intestinal inflammation were also reviewed in this paper. INTRODUCTION The intestinal tract is the largest independent immune system in the body. It is continuously exposed to foreign antigens and the distinction between harmful and harmless antigens is necessary for the intestine to ensure an appropriate response to every antigen (Weiner, 2000;Ma et al., 2020a). The gut needs to produce a strong protective immune response to resist the invasion of pathogenic antigens, while similar reactions to harmless antigens such as dietary proteins or symbiotic microorganisms, may lead to chronic inflammatory diseases. Macrophages are phagocytes found in tissues and maintain tissue homeostasis, regulate inflammation, and play a significant role in host protection. There are many microorganisms colonized in the human intestine, and more than 1000 bacterial species in the intestinal ecosystem of a single individual. Among them, Actinobacteria, Bacteroidetes, Firmicutes, Proteobacteria, and Tenericutes are the predominant bacterial phyla, while the abundances of Fusobacteria, Saccharibacteria, Spirochaetes, Synergistetes and Verrucomicrobia are lower (Fassarella et al., 2020). The production of phagocytic cytotoxic substances by activated macrophages is a key process in the control of intracellular pathogens (Piacenza et al., 2019). The pattern recognition receptors on the surface of macrophages recognize and bind to the corresponding pathogen associated molecular pattern (PAMP)-a specific molecular structure shared by some pathogens-on pathogens, and nonspecifically phagocytize and remove pathogenic microorganisms (Ley et al., 2016). Different kinds of microorganisms express different PAMPs, including mainly lipopolysaccharides (LPSs), phosphoteichoic acid, peptidoglycan, and other structures that usually do not exist in the host. Then, the pathogens are phagocytized and digested by macrophages, and the lymphocytes or other immune cells are activated to kill these pathogens (Jain et al., 2019). On the other hand, phagosomes are formed when the pathogen is engulfed by macrophages and fuse with lysosomes to release enzymes and toxic substances, resulting in killing or having cytotoxic effects on bacteria and tumor cells. The intestinal mucosa is the first line of defense for organisms against intestinal pathogens. 
The lamina propria of the small intestine is the main site of the intestinal immune system, which contains a large number of macrophages, CD4 T cells, and dendritic cells. These cells play a key role in early resistance to intestinal pathogens. Macrophages play a significant role in many processes, such as the human immune function, parasite infection, and tissue remodeling by secreting cytokines and producing reactive oxygen and nitrogen intermediates. In a broad sense, intestinal macrophages are divided into two categories: resident and inflammatory (Mills et al., 2000). The former maintains intestinal health, while the latter plays an important role in the occurrence of inflammatory reactions. Multiple studies have shown that macrophages are associated with the development of intestinal inflammation and secrete a large number of cytokines and bioactive substances that participate in the inflammatory response (Cummings et al., 2016;Joeris et al., 2017). Herein, we review the origin and development of macrophages and their role in intestinal inflammation and treatment. INTESTINAL INFLAMMATION The healthy gut can control inflammation through its powerful mechanisms, but inflammatory bowel disease (IBD) can occur if the inflammation is not resolved (Murray and Smale, 2012). IBD, which includes Crohn's disease and ulcerative colitis, is a kind of chronic gastrointestinal inflammatory disease with unknown etiology and recurrent attacks (Torres et al., 2017;Ungaro et al., 2017). The pathogenesis of IBD is unknown, but it is believed that the uncontrolled immune response of genetically predisposed individuals to environmental factors and intestinal microorganisms is the cause (Khor et al., 2011;Kostic et al., 2014;de Souza and Fiocchi, 2016;Liu and Stappenbeck, 2016;Ni et al., 2017;Ananthakrishnan et al., 2018). In other words, the combined effects of genetic, microbial, immune, and environmental factors lead to an abnormal and excessive immune response of the commensal microbiota (Wallace et al., 2014). When the intestine is invaded by pathogens, which can cross the damaged intestinal epithelial cell barrier, the intrinsic defense cells in the epithelium, especially the macrophages, will produce pro-inflammatory cytokines after being stimulated, and then release interleukin-1 (IL-1), IL-6, IL-18, transforming growth factor-β (TGF-β), and tumor necrosis factor-α (TNF-α). These cytokines directly or indirectly affect the intestinal epithelial cells, leading to the injury or necrosis of these cells, which promotes the occurrence and development of IBD (Figure 1). An over-secretion of cytokines and chronic inflammation are the typical features of IBD, with clinical symptoms of diarrhea, abdominal pain, fever, intestinal obstruction, and disability symptoms of blood or mucus or both (Baumgart and Carding, 2007;Geremia et al., 2014;Geremia and Arancibia-Cárcamo, 2017;Hidalgo-Garcia et al., 2018;Ding et al., 2019b). IBD occurs exclusively in the colon in ulcerative colitis and almost anywhere along the gastrointestinal tract in chronic diarrhea (Jones et al., 2018). In addition, IBD also has the characteristics of intestinal microbiota dysbiosis. Compared with the gut of healthy people, the quantity and diversity of intestinal bacteria is lower in IBD patients (Frank et al., 2007;Lepage et al., 2011;Kostic et al., 2014). In some sufferers, the inflammation of the mucosa is associated with these changes and bacterial factors Franzosa et al., 2019;Lloyd-Price et al., 2019). 
Some scholars have analyzed the pro-inflammatory and anti-inflammatory pathways of IBD patients, and the results show that the imbalance of immune responses is caused by the change of balance among inflammatory, regulatory and anti-inflammatory cytokines (Bouma and Strober, 2003). When IBD occurs, monocyte infiltration will increase and produce many pro-inflammatory mediators, including TNFα, IL-1, IL-23, and nitric oxide (Ogino et al., 2013;Bain and Mowat, 2014;Sanders et al., 2014;Magnusson et al., 2016;Joeris et al., 2017). Many types of mucosal immune cells are related to the pathogenesis of IBD: intestinal epithelial cells, innate arm dendritic cells, innate lymphoid cells, neutrophils, macrophages, Foxp3 + regulatory T (Treg) cells of the adaptive arm, interferonγ-producing type 1 helper T cells (Th1), interferon-γ helper T cells (Th17), and secretory mediators of the adaptive arm of the mucosal immune system-cytokines, chemokines, eicosanoic acid, reactive oxygen species and nitrogen species (Xavier and Podolsky, 2007;Wu et al., 2015). A study has found that, compared with quiescent IBD or the healthy intestine, IBD in active humans was related to the increase of colonic mRNA expression of TNF, IL-1β and IL-6, and of the HLA-DR Int :HLA-DR Hi and CD14 Hi :CD14 Lo cell ratios (Jones et al., 2018). Molecular cues are also responsible for the contribution of intestinal macrophages in the development of IBD. Tolllike receptors (TLRs) play a key role in maintaining intestinal homeostasis. After recognizing PAMPs, TCLs are activated to regulate both innate and adaptive immunity. Innate immunity is regulated by mediating the phosphorylation of IκB, thereby activating NF-κB. Moreover, the proliferation and differentiation of Th1 and Th2 from T cells is regulated by TCLs to regulate adaptive immunity. When these regulations are disturbed, the expression of TLRs increases and the downstream signaling cascade is over activated, resulting in the over production of inflammatory cytokines and IBD (Lu et al., 2018). When IBD occurs, it is often accompanied by the death of intestinal epithelial cells (Abraham and Cho, 2009). Epithelial injury and inflammation in IBD patients are usually dependent on TNF (Zeissig et al., 2004). When the production of TNF FIGURE 1 | Macrophages promote the development of IBD. Pathogens cross the damaged intestinal epithelial cell barrier and stimulate macrophages to produce pro-inflammatory cytokines, such as interleukin-1 (IL-1), IL-6, IL-18, transforming growth factor-α (TGF-α) and tumor necrosis factor-β (TNF-β) are released. These act on intestinal epithelial cells directly or indirectly, leading to the injury or necrosis of these cells, thus promoting the occurrence and development of IBD. increased in IBD, the expression of the TNFAIP3 gene, which encodes A20, also increases. The A20 protein is the negative feedback regulator of NF-κB. In the intestinal epithelium of IBD patients, A20 is expressed by an intestinal epithelial cell specific promoter and is highly sensitive to intestinal epithelial cell death, intestinal injury, and shock induced by TNF (Garcia-Carbonell et al., 2018). Generally speaking, IBD often occurs in young individuals, and most patients with IBD are expected live to a normal life due to the progress of medical treatment. Despite the low mortality rate of IBD, the incidence rate is still a serious problem. 
Moreover, IBD is incurable and increases the risk of lymphoma, cholangiocarcinoma, and colorectal cancer (Samadder et al., 2019;Scharl et al., 2019). Many patients with IBD have to undergo surgery multiple times to relieve symptoms, which may lead to postoperative complications and infections, adversely affecting their quality of life (Torres et al., 2017;Ungaro et al., 2017;Liang et al., 2018). There have been some-although relative few-experiments using immunomodulators for IBD treatment, but the effect of the treatment declines with time (Torres et al., 2017;Ungaro et al., 2017;Friedrich et al., 2019). Tissue reparative programs may also contribute to restoring the barrier, but improper regulation may lead to fibrosis and intestinal structuring due to the dysregulation of intestinal, which is a possible complication of IBD (Rieder and Fiocchi, 2009;Rieder et al., 2017). Origin and Differentiation of Macrophages Macrophages are white blood cells located in tissues. In general, it is believed that macrophages are derived from monocytes, and monocytes are derived from precursor cells in bone marrow, which is also known as the granulocyte-macrophage colonyforming unit (GM-CFUc) (Figure 2; van Furth and Cohn, 1968;van Furth et al., 1972). However, whether monocytes differentiate into tissue-specific macrophages in the blood is still controversial. Some scholars believe that monocytes continue to develop and mature in the blood, where they can migrate to different tissues to form cell groups with different functions and structures. According to their function during the migration from blood to tissue, they can be divided into "inflammatory" and "resident" monocytes (Geissmann et al., 2003). The resident FIGURE 2 | Origin and differentiation of macrophages. GM-CFUc can be divided into resident and inflammatory monocytes by CX3CR1. Inflammatory monocytes may also be one of the sources of resident monocytes. Resident macrophages are usually produced by resident monocytes or, sometimes, by inflammatory monocytes. Resident macrophages and Foxp3 + T cells play a significant role in maintaining intestinal homeostasis through IL-10 and TGF-β dependent mechanisms. When there is inflammation in the intestine, inflammatory monocytes migrate to the intestine and differentiate into dendritic cells and inflammatory macrophages, which can produce a variety of cytokines involved in the inflammatory reaction. GM-CFUc: granulocyte macrophage colony forming unit; IL: interleukin; TNF-α: tumor necrosis factor-α; TGF-β: transforming growth factor-β. monocytes are defined as CCR2 − , CX3CR1 hi , and GR1 −, they exist in the non-inflammatory tissues and have a long half-life. The precursor inflammatory monocytes are CCR2 + , CX3CR1 low , and GR1 + are found in the inflammatory tissues, having a short survival time. The two monocyte populations can be distinguished by the expression of CX3CR1, a cell surface marker (Figure 2; Geissmann et al., 2003;Strauss-Ayali et al., 2007). One study has shown that when blood vessels are damaged and infected, the colonized monocytes rapidly invade the tissues, and then initiate the innate immune response and differentiate into macrophages (Figure 2; Geissmann et al., 2003). By contrast, inflammatory monocytes reach the site of inflammatory response and differentiate into inflammatory dendritic cells after infection. 
It has been demonstrated that inflammatory monocytes can differentiate into inflammatory macrophages, and the migration of resident monocytes may depend on chemical signals from damaged tissues or endothelial cells (Figure 2; Mowat and Bain, 2011). In the human bone marrow, monocytes can be divided into Ly6C lo monocytes and Ly6C hi monocytes through the expression of Ly6C/GR1, CCR2, and CX3CR1. Because Ly6C hi monocytes tend to perform functions that traditionally belong to monocytes, they are now called "classical" monocytes. On the other hand, some scholars have proposed that Ly6C lo monocytes are the precursors of tissue resident macrophages because they do not enter inflammatory tissue (Geissmann et al., 2003). In addition, a study has shown that the "non-classical" Ly6C lo monocytes are not used as circulating intermediates, but their main function is to patrol the vascular system and remove necrotic endothelial cells (Carlin et al., 2013). Therefore, Ly6C lo monocytes can be considered the macrophages of the circulatory system in some ways. In the normal colon, monocytes gradually differentiate into resident macrophages. When there is inflammation in the gut, resident macrophages still originate from monocytes in blood circulation but change from anti-inflammatory to inflammatory macrophages with high expression of TLRs. However, studies have shown that Ly6C hi monocytes can also be converted to Ly6C lo monocytes and returned to the bone marrow to replenish the resident macrophages (Figure 2; Gren and Grip, 2016). Some scholars proposed that resident macrophages also have the characteristics of self-renewal (Figure 2; Bain and Mowat, 2011). Resident macrophages produce anti-inflammatory cytokines, such as IL-10 and TGF-β. Studies have reported that IL-10 produced by macrophages has the effect of regulating the expression of Foxp3 + Tregs, and macrophages highly express the TGF-β receptor and participate in the signal transduction of activated TGF-β (Figure 2; Mowat and Bain, 2011;Weinhage et al., 2015). TGF-β can combine with the Foxp3 expressed by Tregs to form CD4 + Foxp3 + Tregs, which can reduce the activation of macrophages and translocation of NF-κB in the mucosa . Some studies have shown that many tissue macrophages do not originate from blood mononuclear cells but exist independently of conventional hematopoiesis and originate from embryonic precursors produced by the yolk sac or fetal liver (Schulz et al., 2012;Hashimoto et al., 2013;Yona et al., 2013;Epelman et al., 2014;Sheng et al., 2015). As we know, the differentiation continuum of monocyte to macrophage exists in intestinal lipoprotein, which has been called the monocyte "waterfall" (Tamoutounour et al., 2012;Bain et al., 2013). Ly6C hi CX3CR1 int MHCII − monocytes exist at one end of the "waterfall", and their phenotype and morphology are similar to those of their counterparts in the blood. In fact, the expression of molecules of monocytes in the mucosa, including CCR2, VLA-1, CD62L, Ly6C and LFA-1, is still preserved; these molecules are related to the chemotaxis and extravasation of circulation (Schridde et al., 2017). First, these monocytes show MHCII expression; Then, the molecules associated with extravasation, including LFA-1, CCR2, and CD62L, are downregulated; Finally, CX3CR1 is upregulated to obtain fully mature macrophages (Tamoutounour et al., 2012;Bain et al., 2013;Schridde et al., 2017). 
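The marker combinations above (Ly6C, MHCII, CX3CR1, together with CCR2 and GR1) define the stages of the monocyte "waterfall". Purely as an illustrative encoding of those qualitative definitions, and not as a validated gating strategy from any cited study, a small sketch:

```python
# Illustrative encoding of the marker definitions given above for the monocyte
# "waterfall" in the intestinal lamina propria. This is a toy rule set for clarity,
# not a gating strategy taken from the cited studies.

def waterfall_stage(ly6c, mhcii, cx3cr1):
    """Rough stage label from qualitative marker levels ('hi'/'int'/'lo'/'pos'/'neg')."""
    if ly6c == "hi" and mhcii == "neg" and cx3cr1 == "int":
        return "newly arrived Ly6Chi monocyte"
    if ly6c == "hi" and mhcii != "neg":
        return "differentiating intermediate (MHCII acquired)"
    if ly6c in ("lo", "neg") and cx3cr1 == "hi":
        return "mature resident macrophage"
    return "unclassified"

print(waterfall_stage("hi", "neg", "int"))   # newly arrived Ly6Chi monocyte
print(waterfall_stage("hi", "pos", "int"))   # differentiating intermediate
print(waterfall_stage("lo", "pos", "hi"))    # mature resident macrophage
```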
At the same time, it has been proved that the human intestinal mucosa presents a similar "waterfall", with the classic CD14 hi CCR2 + CD11C hi monocytes and mature CD14 lo CCR2 − CD11C lo macrophages at the two ends (Bain et al., 2013;Bernardo et al., 2018;Bujko et al., 2018). Inflammation includes the detection of tissue injury or infection, the subsequent inflammatory response and the final resolution. Monocytes are equipped with a large number of scavengers and pattern recognition receptors, which can react to local danger signals quickly. Their high plasticity enables them to adapt to molecular changes in response to the production of effector molecules that drive inflammation. Although we now have a deeper understanding of the function of Ly6C, more research is needed to explain the molecular mechanism of monocytes acting in a restorative rather than pathological manner. Distribution of Intestinal Macrophages Macrophages play a significant role in regulating intestinal peristalsis. They are distributed throughout the gastrointestinal mucosa, with a large proportion of them being located in the natural layer (LP) near the epithelium and a small part of them appear in the smooth muscle layer of the intestinal wall (Tajima et al., 2012;Gabanyi et al., 2016). In different parts of the gastrointestinal tract, the number of macrophages varies in the intestinal mucosa. Both in humans and rodents, the number of macrophages in the colon and lamina propria was found to be more than that in the small intestine (Nagashima et al., 1996;Denning et al., 2011). However, the number of macrophages follows a continuous gradient trend between the proximal and distal intestines of mice, while the number of macrophages in different parts of the colon was similar in mice and humans (Nagashima et al., 1996;Grainger et al., 2017). Functional Plasticity of Macrophages Generally speaking, macrophages are phagocytes in tissues and play an important role in homeostasis of adipose and tissue, regulation of inflammatory response and defense protection of host. Macrophages have the property of plasticity and can change their physiology, being able to produce different cell populations with different functions, according to environmental cues (Mosser and Edwards, 2008). The activation state of macrophages was initially divided into classically activated M1 macrophages and alternatively activated M2 macrophages. Inflammatory macrophages are usually activated as the M1 phenotype, while resident macrophages usually belong to the activated M2 phenotype. M1 and M2 macrophages are induced by interferon-γ (IFN-γ) and IL-4, respectively, and participate in the anti-microbial response and the reaction of wound healing and tissue remodeling, respectively (Stein et al., 1992;Mills et al., 2000). It is difficult to distinguish M1 and M2 in vivo due to the mixing of activated M1/M2 macrophages caused by the multitude of stimulations, although the polarization state of prototypes M1/M2 has been established in vitro (Martinez and Gordon, 2014). Some studies have shown that macrophages become a continuum of activation states when they are stimulated by certain cytokines or complexes, such as TNF-α, LPS, TGFβ, IL-10, IL-13, Glucocorticoid or the immune complex, and macrophage activation with similar but different transcriptional and functional is subsequently produced along the M1/M2 axis (Martinez and Gordon, 2014;Murray et al., 2014;Xue et al., 2014;Murray, 2017). 
Moreover, some studies have found that macrophages are activated outside the M1-M2 continuum when they are stimulated by high-density lipoproteins, free fatty acids, or chronic-inflammation-related stimulants (Popov et al., 2008;Xue et al., 2014). The activation and function of macrophages are complex, but the activated states can be identified and distinguished by the abundance of transcription factors, cytokines, and surface molecules ( Table 1). For example, M1 macrophages usually produce high levels of pro-inflammatory cytokines, such as TNF-α, IL-6 and IL-12, and promote the induction of nitric oxide synthase (iNOS) and the expression of indoleamine 2,3-dioxygenase in mice and humans, respectively, while M2 macrophages are generally distinguished by stimuli-specific molecules and more general M2 markers (Murray et al., 2014;Xue et al., 2014). CD206 is one such surface marker induced by IL-4/IL-13 and IL-10 in mice and humans, respectively (Stein et al., 1992;Mantovani et al., 2004;Murray et al., 2014). The expression and activity of arginase I also constitute a marker of M2-polarized macrophages in mice, but not in human (Thomas and Mattila, 2014). The expression of IL-10 in several polarization states of M2 macrophages (except for those induced by IL-4/IL-13) is higher than for M1 macrophages, making it a frequently used marker of M2 macrophages. In addition, macrophages can also differentiate into Mregs and TAM, which have different stimulating factors, surface markers, cytokines and functions ( Table 1) (Mosser and Edwards, 2008). However, it is not clear what causes the change of activation status of macrophages, the reasons may be the recruitment of monocytes and their response to local changes, the repolarization between M1 and M2 macrophages, or a combination of the two (Italiani and Boraschi, 2014). The traditional macrophage polarization model is not sufficient to describe the full range of macrophage activity. Due to the increased heterogeneity of macrophages in the gut, further work is needed to analyze the role of macrophage subsets in health and disease. Many technologies have been used to study the heterogeneity of macrophages. For example, single-cell RNA sequencing has been used for the transcriptomic profiling of haematopoietic cells in humans, and macrophage heterogeneity across multiple anatomical sites was mapped, with diverse subsets being identified (Bian et al., 2020). A rapid three-dimensional (3D) printing method was also used in the research of cell heterogeneity. Tang et al. reported a controllable, repeatable, and quantifiable 3D bioprinting model of the glioblastoma microenvironment, simulating the high cell heterogeneity and cell interaction in the tumor microenvironment (Tang et al., 2020). In addition, macrophages have highly specialized functions in different tissues, and their receptors are also different. They may cooperate or compete for ligand recognition, which will affect their function. Intestinal Homeostasis and Its Disruption During Inflammation The gut, which is exposed to pathogens, commensal microbiota, and food antigens, is one of the main interfaces for contact with the outside ambient. The balance between immune responses to pathogens and tolerance is necessary for this bodily niche in order to maintain intestinal homeostasis and body health (Hill and Artis, 2010;Pabst and Mowat, 2012;Belkaid and Hand, 2014). 
The intestinal epithelium, which is mainly made up of a single layer of intestinal cells, is tightly connected with adjacent cells to form a critical continuous physical barrier, which regulates the selective permeability of luminal content (Odenwald and Turner, 2017;Chelakkot et al., 2018). Except for those in physical barriers, several other types of epithelial cells produced by stem cells, which are located at the base of the intestinal crypt, also play a role in the homeostasis of the body (Clevers and Bevins, 2013;Peterson and Artis, 2014;Johansson and Hansson, 2016;Martens et al., 2018). There is only one mucus layer in the small intestine, while both an internal and outer layer can be found in the colon, making it a habitat for many microbes (Johansson and Hansson, 2016). After passing through the epithelial barrier, the luminal antigens come in contact with the immune cells in the second and third lymphoid organs in the lamina propria (Buettner and Lochner, 2016;Ahluwalia et al., 2017;Da Silva et al., 2017;Mowat, 2018;Tordesillas and Berin, 2018). After the internalization of mononuclear phagocytes, the treated antigens are presented to lymphocytes to induce oral tolerance and interact with the intestinal flora and dietary factors (Hadis et al., 2011;Pabst and Mowat, 2012;Muller et al., 2014;Chinthrajah et al., 2016;Esterházy et al., 2016;Loschko et al., 2016;Nutsch et al., 2016;Belkaid and Harrison, 2017;Kim et al., 2018;Mowat, 2018). Moreover, conventional dendritic cells can polarize naïve T cells by migration, while macrophages lack the characteristics of active migration, but help to amplify the T cell response of lymphocytes (Gaudino and Kumar, 2019). In addition, intestinal macrophages maintain T cell function by scavenging apoptotic/dead cells, secreting cytokines, and remodeling epithelial cells, thus maintaining tissue homeostasis (Zigmond et al., 2012;Ortega-Gómez et al., 2013;Cerovic et al., 2014;Zigmond et al., 2014;Schett and Neurath, 2018;Sugimoto et al., 2019). These processes of active regulation, and T cell deletion and anergy are associated with maintaining oral tolerance (Sun et al., 2015;Luu et al., 2017;Wawrzyniak et al., 2017;Mowat, 2018). In addition, as a response to microbial induction, conventional dendritic cells also support the conversion of immunoglobulin M and immunoglobulin G to immunoglobulin A on B cells, which is essential for the homeostasis of the intestinal environment because immunoglobulin A inhibits the interaction between microorganisms and epithelial cells by transporting across the epithelial cell layer (Litinskiy et al., 2002;Macpherson et al., 2018;Castro-Dopico and Clatworthy, 2019). In general, mononuclear phagocytes control the stability of the intestinal environment and the ability to trigger the immune response to pathogens by maintaining immune tolerance to commensal animals and diet (Hadis et al., 2011;Bain and Mowat, 2014;Cerovic et al., 2014;Kim et al., 2018;Leonardi et al., 2018). Ideally, these immune responses can promote inflammation remission and rapid homeostasis recovery in tissues. However, due to the repeated and abnormal activation of the immune system, the chronic inflammatory microenvironment of IBD will be produced in the body (Caër and Wick, 2020). 
Destruction of intestinal homeostasis, including an immune response to commensal bacteria, dysfunction of the epithelial barrier function, the reduction of nutrient absorption, and changes in tissue autophagy and oxygenation, can induce the recruitment of immune cells (Maloy and Powrie, 2011;Johansson et al., 2013;Peterson and Artis, 2014;Colgan et al., 2016;Ramakrishnan and Shah, 2016;Odenwald and Turner, 2017;Okumura and Takeda, 2017;Ahluwalia et al., 2018;Mowat, 2018;VanDussen et al., 2018). These intestinal defects are associated with IBD, and the gene expression related to the prognosis variation of Crohn's disease can be detected in mononuclear phagocytes. Thus, we can speculate that mononuclear phagocytes play a significant role in the cellular signaling pathway that regulates tolerance and chronic inflammation in the intestine (Lee et al., 2017). The Role of Macrophages in Intestinal Inflammation The largest macrophage population in the body exists in the gastrointestinal mucosa, which plays a key role in maintaining epithelial and immune homeostasis (Lee et al., 1985;Pull et al., 2005;Isidro and Appleyard, 2016;Guan et al., 2019). When intestinal homeostasis is disturbed, the composition of the intestinal macrophage pool will change greatly. The inflammatory macrophages will accumulate in the intestinal mucosa of patients with Crohn's disease and ulcerative colitis, for example. Compared with CD14 low , these inflammatory macrophages can be identified by the expression of CD14 hi , which produces multiple inflammatory mediators, such as TNF-α, IL-1, IL-6, ROS mediators, and nitric oxide, which makes them different from macrophages in healthy intestines (Thiesen et al., 2014). Ly6C hi monocytes and their derivatives play a significant role in intestinal pathology (Bain and Schridde, 2018). When inflammation occurs in the gut, classical monocytes (Ly6C hi ) respond to the stimulation of Toll-like receptors in a highly proinflammatory manner, expressing reactive oxygen intermediates (Figure 3; Varol et al., 2009;Weber et al., 2011;Rivollier et al., 2012;Tamoutounour et al., 2012;Zigmond et al., 2012;Bain et al., 2013). CD11c high CCR2 + CX3CR1 + monocytes infiltrate in the colonic mucosa of IBD patients in a CCR2-dependent manner and cannot completely differentiate into macrophages and produce pro-inflammatory cytokines . Intestinal macrophages in IBD patients produce more pro-inflammatory cytokines, which promote or perpetuate the pathological environment (Kamada et al., 2008;Kamada et al., 2009;Lissner et al., 2015;Barman et al., 2016;Friedrich et al., 2019). In patients with Crohn's diseases, some factors, such as IFN-γ, induce the differentiation of inflammatory monocytes and the secretion of IL-23, thus creating a vicious circle of inflammation (Kamada et al., 2008). In addition, other mechanisms and disease-related changes in the function of macrophages may also promote the occurrence and development of IBD. For instance, TREM-1 + macrophages, which are mainly immature macrophages, increase in frequency and number in patients with IBD, especially in the active lesion area (Schenk et al., 2007;Brynjolfsson et al., 2016). It has also been suggested that bacterial clearance of intestinal macrophages in patients with IBD is impaired, and patients with Crohn's disease phenotype mainly through dysfunctional autophagy (Smith et al., 2009;Schwerd et al., 2017). 
After the removal of infectious or inflammatory factors, the intestinal tract must be restored to balance so that a chronic inflammatory reaction will not follow. At the same time, the macrophage pool changes significantly. During colitis in mice, the expansion rate of CX3CR1 int macrophages returned to normal (Zigmond et al., 2012). On the other hand, Ly6C hi monocytes supplement CX3CR1 hi macrophages in intestinal homeostasis. Once the inflammatory response begins to subside, some of the induced Ly6C hi cells may be transformed into resident macrophages with anti-inflammatory effects and may play an active role in tissue injury. IL-1β is believed to be induced mainly by monocytes, and its susceptibility to chemically induced colitis is reduced due to its neutralization (Seo et al., 2015). Meanwhile, the selective ablation of Tnfa in Ly6C hi monocytes also reduces the development of colitis (Varol et al., 2009). Mice with defective recruitment of inflammatory mucosal monocytes, which is due to the neutralization or deletion of CCL2, CCR2 or β 7 integrins, are protected from colitis induced by DSS (Platt et al., 2010;Takada et al., 2010;Zigmond et al., 2012;Bain et al., 2013;Becker et al., 2016;Schippers et al., 2016). There are other additional functions of Ly6C monocytes (Figure 3). Studies have shown that CCL2, CCL3 and CCL11 may come from Ly6C hi and play a role in recruiting innate immune effector cells in the gut (Waddell et al., 2011;Schulthess et al., 2012;Bain and Mowat, 2014). Ly6C hi monocytes can also prevent immunopathology by inhibiting the production of TNF-α and ROI by local neutrophils (Grainger et al., 2013). Furthermore, intestinal macrophages participate in tissue repair and fibrosis (Figure 3; Karin and Clevers, 2016;Vannella and Wynn, 2017). Some with Crohn's disease have symptoms of intestinal fibrostenosis, while others develop fibrosis complications several years later . Another study has shown that in the colon of patients with Crohn's disease with stenosis, the number of IL-36α + macrophages in the intestine is increased (Scheibe et al., 2019). The direct effect of IL-36 on human mesenchymal cells leads to profibrosis transcription, which indicates that intestinal fibrosis in patients with IBD can be induced by the increase of IL 36α + macrophages FIGURE 3 | The role of Ly6C monocytes in intestinal inflammation. Ly6C plays an important role in promoting intestinal inflammation, reducing colitis, activating T cells, promoting tissue fibrosis, regulating neutrophils and recruiting innate immune effector cells. IL: interleukin; TNF-α: tumor necrosis factor-α; IFN-γ: interferon-γ; Th1: interferon-γ-producing type 1 helper T cells; Th17: interferon-γ-producing helper T cells. (Bettenworth and Rieder, 2017;Salvador et al., 2018;Scheibe et al., 2019). Other studies have reported that immature macrophages are always close to activated fibroblasts in the intestinal mucosa of patients with Crohn's disease, and immature macrophages, as well as conventional dendritic cells 2, activate fibroblasts to induce intestinal inflammation by oncostatin M/OSMR signaling (West et al., 2017;Martin et al., 2019;Smillie et al., 2019). Some genes expressed by intestinal macrophages also affect the development of intestinal inflammation. 
For example, the gene ablation of GPBAR1, a G protein-coupled receptor that is highly expressed in macrophages, enhances the recruitment of classically activated macrophages in the colonic lamina propria and aggravates the severity of inflammation (Biagioli et al., 2017). Macrophages also play a role in the activation of T cells (Figure 3). Some studies have shown that, in patients with Crohn's disease, intestinal macrophages can induce the proliferation of naïve CD4 + T cells and the expression of integrin β-7 and CCR9, while other works deem mature macrophages from patients with ulcerative colitis unable to inhibit the proliferation of T cells (Barman et al., 2016). In addition, CD14 hi monocytes/macrophages in the IBD mucosa can produce IL-23 and express CD40 and CD80 to support the function of pathogenic T cells (Rugtveit et al., 1997;Carlsen et al., 2006;Kamada et al., 2008). Intestinal macrophages in patients with Crohn's disease induce the Th1 and Th17 polarization of naïve CD4 + T cells, which seems to be caused by the accumulation of immature macrophages in the total macrophage population of patients with Crohn's disease (Kamada et al., 2009;Ogino et al., 2013). In fact, it has been proved by previous studies that immature macrophages from patients with IBD mainly produce IL-1β to induce Th17 cells and pathological IFN-γ + IL-17 + T cells, which come from autologous colon CD4 + T cells (Ramesh et al., 2014;Chapuy et al., 2019;Chapuy et al., 2020). The Role of Intestinal Macrophages in the Treatment of Inflammation and Diseases There are many methods to treat IBD, and regulating macrophage activation is one of them. In fact, it has been considered an attractive treatment for IBD to increase the phenotype of antiinflammatory M2 (Hidalgo-Garcia et al., 2018). Endoplasmic reticulum stress, which is involved in the regulation of IEC inflammatory injury, is common in IBD patients (Woehlbier and Hetz, 2011;Hosomi et al., 2015). Grp78 is a marker of endoplasmic reticulum stress, and its expression is increased in inflammatory IEC. However, after increasing the expression of IL-10, the expression of Grp78 decreases, and endoplasmic reticulum stress is inhibited (Shkoda et al., 2007). IL-10 inhibits the NF-κB RelA phosphorylation induced by TNF by regulating Grp78, the expression of pro-inflammatory cytokines is subsequently down-regulated, and the IEC barrier function is maintained. According to a previous study, the neutralization of IL-10/TGF-β or alternatively activated macrophages did not show resistance to colitis induced by DSS in mice infected with schistosome (Smith et al., 2007). Parasites inhibited colitis induced by DSS through a new colonic infiltrating macrophage population-i.e., the schistosome infection stimulates a new macrophage population that preferentially migrates to the colonic LP, where it can inhibit colitis (Smith et al., 2007). This finding highlights a variety of immunomodulatory macrophage activation states. It is worth noting that infliximab, a monoclonal antibody of anti-TNF-α, has been successfully used in the treatment of human IBD, and the regulatory macrophages CD68 + CD206 + were induced in patients with IBD responsive to treatment (Vos et al., 2012;Danese et al., 2015). Some studies have proved that macrophages are significant for the treatment of IBD. 
For example, alternative activated macrophages can activate the Wnt signaling pathway, which is related to ulcerative colitis, and promote mucosal repair in IBD, while Yes-associated protein (YAP), a Hippo pathway molecule, can aggravate the occurrence of IBD by regulating macrophage polarization and the imbalance of intestinal flora homeostasis (Cosín-Roger et al., 2016;Zhou et al., 2019). Macrophages play an important role in in the treatment of colitis. For example, it has been found that intracolonic administration of chromofungin can induce macrophages to enter alternatively activated macrophages (AAM), which reduce the deposition of colonic collagen and maintain the homeostasis of intestinal epithelial cells, thus protecting colitis induced by DSS Ding et al., 2019a). MicroRNAs (miRNAs), which are noncoding RNAs, are essential for many biological processes in fine tuning. In macrophages, miR-155 acts as a pro-inflammatory regulator by promoting M2 polarization or affecting NF-κB signal transduction (Vigorito et al., 2013;Zhang et al., 2016). Li et al. found the central role of alternative M2 skewing of miR-155 in colitis and suggested that macrophages might be the main target of treatment . The Grb2-associated binding protein 2 (Gab2), which plays a role in regulating the activation of macrophages and T cells, and Grb2associated binding protein 3 (Gab3), which is highly expressed in some immune cell types, redundantly regulate the activation of macrophages and CD8 + T cells to inhibit colitis (Uno et al., 2010;Bezman et al., 2012;Best et al., 2013;Festuccia et al., 2014;Kaneda et al., 2016;Wang et al., 2019;Ma et al., 2020b). Human catestatin (hCT), which has immunomodulatory properties, can reduce the severity of inflammatory recurrence by regulating M1 macrophages and releasing pro-inflammatory cytokines (Zhang et al., 2009;Rabbi et al., 2017). Triggering receptor expressed on myeloid cells-1 (TREM-1) is a pattern recognition receptor (PRR) of the surface immunoglobulin receptor superfamily and is expressed by activated macrophages. A study has found that when TREM-1 is deficient, the number of M1 macrophages, which produce IL-1β, in DSS-treated colons decreases, and the damage mediated by DSS can be alleviated by providing TREM-1 expressing macrophages to TREM-1 deficient mice (Yang et al., 2019). Other studies have found that vitamin D supplementation can also reduce the severity of Crohn's disease, and its active form, 1,25-dihydroxyvitamin D (1,25D), can inhibit the secretion of pro-inflammatory cytokines by macrophages (Dionne et al., 2017). Moreover, 1,25 D is also very important for the regulation of bone homeostasis and various immune responses (Hewison, 2012). In addition to inhibiting intestinal inflammation, macrophages also play a significant role in other diseases. For example, REG3γ is a secretory antimicrobial lectin and REG3γ-associated Lactobacillus can enlarge the macrophage pools in the intestinal lamina propria, spleen and adipose tissue. The anti-inflammatory macrophages induced by REG3γassociated Lactobacillus in the lamina propria may migrate to the adipose tissue and participate in the resistance to high-fat-dietmediated obesity, and adipose tissue homeostasis (Huang et al., 2017). Since the gastrointestinal tract contains many HIV target cells, it has become the main site of HIV infection. 
Some studies have shown that Toll-like receptor 3 activation of macrophages can produce a variety of intracellular HIV limiting factors and effectively inhibit HIV infection Zhou et al., 2013). The supernatant of activated intestinal epithelial cells can induce macrophages to express several key HIV limiting factors, thus inhibiting the replication of HIV . Whether in mice or human, the cross-talk between liver and intestine is vital in the development of metabolic diseases (Zhang et al., 2010;Qin et al., 2014). For example, non-alcoholic fatty liver disease is usually accompanied by changes in the intestinal microflora and bacterial overgrowth these are related to increased intestinal permeability and pathological bacterial translocation, in which macrophages may also be involved Hundertmark et al., 2018). Macrophage inducible C-type lectin expressed on macrophages may contribute to the integrity of the intestinal barrier, but in the advanced stages of chronic liver disease, once the intestinal barrier leaks, it seems to cause inflammation and fibrosis (Schierwagen et al., 2020). Receptorinteracting protein (RIP)-3, a member of the serine threonine kinase family, is the central mediator of necrosis and is associated with many human diseases (Ramachandran et al., 2013;Roychowdhury et al., 2013;Linkermann and Green, 2014). It has been shown that the deficiency of RIP3 can inhibit macrophage accumulation and reduce inflammation in mice by inhibiting the TLR4-NF-kB pathway, and thus may be a potential therapeutic target for immune-mediated liver fibrosis (Wei et al., 2019). Cytokine blockade has been used to suppress intestinal inflammation, but there are still some problems that should be considered, such as the prediction of the therapeutic effect and its prospect. Treating IBD by treating anti-tumor factors is an important breakthrough. However, many treatments have not achieved satisfactory results, and although some treatments are promising in animal models, they have not yet undergone rigorous clinical trials. Moreover, the deficiency of intestinal macrophages may increase the susceptibility to infection and inhibit the activity of tissue repair. Therefore, the potential risks associated with this immunotherapy require careful monitoring procedures. Other ways to improve intestinal homeostasis may consist of promoting the anti-inflammatory effects of macrophages. It is worth noting that, due to their high phagocytic capacity, intestinal macrophages can be promoted through "delivery systems" such as nanomaterials and biomaterials. Finally, the reprogramming of macrophages with metabolites may be a promising method to inhibit intestinal inflammation. SUMMARY AND PROSPECT This paper reviews the origin, development, and function of macrophages and their role in intestinal inflammation and treatment. In the past few years, we have made significant progress in understanding the ontogeny and differentiation of intestinal macrophages. Advancements have been made in the recognition and regulation of tissue-specific phenotypes and functional environmental signals as well. Macrophages not only have the function of phagocytizing pathogens, but can also secrete a variety of cytokines under certain conditions and combine with different immune cells to participate in the occurrence, development, and persistence of IBD in different ways. At the same time, macrophages play a role in treating IBD, inhibiting colitis, maintaining adipose tissue homeostasis, and inhibiting HIV infection. 
In conclusion, macrophages are vital in gut homeostasis and immune defense. However, many aspects of intestinal macrophages still need to be explored. For example, the heterogeneity of intestinal macrophage populations is still incompletely understood. An important feature of IBD is pro-inflammatory monocyte/macrophage accumulation. Therefore, it is very important to elucidate the molecular factors that control monocyte/macrophage differentiation, how these factors change over the course of disease, their local regulation, and their long-term effects. In addition, the study of the interactions between macrophages and other cells, intestinal microorganisms and metabolites will also contribute to the treatment of intestinal inflammation. AUTHOR CONTRIBUTIONS XH wrote the manuscript. SD reviewed and edited the manuscript. HJ supervised the work. GL acquired the funding. All authors contributed to the article and approved the submitted version.
Tracy-Widom distribution for heterogeneous Gram matrices with applications in signal detection Detection of the number of signals corrupted by high-dimensional noise is a fundamental problem in signal processing and statistics. This paper focuses on a general setting where the high-dimensional noise has an unknown complicated heterogeneous variance structure. We propose a sequential test which utilizes the edge singular values (i.e., the largest few singular values) of the data matrix. It also naturally leads to a consistent sequential testing estimate of the number of signals. We describe the asymptotic distribution of the test statistic in terms of the Tracy-Widom distribution. The test is shown to be accurate and have full power against the alternative, both theoretically and numerically. The theoretical analysis relies on establishing the Tracy-Widom law for a large class of Gram type random matrices with non-zero means and completely arbitrary variance profiles, which can be of independent interest. I. INTRODUCTION Detection of unknown noisy signals is a fundamental task in many signal processing and wireless communication applications [4], [47], [61], [65]. Consider the following generic signal-plus-noise model y = s + z, (I. 1) where s and z are independent p-dimensional centered signal and noise vectors, respectively. In many applications, s is usually generated from a low-dimensional MIMO filter such that s = Γν [47], where Γ is a p × r deterministic matrix, ν is an r-dimensional centered random vector and r is some unknown fixed integer that does not depend on p. The value of r is one of the most important inputs for many computationally demanding parametric procedures such as direction of arrival estimation, blind source deconvolution, and so on. In the literature of statistical signal processing, the most common approaches to determine the value of r are perhaps the information theoretic criteria, including the minimum description length (MDL), Bayesian information criterion (BIC) and Akaike information criterion (AIC) and their variants. For a detailed review of this aspect, we refer the reader to [67]. All these methods assume that the dimension p is fixed and the sample size n, i.e., the number of observations, goes to infinity. Consequently, none of these estimators is applicable to large arrays where the number of sensors is comparable to or even larger than the sample size [49]. To address the issue of high dimensionality, many methods and statistics have been proposed to infer the value of r under various settings. Many methods have been proposed to test H 0 : r = 0 against H a : r 1, which is equivalent to testing the existence of the signals. When z is a white noise, a non-parametric method was proposed in [49], the generalized maximum likelihood test was studied [9] and a sample eigenvalue based method was proposed in [61]. When z is a colored noise, i.e., z = Σ 1/2 x for a positive definite covariance matrix Σ and a white noise x, the same testing problem has been considered in [7], [15], [62], [72] under different moment assumptions on the entries of x. However, all the aforementioned methods assume explicitly that the noise vectors z 1 , · · · , z n are generated independently from the same distribution. If the noise vectors are correlated or generated from possibly different distributions, none of these methods works or has been justified rigorously. 
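To make the generic model above concrete, the following is a minimal numpy sketch of y = s + z with a low-dimensional source model s = Γν and colored noise z = Σ^{1/2}x. The dimensions, the AR(1)-type choice of Σ, the Gaussian sources and the random mixing matrix Γ are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, r = 200, 400, 3                      # sensors, snapshots, sources (hypothetical)

# Low-dimensional MIMO filter: each column of S_sig is s = Gamma @ nu.
Gamma = rng.standard_normal((p, r))        # hypothetical p x r mixing matrix
nu = rng.standard_normal((r, n))           # centered r-dimensional sources
S_sig = Gamma @ nu                         # rank-r signal matrix

# Colored noise z = Sigma^{1/2} x; a Cholesky factor L with L @ L.T = Sigma
# induces the same covariance.  Sigma here is an AR(1)-type Toeplitz matrix.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Sigma)
Z = L @ rng.standard_normal((p, n))

Y = S_sig + Z                              # stacked signal-plus-noise data matrix

# For strong enough signals, the top r singular values separate from the noise bulk.
print(np.linalg.svd(Y, compute_uv=False)[: r + 3])
```

Whether the leading singular values actually detach from the noise bulk depends on the signal strengths relative to the noise level; correlated or non-identically distributed noise, as in this sketch, is exactly the situation left open by the methods cited above.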
One such example is the doubly heteroscedastic noise, whose matrix of noise vectors (z 1 , · · · , z n ) take the form A 1/2 N B 1/2 [55], where N is a p × n white noise matrix, and A and B are two positive definite symmetric matrices representing the spatial and temporal covariances, respectively. Many previous works also depend crucially on the null hypothesis r = 0, and cannot be applied to the more general setting with null hypothesis r = r 0 for a fixed r 0 0. A. Problem setup and test statistics In this paper, we present a more general setting for the statistical analysis of the detection of the number of signals. On the one hand, we propose some statistics to study the following hypothesis testing problem H 0 : r = r 0 vs H a : r > r 0 , where r 0 is some pre-given integer representing our belief of the true value of r. (I.2) generalizes the previous works, which mainly focus on the r 0 = 0 case, i.e., the testing of the existence of signals. On the other hand, we consider more general covariance structures of the noise, which include the doubly-heteroscedastic noise, sparse noise and noise with banded structures X. Ding is with the Department of Statistics, University of California, Davis (e-mail: xcading@ucdavis.edu). F. Yang is with the Department of Statistics and Data Science, University of Pennsylvania (e-mail: fyang75@wharton.upenn.edu). as special cases. We refer the readers to Examples II.6 and II.7 and the simulation settings in Section IV for more details. We emphasize that through (I.2), a natural consistent sequential testing estimate of r can be generated, that is, r := inf{r 0 0 : H 0 is accepted}. (I. 3) We refer the readers to (III.10) and Corollary III.5 for more rigorous arguments on this aspect. In order to test (I.2), we propose some data-adaptive statistics utilizing the edge eigenvalues of the data matrix. Suppose we observe n data samples and stack them into the data matrix where Y = (y 1 , · · · , y n ) ∈ R p×n collects the noisy observations, R = (s 1 , · · · , s n ) is the signal matrix of rank r, and Z = (z 1 , · · · , z n ) is the noise matrix. The matrix (I. 4) is commonly referred to as the signal-plus-noise matrix in the literature, which is also closely related to the problem of low-rank matrix denoising [6], [17], [21], [60], [70], [73]. In the current paper, we consider the high-dimensional regime where p and n are comparably large so that τ p/n τ −1 , for a small constant 0 < τ < 1. We assume that the entries of Y are independent random variables satisfying that Ey ij = r ij , Var(y ij ) = s ij . (I.5) Correspondingly, we will also call R = (r ij ) the mean matrix, while the variance matrix S = (s ij ) describes a heterogeneous variance profile for the noise. In this paper, we refer to Y Y as a random Gram matrix. We mention that the detection of the number of signals has been studied rigorously in the literature only when S is of sample covariance type, that is, s ij = a i for some a i > 0. Even for the doubly-heteroscedastic noise with s ij = a i b j for some a i , b j > 0, the aforementioned testing methods in the literature will lose their validity. There exists a vast literature on conducting high-dimensional statistical inference using the largest eigenvalues of Y Y when S is of sample covariance type. 
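As a concrete instance of the data matrix in (I.4) with the entrywise moments (I.5), the sketch below generates Y with a rank-r mean matrix R and a doubly heteroscedastic variance profile s_ij = a_i b_j / n, and then forms the random Gram matrix Y Y. The dimensions, the uniform draws for a_i and b_j, and the chosen signal strengths are hypothetical illustration choices only.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, r = 150, 300, 2                      # hypothetical dimensions and rank

# Doubly heteroscedastic variance profile s_ij = a_i * b_j / n (one admissible
# choice of S; sparse or banded profiles can be substituted here instead).
a = rng.uniform(1.0, 2.0, size=p)
b = rng.uniform(1.0, 2.0, size=n)
S = np.outer(a, b) / n

# Rank-r mean (signal) matrix R with orthonormal factors and chosen strengths.
U = np.linalg.qr(rng.standard_normal((p, r)))[0]
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
R = U @ np.diag([8.0, 6.0]) @ V.T          # hypothetical signal strengths (r = 2)

# Independent entries with E y_ij = r_ij and Var(y_ij) = s_ij, as in (I.5).
Y = R + np.sqrt(S) * rng.standard_normal((p, n))

evals = np.linalg.eigvalsh(Y @ Y.T)[::-1]  # Gram-matrix eigenvalues, descending
print(evals[:5])                           # r outliers, then the edge of the noise bulk
```

For the (hypothetical) strengths above, the first r eigenvalues appear as outliers detached from the bulk, in line with the supercritical regime discussed later, while the remaining eigenvalues form the bulk whose right edge is the object of interest below.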
For instance, these largest eigenvalues have been employed to test the existence and number of spikes for the spiked covariance matrix model [46], [65], test the number of factors in factor models [64], detect the signals in a signal-plus-noise model [4], [7], [9], [72], test the structure of covariance matrices [24], [40], and perform the multivariate analysis of variance (MANOVA) [37], [40]. In most of these applications, on the one hand, researchers aim to test between the null hypothesis of a non-spiked sample covariance matrix and the alternative of a spiked sample covariance matrix. Under the null hypothesis, the largest few eigenvalues have been proved to satisfy the Tracy-Widom law asymptotically under a proper scaling [7], [18], [24], [45], [48], [53], [63], [66]. More precisely, there exist a spectral edge λ + and a scaling parameter such that the properly rescaled quantity p 2/3 (λ 1 − λ + ) converges in law to the type-1 Tracy-Widom distribution [68], [69], where λ 1 is the largest eigenvalue of Y Y . Then it is natural to choose the rescaled p 2/3 (λ 1 − λ + ) as the test statistic. On the other hand, especially in the setting of factor models in economics, researchers are interested in inferring the number of factors. Under the null hypothesis that there are r large factors, the (r + 1)-th eigenvalue λ r+1 obeys the Tracy-Widom distribution asymptotically [64]. Based on the above observations, if we can show that λ r+1 obeys the Tracy-Widom law in our setting (I.5), we can naturally choose the rescaled p 2/3 (λ r+1 − λ + ) as the test statistic for the testing problem (I.2). However, in practice, both the scaling parameter and λ + depend on the usually unknown variance matrix S. To resolve this issue, we can follow [64] to use the statistic T defined in (I.6), where λ 1 ≥ λ 2 ≥ · · · ≥ λ p are the eigenvalues of Y Y arranged in descending order, and r * is a pre-chosen integer that is interpreted as the maximum possible number of signals the model can have. We will also see in Section III-B that (I.6) can be used to count the number of outlier eigenvalues that correspond to signals through a sequential testing procedure. Onatski [64] observed that in the setting of sample covariance matrices, T depends on neither the scaling parameter nor λ + under the null hypothesis, and hence is asymptotically pivotal. Moreover, its asymptotic distribution is determined by the Tracy-Widom law of the edge eigenvalues. Consequently, we can approximate the distribution of T using Monte Carlo simulations of Wishart matrices. We point out that in much of the literature and in many scientific applications [6], [44], [59], [60], [72], it is reasonable to assume that the signals are distinct. Under this assumption, we also propose the following statistic T r0 := (λ r0+1 − λ r0+2) / (λ r * +1 − λ r * +2). (I.7) Compared to (I.6), the statistic (I.7) relies on fewer (actually, only three or four) sample eigenvalues. Moreover, for commonly used alternatives with low-rank signals, we expect that the statistic (I.7) has better performance in terms of power (i.e., it is sensitive to a wider class of alternatives and has higher power for some fixed alternative). Our expectation, although without full theoretical justification, is partly due to the fact that T r0 has smaller critical values compared to T as illustrated in Table I, which is reasonable because taking the maximum over a sequence of random variables increases critical values. Empirically, our simulations in Section IV will show that (I.7) indeed has better finite-sample performance than (I.6) in terms of power.
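A small sketch of how the gap-ratio statistic (I.7) and the Monte Carlo calibration mentioned above can be used in practice is given below. The Wishart-based null simulation follows the spirit of the pivotal approximation described in the text; the exact calibration statistics G 1 and G 2 used in the paper are defined in Section III-B, and the dimensions, nominal level and number of repetitions here are arbitrary illustrative choices.

```python
import numpy as np

def T_ratio(evals, r0, r_star):
    """Gap-ratio statistic (I.7): (lam_{r0+1}-lam_{r0+2}) / (lam_{r*+1}-lam_{r*+2}).

    `evals` must be sorted in descending order; the indexing below is 0-based.
    """
    return (evals[r0] - evals[r0 + 1]) / (evals[r_star] - evals[r_star + 1])

rng = np.random.default_rng(2)
p, n, r0, r_star = 100, 200, 0, 5          # hypothetical sizes and choices

# Approximate a null critical value by Monte Carlo over standard Wishart
# matrices (white noise), exploiting the asymptotic pivotality discussed above.
reps = 2000
null_stats = np.empty(reps)
for k in range(reps):
    W = rng.standard_normal((p, n))
    lam = np.linalg.eigvalsh(W @ W.T)[::-1]
    null_stats[k] = T_ratio(lam, r0, r_star)

crit = np.quantile(null_stats, 0.9)        # nominal significance level 0.1
print("simulated critical value:", crit)
```

Rejecting whenever T r0 exceeds such a critical value, and increasing r 0 until the test first accepts, gives the sequential estimate of r in (I.3).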
In fact, we believe that the statistic (I.7) will also work when the signals are degenerate, because the corresponding sample eigenvalues will be separated. We refer the reader to Remark III.6 for more details. The statistics (I. 6) and (I.7) are applicable to statistical inference only if the Tracy-Widom law has been established for the associated random Gram matrix Y Y . However, to the best of our knowledge, this has only been proved rigorously for sample covariance type random Gram matrices in the literature. Therefore, for hypothesis testing problems involving random Gram matrices with general mean and variance profiles, we need to prove the Tracy-Widom fluctuation rigorously before validating the use of T and T r0 . This motivates us to study the limiting distributions of the edge eigenvalues in the general setup (I.5). Here the notion "edge eigenvalues" refers to the largest few eigenvalues near the right edge of the bulk eigenvalue spectrum, excluding the outliers of Y Y caused by the signals. B. Tracy-Widom distribution for random Gram matrices The Tracy-Widom law for the edge eigenvalues of non-spiked sample covariance matrices has been proved in a series of papers. For Wishart matrices, it was first proved in [45] that the largest eigenvalue satisfies the Tracy-Widom law asymptotically. This result was later extended to more general sample covariance matrices with generally distributed entries (assuming only certain moment assumptions) and variance profiles s ij = a i (assuming certain regularity conditions on the sequence {a i : 1 i p}) in a series of papers under various settings; see e.g. [7], [18], [24], [48], [53], [63], [66]. However, when the mean and variance profiles of the random Gram matrix become more complicated, much less is known about the limiting distribution of the edge eigenvalues. In this paper, motivated by the applications in signal detection as discussed in Section I-A, we establish the Tracy-Widom asymptotics for the edge eigenvalues of a general class of random Gram matrices. The informal statement is given in Theorem I.1. Following the conventions in the random matrix theory literature, we shall rescale the matrix Y properly so that the limiting ESD of Y Y is compactly supported as n → ∞. Moreover, recall that GOE (Gaussian orthogonal ensemble) refers to symmetric random matrices of the form H := (X + X )/ √ 2, where X is a p × p matrix with i.i.d. real Gaussian entries of mean zero and variance p −1 . In this paper, we will consistently denote the eigenvalues of H by (I.8) Theorem I.1 (Informal statement of Theorem III.2). For Y satisfying (I.5), we denote the eigenvalues of Q := Y Y by λ 1 λ 2 · · · λ p . Let λ + be the rightmost edge of the limiting bulk eigenvalue spectrum, and a ∈ N be the index of the largest edge eigenvalue. Then, there exists a deterministic sequence of numbers ≡ (R, S) depending on R and S, such that for any fixed k ∈ N, the first k rescaled edge eigenvalues, { p 2/3 (λ a+i − λ + ) : 0 i k − 1}, have the same asymptotic joint distribution as the first k rescaled eigenvalues of GOE, It is well-known that p 2/3 (µ GOE 1 − 2) converges to the type-1 Tracy-Widom distribution [68], [69]. Furthermore, for any fixed k ∈ N, the joint distribution of the largest k eigenvalues of GOE can be written in terms of the Airy kernel [38]. Hence Theorem I.1 gives a complete description of the finite-dimensional correlation functions of the edge eigenvalues of Q. 
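Theorem I.1 reduces the edge fluctuations of Q to those of the GOE in (I.8), which is easy to probe numerically. The sketch below simulates the rescaled largest GOE eigenvalue p^{2/3}(µ 1 − 2), whose distribution approaches the type-1 Tracy-Widom law; the matrix size and the number of repetitions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
p, reps = 400, 500                          # hypothetical size and repetition count

# GOE as in (I.8): H = (X + X.T)/sqrt(2) with X having i.i.d. N(0, 1/p) entries,
# so the spectrum fills [-2, 2] and the top eigenvalue fluctuates on scale p^{-2/3}.
samples = np.empty(reps)
for k in range(reps):
    X = rng.standard_normal((p, p)) / np.sqrt(p)
    H = (X + X.T) / np.sqrt(2)
    samples[k] = p ** (2 / 3) * (np.linalg.eigvalsh(H)[-1] - 2.0)

# Empirical quantiles of the rescaled largest eigenvalue; for large p these
# approximate the corresponding quantiles of the type-1 Tracy-Widom distribution.
print(np.quantile(samples, [0.1, 0.5, 0.9]))
```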
Once Theorem I.1 is established, we can determine the asymptotic distributions of the statistics (I.6) and (I.7), and apply them to the hypothesis testing problem (I.2). Our proof of Theorem I.1 is based on the following result on the edge eigenvalues of a general class of Gaussian divisible random Gram matrices. , where X is a p × n random matrix independent of Y and has i.i.d. Gaussian entries of mean zero and variance n −1 . Denote the eigenvalues of Q t by λ 1 (t) λ 2 (t) · · · λ p (t). Let η * > 0 be a scale parameter depending on n. Suppose the empirical spectral distribution of Q = Y Y has a regular square root behavior near the right edge λ + on any scale larger than η * (in the sense of Definition V.1 below). Let a ∈ N be the index of the largest edge eigenvalue. Then for any t √ η * and fixed k ∈ N, there exist deterministic sequences of numbers t and λ +,t such that the first k rescaled edge eigenvalues of Q t , { t p 2/3 (λ a+i (t) − λ +,t ) : 0 i k − 1}, have the same asymptotic joint distribution as the first k rescaled eigenvalues of GOE, On one hand, Theorem I.2 covers more general matrices than the random Gram matrices proposed in (I.5), because it only requires a regular square root behavior of the ESD near the right edge without assuming any independence between matrix entries of Y . We remark that the square root behavior of the ESD is generally believed to be a necessary condition for the appearance of the Tracy-Widom law in the asymptotic limit. For example, if the ESD has a cubic root behavior, then the corresponding cusp universality is different from the Tracy-Widom law [16], [29]. On the other hand, Theorem I.2 gives the Tracy-Widom law for the edge eigenvalues of a different matrix Q t other than Q. To obtain the Tracy-Widom law for the original matrix Q, we still need to show that the edge eigenvalues of Q t have the same joint distribution as those of Q asymptotically, which, however, is not always true. In fact, if t is too large, then the edge statistics of Q t can be very different from those of Q. For example, if Y is a rectangular matrix whose singular values are all the same, then Q trivially has a square root behavior on any scale larger than η * = 1 in the sense of Definition V.1. But in the setting of Theorem I.2, for t 1, the edge statistics of Q t is dominated by a Wishart matrix tXX . From the above discussions, we see that in order to prove the Tracy-Widom law for the edge eigenvalues of Q using Theorem I.2, we need to establish the following two results: • the ESD of Q has a regular square root behavior near λ + on a sufficiently fine scale η * 1; • for some √ η * t 1, the edge statistics of Q t match those of Q asymptotically. In random matrix theory, there is a general way to accomplish this by using some sharp estimates, called local laws, on the resolvent of Q, defined as G(z) := (Q − z) −1 for z ∈ C. Such local laws for the model (I.5) have been proved in [2], [3] under quite general conditions. Combining these local laws with Theorem I.2, we can conclude Theorem I.1 using some standard resolvent comparison arguments developed in e.g. [35], [48], [54], [74]. We remark that there exists another method in the literature [37], [52], [53], [75] to prove the Tracy-Widom law for sample covariance type matrices, that is, a so-called resolvent flow argument. While we expect this method to be also applicable to our setting, the techniques seem to be much harder, and we do not pursue this direction in this paper. 
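The resolvent G(z) = (Q − z)^{−1} and its normalized trace, the Stieltjes transform m Q (z) = p^{−1} tr(Q − z)^{−1}, are the analytic objects on which the local laws mentioned above are formulated. The following naive numerical probe (not the vector Dyson equation machinery used in the actual proofs) evaluates Im m Q slightly above the real axis near the right edge, where a square-root decay of the spectral density is expected; the variance profile and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 200, 400                            # hypothetical dimensions

# A centered random Gram matrix with a (hypothetical) heterogeneous variance profile.
S = np.outer(rng.uniform(1, 2, p), rng.uniform(1, 2, n)) / n
Y = np.sqrt(S) * rng.standard_normal((p, n))
Q = Y @ Y.T
lam_max = np.linalg.eigvalsh(Q)[-1]

def m_Q(z):
    """Empirical Stieltjes transform m_Q(z) = p^{-1} tr (Q - z)^{-1}."""
    return np.trace(np.linalg.inv(Q - z * np.eye(p))) / p

# pi^{-1} Im m_Q(E + i*eta) is a smoothed spectral density at resolution eta;
# it should drop sharply once E moves past the right edge of the bulk.
eta = n ** (-2 / 3)
for E in np.linspace(lam_max - 0.3, lam_max + 0.3, 7):
    print(f"E = {E:.3f}   Im m_Q = {m_Q(E + 1j * eta).imag:.4f}")
```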
The rest of this paper is organized as follows. In Section II, we give the precise assumptions on the signal matrix R and the variance matrix S. We also provide some concrete examples with complicated heterogeneous variance profiles S, which have not been studied rigorously in the literature. In Section III, we state our main results. The Tracy-Widom distribution for general random Gram matrices is presented in III-A, while the theoretical properties of the testing statistics (I.6) and (I.7) are analyzed in Section III-B. In Section IV, we conduct numerical simulations to verify the accuracy and power of the proposed statistics for the testing problem (I.2) under various noise settings that have not been considered in the literature. In Section V, we sketch the strategy for proving the Tracy-Widom distribution. The technical proofs are put into Appendices A-C. II. THE MODEL ASSUMPTIONS AND EXAMPLES In this section, we impose some general assumptions on the signal matrix R and the variance matrix S. We also provide some important examples that have been used in the literature. Note that Y Y and Y Y have the same non-zero eigenvalues. Hence without loss of generality, we only need to consider the high-dimensional setting where the aspect ratio c n := p/n satisfies that τ c n 1, for a small constant τ > 0. For the signal matrix R, we assume that for a fixed r ∈ N that is independent of p and n. Note that when r = 0, Y is a centered random Gram matrix. Following [2], [3], we impose the following regularity assumptions on the heterogeneous variance profile. Assumption II.1. Suppose S satisfies the following regularity conditions. (A1) The dimensions of S are comparable, that is, (II.1) holds. (A2) The variances are bounded in the sense that there exist constants s * , ε * > 0 such that The matrices S and S are irreducible in the sense that there exist L 1 , L 2 ∈ N and a small constant τ > 0 such that (A4) The rows and columns of S are sufficiently close to each other in the following sense. There is a continuous monotonically decreasing (n-independent) function Γ : (0, 1] → (0, ∞) such that lim ε↓0 Γ(ε) = ∞, and for all ε ∈ (0, 1], we have where S i and (S ) j denote the i-th row of S and j-th row of S , respectively. In addition to (II.2), we introduce the following assumption on the signal strengths, i.e. the singular values of R. Assumption II.4. We assume that (II.2) holds. When r 1, denote by σ r (R) the smallest non-trivial singular value of R. We assume that for a small constant τ > 0, where M is defined in (II.5). Remark II.5. (II.7) is commonly referred to as the supercritical condition, and has appeared in lots of literature in random matrix theory and statistics [6], [9], [60], [62]. It is a sufficient condition for the mean matrix R to give rise to r outliers of Y Y that are detached from the bulk spectrum. By Lemma A.6 below, we have that the largest eigenvalue of (Y −R)(Y −R) is at most λ + + o(1) with high probability. Combining it with (II.7) and applying Weyl's inequality, it is easy to check that Y Y has r eigenvalues that are larger than (2 + τ − o(1)) 2 M. On the other hand, by the Cauchy interlacing, the limiting bulk eigenvalue spectrum of Y Y is supported on [0, λ + ]. Hence, under (II.7), there are r outliers that are away from the spectrum edge λ + . However, we remark that 4 √ M is quite likely not the exact threshold for BBP transition [5]. 
To guarantee the Tracy-Widom law of the edge eigenvalues, it is necessary that all spikes of R are away from (i.e., either above or below) the BBP threshold. If there are critical spikes (i.e., spikes that are exactly equal to the BBP transition threshold), then the Tracy-Widom law of the edge eigenvalues can fail; see Theorem 1.1 in [5]. Here we have chosen (II.7) simply to ensure that all spikes are supercritical. To determine the exact BBP threshold and to include settings with subcritical spikes, we need to perform a more detailed study of spiked random Gram matrices. We postpone it to future works, since it is not the focus of the current paper. In what follows, we give two concrete examples which satisfy the above assumptions and have not been studied rigorously in the literature. Example II.6 (Doubly-heteroscedastic noise, [55]). Consider the following doubly-heteroscedastic noise matrix where A and B are deterministic positive definite symmetric matrices, and N = (N ij ) is a p × n random matrix with i.i.d. entries of mean zero and variance n −1 . Suppose A and B are diagonal matrices with a 1 a 2 · · · a p > 0 and b 1 b 2 · · · b n > 0. Then Q = Y Y is a random Gram matrix as in Theorem I.1 with variance matrix S = ((a i b j )/n) and mean matrix R = 0. It is easy to see that (A2) and (A3) of Assumption II.1 hold if a i 's and b j 's are all of order 1. Furthermore, assumption (II.4) is reduced to (II. 10) and condition (II.6) is reduced to In fact, if we have a i = f (i/p) and b j = g(j/n) for some piecewise 1/2-Hölder continuous functions f and g, then (II.11) holds true. One special case is that f and g are piecewise constant functions, which happens when the eigenvalues of A and B take at most O(1) many different values. If (II.10) or (II.11) holds, as we will see in Section III-A, Theorem I.1 applies to (II.8) with r = 0. We remark that the diagonal assumption (II.9) is not necessary for the Tracy-Widom asymptotics. When the matrices A and B are non-diagonal, we get a model that extends the setting in (I.5) because the entries of Y = A 1/2 N B 1/2 can be correlated. Finally, we remark that (A4) of Assumption II.1 can be violated by allowing for some large a i 's and b j 's. Then we get a spiked separable covariance matrix, which has been studied in detail in [20]. Our Theorem I.1 also applies to this case. Example II.7 (Sparse noise, [43], [57]). In this example, we consider the sparse noise matrix Z as proposed in [43]. The sparse random Gram matrices can be used as a natural model to study high-dimensional data with randomly missing observations. For instance, given a probability p, we set z ij = h ij w ij , where w ij are random variables independent of {h ij }, and h ij are i.i.d. (rescaled) Bernoulli random variables with P(h ij = (np) −1/2 ) = p and P(h ij = 0) = 1 − p. More generally, we say that Q = Y Y is a sparse random Gram matrix if Y satisfies the following properties: the entries y ij , 1 i p, 1 j n, are independent random variables satisfying for a large constant C > 0 and sparsity parameter q with 1 q √ n. In the above setting with randomly missing observations, we have that q = √ np. III. MAIN RESULTS In this section, we state the main results of this paper. The Tracy-Widom distribution of the edge eigenvalues for a general class of random Gram matrices, i.e., the formal statement of Theorem I.1, will be presented in Section III-A. 
The theoretical properties of the test statistics (I.6)-(I.7) and the associated sequential estimator (I.3) will be given in Section III-B. A. Tracy-Widom distribution for random Gram matrices In this subsection, we provide the formal statement for Theorem I.1. Before stating our main result, we first introduce the necessary notations. If (II.3) holds, then there exists a unique vector of holomorphic functions m(z) = (m 1 (z), · · · , m p (z)) : C + → C p , C + := {z ∈ C : Im z > 0}, satisfying the so-called vector Dyson equation such that Im m k (z) > 0, k = 1, · · · , p, for any z ∈ C + [2], [3], [39]. In the above equation, 1 denotes the vector whose entries are all equal to 1, and both 1/m and 1/(1 + S m) mean the entrywise reciprocals. Moreover, for each k = 1, · · · , p, there exists a unique probability measure ν k that has support contained in [0, 4M] and is absolutely continuous with respect to the Lebesgue measure, such that m k is the Stieltjes transform of ν k : (If we consider the case p > n, then ν k will also have a point mass at zero, but we do not have to worry about this issue under (II.1).) Let ρ k be the density function associated with ν k . Then the asymptotic ESD of (Y − R)(Y − R) is given by ν := p −1 k ν k , with the following density ρ and Stieltjes transform m, We summarize the basic properties of the density functions ρ and ρ k , 1 k p. Lemma III.1 (Theorem 2.3 of [2]). Under Assumption II.1, for any 1 k p, there exists a sequence of positive numbers a 1 > a 2 > · · · > a 2q 0 such that where q ∈ N depends only on S. Moreover, ρ has the following square root behavior near a 1 : where ≡ (S) is an order 1 positive value determined by S. In what follows, we shall call a k the spectral edges. In particular, we will focus on the right-most edge a 1 and denote it by λ + ≡ a 1 following the convention in the random matrix theory literature. We remark that as discussed in [2], it is possible that the density ρ has some cusp singularities when two edges are close to each other or when ρ touches zero. In the current paper, since we are mainly interested in the edge eigenvalue statistics around a 1 , we only need assumptions to ensure (III.3). However, to show the Tracy-Widom law at other edges, we need extra edge regularity and edge separation conditions to avoid cusp singularities as in [37], [48]. We will pursue this direction in future works. Now, we are ready to state the Tracy-Widom law of the largest edge eigenvalues for a general class of random Gram matrices with variance and mean matrices satisfying Assumptions II.1 and II.4. Theorem III.2. Let Y = (y ij ) be a p × n random matrix such that y ij := (y ij − r ij )/ √ s ij are real i.i.d. random variables. Suppose y 11 follows a probability distribution that does not depend on n, and satisfies E y 11 = 0, E y 2 11 = 1 and lim x→∞ x 4 P (| y 11 | x) = 0. (III.4) Suppose the variance matrix S = (s ij ) satisfies Assumption II.1 and the mean matrix R = (r ij ) satisfies Assumption II.4. Denote the eigenvalues of Q = Y Y by λ 1 λ 2 · · · λ p . Then we have that where is the value defined in (III.3), and F 1 is the type-1 Tracy-Widom cumulative distribution function. More generally, for any fixed k ∈ N, we have that 8), where N is a p × n random matrix with N ij = n −1/2 y ij for a sequence of i.i.d. random variables y ij . Suppose y 11 follows a probability distribution that does not depend on n, and satisfies E y 11 = 0, E y 2 11 = 1 and (III.4). In addition, assume that E y 3 11 = 0. 
(III.8) Let A and B be p × p and n × n deterministic positive definite symmetric matrices, whose eigenvalues satisfy that for a small constant τ > 0, and satisfy the condition (II.10) for a continuous monotonically decreasing function Γ : (0, 1] → (0, ∞) such that lim ε↓0 Γ(ε) = ∞. Then, for any fixed k ∈ N, we have that for all (x 1 , x 2 , . . . , x k ) ∈ R k , where λ + and are defined for the variance matrix S = ((a i b j )/n). Finally, the condition (III.8) is not necessary if either A or B is diagonal. Corollary III.4. Suppose Q = Y Y is a sparse random Gram matrix, where the entries of Y satisfy (II.12) with q n 1/3+c φ for a small constant c φ > 0. Suppose the variance matrix S = (s ij ) satisfies Assumption II.1 and the mean matrix R = (r ij ) satisfies Assumption II.4. Then for any fixed k ∈ N, we have that We also mention that the condition (III.8) in Corollary III.3 and the condition q n 1/3+c φ in Corollary III.4 are mainly technical. The edge universality in [74] was proved under the vanishing third moment condition. Hence, we have kept (III.8) in Corollary III.3, but we believe it can be removed with further theoretical development. We also believe that q n 1/3+c φ can be weakened to q n 1/6+c φ , while Corollary III.4 may fail when q n 1/6 . Since these problems are not the main focus of this paper, we will pursue them in future works. We also refer the readers to Remark A.10 for more details. B. Theoretical properties of the test statistics With Theorem III.2, we can readily obtain the asymptotic distributions of the statistics T(r 0 ) in (I.6) and T r0 in (I.7) under the null hypothesis in (I.2), and analyze the statistical power of them under the alternatives. Corresponding to T(r 0 ) and T r0 , we define the following two sequential testing estimators r 1 := inf{r 0 0 : T(r 0 ) < δ (1) n }, r 2 := inf{r 0 0 : T r0 < δ (2) n }. (III.10) We will show that r 1 and r 2 are consistent estimators of r as long as we choose the critical values δ (1) n and δ (2) n properly. Let W ∼ W p (I p , n) be a standard Wishart matrix. We define the following statistics G 1 and G 2 in terms of the eigenvalues of W, , Corollary III.5. Suppose the assumptions of Theorem III.2 hold and r * > r. Under the null hypothesis H 0 in (I.2), we have that lim Proof. (III.11) follows directly from (III.6). On the other hand, under H a and the assumption r * > r, we have that By Theorem III.2, we have that with probability 1 − o(1). Furthermore, under Assumption II.4, as discussed in Remark II.5 we have that |λ r − λ + | c τ for a small constant c τ > 0. Hence we get that with probability 1 − o(1), which concludes (III.12) and (III.14). Finally, using (III.16), we immediately conclude (III.13) and (III.15). Remark III.6. We make a few remarks here. First, the conditions δ n → ∞ and δ (2) n → ∞ are necessary and sufficient to guarantee that T and T r0 have asymptotic zero type I errors. For any fixed r * −r 0 , the joint distribution of {λ i (W)} 1 i r * −r0+2 can be expressed in terms of the Airy kernel [38]. Although it is hard to get explicit expressions of the limiting distributions of G 1 and G 2 , it is easy to check that both the distributions are supported on the whole positive real line. Consequently, it is necessary to let δ (1) n and δ (2) n diverge. 
Second, in order to choose a non-trivial δ (2) n satisfying δ (2) n → ∞ and δ (2) n p −2/3 / (λ r0+1 − λ r0+2 ) → 0, we need the following estimate: The condition (III.17) can be guaranteed if H a holds and the (r 0 +1)-th and (r 0 +2)-th singular values of R are non-degenerate. However, we believe that even in the degenerate case, the condition (III.17) still holds. In fact, following [5], [11], we conjecture that the degenerate (r 0 + 1)-th and (r 0 + 2)-th spikes of R will give rise to outliers satisfying that λ r0+1 − λ r0+2 p −1/2 with probability 1 − o(1). To prove this fact, we need to establish the limiting distributions of the outliers of spiked random Gram matrices, and we postpone the study to a future work. In Table I, we report some simulated finite sample critical values of G 1 and G 2 corresponding to type I error rate α = 0.1 for different choices of r * − r 0 , n ∈ {200, 500} and c n = p/n ∈ {0.5, 1, 2} based on 5, 000 Monte Carlo simulations. All the simulations in Section IV will be based on these critical values. I: Critical values for G 1 and G 2 (inside the parentheses) for different combinations of p, n and r * − r 0 under the nominal significance level 0.1. When r * − r 0 = 1, we have G 1 = G 2 , so they share the same critical values. Note that G 2 always has smaller critical values than G 1 . IV. NUMERICAL SIMULATIONS In this section, we design Monte-Carlo simulations to demonstrate the accuracy and power of our proposed statistics for the hypothesis testing problem (I.2) under some general noise structures. By Corollary III.5, we will use the statistics T and T r0 and reject the null hypothesis H 0 of (I.2) if they are larger than the critical values in Table I. For the simulations, we always consider the following scenario: R is of rank r 5, and all the singular values of R are non-degenerate. In the above scenario, we consider the following three noise structures, whose impact on the signal detection is still unknown rigorously in the literature. (I) Z is a doubly-heteroscedastic noise matrix. Specifically, we take Z = A 1/2 N B 1/2 , where N is a p × n white noise matrix with i.i.d. entries of mean zero and variance n −1 , and A and B are two positive definite matrices generated as follows: and U A and U B are two orthogonal matrices generated from the R package pracma. In the simulations, we take p = n −1/4 and s ij = α i β j with α i being i.i.d. random variables uniformly distributed on [1,2] and β j being i.i.d. random variables uniformly distributed on [3,4]. (III) Z = (z ij ) is a noise matrix whose variance matrix S has a banded latent structure. Specifically, we assume that where ν ij are i.i.d. random variables uniformly distributed on [1,2]. In the simulations, we always take r * = 5 and c n ∈ {0.5, 1, 2}. First, under the null hypothesis H 0 in (I.2), we check the accuracy of the statistics under the nominal significance level 0.1. We consider the above settings (I)-(III) under the null hypothesis r 0 = 3, with signal matrix R = 18e 1p e 1n +16e 2p e 2n +14e 3p e 3n . Here, e ip and e in denote the unit vectors along the i-th coordinate axis in R p and R n , respectively. In Figure 1, we report the simulated type I error rates for both the statistics (I.6) and (I.7) in the settings (I)-(III) for the noise matrices. We find that both statistics combined with the critical values in Table I can attain reasonable accuracy even when n = 200. Second, we examine the power of the statistics under the nominal level 0.1 when r 0 = 0 in (I.2). 
We set the alternative as In Figure 2, we report the simulated power for both the statistics (I.6) and (I.7) as d increases, where we take c n = 2 and the settings (I)-(III) for the noise matrices. We see that both statistics have high power even for a not so large n, n = 200, as long as d is above some threshold. Furthermore, when d is in a certain range, we find that the statistic T r0 in (I.7) has better performance in terms of power than the statistic T in (I.6). Finally, the statistic T r0 starts to have non-zero power for smaller values of d compared to T. This enables us to study a wider range of alternatives in terms of the d value. We expect that this is due to the fact that the statistic T needs a larger critical value to reject H 0 as illustrated in Table I. V. PROOF STRATEGIES In this section, we describe the main strategy for the proof of Theorem III.2. All the technical details can be found in the appendix. From the theoretical point of view, our proof of Theorem III.2 employs the following three step strategy. Step 1: Proving a local law on the Stieltjes transform of the random Gram matrix Q, m Q (z) := p −1 tr(Q − z) −1 . This is needed in order to check the square root behavior of the ESD of Q around the right edge. Table I. Step 2: Establishing the asymptotic Tracy-Widom law for the edge eigenvalues of the Gaussian divisible random Gram matrix Q t in Theorem I.2 for a small t > 0. Step 3: Showing that Q has the same edge eigenvalue statistics as Q t asymptotically. This three step strategy has been widely used in the proof of bulk universality of random matrices [30], [31], [32], [34]. For a more extensive review, we refer the reader to [33] and references therein. However, it has been rarely (if any) used in the study of the edge eigenvalues of random Gram matrices. One of the main reasons is that the above Step 2 for Gram type random matrices-the core of the strategy-was not well-understood previously. Regarding the proof of Theorem III.2, even though the results of Step 1 have been established in [2], [3], Steps 2 and 3 are still missing. For Step 3, we can employ some standard resolvent comparison arguments developed in e.g. [7], [18], [35], [48], [54], [66], [74]. In this paper, we mainly focus on Step 2, which is completed by Theorem I.2. We will provide the formal statement of Theorem I.2 in Theorem V.3. For this purpose, we first need to introduce some new notations. Let Y be a p × n data matrix, and X be an independent p × n random matrix whose entries are i.i.d. centered Gaussian random variables with variance n −1 . Since the multivariate Gaussian distribution is rotationally invariant under orthogonal transforms, for any t > 0 we have that where Y = U 1 W U 2 is a singular value decomposition of Y with W being a p × n rectangular diagonal matrix, Here, are the singular values of Y arranged in descending order. Thus, to study the singular values of Y + √ tX, it suffices to assume that the initial data matrix is W . We assume that the ESD of V := W W has a regular square root behavior near the spectral edge, which is generally believed to be a necessary condition for the appearance of the Tracy-Widom law. Following [51], we state the regularity conditions in terms of the Stieltjes transform of V , Definition V.1 (η * -regular). Let η * be a deterministic parameter satisfying η * := n −φ * for some constant 0 < φ * 2/3. 
We say V is η * -regular around the right-edge λ + := d i0 for a fixed i 0 ∈ N, if the following properties hold for some constants and for z = E + iη with λ + E λ + + c V and η * η 10, we have Remark V.2. For our setting in Theorem III.2, the index i 0 is equal to r + 1, which labels the first non-outlier eigenvalue of V . The motivation for (i) is as follows: if m(z) is the Stieltjes transform of a density ρ with square root behavior around λ + , i.e., 2) essentially mean that the empirical spectral density of V behaves like a square root function near λ + on any scale larger than η * . The condition η 10 in the definition is purely for definiteness of presentation-we can replace 10 with any constant of order 1. Regarding t as a time parameter, we are interested in the dynamics of the edge eigenvalues of Let ρ w,t be the asymptotic spectral density of Q t , and m w,t be the corresponding Stieltjes transform. It is known that for any t > 0, m w,t is the unique solution to such that Im m w,t > 0 for z ∈ C + [22], [23], [71]. Adopting the notations from free probability theory, we shall call ρ w,t the rectangular free convolution (RFC) of ρ w,0 with Marchenko-Pastur (MP) law at time t. Let λ +,t be the rightmost edge of the bulk component of ρ w,t . By Lemma B.5, we know that ρ w,t has a square root behavior near λ +,t . We introduce the notation which is the so-called subordination function for the RFC. Then, we define the function and the parameter where we have abbreviated that ζ +,t ≡ ζ t (λ +,t ). Here we used the short-hand notation ζ t (λ +,t ) ≡ lim η↓0 ζ t (λ +,t + iη). Now we are ready to give the formal statement of Theorem I.2. Theorem V.3. Suppose W is η * -regular in the sense of Definition V.1 with η * = n −φ * . Suppose t satisfies n ε η * t 2 n −ε for a small constant ε > 0. Fix any k ∈ N, and let f : R k → R be a test function such that for a constant C > 0. Denote the eigenvalues of Q t by λ 1 (t) λ 2 (t) · · · λ p (t). Then, we have that where we recall that µ GOE i are the eigenvalues of GOE as given by (I.8). Since the edge eigenvalues of GOE at ±2 obey the type-1 TW fluctuation [68], [69], by Theorem V.3 and the Portmanteau lemma we immediately obtain that where recall that F 1 is the type-1 TW distribution function. Following the literature, we shall call the evolution of Q t with respect to t the rectangular matrix Dyson Brownian motion, while we call the evolution of the eigenvalues of Q t with respect to t the rectangular Dyson Brownian motion. We remark that the edge statistics of the symmetric Dyson Brownian motion (DBM) have been studied in [51] for Wigner type matrix ensembles. The above Theorem V.3 extends the result there to Gram type matrix ensembles. Before the end of this section, we summarize the basic ideas for the proof of Theorem V.3 and provide some (possibly helpful) heuristic discussions. The proof utilizes the matching and coupling strategy in [13], [51]. First, in order to see the Tracy-Widom limit, we need to show that: (i) the rectangular free convolution (RFC) has a square root behavior near the right edge in the sense of (V.3), and (ii) the edge eigenvalues of Q t distribute according to the RFC on scales n −2/3 . However, at t = 0, the conditions (V.1) and (V.2) are not strong enough for both of these purposes. We need to run the dynamics for an amount of time t 0 to regularize both the RFC and the rectangular DBM. To show (i), we need a detailed analysis of the RFC, which has been done in another paper [19]. 
In particular, the analysis shows that under the η * -regular assumption, we are able to obtain the square root behavior of RFC once t 0 √ η * . We summarize some key properties of the RFC in Appendix B-A. To show (ii), we need to prove some sharp local laws on the resolvent (Q t0 − z) −1 for z = E + iη with E around the right edge and η n −2/3−ε . These local laws are also proved in [19] and summarized in Section B-B. Next, we consider the rectangular DBM starting with the regular initial data Q t0 (i.e., the evolution of the eigenvalues of Q t0+t ). It is known from the literature that the rectangular DBM satisfies a system of SDEs in equation (C.2), which is the main tool for our proof. We couple it with the system of SDEs for another rectangular DBM of a properly chosen sample covariance matrix, whose Tracy-Widom law is known from the literature and whose asymptotic ESD matches that of Q t0 around the right edge. Under this coupling, we will show that after shifted by respective right edges, the differences between the edge eigenvalues of the two rectangular DBMs are much smaller than n −2/3 if we run them for an amount of time t 1 so that n −1/3 This key result is summarized in Theorem C.1. Here, t 1 t 0 is required so that the RFC does not change much from t 0 to t 0 + t 1 . In particular, the right edge λ +,t and the scaling factor γ n (t) remain approximately constant throughout the evolution. On the other hand, the condition t 1 n −1/3 is essential because the "relaxation time to equilibrium" of the coupled DBM is of order n −1/3 at the right edge, which we will explain below. To prove Theorem C.1, it suffices to study the differences between the two coupled rectangular DBMs, denoted by {λ i (t)} and {µ i (t)}, respectively. For this purpose, we consider an interpolating process z i (t, α) for 0 α 1 (cf. equation (C.6)), which is a rectangular DBM with initial data z i (0, α) = αλ i (0) + (1 − α)µ i (0). Note that z i (t, 0) = µ i (t) and z i (t, 1) = λ i (t), so we only need to control ∂ α z i (t, α) for 0 α 1. In the proof, we find that it is more convenient to work with the singular values y i (t, α) := z i (t, α) and its shifted (by the right edge) version y i (t, α). Then, it suffices to control ∂ α y i (t, α) by analyzing a system of SDEs given by equation (C.35). However, for the analysis, we have to cut off the effect of bulk eigenvalues away from the edge, because the η * -regular condition only describes the edge behavior of the initial data. Hence, similar to [14], [51], we localize the analysis by introducing to the SDEs of y i (t, α) a short-range approximation (cf. equations (C.37)-(C.39)), whose solutions are denoted by y i (t, α). Through a careful analysis, we find that the bulk eigenvalues indeed have negligible effect and the differences | y i (t, α) − y i (t, α)| are much smaller than n −2/3 (cf. Lemma C.11). Now, armed with the above preparation, it remains to control ∂ α y i (t, α), which turns out to satisfy a deterministic parabolic PDE in (C.60). Using the local laws for (Q t0 − z) −1 , we can show that the eigenvalues of Q t0 satisfy a rigidity estimate (see Lemma B.11), which implies that the initial data { y i (t, α)} has an q norm bounded by n −2/3+ε for any q 4 and small constant ε > 0. The last piece is then to prove an energy estimate for this PDE, which is summarized in Proposition C.16. Roughly speaking, Proposition C. 
16 shows that the ∞ norm of the solution at time t is smaller than the q norm of the initial data by a factor of order n −1/3 t −1 . Consequently, as long as t 1 n −1/3+δ for a constant δ > 0 and ε is chosen small enough, the ∞ norm of the solution at time t 1 is much smaller than n −2/3 . Combining all the above pieces shows that the eigenvalues of Q t satisfy (V.8) for t = t 0 + t 1 . We can see from the above arguments that there are two conditions that lead to a lower bound for t: t > t 0 √ η * to ensure a regular square root behavior of the RFC and sharp local laws for Q t0 ; t > t 1 n −1/3 to ensure the "closeness" of the two coupled rectangular DBMs. Since we have assumed 0 < φ * 2/3 in Definition V.1, we only need to take t √ η * . In fact, in the application to the proof of Theorem III.2, we will take φ * = 2/3 so that we run the rectangular DBM for an amount of time t n −1/3 . Finally, we discuss the comparison argument for Step 3 of the proof of Theorem III.2. First, it requires a moment matching condition, as is well-known in the random matrix theory literature. More precisely, we will construct another random Gram matrix, say Y = (y ij ), with independent entries that have the same mean r ij but different variances Var(y ij ) = s ij −t/n. Then, the rectangular matrix DBM Y + √ tX has the same mean matrix R and variance matrix S as Y . Now, applying Theorem V.3 shows that the edge eigenvalues (denoted by λ i,t ) of Y + √ tX satisfy the Tracy-Widom law around the right edge (denoted 13 by λ +,t ) of the corresponding RFC. It remains to show that the limiting law of the (shifted and rescaled) edge eigenvalues This uses a standard resolvent comparison argument in the literature, and the key technical input is the local law for the resolvent of (Y + √ tX)(Y + √ tX) , which is given in Appendix B-B. While the resolvent comparison argument is almost the same as the ones in e.g., [18], [54], it only gives that p 2/3 (λ i − λ +,t ) satisfy the Tracy-Widom law. We still need to show that the difference between the right edges λ + and λ +,t is much smaller than the Tracy-Widom fluctuation scale n −2/3 . By analyzing the Stieltjes transform of the RFC, we will see (cf. equation (A.39)) that for any small constant ε > 0, Since we need to control the second and third terms on the right-hand side, we have to take n −1/3+δ t n −δ for a constant δ > 0. To summarize, for the above argument to work, we need that n −1/3+δ t n −δ ∧ min s ij . In particular, taking a smaller t means relaxing the lower bound on s ij , so that we can handle a more general class of random Gram matrices. On the other hand, we have seen a lower bound t √ η * ∨ n −1/3 for Step 2. Therefore, in the proof of Theorem III.2, we will take (almost) optimal parameters: η * = n −2/3 and t = n −1/3+δ . This also leads to the lower bound on s ij in (II.3). APPENDIX A PROOFS OF THEOREM III.2, COROLLARY III.3 AND COROLLARY III.4 We will use the following notion of stochastic domination, which was first introduced in [25] and subsequently used in many works on random matrix theory. It simplifies the presentation of the results and their proofs by systematizing statements of the form "ξ is bounded by ζ with high probability up to a small power of n". be two families of nonnegative random variables, where U (n) is a possibly n-dependent parameter set. 
We say ξ is stochastically dominated by ζ, uniformly in u, if for any fixed (small) ε > 0 and (large) D > 0, for large enough n n 0 (ε, D), and we will use the notation ξ ≺ ζ to denote it. Throughout this paper, the stochastic domination will always be uniform in all parameters that are not explicitly fixed, such as the matrix indices and the spectral parameter z. (ii) We say an event Ξ holds with high probability if for any constant D > 0, P(Ξ) 1 − n −D for large enough n. The following lemma collects basic properties of stochastic domination, which will be used tacitly in the following proof. Lemma A.2 (Lemma 3.2 of [10]). Let ξ and ζ be two families of nonnegative random variables, U and V be two parameter sets and C > 0 be a large constant. We introduce the following bounded support condition, which has been used in a sequence of papers to improve the moment assumption, see e.g. [18], [20], [54], [74]. Definition A.3 (Bounded support condition). We say a random matrix Y satisfies the bounded support condition with φ n if where φ n is a deterministic parameter satisfying that n −1/2 φ n n −c φ for some small constant c φ > 0. Whenever (A.1) holds, we say that Y has support φ n . We define the following spectral domains: for some small constants c 0 , ϑ > 0, Finally, we define the distance to the rightmost edge as Then, the following local law has been proved in [2]. Lemma A.4 (Theorem 2.6 of [2]). Assume that Y is a p × n random matrix with real independent entries satisfying (I.5) and that for any fixed k ∈ N, for some constant C k > 0. Moreover, suppose that the variance matrix S satisfies Assumption II.1, and the mean matrix is R = 0. Then there exists a constant c 0 > 0 such that the following averaged local laws hold for any (small) constant ϑ > 0. For any z ∈ D(c 0 , ϑ), we have that where m(z) is defined in (III.2) and g 1 (z) is defined in (A.6), and for any z ∈ D out (c 0 , ϑ), we have a stronger estimate Both of the above estimates are uniform in the spectral parameter z. Remark A.5. Strictly speaking, the estimate (A.12) was not proved in [2]. However, its proof is standard by combining the results in [2] with a separate argument for z ∈ D out (c 0 , ϑ); see e.g. the proof of (2.20) in [27]. As a consequence of (A.11) and (A.12), we obtain the following rigidity estimate in Lemma A.6 for the eigenvalues of Q 1 near the right edge λ + . We define the classical location γ j of the j-th eigenvalue as where ρ was defined in (III.2). In other words, γ j 's are the quantiles of the asymptotic spectral density ρ of Q 1 . Note that under the above definition, we have γ 1 = λ + . Combining Lemma A.6 with the Cauchy interlacing theorem, we immediately obtain the following result when R is non-zero and satisfies Assumption II.4. Lemma A.7. Assume that Y is a p × n random matrix with real independent entries satisfying (I.5) and (A.10). Suppose that the variance matrix S satisfies Assumption II.1 and the mean matrix R satisfies Assumption II.4. Denote the eigenvalues of Y Y by λ 1 λ 2 · · · λ p . Then there exists a constant c 0 > 0 such that the following statements hold for any small constant ϑ > 0. From (A.10) and Markov's inequality, we get that the matrix Y in Lemma A.4 has support max i,j s 1/2 ij . Now combining the analysis of the vector Dyson equation (III.1) in [2] with the arguments for local law in [18], we can relax the moment condition (A.10) to a weaker bounded support condition. Lemma A.8. 
Assume that Y is a p × n random matrix with real independent entries satisfying (I.5). Suppose that the variance matrix S satisfies Assumption II.1 and the mean matrix R satisfies Assumption II.4. Moreover, assume that Y satisfies the bounded support condition (A.1) with φ n n −c φ for a small constant c φ > 0. Then there exists a constant c 0 > 0 such that the following estimates hold for any small constant ϑ > 0. (1) Averaged local law: For any z ∈ D(c 0 , ϑ), we have that and for z ∈ D out (c 0 , ϑ), we have a stronger estimate (2) Entrywise local law: For any z ∈ D(c 0 , ϑ), we have that where Π is defined in (A.8). All of the above estimates are uniform in the spectral parameter z. Proof. With the stability analysis of equation (III.1) in [2, Section 3], we can repeat the same proofs for Lemma 3.11 of [18] and Theorem 3. Proof of Theorem III.2. Using the estimates in Lemma A.8, we can repeat the proof for [18, Theorem 2.7] almost verbatim to conclude (III.7) and the following universality result as n → ∞: for any (x 1 , x 2 , . . . , x k ) ∈ R k , where P G denotes the law for Y = (y ij ) with independent Gaussian entries satisfying (I.5). To conclude (III.5) and (III.6), it remains to show that ( 2/3 p 2/3 (λ i+r − λ + )) 1 i k has the same asymptotic distribution as (p 2/3 (µ GOE i − 2)) 1 i k in the Gaussian case. For simplicity of notations, we only write down details of the proof for the r = 0 case, which is based on Theorem V.3, Lemma A.4 and Lemma A.6. The argument for the r > 0 case is similar and will be discussed at the end of the proof. Let t 0 = n −1/3+ε0 for a small constant ε 0 < ε * , where recall that ε * is the constant in (II.3). Then, we pick the initial data matrix W to be a p × n random matrix with independent Gaussian entries satisfying Ew ij = 0, Ew 2 ij = s ij − t 0 /n. Let X be an independent p × n matrix with i.i.d. Gaussian entries of mean zero and variance n −1 . Then, we have that We regard W + √ tX as a rectangular matrix DBM starting at W , and at time t 0 it has the same distribution as Y . We now fix the notations for the proof. First, in light of (A.23), we denote the eigenvalues of Q := (W + √ t 0 X)(W + √ t 0 X) by λ 1 λ 2 · · · λ p . We define its asymptotic spectral density ρ and the corresponding Stieltjes transform m(z) as in (III.2). Moreover, let λ + be the right edge of ρ, and γ j be the quantiles of ρ defined as in (A.13). We denote the variance matrix of W by S w = (s ij − t 0 /n : 1 i p, 1 j n), and let M w (z) = (M w,1 (z), · · · , M w,p (z)) : C + → C p be the unique solution to the vector Dyson equation such that Im M w,k (z) > 0, k = 1, 2, · · · , p, for any z ∈ C + . Then, we define M w (z) := p −1 k M w,k (z), which is the Stieltjes transform of the asymptotic spectral density of W W , denoted by ρ w . We denote the right edge of ρ w by λ +,w , and define the quantiles of ρ w as γ j,w := sup Finally, following the notations in Section V, we denote and the eigenvalues of W W by d 1 d 2 · · · d p . Then, we define m w,t as in (V.4), and let λ +,t be the rightmost edge of the rectangular free convolution ρ w,t . We take η * = n −2/3+ε1 for a small enough constant 0 < ε 1 < ε 0 . We first verify that m V is η * -regular in the sense of Definition V.1. Notice that W is also a random Gram matrix satisfying the assumptions of Lemma A.4. 
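As a brief numerical aside (it plays no role in the argument), the construction of W and X just described is easy to check in code: a centered Gaussian matrix with entrywise variances s_ij - t0/n, plus sqrt(t0) times an independent Gaussian matrix with entries of variance 1/n, reproduces mean zero and the target variance profile S entry by entry. The variance profile below is an arbitrary placeholder satisfying s_ij >= t0/n; it is not taken from the setting of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and time scale (placeholders, not the asymptotic choices of the proof).
p, n = 200, 400
t0 = n ** (-1 / 3)

# Placeholder variance profile S = (s_ij), entries of order 1/n with s_ij >= t0/n.
S = (1.0 + 0.5 * rng.random((p, n))) / n

# W has independent centered Gaussian entries with Var(w_ij) = s_ij - t0/n;
# X has i.i.d. Gaussian entries of variance 1/n.
W = rng.standard_normal((p, n)) * np.sqrt(S - t0 / n)
X = rng.standard_normal((p, n)) / np.sqrt(n)
Y = W + np.sqrt(t0) * X            # the rectangular DBM started from W, evaluated at time t0

# Entrywise, E y_ij = 0 and Var(y_ij) = (s_ij - t0/n) + t0/n = s_ij.
# Empirical check for one entry, averaged over many independent copies.
m = 200_000
w00 = rng.standard_normal(m) * np.sqrt(S[0, 0] - t0 / n)
x00 = rng.standard_normal(m) / np.sqrt(n)
y00 = w00 + np.sqrt(t0) * x00
print("target variance s_00 :", S[0, 0])
print("empirical variance   :", y00.var())
print("empirical mean       :", y00.mean())
```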
Denoting z = E + iη and κ = |E − λ +,w |, by (A.11) and (A.12) we have that for λ +,w − c 0 E λ +,w and n −2/3+ϑ η 10, and for λ +,w E λ +,w + c 0 and n −2/3+ϑ η 10, Moreover, as a consequence of the square root behavior of ρ +,w around λ +,w as given by (III.3), it is easy to show that for any z = E + iη satisfying that λ +,w − c 0 E λ +,w + c 0 and 0 η c −1 0 for a small enough constant c 0 > 0. In this paper, given two sequences of positive values a n and b n , we use a n ∼ b n to mean that there exists a constant C > 0 so that C −1 a n b n Ca n . Finally, using (A.14) we get that |d j − γ j,w | ≺ j −1/3 n −2/3 , (A.29) SI.6 for any j such that λ +,w − c 0 /2 < γ j,w λ +,w . Combining the above estimates (A.26)-(A.29), we obtain that for some constants 0 < c V < c 0 /2 and C V > 0, the following estimates hold on a high probability event Ξ: for d 1 − c V E d 1 and η * η 10, Thus, on event Ξ, m V is η * -regular. Then, applying Theorem V.3 to Q = (W + √ t 0 X)(W + √ t 0 X) , we conclude that there exists a parameter γ n ∼ 1 such that for any fixed k ∈ N, where d ∼ means that the two random vectors have the same asymptotic distribution. Now, to conclude the proof, it remains to show that We recall that λ + is the right edge of the asymptotic density ρ, which by definition is also the rectangular free convolution of ρ w with MP law at time t 0 . On the other hand, for a given W , λ +,t0 is the right edge of ρ w,t , which is the rectangular free convolution of ρ w,0 := p −1 p i=1 δ di with MP law at time t 0 . Hence λ +,t0 and λ + are different quantities, but we can control their difference using (A.26), (A.27) and (A.29). Recalling the notation in (V.5), we denote and Finally, we briefly discuss the proof for the r > 0 case. In fact, its proof uses the same argument as above, except that we need to replace Lemma A.6 with Lemma A.7 and apply Theorem V.3 with i 0 = r + 1. For example, the equation (A.30) above should be replaced by γ n p 2/3 (λ i+r − λ +,t0 ) We omit the details. Finally, we complete the proofs of Corollaries III.3 and III.4 using Theorem III.2. Proof of Corollary III.3. In [74], the following edge universality result was proved under the assumptions of this corollary: for all (x 1 , x 2 , . . . , x k ) ∈ R k , where P G denotes the law for N with i.i.d. Gaussian entries of mean zero and variance n −1 . In particular, the condition (III.8) is not necessary if A or B is diagonal. Note that if N is Gaussian, then using the rotational invariance of multivariate Gaussian distribution, we can reduce Q = Y Y to a random Gram matrix satisfying (I.5) with R = 0 and variance matrix S = ((a i b j )/n). Furthermore, notice that (III.9) is stronger than (II.3) and equivalent to (A3) of Assumption II.1. Hence Y Y satisfies the assumptions of Theorem III.2 with r = 0, which immediately concludes the proof. SI.8 Remark A.9. Regarding Example II.6, suppose there are some spikes in the eigenvalue spectrum of A and B such that a 1 · · · a r a r+1 + τ and b 1 · · · b s b s+1 + τ for some r, s ∈ N and a small constant τ > 0. Then it is easy to check that min inf and the condition (II.10) cannot hold for all n. Hence the condition (II.10) rules out the existence of outliers. But the condition (II.10) sometimes is too strong because it does not allow for any spikes or isolated eigenvalues in the eigenvalue spectrum of A and B. (Here by an isolated eigenvalue of A, we mean an a i such that a i+1 + τ a i a i−1 − τ for some 1 i p and a small constant τ > 0. 
For the isolated eigenvalues of B, we have a similar definition.) On the other hand, in [20] we have found that a spike of A or B gives rise to an outlier only when it is above the BBP transition threshold. In fact, the following weaker regularity condition was used in [20], [74]. For m(z) in (III.1), we define another two holomorphic functions . Then, we say that the spectral edge λ + is regular if for some constant τ > 0, This condition not only allows for isolated eigenvalues of A and B, but also allows for zero a i 's or b j 's, that is, the lower bounds in (III.9) can be relaxed to some extent. Compared with conditions (II.10) and (II.11), the condition (A.41) is less explicit and harder to check, but it appears more often in the random matrix theory literature. Remark A.10. We make a few remarks on the technical assumptions (III.8) and q n 1/3+c φ in Corollaries III.3 and III.4, respectively. First, as mentioned in the proof of Corollary III.3, we need to use the edge universality result (A.40) from [74], where the vanishing third moment condition (III.8) is needed (see the discussion below Theorem 3.6 in [74]). More precisely, a continuous self-consistent comparison argument is used in [74] to show that the non-Gaussian case is close to the Gaussian case in the sense of limiting distributions of edge eigenvalues. For the comparison argument to work, we need to match the third moment of y ij with that of a standard Gaussian random variable, which leads to the condition (III.8). However, we believe that (III.8) is not necessary and can be removed with further theoretical development. Second, we believe that the condition q n 1/3+c φ in Corollary III.4 can be weakened to q n 1/6+c φ . In fact, following the arguments in [43], we expect that (A.42) can be sharpened to for some deterministic shift δ(q) = O(q −2 ). As long as q n 1/6+c , the term q −4 will be much smaller than the Tracy-Widom scale n −2/3 , and the Tracy-Widom law around λ + +δ(q) can be established. However, when q n 1/6 , the limiting distribution of the second largest eigenvalue (i.e., the largest edge eigenvalue) of the Erdős-Rényi graph will become Gaussian [41], [42]. We conjecture that a similar phenomenon also occurs for the model in Corollary III.4. Since the above directions are not the focus of this paper, we will pursue them in future works. APPENDIX B RECTANGULAR FREE CONVOLUTION AND LOCAL LAWS In this section, we collect some basic estimates on the rectangular free convolution ρ w,t and its Stieltjes transform m w,t for an η * -regular V = W W as in Definition V.1. Furthermore, we will state an (almost) sharp local law on the resolvent of Q t = (W + √ tX)(W + √ tX) , and a rigidity estimate on the rectangular DBM {λ i (t) : 1 i p}. These estimates will serve as important inputs for the detailed analysis of the rectangular DBM in Section C below. Most of the results in this section were proved in [19] under more general assumptions on X, and we will provide the exact reference for each of them. Without loss of generality, throughout this section, we assume that i 0 = 1. The general case with i 0 > 1 will be discussed in Remark B.15 below. SI.9 A. Properties of rectangular free convolution For simplicity, we denote b t (z) := 1 + c n tm w,t (z). 
It is easy to see from (V.4) that b t satisfies the following equation Recalling ζ t defined in (V.5), the equation (B.1) can be also rewritten as Recall that ρ w,t is the asymptotic probability density associated with m w,t , and let µ w,t be the corresponding probability measure. Moreover, we denote the support of µ w,t by S w,t , with a right-most edge at λ +,t . We first summarize some basic properties of these quantities, which have been proved in previous works [22], [23], [71]. Lemma B.1 (Existence and uniqueness of asymptotic density). The following properties hold for any t > 0. (i) There exists a unique solution m w,t to equation (V.4) satisfying that Im m w,t (z) > 0 and Im zm w,t (z) > 0 for z ∈ C + . (ii) For all x ∈ R \ {0}, lim η↓0 m w,t (x + iη) exists, and we denote it by m w,t (x). The function m w,t (x) is continuous on R \ {0}, and the measure µ w,t has a continuous density ρ w,t given by , lim η↓0 ζ t (x + iη) exists, and we denote it by ζ t (x). Moreover, we have Im ζ t (z) > 0 for z ∈ C + . (iv) For any z ∈ C + , we have Re b t (z) > 0 and |m w,t (z)| (c n t|z|) −1/2 . (v) The interior Int(S w,t ) of S w,t is given by The following lemma characterizes the right-most edge of S w,t . Using ζ t in (V.5) and the definition of b t , we can rewrite the equation (B.2) as Φ t (ζ t (z)) = z, where Φ t is defined in (V.6). We recall that by definition In [71], the authors characterize the support of µ ω,t and its edges using the local extrema of Φ t on R. where we have chosen the branch of the solution such that Lemma B.1 (iv) holds. Plugging (B.5) into (B.2), we find that (z, b t ) is a solution to (B.2) if and only if (z, ζ t ) is a solution to Since the two equations Φ t (ζ t (x)) = x and F t (x, ζ t (x)) = 0 are equivalent, from Lemma B.2 we can obtain the following characterization of the edges of S w,t . Now we use Lemma B.3 to derive an expression for the derivative ∂ t λ +,t , which will be used in the analysis of the rectangular DBM in Section C. Taking derivative of (B.6) with respect to t and using (B.7), we get that for z = λ +,t and ζ +,t := ζ t (λ +,t ), where we denoted F (t, z, ζ) ≡ F t (z, ζ). From this equation, we can solve that where we used (B.2) in the second step. Lemma B.4 (Lemma 3.7 of [19]). Suppose V = W W is η * -regular and t satisfies (B.10). Then, we have ζ +,t λ + and The following lemma describes the square root behavior of the asymptotic density ρ w,t . We also need to control the derivative ∂ z m w,t (z). First, note that with the definition of m w,t , we can get the trivial estimate Moreover, we claim the following estimates. SI.11 Lemma B.6 (Lemma 3.20 of [19]). Suppose V = W W is η * -regular and t satisfies (B.10). Consider any z = E + iη with κ := |E − λ + | 3c V /4 and 0 η 10. If κ + η t 2 , then we have that If κ + η t 2 , we have that for E λ +,t , |∂ z m w,t (z)| (κ + η) −1/2 , (B. 17) and for E λ +,t , Finally, in Section C, we will need to compare the edge behaviors of two free rectangular convolutions satisfying certain matching properties. Specifically, let t 0 = N −1/3+ω0 for some constant 0 < ω 0 < 1/3. We consider two probability measures ρ 1 and ρ 2 having densities on the interval [0, 2ψ] with ψ ∼ 1 being a positive constant, such that for some constant c ψ > 0 the following properties hold: and Let ρ 1,t and ρ 2,t be the free rectangular convolutions of the MP law with ρ 1 and ρ 2 , respectively. 
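Before comparing the two convolved measures, a brief numerical aside on the square root edge behavior noted above: in the simplest special case W = 0 the spectrum of Q_t is a rescaled Marchenko-Pastur law, whose right-edge behavior ρ(x) ~ sqrt(λ+ - x) can be checked directly against a simulated spectrum. The sketch below is an illustration only and is not used in the proofs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Special case W = 0: the eigenvalues of X X^T, with X a p x n matrix of
# i.i.d. N(0, 1/n) entries, follow the Marchenko-Pastur law with ratio c = p/n.
p, n = 2000, 4000
c = p / n
X = rng.standard_normal((p, n)) / np.sqrt(n)
eigs = np.linalg.eigvalsh(X @ X.T)

lam_plus = (1 + np.sqrt(c)) ** 2          # right edge of the MP law

def mp_density(x, c):
    """Marchenko-Pastur density; it vanishes like sqrt(lam_plus - x) at the right edge."""
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) / (2 * np.pi * c * x[inside])
    return out

# Empirical density of the eigenvalue measure (1/p normalization) near the right edge.
bins = np.linspace(lam_plus - 0.5, lam_plus, 11)
counts, _ = np.histogram(eigs, bins=bins)
emp = counts / (p * np.diff(bins))
centers = 0.5 * (bins[:-1] + bins[1:])
for x, h, rho in zip(centers, emp, mp_density(centers, c)):
    print(f"E = {x:.3f}   empirical {h:.3f}   MP {rho:.3f}")
```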
Moreover, the Stieltjes transform of ρ i,t , denoted by m i,t , satisfies a similar equation as in (B.2): For i = 1, 2, let λ +,i (t) be the right edge of ρ i,t , and denote ζ +,i (t) := ζ i,t (λ +,i (t)). Due to the matching condition (B.19), we can show that ζ +,1 (t) and ζ +,2 (t) are close to each other with a distance of order o(t 2 ) for t t 0 . The following matching estimates will play an important role in constructing the short-range approximation of the rectangular DBM in Section C-B. B. Local laws In this section, we state the local laws and rigidity estimates for the rectangular DBM considered in this paper. We first consider t satisfying (B.10). Define the following (p + n) × (p + n) symmetric block matrix Definition B.9 (Resolvents). We define the resolvent of H t as For Q 1,t := (W + √ tX)(W + √ tX) and Q 2,t := (W + √ tX) (W + √ tX), we define the resolvents We denote the empirical spectral density ρ 1,t of Q 1,t and its transform by For any constant ϑ > 0, we define the spectral domain where recall that λ +,t is the right-edge of ρ w,t . The following theorem gives the local laws on the domain D ϑ . Theorem B.10 (Theorem 2.7 of [19]). Suppose V = W W is η * -regular, and t satisfies (B.10). For any constant ϑ > 0, the following estimates hold uniformly in z ∈ D ϑ : As a consequence of this theorem, we can obtain the following rigidity estimate for the eigenvalues λ 1 λ 2 · · · λ p of Q 1,t near the right edge λ +,t . We define the quantiles of ρ w,t as in (A.13): Lemma B.11. Suppose the local laws (B.30) and (B.31) hold. Then, for any j such that λ +,t − c V /2 < γ j λ +,t , we have Proof. The estimate (B.33) follows from the local laws (B.30) and (B.31) combined with a standard argument using Helffer-Sjöstrand calculus. The details are already given in [28], [35], [66]. Then, we present the local laws for the case where W already satisfies a local law. SI.13 We denote the rectangular free convolution of ρ c with MP law at time t by ρ c,t , and its Stieltjes transform by m c,t . We also denote the right edge of ρ c,t by λ c,t and define κ c := |E − λ c,t |. Then we define the following spectral domain Then, we have the following local law on the domain D ϑ,c . Theorem B.13 (Theorem 2.10 of [19]). Suppose Assumption B.12 holds. For any fixed constants ϑ, δ > 0, the following estimates hold uniformly in z ∈ D ϑ,c and 0 t n −δ : • for E λ c,t , we have Again using Theorem B.13, we can prove the following rigidity estimate for the eigenvalues of Q 1,t near the right edge λ c,t . We define the quantiles γ c j as in (B.32) but with ρ w,t replaced by ρ c,t . Lemma B.14. Suppose the local laws (B.35) and (B.36) hold. Then, for any j such that λ c,t − c V /2 < γ c j λ c,t , we have Proof. The estimate (B.37) follows from the local laws (B.35) and (B.36) combined with a standard argument using Helffer-Sjöstrand calculus. The details are already given in [28], [35], [66]. Remark B.15. We now briefly discuss how to handle the general case with i 0 > 1. When i 0 > 1, the i 0 − 1 outliers will give rise to several small peaks of ρ w,t around the spikes d i , 1 i i 0 − 1. We can exclude them and only consider the bulk component of ρ w,t with a right edge λ +,t that is close to d i0 . 
Then, all the results in this section still hold for the i 0 > 1 case with λ + := d i0 except that c V needs to be chosen sufficiently small so that the spectral domains D ϑ and D ϑ,c are away from the spikes d i , 1 i i 0 − 1, by a distance of order 1 and j will be restricted to j i 0 in Lemmas B.11 and B.14. APPENDIX C PROOF OF THEOREM V.3 This section is devoted to the proof of Theorem V.3. For simplicity of presentation, we only provide the detailed proof for the i 0 = 1 case without outliers. The general case with i 0 > 1 will be discussed in Remark C.2 below. In the proof, we fix two time scales t 0 = n ω0 /n 1/3 , t 1 = n ω1 /n 1/3 , (C.1) for some constants ω 0 and ω 1 satisfying 1/3 − φ * /2 + ε/2 ω 0 1/3 − ε/2 and 0 < ω 1 < ω 0 /100. The reason for choosing these two scales is the same as the one in [51]. That is, we first run the DBM for t 0 amount of time to regularize the global eigenvalue density, and then for the DBM from t 0 to t 0 + t 1 , we will show that the local statistics of the edge eigenvalues converge to the Tracy-Widom law. Since t 1 t 0 , for the time period t 0 t t 0 + t 1 the locations of the quantiles defined in (B.32) remain approximately constant. The eigenvalue dynamics of Q t = (W + √ tX)(W + √ tX) with respect to t is described by the rectangular Dyson Brownian motion defined as follows. Let B i (t), i = 1, · · · , p, be independent standard Brownian motions. For t 0, we define the process {λ i (t) : 1 i p} as the unique strong solution to the following system of SDEs [14, Appendix C]: with initial data In other words, the initial data is chosen as the eigenvalues of the regularized matrix Q t0 , and γ w is chosen to match the edge eigenvalue gaps of Q t0 with those of the Wigner matrices. Here we recall that the asymptotic density ρ w,t is given by (B.13), while the Wigner semicircle law has density π −1 (2 − x) + + O((2 − x) + ) around 2. The system of SDEs (C.2) for SI.14 the rectangular DBM is defined in a way such that for any time t > 0, the process {λ i (t)} has the same joint distribution as the eigenvalues of the matrix We shall denote the rectangular free convolution of the empirical spectral density of √ γ w V with MP law at time γ w t 0 + t by ρ λ,t , which gives the asymptotic ESD for γ w Q t0+t/γw . Moreover, we use m λ,t to denote the Stieltjes transform of ρ λ,t . It is easy to see that the right edge of ρ λ,t is given by where recall that λ +,t denotes the right edge of ρ w,t at time t. Note that the scaling factor γ w is fixed throughout the evolution, but the right edge evolves in time. We would like to compare the edge eigenvalue statistics of the rectangular DBM {λ i (t)} with those of a carefully chosen deformed Wishart matrix. We define a p × p sample covariance matrix UU , where U is a random matrix of the form U := Σ 1/2 X . Here X is a p × n random matrix with i.i.d. Gaussian entries of mean zero and variance n −1 , and Σ = diag(σ 1 , · · · , σ p ) is a diagonal population covariance matrix. Recall that the asymptotic ESD of UU , denoted as ρ µ,0 , is given by the multiplicative free convolution of the Marchenko-Pastur law and the ESD of Σ, which is also referred to as the deformed Marchenko-Pastur law [58]. We choose Σ such that ρ µ,0 matches ρ λ,0 near the right edge E λ (0), that is, ρ µ,0 (x) satisfies that for x around E λ (0). Note that there are only two parameters to match, i.e. 
the right spectral edge and the curvature of the spectral density at the right edge, but there are a lot of degrees of freedom in Σ for tuning to ensure that (C.3) holds. Now we define a rectangular DBM with initial data {µ i } being the eigenvalues of UU . More precisely, for t 0 we define the process {µ i (t) : 1 i p} as the unique strong solution to the following system of SDEs: with initial data µ i (0) := µ i (UU ). For any t > 0 the process {µ i (t)} has the same joint distribution as the eigenvalues of the matrix (U + √ tX)(U + √ tX) , which is still a sample covariance matrix with population covariance Σ + tI. In particular, by [53] we know that the edge eigenvalues of {µ i (t)} obey the Tracy-Widom distribution asymptotically. We will denote the rectangular free convolution of ρ µ,0 with MP law at time t by ρ µ,t , which gives the asymptotic ESD for (U + √ tX)(U + √ tX) . Furthermore, we denote the Stieltjes transform of ρ µ,t by m µ,t , and the right edge of ρ µ,t by E µ (t). Note that we have E µ (0) = E λ (0) by (C.3). The main result of this section is the following comparison theorem. Theorem C.1. Fix any integer k ∈ N. Under the assumptions of Theorem V.3, there exists a constant ε > 0 such that With Theorem C.1, we can conclude Theorem V.3. Remark C.2. We now make some remarks about the general case with i 0 > 1. Its proof is almost the same as that for the i 0 = 1 case, except that we need to apply some standard arguments in the study of DBM regarding the reindexing of the eigenvalues and the padding with dummy particles. More precisely, in equation (C.5), we should control Then, in defining the two rectangular DBMs, we add to the initial data of the SDEs some dummy particles, which are away from the edge eigenvalues by a distance of order N C for a large constant C > 0. These dummy particles have a negligible effect on the evolution of edge eigenvalues, and hence are irrelevant to our final results. But they allow us to take the difference λ i+i0−1 − µ i for all 1 i p. We refer the reader to equations (3.10)-(3.12) of [51] for more details. A. Interpolating processes To estimate the difference λ i (t) − µ i (t), we study the following interpolating processes for 0 α 1: with the interpolated initial data z i (0, α) := αλ i (0) + (1 − α)µ i (0). Correspondingly, we denote the Stieltjes transform of the ESD of {z i (t, α)} by Note that by Lemma B.5, due to the choice of γ w and (C.3), we have that for a sufficiently small constant τ > 0. Let γ µ,i (t) and γ λ,i (t) be the quantiles of ρ µ,t and ρ λ,t defined as Here to get (C.10), we used a standard stochastic continuity argument to pass from fixed times t to all times. Roughly speaking, taking a sequence of fixed times t k = 10t 1 · k/n C for a large constant C > 0, by Lemma B.11 and a simple union bound we get that sup Then, we can show that with high probability, the difference |z i (t, 0) − z i (t k , 0)| + |z i (t, 1) − z i (t k , 1)| is small enough for all t k t t k+1 using a simple continuity estimate. We refer the reader to Appendix B of [50] for more details. Combining (C.8) and (C.9), we can get the following simple control on the quantiles near the edge. Proof. For simplicity, we denote x := E µ (0) − γ µ,i (0) and y := E λ (0) − γ λ,i (0). Without loss of generality, we assume that x y. Note that by the square root behaviors of ρ µ,0 and ρ λ,0 near the right edges, it is easy to get that x ∼ y ∼ i 2/3 n −2/3 for i 2. Now using (C.8) and (C.9), we obtain that which gives |y 3/2 − x 3/2 | x 5/2 /t 2 0 . 
From this estimate, we get that |y − x| x 2 /t 2 0 , which concludes the proof together with the facts x ∼ i 2/3 n −2/3 and E λ (0) = E µ (0). Next, we will construct a collection of measures that match the asymptotic densities of the interpolating ensembles and have well-behaved square root densities near the right edge. Our main goal is that for each 0 α 1, we have a density which matches the distribution of {z i (0, α)} approximately, and with which we can take a rectangular free convolution for any 0 t t 1 . By inverse function theorem, we can calculate that Combining it with (C.8), we immediately find that , 0 E τ t 2 0 , (C.14) for a sufficiently small constant τ > 0, where E + (0, α) is the right edge of ρ(E, α). We now construct a (random) measure This measure is defined in a way such that its Stieltjes transform is close to m 0 (z, α) in (C.7). Moreover, the motivation behind this definition is as follows. We need a deterministic density that behaves well around the right edge in order to use the results in Section B. But we do not have any estimate on the density far away from the edge. Hence for the remaining eigenvalues that are away from the right edge by a distance of order 1, we just take δ functions. Although the sum of delta measures is random, its effect on deterministic quantities that we are interested in is negligible. Let ρ t (E, α) be the rectangular free convolution of dµ(E, α) with the MP law at time t. Moreover, we denote its Stieltjes transform by m t (z, α) and its right edge by E + (t, α). Some key properties of ρ t (E, α) and m t (z, α) have been given in Section B. In particular, we know that ρ t (E, α) has a square root behavior near E + (t, α). Although ρ t (E, α) is random, with the results in Section B we can provide a deterministic control on it. Lemma C.4. Let ε, τ > 0 be sufficiently small constants. For 0 E τ n −2ε t 2 0 , we have that for any constant D > 0, Moreover, for a small constant c τ > 0 we have that where we introduced the short-hand notation γ i (t, α) Proof. The estimates (C.15) follows directly from (B.24). The estimate (C.16) follows from (C.15) using the same argument as in the proof of (C.12). With the eigenvalues rigidity (C.10) and the construction of dµ(E, α), we can verify that |m 0 (z, α) − m 0 (z, α)| satisfies Assumption B.12. Then, by Lemma B.14, we have the following rigidity estimate of {z i (t, α)}. As before, we define the quantiles γ i (t, α) by Lemma C.5. There exists a constant c * > 0 so that Proof. This estimate follows from Lemma B.14 combined with a standard stochastic continuity argument in t. SI.17 Using (B.9) and (B.14), we can calculate that where we used the notations α). In the proof, we will also need to use the following function defined for E ∈ [−τ, τ ] for a small enough constant τ > 0: Next, we prove some matching estimates for the function Ψ t (E, α) in Lemma C.6. The proof of this lemma explores a rather delicate cancellation in Ψ t (E, α). Remark C.7. Later we will only consider the dynamics after t = n −C for some large constant C > 0, so that the n −D terms in (C.20) and (C.21) are negligible as long as D is large enough. Plugging it into (C.29), we conclude (C.26). The estimate (C.27) can be proved in the same way. In later proof, we will also need to study the evolution of the singular values y i (t, α) := z i (t, α). It is easy to see that the asymptotic density for y i (t, α) is given by SI.19 Similarly we can define f λ,t and f µ,t . 
Moreover, the quantiles of f t (E, α) are exactly given by γ i (t, α). Now with Lemma C.4 and Lemma C.5, we can easily conclude the following lemma. B. Short-range approximation As in [51], we will build a short-range approximation for the interpolating processes {z i (t, α)}, which is based on the simple intuition that the eigenvalues that are far away from the edge have negligible effect on the edge eigenvalues. It turns out that it is more convenient to study the SDEs for singular values y i (t, α). By Ito's formula, we get that for 1 i p, dt. (C.33) Note that the diffusion term now has a constant coefficient. For convenience, we introduce the shifted processes Clearly, we have that z i (t, α) ∼ y i (t, α). We see that y i (t, α) obeys the SDE where ∂ t E + (t, α) is given by (B.9). We now define a "short-range" set of indices A ⊂ [1, p] × [1, p]. Let A be a symmetric set of indices in the sense that (i, j) ∈ A if and only if (j, i) ∈ A, and choose a parameter := n ω , where ω > 0 is a constant that will be specified later. Then we define for i * := c * n, where c * is the constant as appeared in Lemma C.5. It is easy to check that for each i, the set {j : (i, j) ∈ A} consists of consecutive integers. For convenience, we introduce the following short-hand notations where we recall that γ i (t, α) and γ i (t, α) are defined below (C. 16) and (C.32), respectively. Finally, we denote where c V > 0 is a small constant depending only on c V . SI.20 Let ω a > 0 be a constant that will be specified later. The short-range approximation to y is a process y defined as the solution to the following SDEs for t n −C0 with the same initial data (recall Remark C.7) where C 0 is an absolute constant (for example, C 0 = 100 will be more than enough). For 1 i n ωa , the SDEs are for n ωa < i i * /2, the SDEs are for i * /2 < i p, the SDEs are Corresponding to (C.34), we denote We now choose the hierarchy of the scale parameters in the following quantities: t 0 = n −1/3+ω0 , t 1 = n −1/3+ω1 , = n ω , and n ωa . In fact, we will choose the constants ω 0 , ω 1 , ω and ω a such that for some constant C > 0 that is as large as needed. Here the purpose of the scale is to cut off the effect of the initial data far away from the right edge, since y i (0, α = 1) and y i (0, α = 0) only match for small i. Moreover, by choosing scale ω a ω 0 , we can make use of the matching estimates in Lemma C.6 to show that the drifting terms in the SDEs with 1 i n ωa are approximately α independent. Next, we show that y i (t, α) are good approximations to y i (t, α). Before that, we recall the semigroup approach for first order parabolic PDE. Let Ω be a real Banach space with a given norm and L(Ω) be the Banach algebra of all linear continuous mappings. We say a family of operators {T (t) : t 0} in L(Ω) is a semigroup if T (0) = id, and T (t + s) = T (t)T (s) for all t, s 0. For a detailed discussion of semigroups of operators, we refer the readers to [8]. Definition C.10. For any operator W ∈ L(R p ), we denote U W as the semigroup associated with W, i.e., W is the infinitesimal generator of U W . Moreover, we denote U W (s, t) as the semigroup from s to t, that is, U W (s, s) = id and ∂ t U W (s, t) = W(t)U W (s, t), for any t s. For the rest of this subsection, we prove the following short-range approximation estimate. Lemma C.11. With high probability, we have that for any constant ε > 0, i | n C with high probability. (C.64) Next, we define a long range cut-off of u. 
Fix a small constant δ v > 0 and let v be the solution to the following homogeneous equation Then, we have the following proposition, which essentially states that the u i 's with indices far away from the edge have a negligible effect on the solution. Proposition C.13. With high probability, we have One can see that Proposition C.13 is an immediate consequence of the following finite speed of propagation estimate, whose proof is postponed to Section C-D. Lemma C.14. For any small constant δ > 0, we have that for a 3 n 2δ and b 3 n δ , sup n −C 0 s t 10t1 U L ab (s, t) + U L ba (s, t) n −D with high probability, for any large constant D > 0. Remark C. 15. In fact, we have U L ab (s, t) 0 and U L ba (s, t) 0 by maximum principle. More precisely, If v i (s) 0 for all i at time s, we claim that v i (t) 0 for all i at any time t s. To see this, at any time t ∈ [s, t], suppose v j (t ) = min{v i (t ) : 1 i p} is the smallest entry of v(t ). Then with (C.61), we can check that ∂ t v j (t ) = (Bv(t )) j 0, i.e. the smallest entry of v will always increase. Hence the entries of v can never be negative at any time t s. Another key ingredient is the following energy estimate. We postpone its proof until we complete the proof of Theorem C.1. Here we have fixed the starting time point to be n −C0 , but the same conclusion holds for any other starting time by the semigroup property. SI.26 Proposition C. 16. For any small constant δ 1 > 0, consider a vector w ∈ R p with w i = 0 for i 3 n δ1 . Then, for any constants ε, η > 0 and fixed q 1, there exists a constant C q > 0 independent of ε and η such that for all 2n −C0 t 2t 1 , With all the above preparations, we are now ready to give the proof of Theorem C.1. Finally, using Proposition C.16 with q = 4, we find that Inserting it into (C.68) and further into (C.67), we conclude the proof. The proof of Proposition C.16 is almost the same as the one for Lemma 3.11 in [51], so we only give an outline of it. Proof of Proposition C. 16. The proof relies on Lemma C.14 and the estimates in the following lemma. Lemma C.17. Fix a constant 0 < δ 1 < ω − ω 1 . Let w ∈ R p be a vector such that w i = 0 for i 3 n δ1 . For any constants η, ε > 0, there is a constant C > 0 independent of ε and η, and a constant c η > 0 such that the following estimates hold with high probability for all n −C0 s t 5t 1 : U L (s, t)w 2 n Cη+ε c η n 1/3 (t − s) and (U L (s, t)) w 2 n Cη+ε c η n 1/3 (t − s) Now, we complete the proof of Proposition C.16. Fix constants 0 < δ 1 < δ 2 < ω − ω 1 . We define the indicator function X 2 (i) = 1 {1 i 3 n δ 2 } and let X 2 be the associated digonal operator. For any v ∈ R p with v 1 = 1, we decompose that where we have abbreviated U L ≡ U L (n −C0 , t). For the second term, with Lemma C.14, we obtain that w, (U L ) (1 − X 2 )v n −100 w 1 v 1 n −99 w 2 v 1 with high probability. For the first term, with Lemma C.17 and Cauchy-Schwarz inequality, we get that for any constant η > 0, By 1 -∞ duality and using t 2n −C0 , we find that Consequently, by the semigroup property, we find that U L (n −C0 , t)w ∞ = U L (2t/3, t)U L (n −C0 , 2t/3)w ∞ C(η) n Cη+ε n 1/3 t where we used Lemma C.17 again in the last step. Finally, the estimate (C.66) for general q follows from the standard interpolation argument. D. Proof of Lemma C.14 Finally, in this section, we prove the finite speed of propagation estimate, Lemma C.14. For simplicity of notations, we shift the time such that the starting time point is t = 0. We first prove a result for fixed s. Lemma C. 18. 
Fix a small constant 0 < δ < ω − ω 1 . For any a 3 n δ , b 3 n δ /2 and fixed 0 s 10t 1 , we have that for any large constant D > 0, sup t:s t 10t1 U L ab (s, t) + U L ba (s, t) n −D , with high probability. We postpone its proof until we complete the proof of Lemma C.14. We need to use the following lemma in order to extend the result in Lemma C.18 to all 0 s t 10t 1 simultaneously. Lemma C. 19. Let u ∈ R p be a solution of ∂ t u = Lu with u i (0) 0 for 1 i p. Then, for 0 t 10t 1 , we have Proof. Summing over i and using i (Bu) i = 0, we get that We now bound (C.62) and (C.63). Using (C.30) and Lemma C.11, we have that with high probability, E − E + (t, 0) + ( E + (t, 0) − y i (t, α)) 2 E + n −2/3+2ω , 1 i n ωa , E ∈ I c i (t, 0). Together with the estimate ρ t (E + (t, 0) − E, 0) ∼ √ E, we get that for 1 i n ωa , We can get the same bound for (C.63). Then, applying Gronwall's inequality to − Cn 1/3−ω i u i ∂ t i u i 0, we can conclude the proof. Now, we can complete the proof of Lemma C.14. SI.28 Proof of Lemma C.14. Fix any constant 0 < ε < δ, a 3 n 2δ and b 3 n δ . By the semigroup property, we have U L bi (n −C0 , t) = j U L bj (s, t)U L ji (n −C0 , s) U L ba (s, t)U L ai (n −C0 , s). (C.71) By Lemma C. 19, we find that i U L ai (n −C0 , s) 1/2. Moreover, by Lemma C.18 we have that U L ai (n −C0 , s) n −100 for any i 3 n δ+ε . This implies that there exists an i * 3 n δ+ε such that U L ai * (n −C0 , s) (4n) −1 . However, by Lemma C.18 we have that U L bi * (0, t) n −D for any large constant D > 0. Thus picking i = i * in (C.71), we get that U L ba (s, t) n −D+2 . This finishes the proof for the estimate on U L ba (s, t). The estimate on U L ab (s, t) can be proved in a similar way. It remains to prove Lemma C.18. The strategy was first developed in [14], and later used in [50], [51] to study the symmetric DBM for Wigner type matrices. Our proof is similar to the ones for [50,Lemma 4.2] and [51, Lemma 4.1], so we will not write down all the details. For the rest of the proof, we only consider times with t < τ . We will show that with a suitable choice of ν, we actually have τ = 10t 1 with high probability. We now deal with each term in (C.72)-(C.75). First, (C.72) is a dissipative term, so it only decreases the size of F (t). By Corollary C.12, we see that ψ ( y i − γ q ) = 0 when i > C 3 n δ for a large enough constant C > 0. Moreover, if i C 3 n δ and (i, j) ∈ A, then j C 3 n δ for some constant C > 0 depending on C . Thus the nonzero terms in (C.73) must satisfy that i, j C 3 n δ for a large enough constant C > 0. Then, by Corollary C.12, for i, j C 3 n δ satisfying (i, j) ∈ A, we have | y i − y j | 2 n −2/3+δ/3 . The term (C.75) can be easily bounded as (C.75) ν 2 n + ν −2 n 1/3+2δ/3 F (t)dt. Then, by the definition of F (t) and Markov's inequality, we obtain that U L iq * (0, t) n −D for any large constant D > 0 if i 3 n δ /2 and q * 3 n δ . The proof for U L q * i is the same by setting ψ → −ψ.
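To complement the estimates above, here is a small Monte Carlo check of the n^(-2/3) edge-fluctuation scale around which the whole argument is organized, in the simplest Gaussian null case (identity population covariance, aspect ratio c = 1/2). It only illustrates the scaling; it does not verify the Tracy-Widom limit itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def edge_std(n, c=0.5, reps=200, rng=rng):
    """Standard deviation of the largest eigenvalue of X X^T,
    where X is p x n with i.i.d. N(0, 1/n) entries and p = c * n."""
    p = int(c * n)
    lam1 = np.empty(reps)
    for k in range(reps):
        X = rng.standard_normal((p, n)) / np.sqrt(n)
        lam1[k] = np.linalg.eigvalsh(X @ X.T)[-1]
    return lam1.std()

for n in (200, 400, 800):
    s = edge_std(n)
    # If the fluctuations live on the Tracy-Widom scale n^(-2/3),
    # then n^(2/3) * std is roughly independent of n.
    print(f"n = {n:4d}   std(lam_1) = {s:.4f}   n^(2/3) * std = {n ** (2 / 3) * s:.3f}")
```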
Accurate Characterization of the Pore Volume in Microporous Crystalline Materials Pore volume is one of the main properties for the characterization of microporous crystals. It is experimentally measurable, and it can also be obtained from the refined unit cell by a number of computational techniques. In this work, we assess the accuracy and the discrepancies between the different computational methods which are commonly used for this purpose, i.e, geometric, helium, and probe center pore volumes, by studying a database of more than 5000 frameworks. We developed a new technique to fully characterize the internal void of a microporous material and to compute the probe-accessible and -occupiable pore volume. We show that, unlike the other definitions of pore volume, the occupiable pore volume can be directly related to the experimentally measured pore volumes from nitrogen isotherms. ■ INTRODUCTION The internal void volume is an important characteristic of microporous materials, as it will determine their permeability to guest molecules, the adsorption capacity, and many other properties that can be engineered for the industrial applications that involve the use of these material, such as gas separation, 1 gas storage, 2 catalysis, 3 or drug delivery. 4 The field of microporous materials used to be dominated by zeolites, but recently, studies on new classes of microporous materials have been published. Examples include metal organic frameworks (MOFs), 5 covalent organic frameworks (COFs), 6 zeolitic imidazolate frameworks (ZIFs), 7 porous polymer networks (PPNs), 8 etc. For each of these classes, a large number of different materials can be obtained by combining different ligands and nodes, leading to millions of frameworks, each with different topologies, pore shapes, and chemistries. For example, at present, over 10000 MOFs and related porous materials have been synthesized, 9 and large databases of computationally predicted structures are rapidly expanding. 10−12 All the main applications for porous materials involve the adsorption of guest molecules in the pores. For this reason, it is of critical importance to correctly characterize the pore volumes of these materials as this is the first, and often the only, step to characterize a material. The internal void volume of a porous material can be determined computationally from the crystal structure. 13,14 This theoretical value of the pore volume can be compared with the experimental pore volume derived, for example, from the nitrogen uptake at low temperature. 15 The comparison of the two values can give some insight into the characteristics of the synthesized crystal. For example, if the experimentally measured void volume is smaller than the computed one, this can be symptomatic of an incomplete desolvation (solvent molecules still trapped inside the pore), limited permeability at the surface, or defects in the crystal. In addition, deviations of the theoretical pore volume from the experimental one can also indicate that the synthesized material is a poor representation of the ideal crystal structure. In this paper, we review a number of different methods employed to compute the void fraction. 13,14,16−18 We show that, because of the different assumptions, each method computes a (slightly) different portion of the volume. For some particular cases, these differences can be large and, more importantly, the theoretical pore volume cannot be compared with the experimental pore volume. 
One of the reasons for these differences is that the definition of pore volume depends on the type of probe that is used to compute it. To address this issue, we introduce the "probe-accessible and -occupiable volume". It represents the internal free space of the material where a spherical probe can have access and that it can occupy. We will highlight why this measurement can be meaningfully compared with experimental data. To illustrate the importance of this concept of probeoccupiable volume, we introduce a simple but representative model of a microporous material to test our algorithm. Then we investigate the discrepancies in the values of the volume as computed by different methods for a set of more than 5000 three-dimensional MOFs from the Cambridge Structure Database (as refined in the CoRE MOF database 19 ). Finally, we demonstrate some of the practical consequences by considering a sample of 10 structures, for which we can directly compare the computed pore volume with available experimental data. ■ METHODS Experimental Measurement of the Pore Volume. The internal free volume of a microporous material can be experimentally measured by determining the maximum loading of a gas in the pores of the material. Nitrogen is commonly used for this purpose because of its small size and because it weakly interacts with the framework. In addition, its normal boiling point is sufficiently low (77 K) that condensation at the exterior of the pores is avoided before the full saturation inside the pores. The pore volume is obtained under the assumption of validity of the Gurvich rule: 20,21 the density of the saturated nitrogen in the pores is assumed equal to its liquid density regardless of the shape of the internal void network and, because of the weak interactions, regardless of the chemistry of the framework. The pore volume (v pore ) and the void fraction (θ) are computed from where v pore is commonly expressed in cubic centimeters per gram of crystal, n N 2 ads,satd is the specific amount of nitrogen adsorbed (g of nitrogen/g of crystal), and ρ N 2 liq and ρ cryst are the densities of the liquid nitrogen (0.808 g/cm liq 3 ) and of the material, respectively. The commonly used protocol to determine the pore volume involves measuring the nitrogen uptake just before it starts to condense outside the material, i.e., 0.9P/P 0 , 21 with P 0 being the saturation pressure of the probing gas (1 atm for pure N 2 ). To compare this pore volume with a theoretical value obtained from the crystal structure, it is important to realize that this experimentally measured value does not consider all the small interstices between the atoms where the nitrogen molecule cannot fit, nor the nonaccessible pores, i.e., the pores connected only by channels too narrow for a nitrogen molecule to enter. Computational Methods To Assess the Pore Volume from the Unit Cell. To compute the pore volume of a microporous crystal from the knowledge of the atomic structure of the unit cell, there are a number of different methods that are currently employed. 13,14,16−18 Each one computes slightly different portions of the full internal volume, as shown in Figure 1. Here we propose a list of precise definitions to distinguish the volume computed with each method. For all these definitions, the pore volume can be further characterized either as accessible (Ac, part of an accessible network) or as nonaccessible (NAc, isolated pocket). (1) Geometric pore volume (Gm). 
The Gm is defined as all the volume of the unit cell which is not overlapping with the atoms of the crystal. In Figure 1, this is the nonblack area. (2) Probe center pore volume (PC). The PC is defined as the volume that the center of a spherical probe can occupy. In Figure 1, this is the sum of the dark green area (for pores that are accessible from the outside) and dark orange area (for pores that are nonaccessible from the outside). (3) Helium pore volume (He). In the definition of the PC volume, we assume hard-sphere interactions between the probe atoms and the atoms of the pore. In the definition of the helium pore volume, these hard-core interactions are replaced by a more realistic intermolecular potential, which makes this volume dependent on the temperature assumed for the calculation. In Figure 1, the He volume is represented by the same colors as the PC volume (dark green and dark orange). (4) Probe-occupiable pore volume (PO). This is a definition which we introduce here to ensure that the theoretical pore volume matches the pore volume obtained experimentally from the nitrogen isotherms. The experimental definition assumes that we can take the bulk density of the gas and compute the volume from the number of adsorbed gas molecules per unit volume. This volume, however, has no notion of atoms and should be defined as the entire volume enclosing all the adsorbed gas atoms. Therefore, in Figure 1, this volume has to include the light green (for accessible pores) and light orange (for nonaccessible pores) areas in addition to the dark green and dark orange areas. If we have a system with large pores, the difference between the Gm and PO volumes is small, but for micropores, however, this difference can be significant. These pore volumes can be multiplied by the density of the material to be converted to the corresponding void fractions. The frameworks are assumed rigid, i.e., considering the atoms frozen in their crystallographic positions. For the geometric pore volume (Gm), we assume that the atoms can be approximated as spheres with a conventional radius, depending on the atom type and which represents their electron cloud, i.e., the van der Waal (vdW) radius. The analytical calculation of the Gm pore volume needs to consider all the many-body overlaps between the atoms. Consequently, the most efficient solution to obtain the geometric pore volume is to perform a Monte Carlo test. A number of points, randomly displaced in the unit cell or taken on a 3D grid, are evaluated: if a point is overlapping with an atom, i.e., the distance of the point with that atom is less than its vdW radius, then a value of 0 is Qualitative two-dimensional model of the unit cell of a microporous material, permeable to a spherical probe (red). Each color corresponds to a different category of volume. In the table, the color coding is explained and a summary of which portions of the volume are considered for each method is given: geometric pore volume (Gm), accessible and nonaccessible probe center pore volume (Ac-PC, NAc-PC), accessible and nonaccessible probe-occupiable pore volume (Ac-PO, NAc-PO), and solvent-free Connolly volume. assigned to that point. A value of 1 is assigned otherwise. Therefore, the Gm void fraction θ Gm of the crystal from N sample points is obtained as Consequently, the geometric pore volume can be obtained by dividing the void fraction by the density of the framework (eq 2). 
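the fraction of non-overlapping sample points, θ Gm ≈ N_free/N, i.e. the average of the assigned 0/1 values, where N_free is the number of sample points that do not overlap any framework atom. A minimal Monte Carlo sketch of this test follows. The framework (eight carbon-like atoms on a simple cubic lattice inside a 10 Å orthorhombic cell) and the van der Waals radius are illustrative placeholders, and the probe_radius argument anticipates the probe center calculation described below: probe_radius = 0 recovers the Gm value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholder framework: 8 carbon-like atoms on a simple cubic
# lattice (spacing 5 angstrom) inside a 10 angstrom orthorhombic cell.
cell = np.array([10.0, 10.0, 10.0])
atoms = np.array([[i, j, k] for i in (0.0, 5.0) for j in (0.0, 5.0) for k in (0.0, 5.0)])
radii = np.full(len(atoms), 1.70)            # placeholder vdW radius (angstrom)

def void_fraction(n_samples=20_000, probe_radius=0.0, rng=rng):
    """Monte Carlo void fraction.

    probe_radius = 0 gives the geometric (Gm) void fraction; a positive value
    inflates every atomic radius by the probe radius, which gives the
    probe center (PC) void fraction instead.
    """
    points = rng.random((n_samples, 3)) * cell
    free = 0
    for x in points:
        d = atoms - x
        d -= cell * np.round(d / cell)        # minimum-image convention (PBC)
        if np.all(np.linalg.norm(d, axis=1) >= radii + probe_radius):
            free += 1                         # value 1: no overlap with any atom
    return free / n_samples                   # average of the assigned 0/1 values

print("Gm void fraction:", void_fraction())
print("PC void fraction (N2 probe, r = 1.86 A):", void_fraction(probe_radius=1.86))
```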
In this measurement, the volume inside the large pores is summed together with all the small interstices in the framework, which are too narrow to be effectively occupied by a guest molecule. Hence, the value computed in this way will always be an upper bound for the volume that a probe can effectively access. The probe center pore volume (PC), often named simply "pore volume", 14,17 considers the shape of the probe used for the measurement, conventionally spheres with a radius of 1.32 Å for helium and 1.86 Å for nitrogen. 19,22 In this definition, it is important to recall that even the nitrogen molecule is treated as a spherical probe, as shown in Figure 2. For this calculation, the same Monte Carlo test is performed, but this time the radius of the framework's atoms is taken as the sum of the atomic radius plus the probe radius. The obtained void fraction then represents the portion of the volume which is occupiable by the centers of the probe ( Figure 3). It is also important to note that the PC pore volume for a probe of zero radius corresponds to the Gm pore volume. A third solution is to compute the helium pore volume (He). Similarly to the Gm pore volume, a collection of sampling points are considered, but instead of assigning a value of 0 or 1 depending on the overlap with atoms, this time the Boltzmann factor (BF), related to the insertion of a helium atom, is computed: E int is the energy of interaction of the helium atom with the atoms of the framework, as computed using, for example, the Lennard-Jones potential (see the Supporting Information). Similarly to the previous cases, the void fraction θ He (and therefore the pore volume) is computed as the average over all the sample points: It is worth noting that this measurement is influenced by the force field and the temperature used. It is therefore important to use a consistent choice to compare different sets of results. 17 We need to stress that the He void fraction, in the way it is measured, does not correspond to the amount of helium that can saturate in the pores. The physical meaning of the He void fraction is linked to the probability of a single helium atom to be adsorbed in the framework at a certain temperature, which is chosen to be 298 K by convention. 17 At this point, it is important to recall that none of the previously summarized methods to compute the pore volume exactly match with the pore volume we obtain from the nitrogen isotherms. To arrive at a definition of pore volume that can be directly compared to experiments, we introduce the probe-occupiable pore volume (PO), and we propose an algorithm to compute it. We use the term "occupiable" to define the portion of the space that can be spanned by the probe, which should not be confused with the term "accessible" (Ac), which defines the pores where the probe can have access. Accessible versus Nonaccessible Channels. In these Monte Carlo simulations, we are probing a number of points within the unit cell to measure the void fraction (and therefore the pore volume) of the bulk material. However, it is also important to know if the detected free space is accessible from the outside, i.e., if a cavity forms a multidimensional network where a guest molecule can enter at the solid/gas interface and diffuse. The same analysis allows detection of whether a solvent molecule is able to exit the pores and a synthesized crystal can be effectively desolvated. 
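Returning briefly to the helium pore volume defined above: the same Monte Carlo sampling is used, but each point contributes its Boltzmann factor rather than a 0/1 value. A hedged sketch follows, using the same placeholder framework as in the previous snippet. The Lennard-Jones parameters are illustrative stand-ins, not the UFF or Hirschfelder values used later in the paper, combined with Lorentz-Berthelot mixing and the 12.8 Å cutoff of the protocol described in the Software and Parameters section (the cutoff has no effect in this tiny cell).

```python
import numpy as np

rng = np.random.default_rng(1)

# Same placeholder framework as in the previous sketch.
cell = np.array([10.0, 10.0, 10.0])
atoms = np.array([[i, j, k] for i in (0.0, 5.0) for j in (0.0, 5.0) for k in (0.0, 5.0)])

# Placeholder Lennard-Jones parameters (sigma in angstrom, epsilon in kelvin);
# illustrative stand-ins, not the UFF / Hirschfelder values.
SIG_C, EPS_C = 3.4, 50.0
SIG_HE, EPS_HE = 2.6, 11.0
SIGMA = 0.5 * (SIG_C + SIG_HE)               # Lorentz-Berthelot mixing
EPS = np.sqrt(EPS_C * EPS_HE)

def helium_void_fraction(T=298.0, n_samples=20_000, cutoff=12.8, rng=rng):
    """Boltzmann-factor-weighted Monte Carlo estimate of the He void fraction."""
    points = rng.random((n_samples, 3)) * cell
    bf_sum = 0.0
    for x in points:
        d = atoms - x
        d -= cell * np.round(d / cell)        # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        r = r[r < cutoff]
        x6 = (SIGMA / r) ** 6
        e_int = np.sum(4.0 * EPS * (x6 ** 2 - x6))   # He-framework LJ energy, in kelvin
        bf_sum += np.exp(-e_int / T)                 # Boltzmann factor at temperature T
    return bf_sum / n_samples

print("He void fraction at 298 K:", helium_void_fraction())
print("He void fraction at 150 K:", helium_void_fraction(T=150.0))
```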
This concept of accessibility is obviously related to the size of the molecule, represented as a spherical probe, which we are interested to evaluate. Once we compute the PC volume, we can further categorize this internal space as accessible (Ac-PC) or nonaccessible (NAc-PC) by considering whether it composes a multidimensional network along the periodic boundaries. This is illustrated in the two-dimensional example of Figure 3: the central channel (A) is accessible to the probe, while the other one (B) is not, because the PC pore volume does not form a continuous path. The accessibility test can be performed by doing a percolation analysis along the edges obtained from the Voronoi decomposition 24 or analyzing a grid of points. 25,26 The same concept can be applied to compute the Ac-PO (as presented in the next section) or the Ac-He pore volume. In the second case, one needs to first assume an energy cutoff for the helium−framework interactions, which defines the regions that are diffusively inaccessible on an experimental time scale (e.g., 15 k b T). Then one must consider the regions of the volumes where the interaction energy is lower than the cutoff to perform a percolation analysis. 27 For what concerns the Gm volume, the calculation considers a dimensionless probe, and therefore, we do not have any practical interest in analyzing its accessibility. Algorithm To Compute the Occupiable Pore Volume. In this section, we propose an algorithm to obtain the experimental pore volume from our definition of the accessible and occupiable volume (Ac-PO) and in general to fully characterize the internal volume of a microporous material. . Two-dimensional example of the probe center pore volume calculation. The periodic unit cell is duplicated in the x directon. The radius of the framework's atoms (black) is expanded by the radius of the red probe (light green and light orange). The remaining area is what we define as the probe center pore volume (dark green and dark orange). The framework is composed by two channels: channel A (green), which is accessible, and channel B, which is nonaccessible (orange). Channel B is too narrow for the probe to pass from one side to the other and can be referred to as an isolated pocket. (1) Let us consider a set of N sample points, randomly selected within the unit cell. (2) For each point, we compute its distance to the framework's atoms: if this distance is smaller than the atomic radius, the sample point is categorized as "overlap"; if it is larger than the sum of the atomic and probe radii, it is categorized as PC. For each point assigned to the PC volume, we compute the distance δ between the point and the surface of the PC volume, defined as with d being the distance to the closest atom, r probe the radius of the spherical probe, and r atom the vdW radius of the closest atoms of the framework (Figure 4). In addition, we use a percolation algorithm 14 to further classify the sample point as Ac-PC or NAc-PC. (3) For each sample point left, we compute the distance for all the Ac-PC marked points, and if one of these distances is closer to the Ac-PC surface than the probe radius, or the uncategorized point will be considered as part of the now defined "accessible extended volume" (light green in Figure 1). The inclusion of δ in eq 7 improves the speed and the accuracy of the algorithm (at the same number of sample points), because in this way also the internal points of the Ac-PC volume give some information on the position of its surface. 
(4) The same test is performed for the NAc-PC points: in the case of success, uncategorized points will be marked as belonging to the "nonaccessible extended volume" (light orange in Figure 1). (5) If none of the previous tests are true, the sample point belongs to what we define as the "narrow volume" (pink in Figure 1). It follows that the PO volume is given by the summation of the probe center and the extended volume. Figure 1 presents all the different categories of volume with color coding for an illustrative twodimensional model. With these definitions, we marked as "narrow" the entire volume that cannot be touched by the probe because it is hindered by the framework. This can be the case for a narrow channel (pink, Figure 1) or the small interstices between the atoms of the crystal (pink, Figure 4). Moreover, the overlap volume added to the narrow volume gives what is commonly defined in biochemistry as the "solvent-free volume" or "Connolly" volume 18 (Figure 1). Computational versus Experimental Pore Volumes. Now that we have fully characterized the pore volume inside a microporous framework, we can couple the computational results with experimental measurements. Under the assumption of the Gurvich rule, the experimental 77 K nitrogen's pore volume can be compared with the Ac-PO pore volume computed from the unit cell, using a spherical N 2 probe. The nitrogen's NAc-PO pore volume could also be measured experimentally with smaller probing molecules, e.g., helium, 28 or with positron annihilation lifetime spectroscopy (PALS). 29 The measurements with these techniques are not as frequently used. An alternative to nitrogen is argon as the probing molecule at 87 K. Despite the higher cost of Ar, it can be preferred due to the smaller size and the enhanced diffusion rate at 10 K higher temperature. 30 By selecting for the calculations a probe radius that corresponds to the gas used in the experiments, we are able to directly compare our theoretical calculations with the experimental data. We stress once more that for these methods the thermal vibrations of the atomic positions are not taken into account, and for the Ac-PO calculation, we use hard-sphere potentials for which the effective volume does not depend on the temperature. These assumptions hold for the experimental conditions (i.e., 77 K for nitrogen adsorption). Moreover, we do assume that the crystal structure does not change upon adsorption of nitrogen (e.g., pore swelling or ligand rotation). For cases where the diameter of the channel is very similar to the diameter of the probe, further investigations are needed. 31 A small distortion of the framework or a different choice of the parameters can drastically change the amount of Ac and NAc volume detected, an effect which has similarly been shown in the context of noble gas uptake. 32 Software and Parameters. In this section, we illustrate how the different pore volumes are determined in the different software packages that compute pore volumes. The Poreblazer package 13 computes the Gm and He pore volumes using sample points lying on a grid with a 0.2 Å bin size. The Zeo++ package 14 gives the Gm and PC volumes, the first one being obtained by setting the radius of the spherical probe to 0. In this software, the number of sample points specified in the input is randomly displaced in the unit cell. The PLATON package 16 computes the PO volume using a grid of points. Points belonging to the PC pore volume are first detected, and then their neighbor points are considered. 
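A hedged sketch of the occupiable-volume idea in its simplest form is given below: sample points are classified as overlapping, probe center (PC), or leftover, and a leftover point is counted as occupiable whenever it lies within one probe radius of a PC sample point. The accessibility (percolation) analysis and the δ-based acceleration of eqs 6 and 7 are deliberately omitted, and the structure is the same placeholder lattice as in the earlier sketches, so this is an illustration of the principle rather than a reimplementation of the Zeo++ extension.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same placeholder framework as in the earlier sketches.
cell = np.array([10.0, 10.0, 10.0])
atoms = np.array([[i, j, k] for i in (0.0, 5.0) for j in (0.0, 5.0) for k in (0.0, 5.0)])
r_atom = 1.70                              # placeholder vdW radius (angstrom)
r_probe = 1.86                             # spherical N2 probe radius (angstrom)

def min_dist_to(points, x):
    """Minimum-image distance from point x to each row of `points`."""
    d = points - x
    d -= cell * np.round(d / cell)
    return np.linalg.norm(d, axis=1)

def pc_and_po_void_fractions(n_samples=20_000, rng=rng):
    pts = rng.random((n_samples, 3)) * cell
    d_min = np.array([min_dist_to(atoms, x).min() for x in pts])

    pc = d_min >= r_atom + r_probe                 # probe center (PC) points
    leftover = (d_min >= r_atom) & ~pc             # neither overlapping nor PC

    # A leftover point belongs to the occupiable (extended) volume if it lies
    # within one probe radius of some PC sample point.  The delta-based
    # acceleration (eqs 6-7) and the accessibility analysis are omitted here.
    pc_pts = pts[pc]
    n_extended = 0
    for x in pts[leftover]:
        if pc_pts.size and min_dist_to(pc_pts, x).min() <= r_probe:
            n_extended += 1

    theta_pc = pc.mean()
    theta_po = theta_pc + n_extended / n_samples
    return theta_pc, theta_po

theta_pc, theta_po = pc_and_po_void_fractions()
print(f"probe center (PC) void fraction    : {theta_pc:.3f}")
print(f"probe-occupiable (PO) void fraction: {theta_po:.3f}")
```

On this toy lattice the probe barely fits, so the PC fraction is far below both the Gm and PO values, which is precisely the micropore regime in which the distinction discussed above matters most.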
Contrary to Zeo++, PLATON does not distinguish between Ac and NAc volumes. Also, one should pay attention to the terminology: in this software, the authors define as "accessible" volume what here we define as "occupiable" volume. The Raspa package 33 (which is mainly used for Monte Carlo and molecular dynamics simulations) provides the He pore volume considering a specified number of sample points in random positions of the unit cell. The algorithm we proposed in this work to compute the Ac-PO volume and fully characterize the internal pore volume has been implemented as an extension of Zeo++. 14 In our calculations, the He volume is computed at 298 K (25 °C), which is the typical temperature condition of most previous calculations. 17 We used the Lennard-Jones potential to describe the dispersion interactions, applying the Lorentz−Berthelot mixing rules and considering a cutoff distance of 12.8 Å; beyond that, the potential is set to 0. Parameters for the framework and for helium were taken from the universal force field (UFF) 34 and from Hirschfelder, 35 respectively. Concerning the "hard sphere" calculations (Gm, PC, and PO) and for all the software packages (Poreblazer, Zeo++, and PLATON), the Lennard-Jones σ values from UFF were used as the diameter of the framework atoms, to be consistent with the He calculations. A kinetic radius of 1.86 Å was considered for nitrogen. 22

■ RESULTS AND DISCUSSION

3D Model for the Full Characterization of the Pore Volume. To illustrate the difference between the various approaches, we applied our algorithm to a three-dimensional model which is able to represent qualitatively the characteristics of a microporous material, inspired by the two-dimensional example reported in Figure 1. The model has one accessible pore and one nonaccessible pore, with a narrow channel (i.e., with a diameter smaller than the probe's diameter) connecting the two. The model is built with a large number of spheres lying on a grid to represent the framework, leaving free space that corresponds to pores and channels (Figure 5, top). In this simplified model of a porous framework, we can clearly distinguish between all the different categories of internal volume listed in Figure 1: the result from the analysis with 500000 sample points is shown in Figure 5 (bottom) using the same color coding for the points. To assess the convergence of the method, we ran our algorithm for different numbers of sample points. From the results shown in Figure 6, it is immediately evident that the conventionally computed void fraction based on the Ac-PC method is considerably smaller than the void fraction computed with the Ac-PO method. The Ac-PO calculation converges to within 0.1% of the void fraction with 10 points per cubic angstrom. Within our algorithm, to measure the PO void fraction, we first need to accurately locate the surface of the PC volume and expand this volume by the length of the probe radius. To minimize the error associated with a poorly sampled PC surface, one should increase the number of sample points, albeit at a significant computational cost. Nevertheless, in real frameworks, we consider it reasonable to use a convergence within 1% of the void fraction to compare the calculated values with experimental data.

Comparison of Different Pore Volume Definitions with Experimental Data for HKUST-1. The triclinic unit cell structure of HKUST-1 (CSD code FIQCEN) was considered to compute the void fraction with the different methods.
Water solvent molecules were removed from the original deposited structure. 19 No NAc volume was detected. The resulting void fraction and computational time are reported as a function of the number of samples per cubic angstrom that were used for the calculation (Table 1). We use as the experimental value for the void fraction 0.678, 21 which is the highest value we could find in the literature for the desolvated crystal. Lower values were reported in the literature, from 0.590 to 0.660. 36−40 The computed Ac-PO void fraction converges to a value which is close to the experimental result, while the Ac-PC void fraction is significantly smaller. The PO void fraction computed with the CALC_SOLV routine in PLATON is 0.654: this result was obtained in 165 s with a minimum grid spacing (0.14 Å). These settings give 365 samples per cubic angstrom, and it is the most accurate sampling that the program can manage. The Gm void fraction of 0.708 is similar to the Ac-PO value, meaning that the percentage of narrow volume is negligible. On the other hand, the He calculation gives a value of 0.764, which overestimates the experimental void fraction. It is surprising to note that using a different parametrization for the Lennard-Jones interactions, i.e., UFF's 34 instead of Hirschfelder's 35 parameters for helium, we obtain an He void fraction of 0.947, which disagrees with the experimental and Ac-PO values. This evidence motivated a deeper analysis of the physical and mathematical meaning of the He calculation. Helium Void Fraction. The He calculation is very commonly used to compute the void fraction. 41,42 As we demonstrated in the previous section, its value depends strongly on the force field parameters used to model the helium−framework interactions, and it can lie far off the experimental value. Therefore, we analyze the underlying mathematical reason for this variability. First, we study the case of a helium atom interacting with a carbon atom, using the Hirschfelder−UFF parameters to represent their interaction at different distances. The potential and the Boltzmann factor (BF) for different He−C distances are shown in Figure 7. We can now compare the He calculation to the Gm calculation (in this diatomic model, the Gm and Ac-PO volumes are equivalent). For the He calculation, the BF is the value assigned for every He−C distance, while, for the Gm calculation, we assign a value of 0 for a He−C distance inferior to the carbon's radius (equivalent to half the Lennard-Jones's σ for carbon) and a value of 1 elsewhere (see the dashed blue line in Figure 7). Therefore, the void fraction is the integration of these values over the entire volume considered. He and Gm coincide exactly in the case when the two integrals are equal, i.e., when there is a match between the cyan and purple areas in Figure 7. The BF depends on the set of parameters used and on the temperature assumed in the calculation. Indeed, the common choice of the temperature of 298 K is just a convention, and its variation can drastically affect the He calculation, as shown in Figure 7. Moreover, the He void fraction is not strictly restricted to be smaller than 1, since also the BF can take values larger than 1, especially for the framework's atom with a large Lennard-Jones ε parameter. In UFF, for example, the ε values for aluminum, silicon, and phosphorus are ca. 5, 4, and 2.5 times the carbon's value, which may give unrealistic contributions larger than 1 for part of the pores. 
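The temperature and parameter dependence discussed here is easy to reproduce for a single He–C pair. The snippet below uses approximate, illustrative Lennard-Jones parameters (helium roughly of the Hirschfelder type, carbon from UFF; the exact published values may differ slightly) combined with the Lorentz–Berthelot rules, and compares the Boltzmann factor with the hard-sphere weight used by a Gm-type calculation; in the attractive well the factor exceeds 1, and halving or doubling the temperature changes it markedly.

```python
import numpy as np

# Illustrative Lennard-Jones parameters (sigma in angstrom, epsilon/kB in kelvin);
# helium roughly of the Hirschfelder type, carbon from UFF. Approximate values only.
SIG_HE, EPS_HE = 2.64, 10.9
SIG_C,  EPS_C  = 3.43, 52.8
sigma = 0.5 * (SIG_HE + SIG_C)          # Lorentz-Berthelot mixing
eps   = (EPS_HE * EPS_C) ** 0.5

def lj(r):
    """He-C Lennard-Jones energy in units of kB*K at separation r (angstrom)."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def boltzmann_factor(r, T=298.0):
    """Weight used by the He void-fraction calculation."""
    return np.exp(-lj(r) / T)

def hard_sphere_factor(r):
    """Weight used by a Gm-type calculation: 0 inside the carbon radius, 1 outside."""
    return np.where(r < SIG_C / 2.0, 0.0, 1.0)

r = np.linspace(2.5, 6.0, 8)
for T in (149.0, 298.0, 596.0):          # halving/doubling T ~ doubling/halving eps
    print(T, np.round(boltzmann_factor(r, T), 2))
print("hard sphere:", hard_sphere_factor(r))
```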
To see for which types of pores the Gm and He void fractions show the largest differences, we extended our analysis to cylindrical and spherical pores and a reticular structure. We modeled the framework with a smeared continuous distribution of carbon atoms. The details are reported in the Supporting Information. Figure 8 shows the comparison between the Gm and He void fractions in these models. We observe for all three pore shapes that the Gm void fraction is greater than the He void fraction for small pores, while for bigger pores the He void fraction becomes greater. This is due to the fact that for smaller pores the BF for helium is always less than 1, because of the unfavorable interaction between the particle and the framework. For bigger pores, the BF can assume values larger than 1, and in such cases, the He void fraction systematically overestimates the experimental void fraction. A similar trend for the He vs Gm curve is observed for the three types of pores in Figure 8, with the main difference being the value of the intersection with the bisector, which is therefore dependent on the geometry of the pore. The Ac-PO volume is expected to be similar to the Gm volume, with the notable difference that it collapses to 0 for small pores, i.e., for

L < 2(r_probe + r_atom) (8)

Figure 7. One-dimensional representation of the Lennard-Jones potential and the associated Boltzmann factor as a function of the C−He distance (system shown on the top). The Boltzmann factor (blue solid line) is compared to the factor associated with the occupiability of the space, i.e., 1 everywhere outside the carbon's van der Waals radius (blue dashed line). In the bottom panel, the sensitivity of the Boltzmann factor to the arbitrary value of the temperature is investigated. Notice that doubling or halving the temperature corresponds to respectively halving or doubling the value of ε for the Lennard-Jones interaction.

CoRE MOF Screening. Our model calculations show that the differences between the He and the Gm void fractions are not negligible, and it is interesting to see how these model calculations compare with the void fractions of the experimental MOF structures. A set of 5109 MOF structures were investigated from the CoRE MOF database: 4764 frameworks were modified by the authors (solvent removal and other adjustments described in the paper), 19 and the remaining 345 frameworks were downloaded directly from the Cambridge Structural Database, 43 without any further manipulation. The results of computing the He and Gm void fractions for these structures are shown in Figure 9. For most materials, the trend is similar to the reticular model presented in the previous section. One can notice that for many materials the void fraction computed using the He method is higher than the Gm void fraction, even though the Gm method should provide an upper bound for the void fraction. The most extreme example of this overestimation is the structure LOFZUB: 44 this framework contains aluminum and phosphorus, which have a particularly high Lennard-Jones ε. On the other hand, a few frameworks show the opposite trend, with a moderate Gm void fraction but a lower He void fraction (highlighted in yellow in Figure 9). Interestingly, all of them have a similar chemistry; i.e., the ligands of these structures are based on CN and CC bonds. These kinds of ligands are particularly thin and simple, resulting in weaker dispersion forces, which explains the low He void fraction.
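As a small illustration of how such a screening can be post-processed, the snippet below flags structures whose He void fraction exceeds the geometric upper bound or falls far below it; the structure names and numbers are made up for illustration, and the helper implements the accessibility threshold of eq 8 (with L the pore-width parameter of the models in the Supporting Information).

```python
def probe_fits(L, r_probe, r_atom):
    """Eq 8: the Ac-PO volume collapses to zero when L < 2*(r_probe + r_atom),
    where L is the pore-width parameter of the model pores."""
    return L >= 2.0 * (r_probe + r_atom)

# Hypothetical (name, Gm, He) void fractions, for illustration only.
screen = [("MOF-A", 0.45, 0.52), ("MOF-B", 0.30, 0.21), ("MOF-C", 0.71, 0.70)]
for name, gm, he in screen:
    if he > gm:
        print(f"{name}: He ({he}) exceeds the geometric upper bound ({gm}); "
              "check for atoms with a large Lennard-Jones epsilon (e.g., Al, Si, P)")
    elif he < 0.8 * gm:
        print(f"{name}: He ({he}) well below Gm ({gm}); weak dispersion "
              "(thin CN/CC-type ligands) or very narrow pores")

print(probe_fits(7.5, r_probe=1.86, r_atom=1.7))   # True for an N2-sized probe
```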
Using our algorithm, we computed the Ac-PO void fraction for all the frameworks considering a probe of 1.86 Å (N 2 ) and using 100000 sample points. The results are compared with the Gm and He void fractions in Figure 10. As expected, the value of the Ac-PO void fraction is always smaller than that of the Gm void fraction. This behavior is more pronounced for very dense materials, where the atoms of the framework create many small interstices (narrow volume) that are excluded for the calculation of the Ac-PO void fraction. Also, for many structures, the void fraction collapses to 0, meaning that, under the assumption of a rigid framework, these crystals are completely impermeable to the probing sphere. The material labeled SETPEO is a prominent example: the 0.71 geometric void fraction of this material can be decomposed to a 7% narrow volume, with 64% of the volume nonaccessible to the nitrogen probe (the 29% remaining is the volume occupied by the atoms). For this material, we can expect, if not a complete impermeability, a slow diffusion of nitrogen inside the activated crystal. Moreover, methanol is used as the solvent for the synthesis, and given the size of methanol, we can expect the impossibility of a complete desolvation, as effectively reported. 45 If we compare the He and nitrogen Ac-PO volume fractions, it is interesting to note the systematic overestimation of the pore volume which affects the He method. There are three reasons for this: the helium probe is smaller, the nonaccessible volume is not excluded, and, most important, it is possible for the BF to be higher than 1. The structures with the opposite trend, where the void fraction is underestimated by the He method, are again the ones characterized by CN and CC ligands (shown in Figure 9) . Comparison with Experimental Data for 10 MOFs. We studied in detail 10 different MOFs (including HKUST-1) to obtain some insights into the practical consequences of the differences in pore volume that are computed by the different methods and their agreement with experimental data. 21,46−54 All the frameworks investigated have accessible channels for nitrogen, and no NAc pore volume was detected. Figure 11 shows that the PO method leads to the best agreement among the different methods. These results emphasize that the value for the PC pore volume (sometimes simply defined as "pore volume") leads to a significant underestimation of the experimental pore volume. Another consideration is that for these 10 structures the total Gm void fraction is close to the PO void fraction, meaning that in these samples the narrow volume is a negligible percentage of the Gm pore volume. The He void fraction is close to the experimental value if we use Hirschfelder's Lennard-Jones parameters for helium, noticing however a systematic but relatively small overestimation. Nevertheless, the same calculation employing the He parameters from UFF shows a much larger overestimation of the void fraction, even with nonphysical values greater than 1 for SNU-30 and UTSA-62. In four materials, the experimental volume is more than 10% lower than the computed value (PCN-46, SNU-30, UTSA-34, and UTSA-64). We attribute this difference to some incomplete desolvation or pore shrinking after the removal of the solvent. At this point, it is important to note that the computational pore volumes are based on structures from the CoRE MOF database in which solvent molecules are removed computationally, keeping the rest of the crystal structure unchanged. 
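For completeness, the conversion from a measured saturation uptake to a pore volume and void fraction under the Gurvich assumption can be sketched as follows; the liquid-nitrogen density, the uptake, and the crystal density used here are illustrative placeholders rather than values from the paper.

```python
M_N2 = 28.013         # g/mol
RHO_LIQ_N2 = 0.807    # g/cm3, approximate density of liquid nitrogen at 77 K

def gurvich_pore_volume(uptake_mmol_per_g):
    """Pore volume (cm3/g) from the saturation N2 uptake, assuming the adsorbate
    fills the pores at its bulk liquid density (Gurvich rule)."""
    return uptake_mmol_per_g * 1e-3 * M_N2 / RHO_LIQ_N2

def void_fraction(pore_volume_cm3_per_g, crystal_density_g_per_cm3):
    """Dimensionless void fraction, directly comparable with the Ac-PO value."""
    return pore_volume_cm3_per_g * crystal_density_g_per_cm3

v_p = gurvich_pore_volume(22.0)                            # placeholder uptake
print(round(v_p, 2), round(void_fraction(v_p, 0.88), 2))   # ~0.76 cm3/g, ~0.67
```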
In some cases, the computational desolvation used to prepare the CoRE MOF structures 19 is unrealistic, and the most evident example is SNU-30, where the computed void fraction is 8 times the measured value. The authors of its synthesis already reported a large discrepancy between the experimental and computed surface areas, which was attributed to the shrinking of the evacuated pores.

Figure 11. Void fraction as computed with the different methods, shown and compared with experimental data. The structures were computationally desolvated as reported in the CoRE MOF database. 19 A list with the references for the experimental values is provided in the Supporting Information.

■ CONCLUSIONS In the present work, we compared different methods that are used to compute the pore volume of a crystalline microporous material from its crystal structure. We show that these methods use different definitions of the pore volume, and we show that in particular for micropores these differences can be quite significant. These volumes are referred to in this work using a consistent nomenclature, i.e., the geometric (Gm), the helium (He), the probe center (PC), and the probe-occupiable (PO) methods. For the last two, it is meaningful to further identify the volume as accessible (Ac) or nonaccessible (NAc). The main conclusion of this work is that the accessible probe-occupiable (Ac-PO) pore volume gives the closest representation of the experimentally measured pore volumes for all types of pores. The other methods show systematic deviations. The geometric (Gm) calculation leads to a value for the pore volume which is an upper limit for this quantity, while the probe center (PC) calculation considerably underestimates the experimental value. The helium (He) void fraction was shown to be very dependent on the parameters and on the reference temperature assumed for the calculation. In addition, we have presented a novel algorithm to fully characterize the internal volume of a crystal and assess its Ac-PO pore volume. This extension is now implemented in the freely available Zeo++ code (www.zeoplusplus.org). The algorithm takes into account both the solvent accessibility and the solvent occupiability of the internal pore cavity, and therefore, its result can be meaningfully compared with the measurement of the pore volume, as obtained from the nitrogen uptake. The comparison between the experimental data and the Ac-PO void fraction allows the detection of discrepancies due to low crystallinity, poor desolvation, and pore shrinking in the real material.

■ ASSOCIATED CONTENT Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.langmuir.7b01682. Description of the 3D spherical, cylindrical, and reticular models, references of the experimental measurement for the 10
Multi-Omics Analysis to Understand the Effects of Dietary Proanthocyanidins on Antioxidant Capacity, Muscle Nutrients, Lipid Metabolism, and Intestinal Microbiota in Cyprinus carpio Proanthocyanidins (Pros), a natural polyphenolic compound found in grape seed and other plants, have received significant attention as additives in animal feed. However, the specific mechanism by which Pros affect fish health remains unclear. Therefore, the aim of this study was to investigate the potential effects of dietary Pro on common carp by evaluating biochemical parameters and multi-omics analysis. The results showed that Pro supplementation improved antioxidant capacity and the contents of polyunsaturated fatty acids (n-3 and n-6) and several bioactive compounds. Transcriptomic analysis demonstrated that dietary Pro caused an upregulation of the sphingolipid catabolic process and the lysosome pathway, while simultaneously downregulating intestinal cholesterol absorption and the PPAR signaling pathway in the intestines. Compared to the normal control (NC) group, the Pro group exhibited higher diversity in intestinal microbiota and an increased relative abundance of Cetobacterium and Pirellula. Furthermore, the Pro group had a lower Firmicutes/Bacteroidetes ratio and a decreased relative abundance of potentially pathogenic bacteria. Collectively, dietary Pro improved antioxidant ability, muscle nutrients, and the diversity and composition of intestinal microbiota. The regulation of lipid metabolism and improvement in muscle nutrients were linked with changes in the intestinal microbiota. Introduction Aquaculture plays a vital role in supplying high-quality protein and essential micronutrients for human consumption, contributing to human health and overall well-being.However, in intensive aquaculture production, several factors, such as high stocking density, excessive feeding, and fluctuating conditions, have made it more susceptible to disease outbreaks [1].To mitigate economic losses, various veterinary drugs, especially antibiotics and chemical agents, are extensively utilized in aquaculture for disease prevention and treatment [2].Despite their effectiveness, the use of veterinary drugs is increasingly limited due to their adverse effects on the environment and human health [3].As consumers are becoming more concerned about organic and environmentally friendly food, the utilization of natural products such as plant extracts or probiotics in aquaculture has been proposed as a possible solution [4,5].In recent years, medicinal plants and their extracts have received considerable attention as eco-friendly and efficient alternatives to chemical agents [6].They have been found to offer various beneficial effects to aquatic animals, such as stress reduction, growth promotion, appetite stimulation, and immunity improvement [7].These effects are attributed to the presence of active compounds such as polysaccharides, alkaloids, tannins, saponins, flavonoids, or essential oils [6,8].Furthermore, medicinal plants and their derivatives are commonly used as diet additives in aquaculture due to their ease of preparation, low cost, and minimal adverse effects on both the fish and the environment [4,9]. 
Proanthocyanidins (Pros), also known as condensed tannins, are a type of natural polyphenolic compound distributed in various plant parts, including stems, leaves, flowers, seeds, and fruit [10].They are polymerized by flavan-3-ol subunits (i.e., catechin and epicatechin) and produced by the flavonoid biosynthetic pathway [11].In recent years, there has been a growing interest in the research into Pros due to their attractive nutritional properties and perceived health benefits.Pros possess a variety of biological activities, such as antioxidative, free radical scavenging, anti-inflammatory, immune-stimulating, anti-viral, and cardio-protective features [12].An in vitro study showed that Pros have the strongest antioxidant activity among more than 100 types of representative phenolic compounds [13].They have also been found to improve intestinal permeability and increase microbial diversity in response to diet-induced unfavorable changes in the intestine [14,15].Moreover, Pros can alleviate cardiovascular diseases [16], ameliorate the pesticide rotenoneinduced mitochondrial respiration anomalies [17], and inhibit inflammation by modulating the NF-κB pathway [18].Therefore, it is recommended to moderately enhance the intake of Pro-enriched food, as this may contribute to the prevention of chronic diseases and improve health conditions in both humans and animals [19]. In aquaculture, Pros have been investigated as dietary supplements, and their positive effects on various fish species have been demonstrated.Dietary Pro was reported to promote growth and improve serum biochemistry parameters related to health status in tilapia (Oreochromis niloticus) [20].Wang et al. [21] showed that the weight gain, feed utilization, and growth performance of juvenile American eel (Anguilla rostrata) were increased when Pros were incorporated into their diet.In another study, Pros alleviated cadmium toxicity in pearl gentian grouper (Epinephelus fuscoguttatus female × Epinephelus lanceolatus male) via enhancing antioxidant ability [22].Furthermore, Pros have been demonstrated to attenuate growth retardation and low immunoglobulin level induced by histamine in American eels [23], as well as mitigate hepatic lipid accumulation and inflammation caused by a high-fat diet in grass carp (Ctenopharyngodon idella) [24].It is apparent that previous studies in aquatic animals have primarily focused on investigating the impact of Pros on growth and biochemical parameters.However, there is a scarcity of data regarding alterations in muscle nutrients, lipid metabolism, and intestinal microbiota in cultured fish after Pros feeding, particularly the lack of multi-omics analysis data to elucidate the underlying mechanism of Pros' actions. 
With the rapid advancement of molecular biology and bioinformatics, multi-omics methodologies are gaining recognition as powerful tools for understanding the biological processes of aquatic organisms and their interactions with environmental factors.These methodologies, including genomics, transcriptomics, proteomics, metabolomics, and microbiomics, can provide comprehensive insights into the intricate workings of biology systems.In response to environmental stressors, multi-omics analysis has revealed new pathways in fish that could be pivotal in understanding how these organisms adapt and survive [25,26].Moreover, multi-omics analyses have facilitated the elucidation of crucial mechanisms involved in fish diseases, contributing to the treatment options and comprehension of pathogenic processes [27].For example, Li et al. employed both transcriptomic and proteomic analyses to elucidate the beneficial impact of resveratrol on lipid metabolism disorders induced by a high-fat diet in red tilapia [28].Extensive research evidence suggests that multi-omics approaches are capable of revealing a wider array of mechanisms in the fields of toxicology, pharmacology, physiology, and pathology of fish. Common carp (Cyprinus carpio), a globally distributed and consumed species, is commonly used as an experimental animal in various research fields including pharmacology, toxicology, pathology, and nutrition.In aquaculture, it is frequently utilized for screening medicinal plant extracts and assessing their pharmacological effects [29,30].Therefore, in Measurement of Antioxidant Parameters The blood was centrifuged at 5000 r/min for 10 min at 4 • C to obtain the serum.The liver, gill, muscle, and intestine tissues were homogenized with 9 times (w/v) normal saline at 4 • C. In serum, liver, gill, muscle, and intestine tissues, 9 samples from each group were used to detect the antioxidant parameters, including malondialdehyde (MDA), superoxide dismutase (SOD), glutathione peroxidase (Gpx), total antioxidant capacity (T-AOC), and glutathione (GSH).Kits for measuring T-AOC, SOD, and GSH were supplied by Nanjing Jiancheng Bioengineering Institute (Nanjing, China).The Gpx kit was provided from Suzhou Grace Biotechnology Co., Ltd.(Suzhou, China), while the MDA kit was ordered from the Beyotime Institute of Biotechnology (Nantong, China).SOD activity was measured at an absorbance of 450 nm utilizing the WST-1 method, with calculation based on the production of WST-1 formazan.The T-AOC level was assessed using the FRAP assay at 593 nm, which quantifies the reduction of the Fe 3+ to the Fe 2+ form.The GSH content was assessed based on the intensity of the yellow color produced by its reaction with 5,5 -dithiobis(2-nitrobenzoic acid) (DTNB).The activity of GPx was assayed utilizing cumene hydroperoxide (Cum-OOH) as the substrate, with 5,5 -dithiobis(2-nitrobenzoic acid) (DTNB) serving as the chromogenic agent.The formation of MDA was evaluated using thiobarbituric acid (TBA) as a reactive substrate at 532 nm.The protein content in liver, gill, muscle, and intestine tissues was quantified using the bicinchoninic acid (BCA) assay and the OD value was measured at 562 nm.During the data analysis, the NC (normal control) group served as the control. 
Determination of Amino Acids and Fatty Acids in Muscle The wet muscle tissue (100 mg) was added to 10 mL of 6 mol/L hydrochloric acid solution.The mixture was then subjected to hydrolysis at 110 • C for 22 h.Following hydrolysis, the resulting hydrolysate was filtered into a 50 mL volumetric flask.Next, 1.0 mL of the filtrate was transferred into a 15 mL test tube and concentrated under reduced pressure at 40 • C.After drying, the concentrated mixture was dissolved using 1.0 mL of sodium citrate buffer (pH 2.2).Finally, the solution was filtered through a 0.22 µm filter membrane to analyze amino acids using automatic amino acid analyzer (HITACHI, Japan). The muscle tissue (200 mg) was hydrolyzed by adding 10 mL of HCl (8.3 M) at 70-80 • C for 40 min.The resulting hydrolysate was used for total lipid extraction by adding 30 mL of a mixture of diethyl ether and petroleum ether (1:1, v/v) according to the method in the national standard of China (GB5009.168-2016)[33].To convert the extracted lipid into methyl esters, it was subjected to methyl esterization using a 14% boron trifluoride-methanol solution at 45 • C for 20 min.The fatty acid methyl esters (FAMEs) were then analyzed using a gas chromatography instrument (Agilent 7890A, Agilent Technologies, Santa Clara, CA, USA) equipped with an HP-88 Agilent column (100 m × 0.25 mm × 0.20 µm).The injector and detector temperatures were set at 250 • C and 260 • C, respectively.The fatty acid composition was identified by comparison with 37 kinds of FAME standards (Sigma, St. Louis, MO, USA). Non-Targeted Metabolome Sequencing in Muscle The muscle metabolites were extracted using a solution (acetonitrile: methanol = 1:1) via centrifugation (12,000 rpm, 15 min, and 4 • C) from the NC group (named MNC, 3 pooled samples) and the 0.8 g/kg Pro group (named MPro, 3 pooled samples).The resulting supernatant was analyzed using an UHPLC system (Vanquish, San Jose, CA, USA) and a QE-MS (Orbitrap MS, San Jose, CA, USA).The raw data were converted into the mzXML format and metabolite annotation was performed using an in-house MS2 database (BiotreeDB, V2.1).To improve metabolite coverage, the metabolites were detected in both positive and negative ion modes.Principal component analysis (PCA) was used to evaluate the preliminary differences between groups of samples.Orthogonal projection to latent structures-discriminant analysis (OPLS-DA) was applied to distinguish the metabolomics profile of the two groups.The OPLS-DA model was further evaluated through cross-validation and permutation tests.Differential metabolites between the NC and 0.8 g/kg Pro groups were identified by comparing the VIP score of the OPLS-DA model using the following threshold values: VIP score ≥ 1 and p-value < 0.05 (t-test).The differential metabolites were mapped to the KEGG database to identify significantly enriched metabolic pathways. Transcriptome Sequencing in Intestines Total RNA was extracted from the intestinal tissue of the NC (4 pooled samples) and 0.8 g/kg Pro (4 pooled samples) groups using the TRIzol reagent kit (Invitrogen, San Diego, CA, USA) according to the manufacturer's instructions.The mRNA was reversetranscribed into cDNA after enrichment and fragmentation.The purified double-stranded cDNA was used to construct a library via PCR amplification, which was sequenced using Illumina Nova 6000 system (Gene Denovo, Guangzhou, China). 
To obtain clean data, the raw data were filtered using fastp (version 0.18.0).After the removal of residual ribosomal RNA, the clean data were mapped to the reference genome of Cyprinus carpio (NCBI: GCF_000951615.1) using HISAT2.2.4.PCA was performed to evaluate the distance relationship between samples.DESeq2 software (version 3.0) was used to identify the differentially expressed genes (DEGs) between the NC and Pro groups using the following threshold values: FDR < 0.05 and |log FC| ≥ 1.To identify biological functions and key signaling pathways, the DEGs were mapped onto the GO (gene ontology) and KEGG databases.Furthermore, we performed a gene set enrichment analysis (GSEA) to discover distinctive pathways and GO terms between the NC and Pro groups, and threshold values for significance were set as a |normalized enrichment score (ES)| > 1, a nominal p-value < 0.05, and an FDR < 0.25. The transcriptome sequencing was further validated via quantitative real-time PCR (qPCR) analysis, with the specific primers utilized in this study listed in Table S2.Total RNA was isolated from intestinal tissue using RNAiso Plus reagent (Takara, Beijing, China).The RNA was then used to synthesize cDNA via reverse transcription using the PrimeScript™ RT reagent kit (Takara).The cDNA served as a template to perform qPCR using a TB Green Premix Ex Taq II kit (Takara, RR820A).The resulting Cq value was used to calculate the relative expression of each gene using the 2 −∆∆Cq method, with β-actin used as the housekeeping gene. 16S rRNA Sequencing in Intestinal Bacteria Microbial DNA from the intestinal content of fish in the NC (4 pooled samples) and 0.8 g/kg Pro (4 pooled samples) groups was isolated using HiPure Stool DNA Kits (Meiji Biotechnology, Guangzhou, China) in accordance with the manufacturer's protocols.The target region of 16S rDNA was amplified by PCR using V3-V4 region primers (341F: CCTACGGGNGGCWGCAG, 806R: GGACTACHVGGGTATCTAAT).The amplicons were purified using AMPure XP Beads (Axygen, Union City, CA, USA), quantified using Real-Time PCR System (ABI, Foster City, CA, USA), and sequenced on an Illumina platform. The raw data were subjected to a series of preprocessing steps, including merging, filtering, dereplication, denoising, and chimera removal, using the DADA2 R package (version 1.14).Following these procedures, the resulting clean tags were utilized to output the ASVs.The representative ASV sequences were classified into bacterial taxonomy using the RDP classifier (version 2.2) with reference to the SILVA database.After ASV annotation, the abundance statistics of each taxonomy were visualized using Krona (version 2.6).Alpha indices, including Chao1, Shannon, and Simpson, were calculated using the QIIME software (version 1.9.1), and the difference in these indices between the NC and 0.8 g/kg Pro groups was assessed by the Wilcoxon rank test.Principal coordinates analysis (PCoA) based on weighted Unifrac distances was plotted in R project, and the Anosim test was conducted using the Vegan package (version 2.5.3). 
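As a reference for the relative-expression calculation described above, the following minimal sketch implements the 2^−ΔΔCq formula with β-actin as the reference gene; the Cq values are placeholders, not data from this study.

```python
def relative_expression(cq_target, cq_ref, cq_target_nc, cq_ref_nc):
    """Relative expression by the 2^-ddCq method, with beta-actin as the reference.
    cq_*: Cq of the target and reference gene in a Pro-fed sample and in the NC sample."""
    dd_cq = (cq_target - cq_ref) - (cq_target_nc - cq_ref_nc)
    return 2.0 ** (-dd_cq)

# Placeholder Cq values for one gene: roughly 2.5-fold upregulation vs NC.
print(round(relative_expression(24.1, 18.0, 25.6, 18.2), 2))
```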
Statistical Analysis SPSS was used to analyze the data in this study and the results are presented as the mean ± standard error of the mean (SEM).Differences in antioxidant parameters and growth parameters among groups were analyzed using ANOVA, followed by the LSD test.Differences in amino acid and fatty acid composition between the NC and Pro groups were analyzed using a t-test.The correlation between qPCR data and RNA-seq data was determined using the Pearson test.The level of significance was set at p < 0.05. Common Carp Growth Performance During the experiment, one fish died in each of the NC, 0.2 g/kg Pro, and 0.8 g/kg Pro groups, while no fish died in the 0.4 g/kg Pro group (Table 1).After 10 weeks of feeding, there was a significant increase in the final weight and specific growth rate, but a significant decrease in the feed conversion ratio, in the groups fed 0.4 and 0.8 g/kg Pro compared to the NC group (p < 0.05; Table 1).The results are expressed as the mean ± SEM.Different letters in the same line indicate significant differences among groups (p < 0.05).Specific growth rate (SGR) = 100 × [Ln (average final weight) − Ln (average initial weight)]/number of days, feed conversion ratio (FCR) = food consumption/biomass increment, and survival ratio = final number of fish/initial number of fish. Antioxidant and Lipid Peroxidation Parameters in Different Tissues Antioxidant capacity was assessed by measuring the levels of MDA, SOD, T-AOC, GSH, and Gpx in different tissues (Figure 1).In the serum (Figure 1A), following Pro treatment, MDA content exhibited a declining trend, with significant reductions observed in the groups supplemented with 0.4 and 0.8 g/kg of Pro compared to the NC group (p < 0.05).Conversely, Gpx activity displayed an increasing trend, and it was significantly enhanced in the 0.4 and 0.8 g/kg Pro-supplemented groups relative to the NC group (p < 0.05).The GSH content also showed an increasing trend and a significant increase was observed in the 0.8 g/kg Pro-supplemented group relative to the NC group (p < 0.05).However, the SOD and T-AOC levels were not impacted by dietary Pro supplementation (p > 0.05). In the liver (Figure 1B), following Pros administration, levels of SOD, Gpx, and T-AOC uniformly demonstrated an upward trend.For SOD and Gpx activities, there were significant differences in the groups supplemented with 0.4 and 0.8 g/kg Pro compared to the NC group (p < 0.05).Additionally, the T-AOC level was significantly higher in the 0.8 g/kg Pro-supplemented group than in the NC group (p < 0.05).However, there was no change in MDA and GSH levels among the different groups (p > 0.05). In the muscle (Figure 1C), Pro treatment improved the levels of SOD, T-AOC, and GSH and lowered the MDA content.Notably, there was a marked decrease in the MDA content and a marked increase in the levels of SOD, T-AOC, and GSH in the groups fed 0.4 and 0.8 g/kg Pro compared with the NC group (p < 0.05).A similar increase in SOD level was also observed in the 0.2 g/kg Pro-treated group (p < 0.05).However, Gpx activities were not significantly changed by Pro treatment. In the gills (Figure 1D), Pro treatment inhibited MDA formation but enhanced GSH production.Significant changes were observed in the 0.4 and 0.8 g/kg Pro treatments compared to the NC treatment (p < 0.05).Moreover, the levels of SOD, T-AOC, and Gpx were not altered by dietary Pro feeding (p > 0.05). 
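Referring back to the growth indices defined in the Table 1 footnote, the two formulas can be written as a short helper; the input weights and feed amounts below are placeholders, not the values behind Table 1.

```python
import math

def specific_growth_rate(mean_initial_weight, mean_final_weight, days):
    """SGR (% per day) = 100 * [ln(final mean weight) - ln(initial mean weight)] / days."""
    return 100.0 * (math.log(mean_final_weight) - math.log(mean_initial_weight)) / days

def feed_conversion_ratio(feed_consumed, biomass_gain):
    """FCR = total feed consumed / biomass increment."""
    return feed_consumed / biomass_gain

# Placeholder values for a 10-week (70-day) trial, not the data behind Table 1.
print(round(specific_growth_rate(25.0, 60.0, 70), 2),   # ~1.25 %/day
      round(feed_conversion_ratio(1200.0, 800.0), 2))   # 1.5
```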
In the intestines (Figure 1E), the content of GSH was significantly increased in the three Pro-treated groups compared to the NC group (p < 0.05). However, no significant differences were observed in the other parameters among the different treatments (p > 0.05).

Amino Acid and Fatty Acid Composition in Muscle
There were 17 amino acids identified in the muscle tissue of common carp, including 9 essential amino acids (EAAs) and 8 non-essential amino acids (NEAAs) (Table S3). EAAs consisted of Thr, Val, Met, Ile, Leu, Phe, Lys, His, and Arg, whereas NEAAs included Asp, Ser, Glu, Gly, Ala, Cys, Tyr, and Pro. However, there were no differences in the levels of these amino acids between the NC and 0.8 g/kg Pro groups.
Eleven fatty acids were detected in the muscle tissue of common carp, including two saturated fatty acids (SFA) and nine unsaturated fatty acids (NFA) (Table 2).The levels of C16:0, C18:0, C18:3n3, C20:1, C20:2, C22:1n9, and C20:4n6 were significantly higher in the 0.8 g/kg Pro group than in the NC group (p < 0.05).Similarly, Pro treatment exhibited higher levels of total polyunsaturated fatty acids (PUFA), including n-3 and n-6 PUFA (p < 0.05).Additionally, the n-6/n-3 ratio was slightly reduced, while the PUFA/SFA ratio was slightly increased in the Pro group compared to the NC group. Metabolomics Analysis in Muscle One sample from both the NC group and the 0.8 g/kg Pro group was excluded during metabolome sequencing due to sample degradation; thus, the metabolomic analysis was conducted with three pooled samples from each group.Quality control (QC) analysis showed that the sequencing data were acceptable based on the well-clustered QC samples (Figure S2).In the muscle, we identified 3086 metabolites in positive ion mode and 3025 metabolites in negative ion mode (Figure S3A,B).The unsupervised PCA displayed a clear separation between the NC and 0.8 g/kg Pro groups in both positive and negative ion modes (Figure S3C,D).The OPLS-DA results also showed that the metabolites between the NC and 0.8 g/kg Pro groups exhibited different classifications in both positive and negative (Figure S3E,F) ion modes.In addition, cross-validation and permutation test results revealed that the OPLS-DA model for the metabolites was reliable (Figure S3G,H).Dietary 0.8 g/kg Pro supplementation resulted in an increase in 53 metabolites and a decrease in 21 metabolites in positive ion mode, while it led to an increase in 78 metabolites and a decrease in 17 metabolites in negative ion mode, compared to the NC group (Figure 2A,B).In positive ion mode (Figure 2C), the differential metabolites primarily belonged to lipids and lipid-like molecules ( 16), phenylpropanoids and polyketides (8), and organic acids and derivatives (7).In negative ion mode (Figure 2D), the differential metabolites were mainly organoheterocyclic compounds (34), lipids and lipid-like molecules (13), and phenylpropanoids and polyketides (4).KEGG enrichment analysis revealed that the differential metabolites were primarily associated with α-linolenic acid (α-LA) metabolism (q value = 0.024), glycerophospholipid (GP) metabolism (q value = 0.025), arachidonic acid (ARA) metabolism (q value = 0.025), and biosynthesis of UFA (q value = 0.038) (Figure 2E). Transcriptomic Analysis in Intestines The transcriptomic analysis was performed using four pooled samples from each group.After filtering, transcriptome sequencing obtained a total of 5,392,441,669-7,005,632,038 bp of clean reads (Table S4).The quality control results indicated that the sequencing data obtained from the intestines of the NC and 0.8 g/kg Pro groups were highly reliable (Table S4).The PCA demonstrated a distinct separation between the NC and 0.8 g/kg Pro groups, indicating that Pro treatment had a significant impact on the gene expression patterns in the intestinal tissue (Figure S4A).A total of 1909 DEGs were identified between the NC and 0.8 g/kg Pro groups, with 1035 upregulated genes and 874 downregulated genes in the Pro group compared to the NC group (Figure S4B,C). 
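The DEG selection follows the thresholds stated in the Methods (FDR < 0.05 and |log2FC| ≥ 1). A minimal pandas sketch of that filter is shown below; the column names and the example values are assumptions for illustration, not the actual DESeq2 output of this study.

```python
import pandas as pd

def filter_degs(df, fdr_cut=0.05, lfc_cut=1.0):
    """Split a differential-expression table into up- and down-regulated gene lists
    using FDR < 0.05 and |log2FC| >= 1. Column names are assumptions of this sketch."""
    sig = df[(df["padj"] < fdr_cut) & (df["log2FC"].abs() >= lfc_cut)]
    up = sig.loc[sig["log2FC"] > 0, "gene"].tolist()
    down = sig.loc[sig["log2FC"] < 0, "gene"].tolist()
    return up, down

# Minimal example with made-up values.
table = pd.DataFrame({"gene": ["ctsa", "ppara", "gba"],
                      "log2FC": [1.8, -1.4, 0.3],
                      "padj": [0.001, 0.01, 0.6]})
print(filter_degs(table))    # (['ctsa'], ['ppara'])
```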
To gain a deeper understanding of the effects of Pro treatment on biological function in the intestines, we conducted a GO enrichment analysis, focusing on three main categories: molecular function, cellular component, and biological process (Figure S5A).In the biological process, the DEGs were highly associated with lipid metabolic process (p.adjust < 0.001) and anion transport (p.adjust < 0.001) (Figure S5B).In the molecular function, the DEGs were primarily involved in exopeptidase activity (p.adjust < 0.001) and organic acid transmembrane transporter activity (p.adjust < 0.001) (Figure S5C).In the cellular component, the DEGs were primarily enriched in lysosome (p.adjust < 0.001) and vacuole (p.adjust < 0.001) (Figure S5D). In the lipid metabolic process, sphingolipid catabolic process, glycolipid catabolic process, and lipid catabolic process were significantly affected by dietary 0.8 g/kg Pro supplementation (p.adjust < 0.001; Figure 3A).Furthermore, GSEA showed that these processes exhibited a high enrichment in 0.8 g/kg Pro group.Specifically, the sphingolipid catabolic process and glycolipid catabolic process showed statistically significant differences (Figure 3B).However, intestinal cholesterol absorption exhibited significantly lower enrichment in 0.8 g/kg Pro treatment (Figure 3A,B).In GO terms related to ion transport, specifically, organic anion transport, carboxylic acid transport, anion transmembrane transport, and carboxylic acid transmembrane transport were noticeably altered by 0.8 g/kg Pro treatment (Figure 3C).Further analysis using GSEA confirmed that these processes exhibited a higher enrichment in the 0.8 g/kg Pro group, with a significant difference in organic anion transport, anion transmembrane transport, and carboxylic acid transmembrane transport (Figure 3D). 
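The adjusted p-values reported for the enriched GO terms are not accompanied by a statement of the underlying test; a common choice for such over-representation analyses is the hypergeometric test, sketched below with made-up counts (this is an assumption about the method, not a statement of what the authors used).

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_term_background, n_background, n_deg, n_term_deg):
    """P(X >= n_term_deg) under the hypergeometric model: n_background genes in total,
    n_term_background of them annotated with the term, n_deg genes selected as DEGs,
    n_term_deg of the DEGs annotated with the term."""
    return hypergeom.sf(n_term_deg - 1, n_background, n_term_background, n_deg)

# Made-up counts: 30 of 1909 DEGs carry a term annotated to 150 of 25000 genes.
print(go_enrichment_p(150, 25000, 1909, 30))
```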
To further investigate the key signaling pathways, we performed KEGG enrichment analysis using the DEGs (Figure S6). The DEGs were found to be enriched in five KEGG A classes: metabolism, organismal systems, cellular processes, genetic information processing, and environmental information processing (Figure S6A). The top 10 pathways were primarily associated with lysosome and metabolism function, specifically, sphingolipid metabolism and glycosphingolipid biosynthesis (Figure S6B,C). In the lysosome pathway, 34 genes were upregulated and 2 genes were downregulated following 0.8 g/kg Pro treatment (q < 0.0001; Figure 4A). GSEA confirmed the strong enrichment of the lysosome pathway in the 0.8 g/kg Pro group (Figure 4B). Further, we validated the expression of key genes involved in the lysosome pathway, including ctsα, galcα, gba, asah1b, acp5b, and pla2g15, using qPCR (Figure 4C), and the data revealed a clear positive correlation with the RNA-seq results (r = 0.864, p = 0.027; Figure 4D).

It is important to highlight that 0.8 g/kg Pro treatment had a significant impact on the PPAR signaling pathway (q = 0.0029; Figure 5A). Specifically, 11 genes, including pparα (a master regulator in the PPARα signaling pathway), were downregulated, while 1 gene was upregulated, following 0.8 g/kg Pro treatment. Meanwhile, the GSEA results also indicated that the PPAR signaling pathway tended to be downregulated in the 0.8 g/kg Pro group (Figure 5B). Furthermore, the qPCR results showed a significantly positive correlation with RNA-seq, indicating the credibility of the transcriptome results (r = 0.915, p = 0.004; Figure 5C,D).
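The agreement between qPCR and RNA-seq reported above (r = 0.915) is a simple Pearson correlation between the two sets of expression changes. A minimal sketch of such a check is shown below; the fold-change values are hypothetical and are not the data behind Figures 4D and 5D.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical log2 fold changes (Pro vs NC) for the six validated lysosome genes;
# these are not the values behind the figures.
genes     = ["ctsa", "galca", "gba", "asah1b", "acp5b", "pla2g15"]
rnaseq_fc = np.array([1.6, 1.2, 2.1, 0.9, 1.4, 1.1])
qpcr_fc   = np.array([1.4, 1.0, 2.3, 0.7, 1.6, 1.0])

r, p = pearsonr(rnaseq_fc, qpcr_fc)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```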
Intestinal Microbiota Characteristics
Microbiota characteristics were analyzed using four pooled samples from each group. Venn diagram analysis indicated that the total number of ASVs in the two groups was 6049 (Figure S7A). The Pro group had a higher total number of ASVs (3618) compared to the NC group (2899) (p > 0.05). Furthermore, there were 468 ASVs that were shared between the NC and 0.8 g/kg Pro groups. At the phylum level, 23 out of the total 29 phyla were found to be common to both the NC and 0.8 g/kg Pro groups (Figure S7B). At the genus level, 167 out of 336 genera were shared between the NC and 0.8 g/kg Pro groups (Figure S7C).

Indicator species analysis showed that Fusobacteriota had a higher relative abundance, while Proteobacteria, Firmicutes, and Spirochaetota exhibited lower relative abundance in the 0.8 g/kg Pro group compared to the NC group (Figure 6C). Similarly, at the genus level, the relative abundances of Cetobacterium, Pirellula, alphaI_cluster, Planctopirus, and Pseudorhodobacter were higher, whereas the relative abundances of ZOR0006, Vibrio, Brevinema, and Lactobacillus were lower in the 0.8 g/kg Pro group compared to the NC group (Figure 6D). Notably, the Firmicutes/Bacteroidetes (F/B) ratio showed a decrease in the 0.8 g/kg Pro group compared with the NC group (Figure 6E).
According to α-diversity analysis, we observed a non-significant elevation in the Chao1 and Shannon indices (p > 0.05; Figure 7A,B) and a significant increase in Simpson's index of diversity (p < 0.05; Figure 7C) in the 0.8 g/kg Pro group compared to the NC group. As for β-diversity, both PCA and PCoA analyses indicated that the samples of the NC and 0.8 g/kg Pro groups formed distinct clusters (Figure 7D,E). Furthermore, Anosim analysis revealed a significant difference in the microbial community composition between the NC and 0.8 g/kg Pro groups (r = 0.75, p = 0.026; Figure 7F).

We further predicted and analyzed nine potential bacterial phenotypes in the NC and 0.8 g/kg Pro groups (Figure 8A). Compared with the NC group, the relative abundances of mobile-element-containing, facultatively anaerobic, potentially pathogenic, and stress-tolerant bacteria were significantly lower in the intestine of the 0.8 g/kg Pro group (p < 0.05; Figure 8B). Tax4Fun analysis revealed that, at the level 2 KEGG pathways, signal transduction, cell motility, and metabolism of terpenoids and polyketides were decreased in the 0.8 g/kg Pro group compared to the NC group (p < 0.05; Figure 8C). At the level 3 KEGG pathways, several pathways were found to be enhanced in the 0.8 g/kg Pro group, including purine metabolism, aminoacyl-tRNA biosynthesis, terpenoid backbone biosynthesis, tetracycline biosynthesis, and primary bile acid biosynthesis (p < 0.05; Figure 8D), suggesting that Pro mediated microbe-microbe and microbe-host interactions.

Interactions between Intestinal Microbes and Lipid Metabolism
To investigate the relationship between microbes and host genes in the intestines of common carp after 0.8 g/kg Pro feeding, we conducted a correlation analysis between 75 DEGs enriched in lipid metabolism and 15 microbial taxa at the genus level (Figure 9). As shown in Figure 9A, we observed significant positive correlations (p < 0.05) between the majority of DEGs involved in the lysosome pathway and sphingolipid metabolism and the relative abundance of Gemmobacter, Mycobacterium, Pirellula, and Parabacteroides. Additionally, several notable negative correlations (p < 0.05) were observed between dominant taxa, such as Aeromonas and ZOR0006, and specific genes associated with the lysosome pathway, including cystinosin, lysosomal alpha-mannosidase, and acid ceramidase.
We also specifically investigated the correlation between the microbiota and the PPAR pathway, and visually depicted the significant gene-microbe correlations (p < 0.05; Figure 9B). The results demonstrated a statistically significant positive correlation between predominant taxa, specifically Aeromonas and ZOR0006, and the majority of genes in the PPAR pathway. Conversely, a significant negative correlation was observed between Cetobacterium, which was significantly increased in the 0.8 g/kg Pro group, and the majority of these genes.

In addition to the overall network, Figure 9C illustrates the correlation between representative gene expression and microbial taxa, both of which have previously been linked to host health and thus may be of interest. The pparα gene, a crucial regulator of lipid metabolism, was significantly negatively correlated with Cetobacterium (r = −0.834, p = 0.01). Similarly, cpt1, a vital enzyme involved in fatty acid oxidation, exhibited a negative correlation with Cetobacterium (r = −0.752, p = 0.031). Furthermore, both gsta (r = 0.874, p = 0.008) and gba (r = 0.949, p < 0.001) showed a positive correlation with Pirellula. CTSA (cathepsin A) is a key proteolytic enzyme in the lysosome, while GBA (glucosylceramidase) is a crucial enzyme in the catabolic pathway of glucosylceramide, a membrane sphingolipid and a precursor for various glycolipids.
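The gene-microbe network in Figure 9 is built from pairwise correlations between DEG expression and genus-level abundances. The sketch below computes Pearson correlations for every gene-genus pair and keeps those with p < 0.05; the matrices and names are hypothetical, and the sample size (eight) is only for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

def gene_microbe_correlations(expr, abund, gene_names, taxa_names, alpha=0.05):
    """Pearson correlation between every gene (rows of expr) and every genus
    (rows of abund) across the same samples; returns pairs with p < alpha."""
    hits = []
    for i, g in enumerate(gene_names):
        for j, t in enumerate(taxa_names):
            r, p = pearsonr(expr[i], abund[j])
            if p < alpha:
                hits.append((g, t, round(r, 3), round(p, 4)))
    return hits

# Hypothetical data: 2 genes x 8 samples and 2 genera x 8 samples.
rng = np.random.default_rng(1)
expr = rng.normal(size=(2, 8))
abund = np.vstack([expr[0] * 0.8 + rng.normal(scale=0.2, size=8),  # tracks gene 1
                   rng.normal(size=8)])                            # unrelated genus
print(gene_microbe_correlations(expr, abund, ["gene1", "gene2"], ["genusA", "genusB"]))
```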
Discussion

Pros are a type of polyphenolic compound known for their potent antioxidant activity. They are abundantly present in various plant sources, such as grape seed, pinto bean, and blueberry. Emerging research has highlighted the important role of Pros in promoting animal growth, maintaining health, and preventing disease. Nevertheless, the effects of Pros on growth performance, muscle quality, and nutrients, as well as on the intestinal microbiota and function, may vary across different animal species.
The Effect of Pros on Growth Performance

Pros have reportedly been used as appetite stimulants and growth promoters in cultured fish. Adding Pros to the diet improved weight gain and feed utilization in juvenile American eel, European eel, and tilapia [20,21,34]. In addition, dietary Pro has been shown to ameliorate the growth retardation caused by histamine or cadmium stress in American eel, pearl gentian grouper, and tilapia [22,23,35]. In line with previous studies, the present study found that common carp fed 0.4 and 0.8 g/kg Pro exhibited higher growth performance, suggesting that Pro, as a feed additive, can effectively promote the growth of common carp. At present, the mechanism underlying the beneficial effect of Pros on fish growth remains unknown, but potential explanations have been proposed in previous studies. Some studies have suggested that Pros can enhance the activities of intestinal digestive enzymes, thereby improving the feed utilization rate [36,37]. Furthermore, other studies have indicated that Pros could regulate the composition of the intestinal microbial community, maintain intestinal health, and promote nutrient absorption, ultimately enhancing growth [38].

The Effect of Pros on Antioxidant Capacity

Animal experiments have shown that Pro treatment decreased the levels of reactive oxygen species (ROS) in different tissues and cells [10,39]. Pros also have the potential to improve cellular antioxidant systems, such as the SOD, catalase, and Gpx systems [40]. The antioxidant properties of Pros could potentially lead to several beneficial effects, including anti-inflammatory, antimicrobial, anticarcinogenic, hypolipemic, and antihyperalgesic activities [10]. It has been reported that Pro treatment can effectively prevent the formation of H2O2, protein oxidation, and DNA damage in cells by enhancing antioxidant defense compounds such as Gpx, SOD, catalase, and GSH [15]. In fish, Pro treatment has been found to increase antioxidant enzymes (e.g., SOD and Gpx) and non-enzymatic antioxidants (e.g., GSH) in the serum of European eels [34] and hybrid sturgeon [36]. Consistent with previous studies, our data also showed increased levels of SOD (in liver and muscle), T-AOC (in liver and muscle), GSH (in serum, muscle, gills, and intestines), and Gpx (in liver) after 0.4 and 0.8 g/kg Pro feeding. These results demonstrate that Pros can significantly improve antioxidant ability in aquatic animals. Interestingly, in the muscle, almost all antioxidant parameters were enhanced after the administration of Pros.
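The figure legends note that "different letters above the bars indicate significant differences", which typically reflects a one-way ANOVA followed by a post hoc test across the diet groups. Below is a hedged sketch of such a comparison for a single antioxidant parameter; the data frame `df`, its column names, and the choice of Tukey's HSD are illustrative assumptions rather than the authors' documented procedure.

```python
# Hypothetical sketch: compare one antioxidant parameter (e.g., liver SOD)
# across diet groups with one-way ANOVA plus Tukey's HSD post hoc test.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(df: pd.DataFrame, value_col: str = "sod", group_col: str = "group"):
    groups = [g[value_col].values for _, g in df.groupby(group_col)]
    f_stat, p_val = f_oneway(*groups)                        # overall group effect
    tukey = pairwise_tukeyhsd(df[value_col], df[group_col])  # pairwise comparisons
    return f_stat, p_val, tukey

# Example usage with a hypothetical table:
# f, p, tukey = compare_groups(df)
# print(tukey.summary())  # pairs with reject=True differ at p < 0.05;
#                         # compact letter displays summarize these pairwise results.
```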
The intrinsic antioxidant defense system of fish can be compromised under adverse conditions, resulting in the excessive production of ROS that cause lipid peroxidation by reacting with unsaturated fatty acids. Pros have been shown to scavenge free radicals and inhibit lipid peroxidation in the muscle of finishing pigs [41]. Similarly, in hybrid sturgeons, the administration of 50 and 100 mg/kg Pro effectively inhibited MDA formation [36]. Moreover, Pro treatment also mitigated the lipid peroxidation induced by Cd stress in the intestine of pearl gentian grouper [22]. In agreement with previous studies, our data showed that lipid peroxidation was inhibited by the 0.4 and 0.8 g/kg Pro treatments. These results highlight the ability of Pro to prevent lipid peroxidation. It is worth noting that Pro treatment effectively suppressed lipid peroxidation in muscle, potentially enhancing the nutritional quality of the meat, since low lipid peroxidation may limit the degradation of PUFAs and vitamins and the formation of harmful substances [42].

The Effect of Pros on Muscle Nutrient Quality

The muscle of fish is the primary edible portion and a valuable source of nutrients for humans. Amino acid composition is a vital determinant of the nutritional quality of fish muscle. The addition of Pros to the diet was found to increase the crude protein content of the body in American eel [21] and tilapia [20], but the impact of Pros on the amino acid composition of fish muscle remains unclear. This study showed that the amino acid composition, including EAA and NEAA, was not significantly changed in the muscle of common carp after Pro treatment. Similar effects of Pro were also observed in finishing pigs [41]. However, we found that dietary Pro supplementation significantly changed the levels of some amino acid analogs, such as increased DL-methionine and L-glutamic acid, in the muscle of common carp. DL-methionine is not only a vital source of methionine but also serves as a precursor to essential intermediates such as glutathione [43]. Glutamic acid is considered a flavor-related amino acid that enhances the flavor of fish flesh [44]. Therefore, the increased levels of DL-methionine and L-glutamic acid may have a positive effect on flesh quality in the muscle of common carp.
The composition of fatty acids is also a crucial factor in evaluating the nutritional and health benefits of fish muscle. A high percentage of PUFAs in food has been linked to improved fetal development, brain health, and a reduced risk of coronary heart disease in humans [44]. In this study, the content of PUFAs, including both n-3 and n-6, in muscle was significantly increased by dietary supplementation with 0.8 g/kg Pro. More specifically, Pro treatment increased the levels of LA, docosapentaenoic acid (DPA), and ARA in the muscle. Similar findings were also observed in the muscle tissue of pigs treated with Pros [40]. n-3 PUFAs have been confirmed to possess properties that improve anti-inflammation, antioxidant capacity, and meat nutritional value [45]. LA, an essential fatty acid (EFA), exhibits cardiovascular-protective, neuroprotective, anti-osteoporotic, anti-inflammatory, and antioxidative effects [46]. ARA is also an EFA that maintains normal health and plays a vital role in the functioning of all cells, especially in the nervous system, skeletal muscles, and immune system [47]. DPA is an important long-chain n-3 PUFA that can serve as a dietary source of eicosapentaenoic acid and is known to improve risk markers associated with cardiovascular and metabolic diseases [48]. Here, the increases in the levels of LA, DPA, ARA, and n-3 PUFAs suggest that dietary Pro supplementation may improve the nutritional and health benefits of common carp. Moreover, we found that the α-LA metabolism, ARA metabolism, and biosynthesis of UFA pathways were significantly altered by Pro treatment, which may explain the increase in the levels of these PUFAs.

PC and PE are the most abundant phospholipids in all types of mammalian cells and subcellular organelles, playing a crucial role in regulating lipid, lipoprotein, and energy metabolism [49]. They also act as reservoirs for essential PUFAs such as DHA and ARA [50]. Furthermore, changes in PC and PE are associated with the formation of volatile flavor compounds in muscle tissue [51]. In this study, metabolome analysis revealed an increase in the levels of PC (16:0/15:0), PC (22:5/15:0), and PC (18:0/15:0), but a decrease in the levels of PE (16:0/18:2) and PC (22:6/22:6), in muscle tissue after Pro treatment. We hypothesize that these changes in PC and PE levels may affect the nutritional value and volatile flavor of the muscle, but the detailed mechanism underlying this phenomenon still requires further investigation. In addition, the decreased PE may prevent coalescence of the lipid droplets in muscle tissue [52].
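For the fatty-acid comparisons discussed above (summarized in Table 2), group totals such as SFA, MUFA, PUFA, n-3, and n-6 are typically derived by summing the individual fatty acids per replicate before testing. The sketch below shows one plausible way to do this; the tidy table `fa`, its column names, and the use of a two-sample t-test are assumptions for illustration only.

```python
# Hypothetical sketch: sum PUFA percentages per replicate and compare groups.
import pandas as pd
from scipy.stats import ttest_ind

def total_pufa_per_replicate(fa: pd.DataFrame) -> pd.DataFrame:
    """fa columns assumed: group, replicate, fatty_acid, class, percent."""
    pufa = fa[fa["class"] == "PUFA"]
    return pufa.groupby(["group", "replicate"])["percent"].sum().reset_index()

def compare_total_pufa(fa: pd.DataFrame, g1: str = "NC", g2: str = "Pro"):
    totals = total_pufa_per_replicate(fa)
    a = totals.loc[totals["group"] == g1, "percent"]
    b = totals.loc[totals["group"] == g2, "percent"]
    return ttest_ind(a, b)  # two-sample t-test on replicate-level totals

# t_stat, p_val = compare_total_pufa(fa)
```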
Lipids are vital factors that affect meat quality, and excessive lipid content can reduce both meat quality and feed efficiency. Among the various types of lipids, triglycerides (TGs) are the most abundant in the muscle of many fishes [53]. Lowering TG content can reduce the risk of cardiovascular disease in consumers [54]. Previous studies have suggested that flavonoids can decrease the TG content of meat and increase the proportion of total PUFAs in breast muscle [55]. Consistent with these findings, the contents of TGs, including TG (18:1/22:2/o-18:0), TG (16:1/24:1/o-18:0), and TG (18:0/22:4/o-18:0), in the muscle were reduced by Pro treatment. This reduction could potentially benefit the health of consumers who eat carp. Animal model studies have demonstrated that Pros have a positive effect on TG metabolism, such as reducing plasma TGs and controlling endogenous hepatic lipid production (reviewed in [56]). Based on these findings, we hypothesize that this is a possible explanation for the decrease in TG levels in muscle tissue.

It is intriguing to note that the levels of 13 bioactive compounds, including 8 flavonoids, 2 isoflavonoids, 2 tannins, and 1 coumarin, significantly increased in the muscle of common carp fed a diet containing Pro. These bioactive compounds have been proven to possess health benefits, such as antioxidant, anti-inflammatory, neuroprotective, and hepatoprotective effects, in animals [57]. They can also improve meat quality by regulating lipid metabolism and antioxidant capacity [58]. The increase in bioactive compounds in common carp muscle may be attributed to the absorption, metabolism, and accumulation of Pro. It has been reported that Pros may undergo direct absorption in the proximal intestinal tract or be absorbed in the gastrointestinal tract after being metabolized by the gut microbiota in mammals [59,60]. Absorbed Pros and their metabolites can be transported to other organs through the circulation, exerting health-promoting effects [38]. However, the mechanism of absorption and metabolism of Pros in fish remains unclear due to significant differences in intestinal structure between fish and mammals.
The Effect of Pros on Lipid Metabolism in the Intestines

Intestinal barrier integrity is crucial for maintaining intestinal health, as it not only aids in nutrient absorption but also shields against harmful substances. Numerous studies have shown that Pro administration can ameliorate intestinal dysbiosis caused by dietary factors [61]. Pros have protective effects against the inflammatory response in the intestinal barrier [62] and prevent metabolic syndrome by regulating intestinal function and energy metabolism [63]. Moreover, previous studies have reported that Pro administration regulates intestinal lipid homeostasis to improve cardiometabolic disorders [64]. Our study found that Pro treatment resulted in a high enrichment of lipid catabolic processes, including the glycolipid and sphingolipid catabolic processes, suggesting that Pro treatment may enhance lipid catabolism. Increased lipid catabolism may inhibit the accumulation of lipids in the intestines and increase the availability of fatty acids in the body [65,66]. This process may also provide a source of energy and help maintain normal metabolic activity in the intestines. Dietary polyphenols have been reported to improve glycolipid metabolism disorders in animals [67]. For instance, Xu et al. (2019) found that Pro treatment ameliorated intestinal barrier dysfunction induced by a high-fat diet in rats by modulating glycolipid digestion [68]. In this study, the upregulation of the glycolipid catabolic process may indicate a beneficial effect of Pros in inhibiting cellular glycolipid accumulation. Sphingolipids are among the most important membrane lipids, participating in the formation of membrane microdomains; however, the abnormal accumulation of sphingolipids in cells has been associated with metabolic disorders [69]. In this study, the upregulation of the sphingolipid catabolic process may influence cell signaling, cellular homeostasis, and immune regulation in the intestines. Notably, the changes in the sphingolipid catabolic process were consistent with alterations in the lysosomal pathway, indicating that sphingolipids were possibly degraded via lysosomal catabolic pathways [70]. Lysosomes, small organelles that contain various hydrolytic enzymes such as proteases, lipases, and nucleases, can promote lipid catabolism and transport [71]. Disruption of lysosome function is considered a key factor leading to metabolic derangement and neurodegeneration [72]. Therefore, the upregulation of the lysosomal pathway in this study indicates that Pro treatment may contribute to maintaining cellular lipid homeostasis and preventing metabolic diseases.

It has been confirmed that Pro has a hypolipidemic effect, and one possible mechanism may be related to the delay of cholesterol and lipid absorption in the intestines [56]. Pro supplementation reduced cholesterol absorption by increasing the excretion of neutral steroids and bile acids [73,74]. Treatment with red wine polyphenolics (containing Pros) has been found to reduce free cholesterol and total cholesterol in Caco-2 cells [75]. In this study, Pro treatment caused lower enrichment of gene expression in intestinal cholesterol absorption, suggesting that Pros interfere with this process. Meanwhile, we also observed a significant downregulation in the mRNA level of CYP27A (an important enzyme regulating cholesterol metabolism), suggesting that Pro treatment may exert inhibitory effects on cholesterol metabolism in the intestines.
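The enrichment statements above (e.g., "high enrichment of lipid catabolic processes") rest on statistical enrichment testing; this study reports GSEA, but a simpler over-representation test is often used for the same question. As a minimal illustration, and explicitly not the authors' pipeline, the sketch below uses a hypergeometric test to ask whether a GO term's genes are over-represented among the DEGs. All counts in the example are placeholders.

```python
# Hypothetical sketch: hypergeometric over-representation test for one GO term.
from scipy.stats import hypergeom

def go_overrepresentation(n_universe: int, n_term: int, n_deg: int, n_overlap: int) -> float:
    """P(X >= n_overlap) when drawing n_deg genes from a universe of n_universe
    genes, of which n_term are annotated to the GO term."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_term, n_deg)

# Example with made-up numbers:
# p = go_overrepresentation(n_universe=20000, n_term=150, n_deg=800, n_overlap=20)
```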
It is worth noting that our results also indicated a downregulation of the PPARα signaling pathway after Pro administration, which may further affect lipid absorption. PPARα has been identified as a key regulator of lipid metabolism, involved in various processes including fatty acid transport, synthesis, and oxidation, as well as lipogenesis [76]. In Caco-2 cells, Pro treatment repressed pparα and acsl3 gene expression to decrease TG secretion [77]. However, contradictory results have also been reported, in which Pro treatment upregulated PPARα and CPT1 and downregulated ACC and SREBP1 to modify intestinal lipid homeostasis [64]. In this study, the decreased expression of pparα further inhibited its target genes, such as apoa1, maeB, scd, CD36, acsl, and cpt1, suggesting that lipogenesis, fatty acid transport, fatty acid oxidation, and lipid transport may be suppressed by Pro treatment in the intestines of common carp.

The Effect of Pros on Intestinal Microbiota

The intestinal microbiota is increasingly being linked to fish health, playing a crucial role in regulating intestinal immunity, nutrient absorption, and overall host health. Dietary Pro has been studied in relation to the intestinal microbiota in animals, but the findings are inconsistent and contentious, possibly due to variations in Pro sources or types, as well as differences in animal models [37]. In a piglet model, Pro treatment increased the abundance and diversity of intestinal bacteria [78]. Similarly, Pro treatment reversed the decrease in α-diversity induced by a high-fat diet in C57BL/6J mice [79]. In this study, the Pro group had a higher number of ASVs and a significantly higher Simpson's index of diversity in intestinal bacteria compared to the NC group. Together with the β-diversity analysis, this suggests that Pro treatment may enhance the diversity of the intestinal microbiota in common carp. Interestingly, we also found that Pro treatment altered the relative abundances of the top four phyla and genera of the intestinal microbiota and resulted in a more balanced distribution among dominant taxa.

In animal intestines, the Firmicutes and Bacteroidetes phyla are dominant and play essential roles in promoting host health, boosting immunity, and maintaining homeostasis [80]. An elevated F/B ratio is linked with metabolic syndromes such as obesity and chronic inflammation [81]. A marked reduction in the F/B ratio was observed in ovariectomized mice after treatment with Pros (grape seed extract) [82]. Moreover, Pro treatment has been shown to alter the gut microbiota by increasing Bacteroidetes and decreasing Firmicutes, effectively alleviating metabolic syndrome induced by a high-fat diet (reviewed by Redondo-Castillejo et al.
[83]). Consistent with previous findings, our data also revealed a significant decrease in Firmicutes abundance and a 2.35-fold increase (although p > 0.05) in Bacteroidota abundance in the Pro group. Meanwhile, the F/B ratio was significantly decreased by Pro treatment. These data suggest that dietary Pro may help prevent metabolic syndrome in the intestines of common carp. In addition, some studies have found that dysbiotic Proteobacteria expansion is associated with epithelial dysfunction and inflammation [83,84]. Our results showed a lower Proteobacteria abundance in the Pro group, suggesting that Pro may have potential protective effects against epithelial dysfunction and inflammation in the intestines. Further, we examined the impact of Pro on the intestinal microbiota at the genus level. Compared with the NC group, there was a higher relative abundance of Cetobacterium and Pirellula, and a lower relative abundance of Vibrio. Cetobacterium is a dominant taxon of the intestinal microbiota, involved in maintaining fish health, enhancing nutrition, and providing protection against pathogenic bacteria [85,86]. Pirellula, an important bacterial genus in fish intestines, has the potential to act as a probiotic, positively influencing fish growth [87]. Vibrio is also an important and diverse genus of bacteria, widely distributed in fish intestines and the aquatic environment; however, a specific group within the Vibrio genus consists of several highly pathogenic species, such as V. anguillarum, V. harveyi, and V. alginolyticus, that can cause disease in aquatic animals [88]. Based on previously published results and our data, we hypothesize that the increase in Cetobacterium and Pirellula, coupled with the decrease in Vibrio, in the Pro group may indicate positive effects on the resistance of common carp to pathogenic bacterial infection. Of interest, our phenotype analysis also showed a significant decrease in potential pathogenicity in the Pro group. It is worth noting that the changes in the intestinal microbiota showed a significant correlation with intestinal lipid metabolism. Taken together, it can be deduced that dietary Pro may enhance disease resistance, as well as regulate intestinal lipid metabolism in common carp, by altering the diversity and abundance of the intestinal microbiota. However, further research is needed to elucidate the detailed underlying mechanism. In addition, Pro treatment also resulted in changes in some microbiota at the genus level, such as ZOR0006, Brevinema, Planctopirus, and Pseudorhodobacter. However, the potential effects of these changes on the host remain uncertain, as there is limited knowledge regarding these bacteria.

Figure 1. Effects of dietary Pro on antioxidant capacity of Cyprinus carpio after 10 weeks of farming. (A-E) Antioxidant parameters in the serum, liver, muscle, gill, and intestine, respectively. The results are expressed as the mean ± SEM (n = 9). Different letters above the bars indicate significant differences for each parameter between groups (p < 0.05).
Figure 2. Differential metabolites in the muscle of C. carpio between the NC (MNC) and 0.8 g/kg Pro-fed groups (MPro). (A,B) Volcano plots of the differential metabolites in positive (Pos) and negative (Neg) ion modes. (C,D) Numbers and classification of the differential metabolites in Pos and Neg ion modes. (E) Main KEGG pathways enriched by the differential metabolites in Pos and Neg ion modes. (F) Differential metabolites related to lipids and lipid-like molecules (FA, fatty acyls; GP, glycerophospholipids; SD, steroids and steroid derivatives; GL, glycerolipids; SP, sphingolipids; and PL, prenol lipids). (G) Differential metabolites related to phenylpropanoids and polyketides. (H) Differential metabolites related to amino acids.

Figure 3. Effects of dietary Pro supplementation on lipid metabolism and ion transport in the intestines of C. carpio. (A) Differential GO terms related to lipid metabolism. (B) GSEA for the GO terms related to lipid metabolism. (C) Differential GO terms related to ion transport. (D) GSEA for the GO terms related to ion transport.

Figure 4. Changes in the lysosome pathway in the intestines of C. carpio between the NC and Pro groups. (A) DEGs in the KEGG lysosome pathway. (B) GSEA for the KEGG lysosome pathway. (C) Key gene expression in the lysosome pathway measured by qPCR, with values expressed as the mean ± SEM (n = 4), * p < 0.05 and ** p < 0.01. (D) Correlation between the qPCR and RNA-seq data.

Figure 5. Changes in the PPAR signaling pathway in the intestines of C. carpio between the NC and Pro groups. (A) DEGs in the PPAR signaling pathway and possible mechanism regulating lipid metabolism. (B) GSEA for the PPAR signaling pathway. (C) Key gene expression in the PPARα signaling pathway measured by qPCR, with values expressed as the mean ± SEM (n = 4), * p < 0.05 and ** p < 0.01. (D) Correlation between the qPCR and RNA-seq data.
Figure 6. The proportions of intestinal microbiota in the NC and Pro groups. (A,B) The top 10 phyla and genera of intestinal microbiota in the two groups. (C,D) Differential phyla and genera in intestinal microbiota between the NC and Pro groups (Wilcoxon test, p < 0.05). (E) Firmicutes/Bacteroidetes (F/B) ratio in the intestinal microbiota collected from the NC and Pro groups.

Figure 8. The analyses for potential phenotypes and functions of intestinal microbiota in the NC and Pro groups. (A) Relative abundance of nine potential phenotypes of intestinal microbiota. (B) Differential phenotypes of the intestinal microbiota between the NC and Pro groups. (C) Differential pathways at KEGG level 2 between the NC and Pro groups. (D) Differential pathways at KEGG level 3 between the NC and Pro groups. Statistical significance was identified by Welch's t-test with p < 0.05.

Figure 9. Interactions between the intestinal microbes and DEGs related to lipid metabolism. (A) Correlation heatmap of microbe-DEGs. (B) Network plot of significant microbe-gene (PPAR signaling pathway) correlations; red lines indicate positive correlations and blue lines indicate negative correlations. (C) Correlation plots of representative gene-microbe combinations.

Table 1. Effects of dietary Pro on growth performance of Cyprinus carpio after 10 weeks of farming.

Table 2. Hydrolyzed fatty acid composition in muscle of C. carpio fed a normal diet (NC) and a Pro-supplemented diet (Pro). SFA, saturated fatty acid; MUFA, monounsaturated fatty acid; PUFA, polyunsaturated fatty acid. All values are expressed as the mean ± SEM (n = 4). * and ** indicate significant differences (p < 0.05 and p < 0.01) between the NC and 0.8 g/kg Pro groups; ns indicates no significant difference.