id stringlengths 3 9 | source stringclasses 1 value | version stringclasses 1 value | text stringlengths 1.54k 298k | added stringdate 1993-11-25 05:05:38 2024-09-20 15:30:25 | created stringdate 1-01-01 00:00:00 2024-07-31 00:00:00 | metadata dict |
|---|---|---|---|---|---|---|
4512664 | pes2o/s2orc | v3-fos-license | Depression and psychodynamic psychotherapy
Depression is a complex condition, and its classical biological/psychosocial distinction is fading. Current guidelines are increasingly advocating psychotherapy as a treatment option. Psychodynamic psychotherapy models encompass a heterogeneous group of interventions derived from early psychoanalytic conceptualizations. Growing literature is raising awareness in the scientific community about the importance of these treatment options, as well as their favorable impact on post-treatment outcomes and relapse prevention. Considering the shifting paradigm regarding treatment of depressive disorder, the authors aim to provide a brief overview of the definition and theoretical basis of psychodynamic psychotherapy, as well as evaluate current evidence for its effectiveness.
Introduction
Depression is considered a frequent and complex condition. According to the World Health Organization, it is expected to be the third leading cause of disability worldwide by 2020. 1 The lifetime prevalence of major depressive disorder (MDD) is estimated at around 2-20%. The Global Burden of Disease Study 2010 2 revealed it as the second most prevalent cause of illness-induced disability, affecting people of all ages and social status, and a major impact factor in social, professional, and interpersonal functioning. Mathers et al. 3 predicted MDD as the leading worldwide cause of disease burden in highincome countries by the year 2030. The decrement in health associated with depression is described as significantly greater than that associated with other chronic diseases. 4 More than 60% of patients with MDD have a clinically significant impairment in their quality of life. 5 Common features of all depressive disorders include the presence of sad or irritable mood, accompanied by somatic and cognitive changes that significantly affect the individual's capacity to function. 6 Overall, depression is characterized by a general feeling of sadness, anhedonia, avolition, worthlessness, and hopelessness. Cognitive and neurovegetative symptoms, such as difficulty in concentrating, memory alterations, anorexia, and sleep disturbances, are also present.
Various known risk factors for depression have been recorded in the literature: female gender, older age, poorer coping abilities, physical morbidity, impaired level of functioning, reduced cognition, and bereavement. Depression has been associated with an increased risk of mortality and poorer treatment outcomes in physical disorders. 7 Although not fully understood, psychological, social and biological processes are thought to overdetermine the etiology of depression; comorbid psychiatric diagnoses (e.g., anxiety and various personality disorders) are common in depressed people. 8 The classical biological/psychosocial distinction, which separates psychotherapy from pharmacotherapy as treatment options for depression, is fading out. Growing evidence from the neuroscientific literature supports similar (and different) changes in brain functioning with these approaches, concluding that both psychotherapy and pharmacotherapy are biological treatments, and that there is no legitimate ideological justification for the decline of the former. 9 Understandably, current treatment guidelines 10,11 for depressive disorders are increasingly advocating psychotherapy as a treatment option, alone or in combination with antidepressant medications.
Considering this shifting paradigm regarding treatment of depressive disorder, the authors aim to evaluate current evidence for the effectiveness of psychodynamic psychotherapy (PDP) in depression. A brief clarification of the definition of PDP and its theoretical basis for understanding depression are also presented.
Methods
A narrative review was performed, including recent and current published papers on PDP and its role as a treatment modality in depressive disorders. Recent empirical studies were also included in order to integrate authors' critical perspectives, supported by classical and contemporary literature. theory, self-psychology, and attachment theory. Treatment goals or focus and setting changes have been reconsidered by contemporary authors. Gabbard 12 described PDP's basic principles as: much of mental life is unconscious; childhood experiences, in concert with genetic factors, shape the adult; the patient's transference to the therapist is a primary source of understanding; the therapist's countertransference provides valuable understanding about what the patient induces in others; the patient's resistance to the therapeutic process is a major focus of therapy; symptoms and behaviors serve multiple functions, and are determined by complex and often unconscious forces; finally, the psychodynamic therapist assists the patient in achieving a sense of authenticity and uniqueness.
PDP operates on an interpretive-supportive continuum. Interpretive interventions enhance the patient's insight about repetitive conflicts sustaining his or her problems. The prototypic insight-enhancing intervention is an interpretation by which unconscious wishes, impulses, or defense mechanisms are made conscious. Supportive interventions aim to strengthen abilities (''ego functions'') that are temporarily not accessible to a patient due to acute stress or that have not been sufficiently developed. Thus, supportive interventions maintain or build ego functions. Supportive interventions include, for example, fostering a therapeutic alliance, setting goals, or strengthening ego functions such as reality testing or impulse control. The use of more supportive or more interpretive (insightenhancing) interventions depends on the patient's needs. 13 Common factors of psychotherapy and specific features of the psychodynamic approach Common factors are currently understood as a set of common elements that collectively shape a theoretical model about the mechanisms of change during psychotherapy. A recent meta-analysis 14 has shed light on strong evidence regarding factors such as therapeutic alliance, empathy, expectations, cultural adaptation, and therapist differences in terms of their importance for psychotherapeutic treatments in theory, research, and practice.
Overall, the influence of common factors in psychotherapies has been estimated at 30% when consider ing the variation in depression outcomes. Nonetheless, other factors, including specific techniques, expectancy, the placebo effect, and extratherapeutic effects, have also been studied. 15 Zuroff & Blatt 16 have concluded that the nature of the psychotherapeutic relationship, reflecting interconnected aspects of mind and brain operating together in an interpersonal context, predicts outcome more robustly than any specific treatment approach per se.
Regarding common factors in PDP, Luyten et al. 15 mentioned the important differences between psychodynamic and other treatments. Comparatively to cognitivebehavioral therapists, psychodynamic therapists tend to place stronger emphasis on certain aspects, namely: affect and emotional expression; exploration of patients' tendency to avoid topics; identification of recurring behavioral patterns, feelings, experiences, and relationships; the past and its influence on the present; interpersonal experiences; the therapeutic relationship; and exploration of wishes, dreams, and fantasies. Along with these features, specific characteristics of a psychodynamic-oriented treatment have been described: a focus on the patient's internal world; a developmental perspective; and a personcentered approach.
Depression from the psychodynamic perspective
Psychodynamic understandings of depressive disorders were first described by Freud, Abraham, and Klein. Freud explored the individual's reactions to an actual loss or disappointment associated to a loved person, or to a loss of an ideal. Plainly, he tried to explain why some people react with a mourning affect (surpassed after a period of time) and others succumb into melancholy (depression, as we now call it). Mourning is the reaction to the loss of a loved one or the loss of an abstraction, which has taken the place of something (a country, freedom, or an ideal, for example), and although it involves significant disruptions from one's normal attitude towards life, it should not be regarded as pathological. Thus, mourning occurs following loss of an external object. Melancholy, on the other hand, arises from the loss of the object's love and is an unconscious process where a remarkable decrease in self-esteem is observed. Culpability is also a feature clearly present in melancholic processes, as the loss of the object comes with feelings of guilt, stressing the ambivalent feelings towards the lost object; not only because the individual knows that he or she attacked (in fantasy or in reality) the lost object, but mostly because he or she desired that very loss (due to the object's unsatisfactory presence and love). Freud clearly outlined the symptoms of melancholy: ''... a profoundly painful dejection, cessation of interest in the outside world, loss of capacity to love, inhibition of all activity, and lowering of the self-regarding feelings to a degree that finds utterance in self-reproaches and self-revilings and culminates in delusional expectations of punishment.'' 17 These features seem to resemble the current DSM definition of depression.
Abraham proposed a specific model for the melancholic process, 18 consisting of a series of explanatory events: after an initial frustration (loss of an object), the subject reacts with externalization of the introjected object and its destruction, thus to an early anal-sadistic stage. Identification with the object -(primary) narcissism -results in its introjection, thus explaining the sadistic vengeance against the object as part of the subject's ego; one's selfdestruction often manifested as suicidal thoughts. Ambivalence plays a key role, as the subject struggles with his own survival and destruction.
Klein later elucidated the importance of the establishment of an internal world in which the lost external object is ''reinstated.'' Thus, in melancholy, there is a regression to an earlier failure to integrate good and bad partial objects into whole objects in the inner world. The depressive individual believes himself omnipotently responsible for the loss, due to his inherent destructiveness, which has not been integrated with loving feelings. Klein argues that pining, mourning, guilt, reparation, possibly delusional thinking, omnipotence, denial, and idealization characterize depression. 19 More recently, Luyten & Blatt 15 commented on these works as ''still clinically relevant'' but ''often over specified, lacking theoretical precision, and too broad to be empirically tested.'' However, these authors stated that unconscious motives and processes still play an important role in recent psychodynamic theories of depression.
Evidence for psychotherapy as a treatment for depressive disorders A meta-analysis of direct comparisons found psychotherapy about as effective as pharmacotherapies for depressive disorders. 20 In another meta-analysis, Cuijpers et al. 21 included 92 different randomized controlled trials (RCTs) and demonstrated the efficacy of psychotherapy in comparison with pharmacotherapy -equal in the shortterm and superior in the long-term, regarding relapse prevention. Different forms of psychotherapy have been compared, with no clear differences observed or, when so, with certain methodological specificities pointed out. 22 Nevertheless, the effectiveness of many well-recognized interventions has been regarded as possibly overestimated, considering that most evidence is based on symptom reduction. 23 A comprehensive meta-analysis 24 has highlighted the effectiveness of Interpersonal Psychotherapy (which has its structure and theoretical roots in PDP) in depression, as compared to other psychotherapies and vs. combined treatment, as well its role in preventing onset or relapse after successful treatment.
Extensive literature supports the efficacy of psychotherapy as an established treatment for MDD, stating its effectiveness and comparableness to that of antidepressant medications. The significance of these findings and possibility of publication bias have also been object of attention from the scientific community. A recent analysis stated an excess of significant findings relative to what would be expected for studies of psychotherapy's effectiveness for MDD. 25 On this subject, Driessen et al. 26 found clear indications of study publication bias among U.S. National Institute of Health-funded clinical trials that examined the efficacy of psychological treatment for MDD, ascertained through direct empirical assessment. Through these data, the authors concluded that psychological treatment, like pharmacologic treatment, may not be as efficacious as the published literature would indicate.
Cuijpers et al. 27 published a meta-analysis on the effects of psychotherapies on remission, recovery, and improvement of MDD in adults. The response rate for the analyzed psychotherapies was 48% (vs. 19% in control conditions), and there was no significant difference between types of psychotherapy.
Evidence for psychodynamic psychotherapy as a treatment for depressive disorders Shedler 28 presented five independent meta-analyses showing that the benefits of PDP not only endure, but also increase with time (including after treatment end). Patients reported significant symptom reductions, which held up over time, and increased mental capacities, which allowed them to continue maturing over the years. Additionally, Shedler presented several studies demonstrating that it is the psychodynamic process that predicts successful outcome in cognitive therapy, rather than the pure cognitive aspects of treatment -i.e., non-psychodynamic psychotherapies may be effective because the more skilled practitioners utilize techniques that have long been central to psychodynamic theory and practice.
Leichsenring et al. 22 conducted an empiric review of supported methods of PDP in depression and suggesting a unified protocol for the psychodynamic treatment of depressive disorders. The authors found a twofold risk for poor outcome in depression when patients were diagnosed with a comorbid personality disorder. However, several studies were found to have methodological limitations, such as taking a personality disorder diagnosis in account as a primary object of treatment, sample size differences, and divergent results, largely depending on the personality cluster identified. The findings of these authors contradict repeated claims that PDP is not empirically supported.
A subsequent systematic review by Leichsenring 29 identified and included a total of 47 RCTs providing evidence for PDP in specific mental disorders; it stated the efficacy of PDP compared to cognitive-behavioral therapy (CBT) (but not to other forms of psychotherapy) in MDD, and concluded that several RCTs provide evidence for the efficacy of PDP in depressive disorders (including comparisons with control groups, waiting-list condition at the end of treatment, group therapy, pharmacotherapy, and brief supportive therapy).
Varying results have also been observed according to treatment duration -specifically, short-term (STPDP) vs. long-term psychodynamic psychotherapy (LTPDP) as applied in patients with depressive disorders. One recent meta-analysis 30 evaluated the efficacy of a specific STPDP (experiential dynamic therapy) within multiple psychiatric disorders, and found the largest effect on depressive symptoms. A meta-analysis from the Cochrane Collaboration 31 studied the effects of STPDP for common mental disorders across several studies, including 23 RCTs. It showed significantly greater improvement in the treatment groups as compared to controls, with most improvement maintained on medium-and long-term follow up.
Another meta-analysis by Leichsenring et al. 32 examined the comparative efficacy of LTPDP in complex mental disorders in RCTs fulfilling specific inclusion criteria (therapy lasting for at least a year or 50 sessions; active comparison conditions; prospective design; reliable and valid outcome measures; treatments terminated). It concluded that LTPDP is superior to less intensive forms of psychotherapy in complex mental disorders.
More recently, Driessen et al. 33 published a meta-analysis of 54 studies highlighting STPDP outcomes in symptom reduction and function improvement during treatment. They found either maintained or further improved gains at follow-up, and stated that the efficacy of STPDP compared to control conditions and outcomes on depression did not differ from that of other psychotherapies.
A recent review 34 provided evidence towards maintained effects with both modalities as a treatment option for depression, emphasizing their moderate (rather than large) effects. PDP is noted as a preferred alternative to pharmacotherapy in depressive disorders; nevertheless, the authors highlight the high frequency of studies involving psychotherapy in combination with medication -or adding to the effectiveness of medication. In comparison with CBT, PDP is described as neither largely nor reliably different. No single type of PDP was found particularly efficacious within its different forms. Regarding LTPDP, its cost-effectiveness and early stage are mentioned when describing its value, especially in more complex and chronic cases of depression.
Discussion
An extensive, growing body of literature confirms that the classical divergence in treatment approaches for depressive disorders is fading. Psychotherapy has been found as efficacious as pharmacotherapy, with different results regarding its superiority in short-term and long-term relapse prevention. 20,23 Moreover, a systematic review has elucidated the potential benefits of a change in intervention design in depression, switching the paradigm from a symptom-oriented one to more rehabilitation-and functioning-oriented therapies. 23 These results are in agreement with Westen et al, 35 who presented evidence that treatments focusing on isolated symptoms or behaviors (rather than personality, emotional, and interpersonal patterns) are not effective in sustaining even narrowly defined changes.
The large number of publications in this topic has drawn the attention of the scientific community, prompting systematic analyses with increasing complexity and the creation of specific protocols for psychotherapeutic intervention, bearing in mind the importance of structured interventions by qualified clinical staff.
Although it would stray from the primary scope of this review, it is worth highlighting the growing number and relevance of published neuroscientific literature that reports neuroimaging and neurochemical changes exerted by psychotherapeutic interventions, 9 specifically PDP. 36 The effectiveness of PDP has been found difficult to isolate due to its limitations as a measurable intervention, which has led to the proposition of unified protocols both to facilitate training and to improve the status of evidence. 22 The quality of PDP trials published from 1974 to 2010 was assessed in a review paper 37 which concluded that the existing RCTs of PDP mostly show superiority of PDP to an inactive comparator. Studies concerning longer-term treatments are scarce but highly relevant, as they focus on important individual aspects like chronic mood problems, which often result from a combination of depression, anxiety, and significant personality and relational problems. 15 While these aspects are simple to clarify, few studies have taken them into account. Further RCTs could provide new evidence on the effectiveness of PDP, as well as facilitate its clear integration among the range of standard treatment options to consider for depressive disorders.
One important related aspect refers to the training of future therapists in PDPs: institutes are mostly small and independent, and lack the necessary resources to conduct expensive or large-scale studies.
This narrative review presents certain limitations. Only recent published studies or systematic reviews were included. Due to practical reasons, only English-language publications were included, which may have left out important published findings. Publication bias may also be a factor, perhaps resulting in studies or systematic reviews that only showed positive or equal results for PDP treatments. However, we emphasize the importance of gathering and comparing recent findings and systematic reviews with classical published works in the field of PDP.
In conclusion, despite its controversial history, PDP's influence in the psychiatric panorama is definitely increasing. The effectiveness of PDP has been demonstrated in various studies which have compared it with other treatment modalities. In recent years, the body of empirical evidence supporting said effectiveness has grown, and, more recently, meta-analyses have confirmed the role of PDP in the treatment of depressive disorders.
Many advances have been made in to enable highquality scientific research in this complex, layered field. Nonetheless, contemporary authors continue to claim the importance of early conceptualizations of the psychodynamic perspective toward depression and depressive disorders.
Disclosure
The authors report no conflicts of interest. | 2017-08-30T20:16:41.089Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "a0b81d338ac01829ed20590392fbe87ea836495b",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/rbp/v40n1/1516-4446-rbp-1516444620162107.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1558856b9368329bd05bbaa8c608f4c7c8f048c4",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
263058210 | pes2o/s2orc | v3-fos-license | Insider Trading in Germany - Do Corporate Insiders Exploit Inside Information?
Our study focuses on the question whether corporate insiders in Germany exploit inside information while trading in their company's stock. In contrast to prior international studies, which are not able to link insider transactions to a formal definition of inside information, we relate insider transactions to subsequent releases of inside information via ad-hoc news disclosures. We find evidence that corporate insiders as a group seem to trade on inside information. Moreover, members of the supervisory board seem to be most active in exploiting inside information since they realize exceptionally high profits with their frequent front-running transactions.
Introduction
The question whether corporate insiders exploit inside information while trading in their company's stock attracts the attention of academia and the public alike. 1 Moreover, the answer to this question is also crucial for regulatory authorities, since on a capital market there is a loser for each winner. In particular, if corporate insiders exploit inside information, profits received by corporate insiders reduce the returns of all other uniformed traders (including the market maker). 2 As a consequence, un-1 In 2005, according to its annual report, the German regulatory authority Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) investigated 54 cases related to suspected insider trading. E.g., several managers at DaimlerChrysler were suspected to exploit inside information prior to the resignation of the former CEO Jürgen Schrempp (Handelsblatt, August 29, 2005). However, the probably most prominent suspicion was about the former Co-CEO of the European Aeronautic Defence and Space Company (EADS), Noël Forgeard, who sold together with his children stocks and stock options for a seven digit profit just a few weeks before EADS disclosed severe difficulties in the production of the airplane A380 (Handelsblatt, June 21, 2006). 2 Admittedly, this discussion highlights the disadvantages of insider trading exclusively and thus gives an incomplete picture. The reader should be aware that there exists a large body of informed investors might refrain from trading on the capital market. Thus, a well developed capital market requires an effective insider regulation to protect uninformed investors. In order to analyze the effectiveness of insider trading regulations in Germany, our study basically addresses three questions. First, we analyze whether German corporate insiders earn abnormal profits while trading in their company's stock. Second, we use a distinct property of German law, i.e., the companies' obligation to reveal inside information through ad-hoc news disclosures, to examine whether profits realized by corporate insiders seem to be due to the exploitation of inside information or not. Finally, we explore which group of insiders seems to be most active in trading on inside information: the one which is best informed about a company's prospects (i.e., senior managers) or the one which is probably least closely watched by the regulator (i.e., family members of senior managers and directors). Today, insider literature which emphasizes the beneficial role of insider trading. E.g., Manove (1989) and Leland (1992) favor the permission of insider trading to increase informational efficiency of security prices. regulations prohibit the exploitation of inside information on capital markets in nearly all developed countries. 3 In Germany, since 1994 § 14 WpHG (Security Trading Act) prohibits the exploitation and transmission of inside information. According to German law, inside information can be described as any specific information which is not subject to public knowledge and which, if it became publicly known, would likely have a significant effect on the stock price of the respective company ( § 13 WpHG). Moreover, to enhance market efficiency and to avoid information asymmetry § 15 WpHG requires an immediate public disclosure (ad-hoc announcement) of any inside information (as defined in § 13 WpHG) by the respective company. Additionally, as corporate insiders (i.e., senior managers, directors and their family members) may possess superior information about the company, since July 1, 2002 § 15a WpHG requires companies to report corporate insiders' transactions to the public as well as to the regulatory authority, the Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin). 4 Trading activities of corporate insiders have been subject to a large number of studies. One strand of literature focuses on the announcement day of insider transactions and explores if uninformed outsiders can benefit by mimicking insider transactions (e.g., Jaffe 1974;Seyhun 1986; Rozeff and Zaman 1988;Bettis, Vickrey and Vickrey 1997;and Fidrmuc, Goergen, and Renneboog 2006). Remarkably, the literature finds that even uninformed outsiders can earn abnormal profits using publicly available information, at least when transaction costs are ignored.
Another strand of literature is motivated by the question whether corporate insiders earn abnormal profits by trading in company's stock and thus may use their foreknowledge about their firms' prospects (e.g., Lorie and Niederhoffer 1968;Jaffe 1974;Finnerty 1976;Seyhun 1986;Eckbo and Smith 1998;Jeng, Metrick, and Zeckhauser 2003;and Lakonishok and Lee 2001). The international literature documents that insiders earn high abnormal profits while trading in company's stocks. 5 While there exist numerous studies focusing on insider trading in the US and the UK, until now not much research has been conducted on the German capital market. This may be due to the fact that until July 1, 2002 corporate insiders did not have to reveal their trades to the regulatory authority. 6 However, recent studies for Germany (see, e.g., Stotz 2006;Klinge, Seifert, and Stehle 2005;Betzer and Theissen 2005) confirm the finding that corporate insiders earn significant profits. In particular, Stotz (2006) examined insider profits as well as the market reaction at the announcement day documenting positive abnormal returns for insiders as well as for outsiders in the first year following the implementation of the reporting obligation in Germany. In addition, he shows that German corporate insiders act as contrarian investors. Klinge, Seifert, and Stehle (2005) also analyze the market reaction around the announcement and trading day focusing mainly on determinants of the market reaction at the announcement day. Furthermore, they focus on the relevance of analyzing non-overlapping observations. Betzer and Theissen (2005), amongst other issues, relate the magnitude of market reaction to the ownership structure of the firm and thus contribute to the corporate governance literature. Finally, Betzer and Theissen (2007) find that prices are distorted in the period between the trading and the reporting date and thus propose a regulation which requires an immediate disclosure of insider transactions.
Although most prior studies routinely attribute abnormal profits to insider's superior knowledge and therefore to a potential exploitation of inside information, a final assessment is anything but trivial. On the one hand, profits of insiders could indeed originate in the exploitation of inside information.
On the other hand, short-term profits documented for corporate insiders could at least partly triggered by price-pressure caused by outsiders who blindly mimic the trades of insiders in a herd-like manner, even though the insiders traded just for liquidity considerations and not on inside information. Therefore, recent studies have tried to link trading activities of insiders to their foreknowledge of important corporate events, including bankruptcy (Seyhun and Bradley 1997), dividend initiations (John and Lang 1991), seasoned equity offerings (Karpoff and Lee 1991), stock repurchases (Lee, Mikkelson, and Partch 1992), takeover bids (Seyhun 1990), and earnings announcements (Elliott, Morse, and Richardson 1984;Noe 1999;and Ke, Huddart, and Petroni 2003). These studies basically find that insiders trade upon forthcoming corporate news. Thus, the evidence suggests that insiders exploit inside information. Unlike the cited studies which focus on a particular type of corporate news disclosure, Givoly and Palmon (1985) analyze the connection between insider trading and a large variety of news reports published in the Wall Street Journal subsequent to the insider trading day. They conclude that insiders do not seem to exploit inside information as their profits are not associated with the disclosure of specific news. Although the cited studies investigate the connection between insider trading and important corporate events, they have a decisive shortcoming. They are not able to link insider trading to a formal definition of inside information. Our paper contributes to the literature in several ways. First, distinct from most studies on insider trading which focus on capital markets with a long history of insider regulation like Anglo-Saxon markets, we analyze the German market and thus provide evidence for a market with a relatively new legislation. Second, unlike prior studies which were unable to link insider trading to a formal definition of inside information, the fact that in Germany any inside information has to be disclosed via an ad-hoc news announcement offers a unique opportunity to evaluate whether corporate insiders front-run on inside information. Third, the attitude to exploit inside information may vary in different types of insiders. 7 In Germany, three different groups of insiders have to report their trading records to the BaFin. In particular, members of the executive board (senior managers), which are involved in dayto-day business operations, are obliged to report their transactions to the BaFin. In addition, trading of members of the supervisory board (directors), which are usually not involved in day-to-day business operations, must also be reported to the BaFin.
Last, the group of other insiders, which mainly consists of family members of senior managers and 7 A related question has been addressed under the label "information hierarchy hypothesis" by Seyhun (1986).
directors, has to reveal their trading in company's stock. To the best of our knowledge, the question whether the group of insiders which is best informed about company's prospects (i.e., senior managers) or the group which is probably least closely watched by the regulator (i.e., other insiders) is most active in trading on inside information, is basically unexplored. With respect to our first research question which deals with the profitability of insider transactions, our results confirm the findings of prior study for Germany (see, e.g., Stotz 2006;Klinge, Seifert, and Stehle 2005;Betzer and Theissen 2005). Corporate insiders in Germany are able to identify profitable investment situations and thus realize substantial profits by trading in company's stock. Considering a 20-day period subsequent to the trading day, stocks traded by insiders are associated with significant cumulative abnormal returns (CARs): 4.38% for purchases and -1.47% for sales. In consequence, German insiders earn higher profits compared to their Anglo-Saxon counterparts. 8 Concerning our second research question, we find that insiders as a group seem to be engaged in the exploitation of inside information on the buy side as they earn significantly higher profits with those transactions which are shortly succeeded by an ad-hoc news disclosure of the respective company. With respect to our third research question, we document that trading activity prior to ad-hoc news announcements differs between the types of insiders. We find the group of directors to be most active in purchasing prior to ad-hoc news disclosures. In contrast, senior managers are less active in front-running on corporate news as they rarely purchase company's stock prior to an ad-hoc news disclosure. Finally, and most importantly, we show that directors and the group of other insiders earn exceptionally high profits with their purchases which front-run on corporate news disclosures and thus seem to exploit inside information. In contrast, senior managers seem to be aware that they are subject to the scrutiny of the supervisory authority as they do not realize superior returns with their rare transactions succeeded by a corporate news disclosure. The remainder of this paper is structured as follows. Section 2 describes the legal background of insider trading in Germany, whereas section 3 addresses the database, provides some descriptive statistics, and discusses the methodology. Section 4 presents the results concerning our three research questions. Finally, section 5 concludes.
Legal Background
Since 1934, rule 10b-5 of the Security Exchange Act prohibits the exploitation of inside information by corporate insiders in the United States. A corresponding framework for the German capital market was passed as late as in 1994. Since then, § 14 WpHG (Security Trading Act) prohibits the exploitation and transmission of any inside information. According to § 14 WpHG (Security Trading Act) "…it is prohibited to make use of inside information to acquire or dispose of insider securities for own account or for the account or behalf of a third party". 9 However, as it might be hard to identify which information qualifies as an inside information on which trading is prohibited, § 13 WpHG contains a legal definition of inside information. In particular, § 13 WpHG defines inside information as "… any specific information about circumstances which are not subject to public knowledge (…), which, if it became publicly known, would likely have a significant effect on the stock exchange or market price of the insider security." Moreover, § 15 WpHG requires exchange traded firms to disclose any inside information immediately to the public (ad-hoc announcement). § 15 WpHG demands that an "immediate public disclosure is required from an issuer of financial instruments (…) regarding all inside information which directly concerns that issuer…". Firms usually use special service providers which transmit the information to the market to fulfill these obligations. In 2002, German insider surveillance was extended to corporate insiders' transactions in securities of their company (Directors' Dealings). Since July 1, 2002, it is not only prohibited for corporate insiders to trade on inside information, but they also have to report trades in securities of their company. According to § 15a WpHG, members of the executive board and the supervisory board of exchange listed companies, as well as their family members are obliged to report transactions in companies' securities to 9 Please note that the WpHG is originally written in German. The English translations of paragraphs are taken from the homepage of BaFin.
their company and to the German financial supervisory authority BaFin. Trading activities have to be reported without delay. Additionally, the firm has to publish the trading record on its web site or in a financial newspaper. Unlike in the US or UK, transactions carried out by former board members and large shareholders are not covered by the German insider law and therefore do not have to be reported. Furthermore, no report is required if the total amount of all transactions in a 30-day period does not exceed € 25,000. In 2004, § 15a was amended. Since October 30, 2004, persons discharging managerial responsibilities are also obliged to report their transactions. The reporting period for trading activities was specified to occur within five business days. The lower limit, which does not require a disclosure, was also reduced to € 5,000 per person in a calendar year. Furthermore, companies are now required to maintain lists of persons which have access to inside information ( § 15b WpHG).
Data and descriptive Statistics
Our empirical analysis covers insider transactions in German stocks between July 1, 2002 and April 30, 2005, which were reported to the BaFin. For each observation the respective database provided by the BaFin contains the company's name, the International Securities Identification Number (ISIN) of the reporting company, the name and type of the reporting insider (e.g., a member of the executive board), the trading and announcement day, the kind of transaction (e.g., a purchase of a stock), the number of securities traded, the stock price at which the transaction was executed, and the publishing media.
To check and complement the database we match the information contained in the original database with statements from the company's annual reports and information published on the company's web site and other financial web sites. 10 The Deutsche Gesellschaft für Ad-hoc Publizität (DGAP) and euro-adhoc are the main providers which transmit ad-hoc news to the market. 11 We use their databases to identify ad-hoc news releases subsequent to the trading day. We extract data on dividend and stock splits adjusted stock returns from Datastream. 12 As our study focuses on the German legislation and the German market we only cover trades in stocks with a German ISIN (DE-ISIN). The original database contains 6,328 transactions carried out by insiders in 416 different firms. In a first step, we exclude duplicate and incomplete entries. Like previous studies on insider trading (e.g., Finnerty 1976;Gregory, Matatko, Tonks and Purkis 1994;Friederich, Gregory, Matatko, and Tonks 2002;Hillier and Marshall 2002;and Korczak, Korczak and Lasfer 2007), we focus on open-market transactions and exclude trades associated with corporate events.
In particular, we exclude trades associated with exercise of options, security lending, changes in the capital structure, and takeover bids. In addition, transactions among insiders are also excluded. In 1,577 cases, the database includes two or more transactions of the same insider in the same stock on a given day. This is the case if an insider trades more than once on the same day or if the broker executes the order in two or more pieces. We aggregate these partial executions and multiple trades of the same individual in the same security on a given day. Furthermore, we dropped 136 observations due to incomplete return data. In 125 cases firms disclose ad-hoc news on the transaction day itself. As mentioned before, we use ad-hoc news disclosures to link insider trading to a potential exploitation of inside information. As we do not have information about the exact trading time, we could not determine whether the corporate insider traded prior to the respective ad-hoc news disclosure. Thus, these transactions were excluded from the sample. In order to avoid a double-counting of observations in each group of insiders which could trigger biased test statistics, we aggregate trades by different insiders on the same day. If two members of the executive board of BMW purchase the BMW stock on the same day (e.g., manager A purchases for 1 Mio. € and manager B purchases for 2 Mio. €), we treat the two original transactions as a new transaction with the combined transaction volume (e.g., 3 Mio. €).
Employing this procedure, we lose 270 transactions in our final sample. In order to distinguish between groups, we finally exclude the 152 transactions where more than one group of insiders traded company's stock on a specific day. E.g. if a member of the executive board and a member of the supervisory board trade in the same stock at the same day, both transactions are excluded from the final sample, since these transactions would distort inference about differences in CARs between groups. Table 1 shows the generation of our final sample which consists of 2,657 insider transactions in 344 different firms. Thereof, 633 transactions are succeeded by a subsequent ad-hoc news disclosure in the following 20 trading days. With respect to the information revealed by public ad-hoc news disclosures, we find that 324, and thus about 50% of the respective ad-hoc news deal with the release of quarterly and annual results. 13 The remaining adhoc news refer to changes in the executive or supervisory board (44), changes in the lines of business operations like M&A, business expansion or restructuring (73), changes in the capital structure of the company (72), or deal with other information like patents or, in case of pharmaceutical company, information about drug tests, changes in the ownership structure and litigation issues (120). The latter group also contains ad-hoc news with multiple reasons like, e.g., a combined release of an earnings announcement and a dismissal of an executive manager. Moreover, ad-hoc disclosures can either be scheduled or unscheduled. Whereas scheduled news like the release of earnings figures are known by market participants in advance, the remaining ad-hoc categories are typically unexpected and thus unscheduled. Table 2 shows that the number of transactions on the buy and sell side is rather balanced. In particular, purchases account for about 53% of all insider trades (1,402 out of 2,657). With respect to the insider's position, we find members of the executive board and members of the supervisory board to trade most frequently. In particular, members of the executive board (members of the supervisory board) account for 749 (470) purchases and 468 (536) sales transactions. They correspond to about 46% (38%) of all transactions. Consequently, the group of other insiders trades least frequently.
Besides, the group of other insiders is the only group where the number of sales (251) exceeds the number of purchases (183). 13 Unlike in the UK in Germany there exists no trading ban prior to quarterly earnings announcements and the announcements of annual results. is about two to three times larger than the volume for purchases € 19,350 € (€ 336,971). Consequently, although the number of sales is lower than the number of purchases, sales account for 68% of the total trading volume. Moreover, all groups of insiders are net sellers. As in most empirical studies the distribution of firm size is skewed. The mean market capitalization of a traded firm is € 1,644 million and thereby highly exceeds the median market capitalization which equals € 43 million.
Methodology
The purpose of our study is to measure the shortterm profits of insiders which trade in their company's stock. In accordance with most studies on insider trading, we measure these profits in an event study framework. We measure abnormal returns, i.e., returns that deviate from the normal return, subsequent to the insider trading day by applying standard event-study methodology outlined by MacKinley (1997 Abnormal returns for any given point in time and stock are defined as the difference between realized and normal returns. In order to estimate these expected normal returns, we choose the market model as surveyed by Brown and Warner (1985). 14 the CDAX, adjusted by the estimated OLS parameters. As a consequence, abnormal returns derived from the market model are adjusted for individual stock's risk and the market return. To calculate the market reaction for more than one day we cumulate abnormal returns for the respective period.
In order to test for statistical significance of abnormal returns (ARs) and cumulative abnormal reurns (CARs) we first apply a parametric test, the standardized cross-sectional test proposed by Boehmer, Musumeci, and Poulsen (1991). This test has shown to be superior to the traditional t-test in the presence of event-induced increases in variances. However, parametric tests are sensitive to asymmetrically distributed returns (e.g., Brown and Warner 1985;and Corrado 1989). Thus, we also employ the nonparametric rank test based on Corrado (1989) to test for robustness. This type of test is correctly specified independently from the skewness of cross-sectional distribution of abnormal returns.
Insider Profits
First, we address the question whether corporate insiders do earn abnormal returns by trading in their company's stock. Stotz (2006) and Betzer and Theissen (2005). The latter study reports within a two-year investigation period from July 1, 2002 to June 30, 2004, cumulative abnormal profits of 3.60 % for a 20-day period subsequent to the trading day. We therefore confirm their finding that insider profits on the buy side are somewhat higher for the German market than those documented for the US and the UK. E.g., for the US, Seyhun (1986) reports a cumulated abnormal return for the 20-day period subsequent to the trading day of 1.10%, whereas Friederich, Gregory, Matatko, and Tonks (2002) document a respective profit of 1.96% for the UK. Interestingly, from the perspective of the efficient market hypothesis, the price reaction is strikingly slow. In particular, after a period of five trading days subsequent to the insider transaction, only about 37% of the total increase within the 20-day event window is incorporated in stock prices (1.62% compared to 4.38%). The respective fraction for the tenday period is about 61% (CAR[0;+10] equals 2.66%), an almost linear adjustment to the cumulative abnormal return at the end of the event window. The rather slow adjustment in stock prices might be explained by legal aspects concerning reporting obligations. As discussed before, corporate insiders have to announce their trading records to the regulatory authority BaFin shortly after they have executed their order. Our data reveals that the median (mean) time period between the trading and the announcement day is three (nine) trading days for purchases. Thus, since insider transactions are closely followed by many investors, it may trigger a wave of transactions in the same direction by outsiders, thereby generating abnormal returns subsequent to the trading day. 16 With respect to sale transactions, a different picture emerges. The immediate price reaction CAR[0;+1] shows to be significantly positive with 0.26%. Thus, stock prices do not reflect the negative information immediately. However, if one looks at the 20 trading days after the transaction, stocks sold by insiders drop by -1.47%. Although this moderate decline in stock prices does not necessarily yield economically significant profits for insiders when direct and indirect transactions costs are taken into account (see, e.g., Keim and Madhavan (1998); Berkowitz and Logue (2001), for the different components of transaction cost), the cumulative abnormal return is 16 Please note that the finding of a slow price adjustment is documented in several other studies. See, e.g., Givoly and Palmon (1985); Seyhun (1986); Bettis, Vickrey, and Vickrey (1997); Jeng, Metrick, and Zeckhauser (2003) for the US; Friederich, Gregory, Matatko, and Tonks (2002) for the UK; and Klinge, Seifert, and Stehle 2005;and Stotz (2006) for Germany. Boehmer, Musumeci, and Poulsen (1991). +++, ++, + indicate statistical significance at the 1%-, 5%-, 10%-level according to the nonparametric rank test based on Corrado (1989).
The finding that insiders realize greater profits with their purchases than with their sales is also frequently documented in the literature. 17 Unlike purchases, which are primarily motivated by the desire to realize profits, sales might be triggered by other considerations. First, basically only sales are motivated by diversification objectives and therefore might be non information-driven. For instance, many senior managers are strongly invested with their human capital in their firm and often have large holdings of company's stock. In addition, senior managers are increasingly compensated by stock option programs which allocate a substantial part of their personal wealth to their firm. Our data reveal that corporate insiders reduce their exposure in the company after substantial price increases. In particular, insiders sell stocks which yield a highly sig-17 See, e.g., Bettis Vickrey, and Vickrey (1997); Lakonishok and Lee (2001);and Jeng, Metrick, and Zeckhauser (2003) for the US; Friederich, Gregory, Matatko, and Tonks (2002) for the UK. In contrast, almost symmetrical profits on both buy and sell side are found by, e.g., Seyhun (1986); Givoly and Palmon (1985); Klinge, Seifert, and Stehle (2005); Betzer and Theissen (2005); and Stotz (2006). In particular, Betzer and Theissen (2005) find an almost symmetrical market reaction on the buy and sell side for the German market. In particular, they report a CAR[0;20] of -3.53%. Differing results could, however, be explained by a differing sample period as well as differences in analyzed transactions as Betzer and Theissen (2005) also include sale transactions from exercised stock options.
nificant positive CAR[-20;-1] of 8.64% in the 20 trading days prior to the insider trading day. Thus, corporate insiders seem to time their selling after significant price run-ups. Second, another non information-driven reason which is more prevalent for sales than for purchases is liquidity. If a corporate insider wants to buy a new mansion or Learjet, she might prefer to sell some corporate stocks, especially if they recently went up in prices. Moreover, sales may be motivated by tax considerations.
Do Corporate Insiders Exploit Inside Information?
A decisive prerequisite to answer the question whether corporate insiders exploit inside information is the identification of those transactions which may exploit inside information. In an ideal world, one could directly observe the information set of an insider at the transaction day. Unfortunately, in reality this information is unobservable. Thus, one has to find an observable proxy for inside information. Probably the best way to formally identify trades which are likely to be based on inside information, is to link corporate insider trading to ad-hoc news disclosures subsequent to the insider trading day. As mentioned before, German firms are required to disclose any inside information ( § 13 WpHG) to the public via an ad-hoc announcement ( § 15 WpHG). Those ad-hoc announcements deal with corporate events which are likely to have a significant effect on the stock price like, e.g., changes in the executive board structure, earnings announcements, and merger activities. Thus, insider trading prior to ad-hoc news disclosures is a first indicator for an exploitation of inside information since corporate insiders are likely to know at least the tendency of the ad-hoc news prior to theirdisclosure. For instance, it is hard to believe that a senior manager is not continuously informed about the performance of her firm or is not involved in and informed about takeover or merger proceedings. ad-hoc news disclosure in the suspected period [+1;+20], separately. In addition, the respective fraction for the total sample and the difference between the groups and the total sample are displayed in the table. Panel A shows the respective statistics for purchases, whereas Panel B refers to sales.
As far as purchases are concerned, we find that a fraction of about 22.68% (318 out of 1,402) of all transactions is succeeded by an ad-hoc disclosure in the suspected period. Looking at the different groups of insiders, we find that senior managers are least often engaged in transactions which are succeeded by corporate news disclosures. The fraction of purchases with a subsequent ad-hoc news disclosure is only 20.16%. A different picture emerges for directors. With respect to trading prior to ad-hoc news disclosures, 26.38% of the purchases carried out by directors are executed shortly prior to an adhoc disclosure. In contrast to the findings for purchases, we do not find any group of insiders to be particularly engaged in trading prior to ad-hoc news disclosure on the sell side. However, trading prior to ad-hoc news disclosures is neither a necessary nor a sufficient condition to evaluate whether corporate insiders exploit inside information. On the one hand, insiders could trade rather frequently prior to ad-hoc disclosures which, however, have a negligible value. In this case, corporate insiders should not realize superior profits with front-running transactions. On the other hand, insiders could exploit inside information even though they trade less frequently prior to ad-hoc news. For instance, an insider could refrain from trading prior to the subsequent release of ad-hoc news disclosures which have low value but front-run on rare and exceptionally relevant information. If this scenario occurs, insiders would realize superior profits even though they do not trade particularly often prior to ad-hoc news disclosures. Thus, our criterion to detect an exploitation of inside information refers to the profitability of trading. If insiders systematically purchase stocks prior to positive ad-hoc news and sell stocks prior to negative ad-hoc news, they should, ceteris paribus, earn higher profits with these transactions compared to the remaining transactions. On the contrary, if insiders do not condition their trading decisions on the information content of subsequent adhoc news releases and, thus, purchase stocks as often prior to good as before bad news, 18 the average profits of transactions with subsequent ad-hoc news disclosure should be similar to profits of transactions without subsequent news disclosure. As a consequence, we feel confident to accuse insiders of This exploiting inside information if transactions of insiders, which are succeeded by an ad-hoc news disclosure of the respective company in the subsequent 20 trading days, are associated with higher profits compared to the remaining transactions without an ad-hoc news disclosure. In the following, we will refer to those transactions as unethical. Table 5 (see following page) displays cumulative abnormal returns for several short-term periods subsequent to the insider trading day for purchases (Panel A) and sales (Panel B). The first vertical panel refers to all transactions. Herein, the first column addresses transactions with an ad-hoc news disclosure in the subsequent 20 trading days (News). The second column refers to transactions without a subsequent ad-hoc news disclosure (No News) and the third vertical panel displays the difference in means between the first two columns. The second through fourth vertical panels display group-specific results. The first columns in those panels display the mean returns for transactions with a subsequent ad-hoc news disclosure for each group of insiders, whereas the second column displays the difference in means between the respective group-specific returns and the profits of all transactions without a subsequent news disclosure. With respect to purchases, we find that insiders as a group earn particularly high profits with their front running transactions. Corporate insiders earn an abnormal profit of 7.38% within the 20 trading days after front-running on ad-hoc news disclosures. 19 For transactions without a subsequent ad-hoc news disclosure, we document a respective value of mere 3.50%. Moreover, the difference in mean profits between trades which front-run on corporate news disclosure and the remaining transactions without a subsequent ad-hoc news disclosure is statistically significant starting with CAR[0;+5] onward. Thus, corporate insiders as a group seem to purchase company's stock systematically prior to positive 19 Our results further reveal that trading prior to both scheduled and unscheduled ad-hoc news is associated with high returns. In particular, trading prior to scheduled (unscheduled) news is associated with a cumulative abnormal return in the period [0;+20] of 5.13% (9.39%), Please note that Betzer and Theissen (2005) document a similar return for trading prior to scheduled news. In particular, they report 5.26% for purchases which are executed two months prior to an annual or interim earnings announcement, and the month prior to a quarterly earnings announcement (blackout period according to UK legislation).
Univariate Analyses
news. This reasoning is also supported if one analyzes the abnormal returns on the disclosure day of the ad-hoc news subsequent to insider purchases. Whereas 62.58% of the ad-hoc news subsequent to purchases of corporate insiders is associated with positive abnormal returns, only a minority of 37.42% trigger a negative market reaction at the day of the ad-hoc news disclosure. Similar to our findings for purchases, profits associated with sale transactions which front-run on corporate news are considerably higher for all analyzed periods. E.g., according to the first column of Panel B of Table 5 the CAR[0;+20] is double the magnitude for sales which front-run on subsequent news releases compared to the remaining transactions. However, the differences in means between sale transactions with and without subsequent news disclosure are statistically insignificant. This might be caused by abnormal returns on the disclosure day of the ad-hoc news subsequent to insider sales. In particular, the number of positive and negative market reactions is rather balanced as 53.65% trigger a positive and 46.35% trigger a negative market reaction.
Given the evidence that insiders as a group seem to exploit inside information, we are curious which type of corporate insider is particularly engaged in exploiting inside information. To put things differently, we want to figure out whether it is primarily the group of members of the executive board (executive managers), the group of members of the supervisory board (directors), or the group of other corporate insiders which tend to trade in an unethical manner. This question is basically an empirical one, since two opposite effects are simultaneously at work. On the one hand, senior managers which are involved in day-to-day business operations have superior access to inside information compared to the two other groups of insiders. Thus, concerning the criterion of accessibility to inside information, senior managers might be most tempted to front-run on important corporate news. On the other hand, one has to be aware that senior managers are also in the spotlight of public attention, and, more importantly, probably under particular scrutiny of the regulatory authority. Hence, the fear of prosecution might weaken their intention to exploit inside information. With respect to the two other groups of insiders, monitoring might be less pronounced. more bold to trade on their inside information as they do not fear a thorough inspection of their trading records. Table 5 provides interesting empirical evidence concerning the trading behavior of different groups of insiders. On the buy-side, we find that senior managers do not seem to be engaged in frontrunning on inside information. Not only do they trade least frequently prior to ad-hoc disclosures, but senior managers also do not realize superior profits. Particularly, the CAR[0;+20] equals 2.61% for purchases with subsequent news disclosures in the 20 trading days after the insider's trading day, whereas the remaining transactions yield a profit of 3.50%. However, from a statistical point of view, the difference in means of -0.89% is not statistically different from zero. A very different picture emerges when we look at directors' purchases. In addition to their significantly higher trading frequency prior to ad-hoc news releases, they seem to trade on valuable information. E.g., the CAR[0;+20] for front-running purchases for directors equals 12.96%. The difference in mean profits of front-running transactions compared to no news transactions is statistically significant for all analyzed periods from [0;+5] onwards and equals 9.46% for the period [0;+20]. This indicates that directors trade on the information content of the subsequently released ad-hoc news.
Consequently, directors and other insiders might be
We get a similar result concerning the group of other insiders. Even though other insiders do not frequently front-run on corporate news, they do realize exceptional profits with those transactions. In particular, they realize a handsome profit with a CAR[0;+20] of 8.03% with their front-running purchases. Additionally, the difference in means compared to the no news transactions shows to be 4.53%, a profit significant on the 5%-level. Given that other insiders seem to exploit inside information, a natural question arises. How does the group of other insider split between family members of executive managers and family members of directors? For purchases, the 183 transactions of other insiders are anything but uniformly distributed across the two remaining insider groups. Only 39 transactions of other insiders could be traced to a relative of a member of the executive board, whereas the majority of 116 transactions can be directed to a member of the supervisory board. The remaining 28 transactions could not be unambiguously assigned to one group or the other. Although, family members of executive managers do not trade very often, they nonetheless seem to exploit inside information with their 12 transactions which are succeeded by an ad-hoc disclosure. Looking at the 20-day period following the transaction Boehmer, Musumeci, and Poulsen (1991). With respect to comparisons of cumulative abnormal returns between subgroups, +++, ++, + indicate statistical significance at the 1%-, 5%-, 10%-level according to the two-sample t-test. 2 day, family members of executive managers earn 8.57% with their transactions succeeded by an adhoc news disclosure. Concerning family members of directors, results do not differ much whether the insider traded by himself or through a family member. In particular, in the 20 trading days subsequent to the insider trading day, family members of directors earn 8.24% with those transactions which front-run on corporate news. Regarding sales transactions, we find no specific group of insiders to be severely engaged in exploiting inside information. Although according to Table 5, we predominately find the profits associated with sales which front-run on corporate news to be higher for executive managers and directors, differences in means of abnormal returns are not statistically significant.
Multivariate Analyses
In order to test for robustness, we complement our univariate analysis by performing multivariate regressions. Basically, we want to verify the conjecture that higher profits of front-running transactions on the buy side are indeed driven by subsequent ad-hoc news disclosures. In contrast, they could be triggered solely by common characteristics of the transaction not accounted for in univariate group comparisons. Thus, Table 6 (on page 15) displays multivariate regression results for two models on the dependent variables CAR[0;+10] and CAR[0;+20]. The most important independent variable is the dummy variable NEWS, which equals one if the company discloses ad-hoc news in the period [+1;+20] subsequent to the trading day and zero otherwise. The first model includes besides a number of control variables listed below the NEWS variable exclusively, which -in this model -measures the effect of a subsequent ad-hoc disclosure on cumulative abnormal returns. The second model differs from the first one as it additionally includes two interaction terms in order to reveal differences between the three groups of insiders. The two interactions terms are (i) NEWS, multiplied by a dummy variable indicating whether an insider is a member of the supervisory board (SUPERVISORY BOARD), and (ii) NEWS multiplied by a dummy variable indicating whether an insider is a member of the group of other insiders (OTHER INSIDERS). One can interpret the respective coefficients of the interaction terms as the differential effect for transactions (i) of directors and (ii) other insiders com-pared to transactions of senior managers for the sub-sample of front-running transactions. Finally, the coefficient on NEWS indicates the differential effect of front-running transactions by senior managers compared to the remaining transactions without a subsequent news disclosure. In order to take common characteristics of a transaction into account, we include a set of control variables concerning trade-specific factors as well as company-specific factors. 20 Concerning tradespecific factors, we construct several control variables which relate to characteristics of the transaction. First, e.g., Seyhun (1992); Bettis, Vickrey, and Vickrey (1997);and Fidrmuc, Georgen, and Renneboog (2006) have shown that relatively large volume trades yield higher returns than small volumes. Thus, in order to control for the trading volume we construct the variable TRADING_INTENSITY, which is defined as the trading volume of the respective transaction divided by the market value of the firm on the trading day. Second, we control for the delay in reporting. We expect that corporate insiders are more reluctant to report their trades in a timely manner for those trades which are based on valuable information. Thus, we include the variable REPORTING_DELAY, which counts the number of trading days between the transaction day and the reporting day. Third, Seyhun (1986) has shown that insider profits are negatively related to prior performance. Because of this evidence and due to our finding that insiders are contrarian investors, 21 we include the variable MOMENTUM which measures the cumulative abnormal return prior to the trading day in the period CAR [-20;-1]. Fourth, if several insiders trade the same stock on a particular date, this might reveal a stronger signal about the insiders' estimation of the company's prospects. Accordingly, Fidrmuc, Georgen, and Renneboog (2006) have shown that the market reaction of insider transactions depends on the number of simultaneously acting traders. Therefore, we introduce the dummy variable MULTIPLE_TRADERS, which equals one if more than one insider traded the stock on the same day and zero otherwise. Fifth, we con-20 With respect to both categories we include foremost those control variables which have shown to be significant determinants in prior research. 21 A contrarian investor is someone who purchases after a recent decline in stock prices and sells after a recent increase in stock prices. As one can see from Table 3, the tendency to act as a contrarian investor is more pronounced on the sell side. We also include a set of control variables referring to company characteristics. First, since several studies (e.g., Seyhun 1986 andPurkis 1994) have shown that profits are higher for smaller firms, we define the variable SIZE, which equals the natural logarithm of the market value of a firm at the beginning of the respective calendar year, to control for the firm's size. Second, our results might be triggered by thinly traded stocks. Thus, we include the dummy variable PENNY_STOCK, which equals one if the respective stock is traded at a price below one euro at the day of the transaction and zero otherwise. Third, previous research has documented a decisive role of the price-to-book ratio in explaining (abnormal) returns (see, e.g., Fama and French 1993;and Fama and French 1995). Thus, we include the variable PRICE_TO_BOOK, which equals the market value compared to the book value of a firm at the transaction day. Finally, Fidrmuc, Georgen, and Renneboog (2006) and Betzer and Theissen (2005) show that profits for corporate insiders and the market reaction to the announcement of insider transactions depend on the firms' ownership structure. Therefore we include two variables to control for the ownership structure and the corporate governance structure of the firm. The variable FREEFLOAT proxies the level of management autonomy and is defined as the fraction of shares in the free float as opposed to shares held by block holders. Moreover, we control for dominating shareholders including the dummy variable BLOCKHOLDER, which equals one if the respective company has a majority shareholder (i.e., a single shareholder controls more than 50 percent of the voting shares) and zero otherwise. In order to evaluate whether the size of both boards have an influence on insider profits, we include the variable SUPERVISORY BOARD_SIZE (EXECU-TIVE BOARD_SIZE), which represents the number of board members in each board. As far as purchases are concerned, the first models in Table 6 (see following page) which exclude the interaction terms confirm our univariate finding that corporate insiders as a group earn exceptional profits by front-running on corporate news. In par-ticular, the coefficients for the dummy variable NEWS are positive and statistically significant for both periods. Concerning the control variables, the signs of the coefficients are predominantly in line with our predictions. Interestingly, however, the coefficient on MULTIPLE_TRADES is significantly negative in both regressions. Thus, insiders earn mediocre profits with those transactions where more than one insider traded a stock at the same time. The finding contradicts the prediction that the information value might be larger if a bunch of insider flock together in their trading decision but confirm the results of Betzer and Theissen (2005) who report a similar finding. Moreover, the size of the boards does not affect cumulative abnormal returns in a systematic way. Regarding sale transactions, multivariate regression support the univariate finding that corporate insiders as a group do not earn significantly higher profits with front-running transactions. The results displayed in Panel B of Table 6 show the respective coefficients on the variable NEWS to be negative but insignificant. Multivariate regression analyses conducted in the second models support our findings concerning the different types of insiders. Looking at members of the executive board, regression results emphasize that members of the executive board do not seem to front-run on valuable ad-hoc news. As displayed in Panel A of Table 6, the coefficient of NEWS in the second model, which measures the differential effects of front-running transaction by members of the executive board compared to the remaining transactions (No News), is insignificant. According to the two remaining groups of insiders, multivariate regression results confirm the univariate conjunction that these groups seem to exploit inside information. As far as directors are concerned, the coefficient on the respective interaction term is highly significant, indicating that directors earn more with their front running transactions than executive managers. The effect compared to the no news transactions is captured by the sum of the coefficients on NEWS and NEWS*SUPERVISORY BOARD. In particular, the sum of coefficients shows to be significant on the 1%-level for both analyzed periods. Analogously, the finding that front-running transactions of other insiders yield exceptionally high profits is also supported by multivariate results for the period [0;+10], but not for the longer event window. As displayed in Panel A of Table 6, the coefficient on the interaction term is significant for this short period [0;+10] implying that other insider earn more with front-running transactions compared to members of the executive board. Again, the sum of the coefficient on NEWS and the interaction term NEWS*OTHER INSIDERS measures the profit compared to no news transactions. For the short period, this number shows to be significant on the 5% -level, while the effect is not statistically significant for the longer event window. Regarding sales transactions, multivariate regression results confirm the finding that no specific group of insiders seems to be severely engaged in exploiting inside information. In particular, multivariate regression results on the sell side show that profits of insiders are not positively affected by a potential exploitation of inside information.
Concluding Remarks
Our study analyzes a large sample of corporate insider transactions reported to the German supervisory authority BaFin in the period July 1, 2002 to April 30, 2005 using event study methodology. In particular, we focus on the question whether corporate insiders seem to exploit inside information while trading in company's stock. Our findings reveal that corporate insiders are able to identify profitable investment situations in their firms. E.g., they earn a mean profit of more than four percent in the 20 trading days after they purchase company's stock. We thereby confirm findings of previous studies that insiders' profits for the German market are higher than for the corresponding markets in the US or UK. Furthermore, we find evidence that corporate insiders seem to be engaged in the exploitation of inside information as they earn above average profits by front-running on corporate news. Finally, looking at the type of insider, we find that members of the supervisory board (directors) and the group of other insiders (basically family members of senior managers and directors) are the ones which profit largely by front-running on corporate news. In contrast, members of the executive board (senior managers) do not seem to exploit inside information according to our criterion as they do not realize superior returns with their rare front-running transactions. However, executive managers cannot be entirely exculpated by our study. One has to keep in mind that executive managers decide upon which impor-tant corporate news qualify as ad-hoc news and thus have to be revealed to the market. In addition, they might also be able to have some influence concerning the timing of the release. This potential discretionary power can, however, only be exercised for unscheduled ad-hoc news like merger announcements or a changes in the executive board. For scheduled ad-hoc news like (quarterly) earnings announcements, executive management lacks discretionary power to time or suppress an ad-hoc disclosure. Consequently, our finding that executive insiders trade less frequently prior to ad-hoc news announcements and that they do not earn exceptional profits with their front-running transactions might be connected to their potential discretionary power concerning ad-hoc releases for unscheduled ad-hoc news. 22 If one believes in the discretionary power to cheat on unscheduled ad-hoc news releases, one would find some evidence for this interpretation in our data. In particular, executive managers earn a profit of 4.80% on the buy side with those transactions not succeeded by an ad-hoc news release. Compared to the respective numbers for directors (1.81%) and other insiders (2.12%), managers yield high profits. Thus, these high profits might be triggered by an omitted or at least delayed release of inside information via an ad-hoc announcement. However, it could also be triggered by intense mimicking trades of outside investors, who assume members of the executive board to have more valuable information than the two remaining groups of corporate insiders. Admittedly, our database might not be the ideal sample to study illegal insider trading. This is because intentional and offensive trading on inside information might not be reported to the supervisory authority. In fact, corporate insiders are often suspected not to trade on their own account but giving hints to close friends which trade on their behalf. Given this consideration, we find it interesting that insiders earn high profits with those transactions which they consider to be unproblematic and thus report. Our results suggest watching trading records of corporate insiders closely; especially those trades which are shortly succeeded by an adhoc news announcement. According to our findings, those insiders (e.g., the group of other insiders) who are not in the spotlight of the public or the financial press do not seem to fear the scrutiny of the regulator as they earn high profits trading shortly prior to ad-hoc disclosures. Therefore, the BaFin might think about intensifying its monitoring activities as well as its ability to impose sanctions to ensure market transparency and integrity of the German capital market. Nevertheless, we also see the ball in the court of the firms themselves. They have to protect their insiders from allegations, justified or unjustified, by establishing voluntary commitments like blackout periods or trading bans prior to specific corporate events. | 2018-10-23T01:51:29.035Z | 2007-07-13T00:00:00.000 | {
"year": 2008,
"sha1": "b1f1f86cb5b82c2e608005770a83266a433b0ff6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/BF03343533.pdf",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "d883238f1ca14d0319620946dc4a6358088b303e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics",
"Business"
]
} |
219002286 | pes2o/s2orc | v3-fos-license | Manipulating Oxidative Stress Following Ionizing Radiation
We have previously shown the existence of an endothelial-stem cell linked pathway that is activated by single-high-dose radiotherapy (SDRT) (>8 Gy/fraction). This endothelial-stem cell linkage pathway mediates normal tissue injury after SDRT of the intestines, lung and salivary glands [1-3], suggesting that this represents a generic response mechanism for mammalian tissue damage injury by large single-dose irradiation, i.e. ablative radiation therapy. Prior studies reported by Moeller et al. [4] provided strong evidence that bursts of ROS are generated by waves of hypoxia/reoxygenation that occur after each radiation exposure in response to conventional fractionated radiotherapy in tumors. Recently, Mizrachi et al. [3] demonstrated that SDRTinduced salivary glands (SG) hypofunction was to a large extent mediated by microvascular dysfunction involving ceramide and ROS generation.
NY, USA
It is now well accepted that the ionizing radiation-generated reactive oxygen species (ROS), that constitute ~2/3 of the effects of external beam radiation, do not only produce direct tumor cell death, but also affect the surrounding microenvironment. Moreover, this indirect effect of radiation may result in systemic effects, specifically the initiation of an inflammatory response.
We have previously shown the existence of an endothelial-stem cell linked pathway that is activated by single-high-dose radiotherapy (SDRT) (>8 Gy/fraction). This endothelial-stem cell linkage pathway mediates normal tissue injury after SDRT of the intestines, lung and salivary glands [1][2][3], suggesting that this represents a generic response mechanism for mammalian tissue damage injury by large single-dose irradiation, i.e. ablative radiation therapy. Prior studies reported by Moeller et al. [4] provided strong evidence that bursts of ROS are generated by waves of hypoxia/reoxygenation that occur after each radiation exposure in response to conventional fractionated radiotherapy in tumors. Recently, Mizrachi et al. [3] demonstrated that SDRT-induced salivary glands (SG) hypofunction was to a large extent mediated by microvascular dysfunction involving ceramide and ROS generation.
ROS generation is important in the maintenance of homeostasis between pro-apoptotic and pro-survival signals [5][6][7]. However, enhanced accumulation of ROS generates chronic pathological conditions. While a large number of studies on oxidative stress focused on the mitochondria-generated ROS, Wortel et al. [8] in their recent study confirmed that the endothelium is one of the major sources of ROS and identified the different types of ROS generated by the SDRT-induced oxidative stress, both within the plasma membrane and in the cytosol of endothelial cells (Figure 1). Similar findings were previously reported by our collaborators in the same cells in response to Fas ligand, tumor necrosis factor-α (TNF-α), endostatin, and homocysteine [9][10][11][12][13][14][15].
Ours as well as others previous studies have shown that the endothelial cells within the different tissues are the most sensitive cells to the effects of ionizing radiation (IR), since they are 20-fold enriched in secretory acid sphingomyelinase (ASMase), as compared to any other cell type in the body [16]. We also showed that ceramide is a sphingolipid messenger capable of initiating apoptotic cascades in response to various stressful stimuli, including IR [17,18]. IR induced alterations in the plasma membrane hydrolyzes sphingomyelin to generate ceramide via sphingomyelinases activation [19,20]. The sphingomyelinases are expressed preferentially in the vascular endothelium [21], suggesting that these mechanisms may be of particular relevance for vascular structure and function. Furthermore, data derived from acid sphingomyelinase-knock-out mice, showed that they have a radioresistant vasculature and are partially protected from end-organ radiation injury [2,22] emphasizing the biological significance of this phenomenon.
Advances in cancer diagnosis and treatment have led to increases in life expectancy, bringing forward the issue of long-term treatment-related morbidity and mortality in these patients. Numerous reports have concluded that these patients should be regarded as longterm cancer survivors rather than as healthy individuals due to their long-term risk of developing treatment-related adverse events, and in particular cardiovascular events. With this clinical background in mind, several groups have embarked in the search for the mechanism(s) by which these effects are inflicted upon the cardiovascular system with special attention to the vascular wall and the vascular endothelium [23][24][25][26].
Radiation induces microvascular dysfunction via activation of the acid sphingomyelinase (ASMase)/ceramide pathway. Microvascular dysfunction is crucial for tumor response to radiation. ASMase activation triggers the generation of ceramide-rich platforms (CRPs), NADPH oxidase (NOX) activation and subsequent production of ROS, resulting in microvascular endothelial dysfunction ( Figure 1) [27]. Elevated ROS formation in the vascular wall is a key feature of all cardiovascular diseases and a likely contributor to endothelial cell dysfunction, vascular inflammation and plaque formation. The NOX family of enzymes comprises seven members (NOX1-5, DUOX1-2), with each one of them displaying distinct patterns of expression, intracellular compartmentalization, regulation, and biological function. NOX-derived ROS control multiple aspects of cell physiology via redox-activated signaling pathways. Nevertheless, NOX over-activity, a condition that is typically associated with significant up-regulation of its expression, has been increasingly reported in a variety of cardiovascular diseases [28]. Numerous studies have demonstrated that expression and activity of at least two isoforms of NADPH oxidase -NOX1 and NOX2 -is increased in animal models of hypertension, diabetes and atherosclerosis. Several studies in transgenic mice support the role for NOX1-and/or NOX2-containing oxidases as sources of excessive vascular ROS production and as triggers of endothelial cell dysfunction in hypertension, atherosclerosis and diabetes [29].
The apolipoprotein-E-deficient (ApoE−/−) mouse is the most widely studied animal model of hypercholesterolemia and atherosclerosis [30,31]. It has been reported that ApoE−/−/ p47 phox −/− double KO are protected, compared to the ApoE−/− single KO mice, from the development of aortic atherosclerotic lesions [32][33][34]. Moreover, Drummond et al. [24] demonstrated that ApoE−/−/NOX2−/− double KO mice were also protected from endothelial cell dysfunction and development of atherosclerotic lesions, suggesting that at least some of the protective actions of p47 phox deletion were likely due to inhibition of NOX2 activity. A major mechanism by which NOX-derived ROS contribute to vascular disease is via superoxide (O2 •− )-mediated inactivation of NO, resulting in loss of its vasoprotective actions, and the subsequent generation of the highly reactive ROS, peroxynitrite (OONO − ) ( Figure 1). Peroxynitrite is a powerful oxidant that causes irreversible damage to macromolecules including proteins, lipids, and DNA, thereby disrupting crucial cell signaling pathways and promoting cell death.
Recent studies have indicated that NOX-derived O2 •− production in the extracellular compartment may be markedly increased during vascular disease. Atherogenic stimuli such as tumor necrosis factor-α (TNF-α), endostatin, cholesterol, and homocysteine increase endothelial cell expression of NOX1 and NOX2, cause CRP generation, resulting in NOX2 activation in the plasma membrane (PM) [12,15,35,36]. The translocation of p47 phox to the cytosolic face of these CRP-containing NOX2 aggregates is likely to result in 'hot spots' of activity in the endothelial cells PM. This, combined with the accumulation of macrophages (which normally express NOX2 oxidase in the PM) in the vessel wall, will result in markedly higher amounts of O2 •− being generated in the extracellular space, thereby increasing the likelihood of NO breakdown and production of OONO − . This may explain, at least in part, why the majority of oxidative damage detected in atherosclerotic lesions occurs in the extracellular matrix space [37]. All cardiovascular pathologies have in common excessive NOX-dependent ROS formation associated with up-regulation of the various NOX subtypes. In this study, Wortel et al., [8] were able to demonstrate for the first time that IR induces excessive NOX activation and ROS generation. This study and other studies published recently by their collaborators [38], demonstrate that ROS generation is an indispensable mediator of SDRT-induced ischemia/reperfusion pathobiology in tumors. They also demonstrated the transient and immediate O2 •− generation and the subsequent accumulation of peroxynitrite (OONO − ) after SDRT, which resulted in impaired endothelial function.
Recently, the first class of the dual NOX1 and NOX4 pharmacological inhibitors, GKT137831, received the approval for phase II clinical trial for the treatment of diabetic nephropathy. According to a recent press release of Genkyotex, the leading pharmaceutical company that develops NOX inhibitors, treatment of patients with diabetic nephropathy with GKT137831, significantly reduced liver enzymes and markers of inflammation. Moreover, the beneficial effects of GKT137831 were reported in several experimental models of disease, including atherosclerosis, hypertension, and diabetes [39][40][41], emphasizing the major role of the endothelial dysfunction in these pathological conditions and the involvement of NOX-mediated ROS.
Obtaining profound knowledge on the pathogenesis of IR-induced vascular injury could help stratify patients who might be at risk for cardiovascular events and potentially identify specific biomarkers that would warrant close surveillance and indicate specific diagnostic interventions. We postulate that acute IR-induced microvascular dysfunction of normal tissue vasculature may contribute to and increase the risk for long-term cardiovascular morbidity and other vascular related diseases.
Wortel's [8] finding that sildenafil protected endothelial cells from RT-induced oxidative stress in the endothelial cells via reduction of NOX-mediated ROS formation and, thus inhibited the pro-apoptotic ASMase/ceramide pathway, is of paramount importance. The endothelium plays a role in the initiation of pulmonary oxidative injury induced by Ischemia/Reperfusion (I/R), and endothelial cells could be either a source or a target of oxidants. Oxidative injury to endothelium has been found to set the stage for secondary pulmonary injury by leukocytes [42,43].
While it is known that sildenafil affects NO levels in the blood vessels, Wortel et al., [8] elucidated the cellular mechanism by which sildenafil, attenuates endothelial dysfunction and protects against radiation-induced erectile dysfunction (ED). This study showed that sildenafil inhibited RT-induced NOX generation of ROS in endothelial cells and protected them from apoptotic death via the acid sphingomyelinase (ASMase)/ceramide pathway. Specifically, by inhibiting ASMase and ceramide generation, sildenafil significantly inhibited O2 •− generation, and subsequently NO was not used to generate ONOO − [44]. ONOO − can disrupt crucial cell signaling pathways and initiate cell death by causing damage to macromolecules, including proteins, lipids, and more critically DNA [3,44]. In addition, to reducing the toxicity of ONOO − , sildenafil maintained the bioavailability of NO in the endothelial cells, and therefore preserved its vasoprotective properties. Reduced NO bioavailability is considered one of the main characteristics of endothelial dysfunction, which is also linked to erectile dysfunction [45]. In addition, since endothelial cells are a major target for RT-induced pneumonitis, GI syndrome, kidney damage, and xerostomia and because it is known that sildenafil is well tolerated, repurposing this drug should be considered for the treatment and prevention of RT-induced side effects, including cardiotoxicity for which it was initially successfully introduced.
Many cancer survivors find themselves struggling with health issues related to prior cancer treatment many years after they are declared cancer-free. These problems include chronic pain, neuropathy, infertility, recurrent infections, memory problems, sexual health issues, cognitive impairments and more, including increased risk of secondary malignancy. For many cancer survivors, these health issues last a lifetime, and some might even be life threatening. Mitigating treatment side effects by protecting the vasculature, in particular the generation of ROS, may result in a significant improvement of patient's quality of life by reducing morbidity and mortality. | 2020-04-30T09:01:52.687Z | 2020-05-17T00:00:00.000 | {
"year": 2020,
"sha1": "6731ddff49a82d3c31a1526b71257a5484764939",
"oa_license": "CCBYNC",
"oa_url": "https://www.scientificarchives.com/admin/assets/articles/pdf/manipulating-oxidative-stress-following-ionizing-radiation-20200508070537.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b73aa4628cc7f8da2ccf493de6416c9f75532e57",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
264887171 | pes2o/s2orc | v3-fos-license | Power Density Increase in Permanent-Magnet Synchronous Machines Considering Active Thermal Control
: The power density of electric machines for future battery electric vehicles must be further increased to improve customer benefits. To this end, this paper compares two state-of-the art electrical traction machines and evaluates the potential for increasing the power density using a third, novel high-speed machine design. The analysis is performed using an electromagnetic finite element analysis, a thermal network with lumped parameters, and a coupled electromagnetic–thermal simulation. The simulations of the three machines evaluate the potential for increasing the power density and overload margins, as well as reducing material consumption. With regard to the active thermal control, the new design aims for reduced thermal capacities and increased loss density to optimize the thermal controllablity and overall performance. The thermal active control is analyzed in thermal transient simulations and electromagnetic simulations with different magnet temperatures. The results show that higher magnet temperatures benefit efficiency and reduce losses for low torque at high speeds. However, a colder magnet is needed for maximum torque at base speed. A maximum loss reduction of 24% is achieved with a 100 °C magnet-temperature difference at maximum speed and low torque.
Introduction
The need to reduce emissions has made the requirements for electrical machines even more demanding, with the cost and availability of materials being critical factors in the development of electric drive units, as well as improving their efficiency and range.Increasing the power density of electrical machines is a promising approach to meet these requirements, as presented by various methods highlighted in this paper.This not only helps in meeting the demands, but also leads to more sustainable electrical machine designs.
In recent years, development trends show an increased component power in the electric drive train, and also in electrical traction machines, as depicted in Figure 1 [1].Increasing the maximum machine speed is the second distinctive trend which can be observed.Advantages in power density and efficiency have led to the predominant use of interior permanent magnet synchronous machine (IPMSM)s in electrical traction applications [2].It is well known that active thermal control of electrical machines can be used to minimize losses.In this work, we combine this degree of freedom with common measures to increase the power density and investigate the synergies.
Although not the topic of this work, it is worth mentioning that lightweight construction approaches, such as the use of carbon-reinforced fiber material or topology and shape optimization in several components can also result in higher power density [3,4].
This work is structured as follows: First, an overview of existing traction machines in production vehicles is given.Then, the basic specifications of the three simulated machines are given.Next, fundamentals of the definition of power density are recapitulated and the simulation setups are described.Two representative passenger vehicle traction machines are analyzed with regards to their power density and power loss density in electromagnetic-thermal simulations.These two state-of-the-art machines are compared to the third new developed high-speed, i.e., 25,000 rpm maximum speed, machine.This machine is especially designed for active thermal control operation, which has further potential for obtaining a higher power density.
In this work, different measures for power density improvement are presented and discussed.Afterwards, four common approaches to increase the power density are examined: 1.
An electromagnetic design approach aimed at increased speed and reduced torque, which leads to smaller machines at the same mechanical power.2.
An improved thermal layout, resulting in higher power limits and more efficient operation in certain ranges, at the same machine size.
3.
Thermal active-component-wise control that reduces losses in large operation areas.4.
Overview of Current Electric Vehicle Traction Machines
There is a general trend to increase the speed, as can be seen in Figure 1: starting from 6000 rpm between the years 1997 and 2002, to 12,000 rpm and 13,000 rpm around 2012, and finally reaching around 17,000 rpm in the most recent electrical vehicles (EVs), from 2016.
Considering the Prius as a representative hybrid electrical vehicle (HEV), from generation 2 to generation 3, a 1.5 kW/kg (45 %) increase was realized.The Prius generation 4 has an additional increase of 0.9 kW/kg ( 18.7 %) compared to generation 3 [5][6][7].This could be achieved by an optimization in the electromagnetic design.The obvious measures are: • Increasing the speed.Assuming a similar corner speed to the maximal speed ratio, the maximal speed can be used as an indicator for increased speed at peak power.
Early designs have a maximum speed of 6000 rpm, which is successively increased to 17,000 rpm by 2017, depicted in Figure 1.• Reduction in the amount of electrical steel used in the rotor, as discussed in [1,5,6], which results in an overall lower weight at the same power, thus increasing the power density.
•
The rotor pole geometry is altered significantly throughout the generations, which probably improved the electromagnetic characteristics.
Although it cannot directly be concluded from the specifications presented in Figure 1, it can be stated that more sophisticated cooling systems are utilized for improved performance (regarding efficiency, power density, and maximal power) [12]; cf.Section 3.3.
Investigated Electrical Machines
The investigated electrical machines are three IPMSMs.Two machines are very similar to existing series electric vehicles' traction drives.These machines (electrical machine model 1 (M1) and electrical machine model 2 (M2)) represent the state of the art considering their motor topology, power class, and cooling systems.Electrical machine model 3 (M3) is a development which tries to capture trends beyond the state of the art, regarding operating speed, power density, and thermal design.
An overview of the machines' specifications is given in Table 1.The performance data are determined at winding temperature (ϑ winding ) = 160 °C and magnet temperature (ϑ magnet ) = 160 °C from the electromagnetic simulation.The DC link voltages are all in the similar range of a 400 V system, and the winding type is hairpin-distributed winding for all three machines.M1 and M2 both have a 48 slot, 8 pole configuration, whereas M3 has a 54 slot, 6 pole configuration due to its higher operating speed.Machines M1 and M2 are, in general, bigger and heavier than M3.Machine M1 weighs 75.68 kg and M2 43.44 kg (about 310 %, and 136 % heavier than M3).For the weight distributions, see Table 2.The cross-sections of the machines' rotors and stators are given in Figure 2. Electrical machine M1 has a delta-shaped pole and measures 220 mm active axial length and 190 mm stator outer diameter.It has a maximum speed of 16,000 rpm and a maximum torque of 284 N m, which is higher than the torque of the compared machines.Machine M1 has a peak power of 150 kW.
Machine M1
Machine M2 Machine M3 The second electrical machine, M2, has a V-shaped pole, an active length of 175 mm, and a stator outer diameter of 190 mm, and reaches 14,500 rpm and a maximum torque of 190 N m at a maximum power of 109 kW.
With the double V-shaped pole and the smallest overall volume (73.5 mm active axial length and 175 mm stator outer diameter) and weight (18.37 kg) of M3, the machine still reaches 80 kW, with a maximum speed of 25,000 rpm.
Both reference machines, i.e., M1 and M2, exhibit similar specific power per weight (ρ grav ) of 2.09 kW kg and 2.51 kW kg .M3 has a significantly higher specific power per weight of 4.68 kW kg .
In Table 2, the weight shares of the machines are shown.The magnet weight share is around 4 % for all machines.Also, the stator makes up around 30 % for all machines.M3 shows a higher share for the winding (21.8 %), and reduced rotor share of 19.3 %, whereas the compared machines show around 30 % for the rotor and 9.8 % (M1) and 12.3 % (M2) for the winding.
Although machine M3 has a significantly lower maximum power than machines M1 and M2, the amount of converted energy in machine M3 is the same in the assumed drive cycles.The occurring losses in machine M3 can be assumed to be very similar, since the efficiency is also similar.Considering the smaller volume and mass of machine M3, the loss density is in general higher in M3.In particular, this will be quantified in Section 5.1 for all three machines.This higher specific loss per volume also allows the active conditioning of each machine component, as explained in Section 6.
All the machines use housing water jacket cooling.Although machine M3 can optionally be used with additional sophisticated component-specific cooling concepts (direct stator and rotor magnet cooling cf.[13]), these are not considered in the following.Instead, further investigations are planned to show the specific thermal conditioning possibilities.These machines are selected for comparison because they represent a typical range of machines for passenger vehicles with similar system specifications and the same topology and basic design concepts.
Methods and Fundamentals
The mechanical power at the shaft, mechanical power (P mech ), is defined by the torque (T) and speed (n): To ensure proper comparability, the electric machine is used with a generic housing, and the performances are derived from simulations, with the simulation setups and boundary conditions being identical.
The overall mass of the electrical machine is defined by: where m winding is the mass of the copper of the winding, m stator is the mass of the stator lamination sheets, m rotor is the mass of the rotor lamination sheets, m shaft is the mass of the shaft, and m housing is the mass of the generic housing.Each of the investigated machines is separated into these component weights in Table 2.A straightforward way to roughly estimate the machine's overall electromagnetic active volume, is to consider the active length of an electrical machine (l) and the stator outer diameter (D S,OD ).The actual volumes and masses of each component are used for the power loss density.The power density (gravimetric) is defined as power per weight: or volumetric; power per volume: with maximal mechanical output power (P max ) and total mass of the electrical machine (m total ), or the simplified machine volume (V EM,total ), defined in (2), (3), respectively.However, the initial quantities are often not properly defined, especially in specifications for commercial use.In fact, there is no clear definition of power and weight, but a variety of possible interpretations are used.It is a significant difference whether comparing the weight for the complete electric drive unit (EDU), the electrical machine including power electronics, or only the electrical machine.Another example is the choice of the specified maximum power.Both continuous power and peak power are used, where 10 or 30 s are specified for the latter.The same definition can be used to give a specific loss per volume (specific loss per volume (ρ L,vol )) for each component i as given in ( 6), where P L,i are the losses in one component, i, and V EM,i is the corresponding volume of component i.
The power for continuous operation is defined by the same maximal operating temperatures (i.e., 160 °C for winding and magnets) coming from the electromagnetic-thermal simulation.The overload power is set as the maximum power before significant saturation effects are noticeable, considering the electromagnetic simulation results only.The detailed investigated electrical machines are modeled in ANSYS MotorCAD for the electromagnetic investigations in 2D finite element analysis (FEA), and for the thermal investigations in a lumped parameter thermal network (LPTN).
Losses Analysis
Total losses, as shown in (7), can be separated into copper losses (P L,Cu ) in the windings, iron losses (P L,iron ) in the laminations of the stator and rotor, magnet losses (P L,magnet ) in the magnets, and mechanical loss (P L,mech ) such as windage losses and bearing friction losses.P L,tot = P L,Cu + P L,iron + P L,magnet + P L,mech , For the investigated IPMSM designs, iron losses and copper losses show the highest share in the average load range.Considering the speed ranges of the other machines in the benchmark in Figure 1, P L,iron may be the major loss and its reduction could be a promising approach for increasing the power density.Although magnet losses have only a small share in the total losses, a reduction in P L,magnet could be advantageous for operation in high-speed ranges, since often the loss density in the rotor is one limiting factor in this range.In the base speed range, P L,Cu is usually the dominant loss type.
For machine M3, the loss separation is shown for the complete operating range in Figure 3.The main loss is P L,Cu , with up to 90% in the base speed range and up to 70% in the field-weakening range.In the lower-torque range, meaning up to 25 N m, P L,iron has a share of around 70%, and in field weakening, also in the high-torque range, still a share of 20% to 30%.P L,magnet generally has its maximum of around 2.5% in the field-weakening range.
Copper Losses
The copper losses can be separated into DC (copper losses DC part (P L,Cu,DC )) and AC (copper losses AC part (P L,Cu,AC )) parts, and considering a linearized temperature dependence represented as (8), where R DC is the DC-equivalent resistance and R ac the equivalent AC resistance, and I is the current, as described in (8), where α θ is the temperature coefficient of copper and ∆θ the temperature change with respect to the reference temperature θ ref .
Iron Losses
The nature of the iron losses has been investigated and discussed in depth: [14,15].With respect to the loss distribution over the operating range, as shown in Figure 3, iron losses account for up to 90 % and over wide ranges up to 25 % of the total losses.Considering the loss shift due to higher speeds mentioned in Section 3.4, reducing iron losses is one important task in future designs.In general, iron losses can be most effectively reduced by the choice of low-loss materials, which are determined either by the material composition or by the lamination sheet thickness; however, in this study, the new designed machine M3 already uses low-loss core materials and, therefore, the potential is limited here.
Magnet Losses
The main origin of magnet losses are eddy currents; hysteresis losses can be neglected [16,17].Since eddy current losses are proportional to the square of frequency [18], magnet losses have to be considered, especially in high-speed operation, as can be seen in Figure 3. Also, eddy current losses are temperature-dependent, and are lower at increased temperatures [17].
Although magnet losses make up only a small part of the total losses in common radial flux machines with distributed windings (in the case of machine M3 maximal 2.5%), it can be useful to reduce P L,magnet .Magnet losses can be significantly reduced by axial and or tangential magnet segmentation [19,20].In general, the improvement should hold against overall machine loss, specific operating limits (e.g., magnet operating temperatures), and the higher manufacturing costs.An example of a real-world traction drive which uses magnet segmentation is Tesla's Model 3 [1].
Reduction of Losses
A favorable way to increase the power density is by reducing the total losses in the electrical machine.On the one hand, the reduced losses will increase the reachable P mech at the same input power.On the other hand, reduced losses will result in a lower thermal loading, resulting in the same temperature at a higher power.Therefore, a loss reduction will have a double beneficial effect on the power density increase.
In the electromagnetic design process, the most relevant operating range for efficiency has to be defined prior to choosing the measures for loss reduction.As briefly described in Section 3.4, the different types of losses can be shifted for different electromagnetic design concepts.
Since copper losses make up a large share of the total losses, it makes sense to focus on a decrease in the current I or on reducing the phase resistance [21].The assumption of a reduced current I can be implemented both by the concept of active thermal-field weakening (cf.Section 6) and by the choice of the electromagnetic design (cf.Section 3.4).
Since the phase resistance, in the case of copper, is temperature-dependent, cooler windings will result in a significant reduction in the DC part of copper losses.However, the AC part of copper losses has an anti-proportional behavior since eddy currents will be higher for an increased conductivity in the material.In the end, it has to be estimated which loss share is more predominant and which target temperature should be selected for minimum losses.Reducing copper losses and having the same cooling boundaries generally results in cooler windings.
It is obvious that improved cooling of the stator windings leads to a higher power density.Sophisticated cooling systems can be specifically used to dynamically and precisely control the winding temperature to the optimal target temperature.In the end, the maximal heat dissipation is limited and further power density increases have to be investigated considering the tradeoffs for each individual design.
Improved Loss Dissipation
The maximum operating temperature of an electrical machine is defined by the material limits of the components (e.g., housing, bearings, shaft, lamination sheets, windings, insulation, permanent magnets).Usually, the components which are closest to their temperature limits are the windings (insulation) [22] and magnets [23].However, the magnet limit is harder to specify, since it is also dependent on the operating point of the magnet; an example calculation can be found in [24].
Improved cooling concepts can ensure the temperature limits in the machine's hot spots, at an increased mechanical power, and total losses, respectively.A thorough review is given in [12].In general, methods are categorized by the location of cooling, i.e., direct contact with the cooled components (spray cooling, flushing, channels in active components), and indirect (cooling jackets, housing fins, and shaft cooling).The second distinctive characteristic is the choice of cooling medium.Commonly, gas or liquid (water-based or oil-based) coolants are used.Cooling of electrical machines was always a topic of interest, as can be seen by early patents, e.g., [25].Nowadays, especially in mobile applications, increases in power density are becoming more relevant.Recent developments include direct-end winding cooling concepts [26], direct winding cooling [27], as well as improved housing jacket cooling [28].All these approaches mainly target the winding temperature.Realizations of rotor cooling, to dissipate loss from the rotor to extend the operating range and increase efficiency, which reduce losses and increase the driving range, are becoming more important [24].
Rotor-cooling concepts can be categorized, as in [12], into rotor shaft cooling, direct liquid-cooled rotor, rotor spray cooling, and combinations of those concepts.Furthermore, air-cooled rotor and rotor jet impingement cooling have to be named.In the following, the focus is on direct liquid-cooled rotor concepts, since these probably offer the highest potential regarding magnet conditioning.
The first patent dealing with dedicated rotor cooling was filed in 1990 and patented in 1993 [29].It consists of a hollow shaft and multiple cooling channels in the rotor.Subsequent patents were disclosed in 2009 [30], also a liquid hollow shaft with coaxial design, and 2013 [31], a hollow shaft with air cooling.In 2014, in [32], a closed hollow shaft solution with heat exchanger and phase-change approach was published.In 2017, a patent considering a hollow shaft was disclosed [33].Following that, commercial approaches were realized by Equipmake [34] and Audi [8].It is obvious that dedicated conditioning of separate machine components is an upcoming technology trend, which enables higher power density, higher efficiency, in particular in certain operating ranges (cf.Section 6), and finally, enables smaller-sized machines.These smaller machines, i.e., machines with higher power density, with dedicated conditioning, are the key to the effective dynamic active-thermal-control strategy for further performance optimization.
Electromagnetic High-Speed Design
The mechanical power can be defined in analytical sizing [35] by where the defined quantities are the stator inner diameter (D S,ID ), the utilization factor (C), the active length of an electrical machine (l), and the speed.The mechanical power is proportional to the torque and to the speed, and the torque is proportional to the square of the stator inner diameter D S,ID .Consequentially, the same mechanical power can be realized by reduced torque and increased speed.This results in a shift from copper losses to increased iron losses, magnet losses, and mechanical loss, which must be taken into account in the design phase.The maximum torque-speed characteristic of the high-speed machine M3 can provide the same power at higher speeds and significantly lower torque.Thus, the overall volume and weight of an electrical machine can be reduced, c.f. Section 5.1.As shown in Table 1, high-speed machine M3 has a 123 % higher gravimetric power density compared to the state-of-the-art machine M1.Additionally, the component weights, and along with this the thermal capacities, will be decreased, leading to a more dynamic thermal behavior.
Nevertheless, in the end, the electromagnetic benefits of high-speed machine designs needs to be weighted, on a system level, against a higher gear ratio in transmission, special bearings, and material choices, which is not part of this study.
Active Thermal Control
As mentioned in Section 3.3, the majority of (rotor) conditioning concepts aim at either an extended operating range or performance improvement by ensuring that magnet temperature stays below the defined limit.
For the winding, a faster controlled and limited heat-up can be beneficial for the reduction in the AC part of the copper losses.
An intentionally hot-conditioned magnet has a significant potential to reduce losses, as described in this work by the electromagnetic investigations.
IPMSMs cannot modify the rotor flux, which is chosen to reach maximum torque.For field weakening, this flux then needs to be compensated for by direct axis current (I d ).This results in increased loss, and further, increased cost, requiring either higherenergy magnets or more magnetic material, which additionally reduces the power density.The only possible way to change the magnetic flux, excluding I d , is increasing the magnet's temperature [36].The commonly used Neodymium Iron Boron (NdFeB) magnets show a temperature-dependent remanence flux density.The used magnet material in the investigated machine M3 is given in [37], and exhibits an averaged linear reversible temperature coefficient of −0.1125 % °C .With an appropriate conditioning system, dedicated to controlling magnet temperature, scenarios with hot as well as cold magnets and additionally transient changes in magnet temperature can be achieved.Especially for the operating range at lower torques, where a hot magnet is beneficial; c.f. Section 7. Given a machine with such a conditioning system, the loss can be minimized in all operating regions, increasing the effective power density and driving range.For identification of the loss reduction potential, electromagnetic simulations for the IPMSMs are performed, which are described in Section 4.
Current EVs have unnecessarily high power reserves in the base speed and fieldweakening range.This over-performance is usually achieved by higher torques, e.g., machine M1 has 200 N m to 350 N m [1].In simplified terms, torque is proportional to the size of the machine.Large machines have higher thermal capacities and a slower thermal response.On the one hand, this allows for longer or higher overload operation with a standard housing water jacket, but on the other hand, it limits the power density and makes active-component conditioning more difficult.
For identification of the loss reduction potential by active thermal-field weakening, electromagnetic simulations for the IPMSMs are performed.
Simulation Setup
For a holistic comparison, every physical domain that affects the performance of the machines needs to be simulated.The investigation is split into a stand-alone 2D electromagnetic FEA simulation, a thermal LPTN stand-alone simulation, and a coupled electromagnetic-thermal simulation.The electromagnetic simulation is used to identify the maximum torque-speed performance as well as providing an estimation of the efficiency.Furthermore, the resulting losses are used as input for the thermal simulations.The stand-alone thermal simulation is used to compare the temperature development in high-speed-and high-torque-case operating points.The coupled electromagnetic-thermal simulation evaluates the machines' behavior in common drive cycles such as WLTC or Artemis motorway 150 (Artemis motorway 150), considering the efficiency, temperature development, and main ranges of energy conversion.In the following, the simulation approaches are briefly described.
Electromagnetic Model
The electromagnetic model represents the rotational symmetric part of the machine as a two-dimensional stationary electromagnetic FEA.The outlines for the simulation model are taken from Figure 2. The materials are characterized on the basis of [38,39] for the electrical steel and on the basis of [37,40] for the magnets.The material data for copper is taken from the MotorCAD database; it has a defined electrical resistivity of 17.24 µΩ m.The simulations are current-fed with sinusoidal current wave forms, i.e., they are fundamental wave models, which means that no harmonic effects from, e.g., inverter-based current wave forms, are accounted for.The conductors are represented by homogeneous areas of constant current density.The temperature effects are defined by material characterization in pre-processing only, i.e., the resistance of the winding and remanence of the permanent magnet.The temperature is assumed to be constant in each electromagnetic simulation step.The control strategies maximum torque per ampere (MTPA) and maximum torque per flux (MTPF) are adjusted for each magnet temperature to achieve minimal losses, with an assumed 5 % voltage reserve [41].The control parameters, i.e., I d and quadrature axis current (I q ) are identified a priori using ANSYS MotorCAD Lab.
The operating maps, with temperature-optimized control parameters.are simulated by steady-state simulations for defined torque and speed set points using ANSYS MotorCAD.The resulting loss data are interpolated in post-processing.The rotational speed, amplitude, and phase of the current is defined for each simulation point.The iron losses and magnet losses are calculated in the FEA, while the copper losses are only represented as in (8).The winding temperature is assumed to be 160 °C as a possible worst case for copper losses and the temperature limit of insulation class H (180 °C according to [22]), with a feasible margin.
To perform an in-depth analysis of power and loss densities, the different machines are compared at specific operating points.
Thermal Lumped Parameter Model
The thermal transient simulations of the electrical machines are performed using lumped parameter thermal network (LPTN) models.The thermal networks depict the radial cross-section of the motor.For each motor, one pole is modeled and thermal resistances and capacities are scaled to values for the whole machine.In addition, the axial heat flow is modeled with a front and rear section, so that the networks depict the machines in three axial layers (front, rear, and the main section in between).For reference, the resulting thermal network is depicted in Appendix A. Cooling is modeled assuming fixed temperatures at the cooling inlet and thermal resistances from the inlet to the cooled machine components.Depending on the individual geometry, each LPTN network consists of around 70 nodes.The recommended values, given for the materials in Table 3, from the ANSYS MotorCAD database were used for each machine; no parameter identification of the heat transfer coefficients (HTCs) was performed.The thermal conductance of the statorto-housing is set to 1057 W K m 2 , and of housing-to-bearing shields, rotor lamination-to-shaft and magnets-to-lamination are set to 6341 W K m 2 .The effective bearing conductance is speed independent and set to 75.8 W K m 2 .The effective air-gap resistance is speed-dependent and given in Figure 4.The identification of the critical heat transfer coefficient (HTC)s from the winding to the stator and convective heat transfer from the rotor to the stator and winding would result in more realistic values for the temperatures.For reasons of comparability, and since no particular features are implemented in the investigated machines, the used values are considered to be accurate enough.The coolant boundaries are constant and set to be same for each machine at a volume flow of 6 l min and 60 °C inlet temperature.The ambient temperature is set at 24 °C, and only natural convection is assumed at the machine's outer surface.The simulation step time is 1 s.
Transient Electromagnetic-Thermal Model
Since drive-cycle simulations have a quite extensive time frame, the coupled simulation model is simplified.Therefore, the electromagnetic behavior is represented by a map-based model, where flux linkages and inductances are I d -and I q -dependent and losses are speedand torque-dependent.In each time step, the requested torque and speed is input to this model (as well as the temperatures of the magnets and windings resulting from the LPTN model).The corresponding machine behavior is fed back and the losses in particular are fed into the LPTN model.
Housing cooling with a flow rate of 6 l min and a coolant temperature of 60 °C is used.The coolant is an ethylene glycol-water mixture in the proportions of 50% for water and ethylene glycol.The start temperature for the drive-cycle simulations of the complete machines is the ambient temperature of 24 °C.The temperatures used for steadystate simulations are the resulting temperatures after five drive cycles using the Artemis motorway 150.For all simulations, the same 1800 kg vehicle model is used.The gear ratio is adapted for each machine to match the maximum motor speed to a vehicle speed of 180 km h .For each machine, the drive cycle is simulated five times in a row.
Simulation Results
For a holistic evaluation, the machines are compared in driving cycles and explicit stationary operating points, as well as stationary points over the entire operating range.The focus is first placed separately on the electromagnetic and thermal behavior.Subsequently, a dedicated thermal-electromagnetic coupled investigation is considered.
Stationary Electromagnetic Performance Analysis
Efficiency maps, including maximum and continuous torque as well as markers of the operating points of the considered machines, are given in Figure 5. Continuous torque curves are shown in dashed lines for a resulting maximal winding and magnet temperature of 160 °C.Four operating points are defined for specific power requirements.The resulting loss densities are calculated using (6).
•
The OPHT is a high-torque, low-speed operating point at 35 kW, the loss density is shown in Figure 6a.• TheOPHS is a high-speed, low-torque operating point at 40 kW, the loss density is shown in Figure 6b.
•
The OPMS is an operating point defined for the maximum speed and a power of 30 kW, the loss density is depicted in Figure 6c.
•
The OPA150 is an operating point which is derived from the average torque and speed of the considered machines in the Artemis motorway 150 cycle.The loss density of operating point averaged from Artemis 150 cycle (OPA150) is shown in Figure 6d.
Due to the different operating ranges, these operating points are differently placed within the operating range of each machine, the torque-speed positions are shown in Figure 5.The operating point high torque (OPHT), operating point high speed (OPHS), and operating point maximum speed (OPMS) conditions represent the limits of the operating ranges and can be used to consider special use cases such as rotor heat-up on a long high-speed trip (OPMS and OPHS), or stator heat-up during extreme acceleration or hill climbing (OPHT).
High-Torque Operating Point
In operating point high torque (OPHT), a typical behavior for high-torque, low-speed operation can be observed.The highest loss density occurs in the windings for all machines.Loss density in the windings is especially high for OPHT due to high currents for the high torque demand.Machine M3 shows the highest loss density, of around 12,000 W l , compared to 3800 W l (M2) and 2400 W l (M1).This results also in a rather fast temperature rise of the complete machine.The frequency-dependent losses, i.e., P L,iron and P L,magnet are comparable for all machines and significantly below all other analyzed operating points.
High-Speed Operating Points
For high-speed, low-torque operating points OPHS and OPMS, the loss density in the windings is roughly halved but still high.It is notable that M1 shows an increase of around 142 % in the winding loss density from OPHS to OPMS, compared to M2 of only 69 % and M3, which shows a reduction of 7 %.This indicates that the magnets of M1 are clearly oversized, which results in a substantial field-weakening current being needed and leads also to a lower efficiency in this operating region, as can also be seen in Figure 5, where M1 shows a wider range with low efficiency at very low torques compared to M2 and M3.The loss density in the stator is increased significantly, by around 300 %, for all machines, due to the increased frequency in the excitation current and the resulting stator rotational field.Higher harmonics in the rotor flux density lead also to increased loss density in the rotor, which is visible in the roughly five-times-higher rotor loss density and ten-times-higher magnet loss density, similar for all machines.The magnet loss density does not show a general trend from OPHS to OPMS: M1 shows the commonly expected increase, but M2 only marginally increases and M3 shows a reduced magnet loss density, by around 44 %.
Artemis Motorway 150 Operating Point
The operating point OPA150 is derived from the Artemis motorway 150, i.e., average speed and torque of the cycle.OPA150 represents a more realistic long-term operating point in the high-speed region compared to OPHS and OPMS.For M2 and M3, OPA150 is located in the maximum-efficiency region, but for M1 it is torque-wise well below the maximum-efficiency region.The winding shows the lowest loss density of all the depicted operating points, but the ratios between the three machines are similar, as for the other operating points.The stator loss density is in a range between 450 W L (M2) and 497 W L .The common observation from all of the investigated operating points is that machine M3 has higher loss densities for all components, the only exceptions are the loss densities of the magnet and rotor for OPMS, where M1 has a similar rotor loss density and 85 % higher magnet loss density.While a similar distribution can be observed for the stator loss density for all machines (cf. Figure 6c,d), machine M3 shows a significantly higher winding loss share of 51%.
Electromagnetic Transient Drive-Cycle Analysis
For further comparison of the investigated machines, transient drive-cycle simulations are performed in Artemis motorway 150 to compare the machines in a highway operation.The magnet and winding temperatures are set to 160 °C as a worst-case estimate.
Figures 7-9 show the operating point distribution, i.e., the energy conversion share, of the drive cycle for each machine plotted onto the corresponding efficiency map.The color scheme allows for the differentiation of the energy conversion at the different operating points.Furthermore, the operating range of highest efficiency, i.e., always higher than 96%, is marked by a blue outline.The total electromagnetic losses are shown as contour levels and are marked with labels.
The drive-cycle analysis for M1 shows that the main energy conversion is below 100 N m and between 8000 rpm and 13,000 rpm.As can be seen in Figure 7, even for highway situations, M1 operates only in a limited region of the map and outside of the maximum efficiency region, for which a higher torque would be needed.For the case of the Artemis motorway 150, this machine can clearly be considered oversized.As can be seen in Figure 8, M2 only partly has energy conversion outside of the efficient region, because partly the torque range in the drive cycle is lower than the highest-efficiency region.Already, the smaller M2 shows a better matching of the maximal efficiency area and the area of most converted energy in the Artemis motorway 150 cycle.
As is depicted in Figure 9, M3 mainly operates in a range below 50 N m and between 12,000 rpm and 20,000 rpm.The torque reaches the maximum torque for speeds higher than 15,000 rpm.Therefore, compared to the other machines, the operating points are more distributed over the possible torque range in this machine's operating range.M3 generally operates at a lower torque but higher speed than machines M1 and M2.The energy conversion is observable mainly in the high-efficiency region of the operating range.
It can be concluded from the results shown in Figures 7-9 that M3, as the smallest machine, best matches this representative cycle's requirements.Machines M1 and M2 could be downsized to a lower target maximum torque without having an effect on the driving scenario and are, therefore, considered to be oversized.A small, high-speed machine, such as M3, is more suitable in this highway drive cycle.
Thermal Transient Analysis
For further analysis, the drive cycles WLTC class 3 and Artemis motorway 150, and two extreme operating point heat-ups (OPHT and OPHS) are simulated.The temperature development in the drive cycles is used for benchmarking the novel, small high-speed machine M3 against M1 and M2.The single-operating-point heat-ups are used to evaluate the temperature rise rates and, therefore, the overload and thermal-field-weakening potentials.
Drive-Cycle Thermal Transients
Over the course of five drive-cycle simulations, a convergence to a quasi-stationary temperature range can be observed.The temperature results of WLTC are depicted in Figure 10.The different magnet temperatures and the different thermal dynamics between M1 and M2 compared to M3 are particularly significant in the first cycle, where the machines heat up.As the WLTC is a rather low-performance drive cycle, a similar temperature for the magnets and windings is achieved, around 80 °C in quasi-steady state.The magnet temperatures vary by around 5 °C and, thus, show lower dynamics than the windings, which fluctuate in the range of 20 °C.In Figure 11, the temperature profiles for the maximum temperature of the magnets and windings are depicted for the Artemis motorway 150 drive-cycle simulation.In general, the temperature of the windings remains higher than the magnet temperatures.The lowest magnet temperature is reached with machine M1.This machine also has the slowest change in temperature, especially in the first drive cycle up to 1200 s, where the initial temperature is low and, therefore, heat up is fast compared to the following cycles.A steady-state temperature level is established for all machines after around three cycles.M1 shows the lowest temperatures, for the winding of 100 °C to 117 °C and for the magnet around 100 °C.The temperatures of M2 and M3 are very close, with the magnets (117 °C) and windings at (117 °C to 130 °C).However, machine M3 has a more dynamic magnet temperature during the cycle and fluctuates by around 5 °C.This can be explained by the higher loss density in the rotor and magnets, as can be seen from Figure 6d.The higher temperature level can be explained by a higher loss density for M3 and the lower operating speed of M2, which results in a higher thermal resistance from rotor to coolant.
In comparison, M3, whose performance map matches the operating requirements for Artemis motorway 150 well (see Figure 9), shows a lower thermal inertia, which means a faster heat-up or cool-down.Both investigated drive cycles can be easily performed with a large temperature margin (max.temperature assumed 160 °C) by M3.Also, without any special considered cooling concepts (only waterjacket cooling is used for all three machines), the resulting temperatures of all the machines are in a very similar range.
Single-Operating-Point Thermal Transients
To evaluate the heat-up at more extreme operating points, the cases of OPHT and OPHS are simulated with the LPTN model.The thermal dynamics of the windings and magnet define, on the one hand, the overload capability, and on the other hand, the potential for the faster achievement of the desired component temperature.
The starting temperatures are derived from the steady-state temperatures at the end of the Artemis motorway 150.As can be seen in Figure 12, M1 and M2 clearly show a similar rise in the winding temperature of 0.398 °C s and 0.312 °C s , respectively.Motor M3 shows a significantly increased rise rate of 2.38 °C s .Despite this high rate, M3 reaches 160 °C after 21 s and 180 °C after 33 s, which is still a feasible overload time.Furthermore, M1 is designed to be operated with dedicated cooling systems for a higher heat dissipation, including in the windings, which would result in a slower temperature rise, if needed.The temperature rise in the magnet in OPHT is insignificant for M1 and M2, and despite that, M3 rises 4.25 times faster than M2, which is still very low at 0.02 °C s and would not limit the overload time at OPHT operation.It is a clear finding that temperature changes in specific motor components, especially those such as magnets, are faster in machines which are the right size for the planned application.Oversizing and the resulting use of too much material, in contrast, leads to a slower thermal behavior.In Figure 13, for an operating point (OPHS) which has significantly higher loss density in the rotor components, it can be observed that the small high-speed machine (M3) shows the highest temperature rise rate for the magnet, of about 0.1 °C s , compared to M1 of about 0.04 °C s and 0.05 °C s for (M2).The reason is the increased power and loss density.However, the rise rate is still rather slow and considering the targeted temperature rise rates needed for an active dynamic thermal control of the machine, a sophisticated cooling concept is essential; the component temperatures could then be dynamically controlled, enabling a better overall performance of the machine.This shows the best opportunities for M3 in the method described in Section 6.Looking at the temperature rise rates for the windings hot-spot in the operating point OPHS, machines M2 and M3 show almost the same behavior, with rise rates of 0.35 °C s , compared to a slightly lower rise rate for M1 of about 0.25 °C s .A downside of a small and thermally highly dynamic machine can be the overall high temperatures in specific machine components.The high temperature of the windings of M3 result from the high winding loss density, which is depicted in Figure 6a-d.However, it can be concluded that M3 indeed can be used for the investigated applications, the same as M1 and M2, since M3 is in a feasible temperature range for typical drive-cycle cases.Given that, it can be stated that, due to its design, machine M3 realizes a 76 % reduction in magnet material and a 75.7 % total weight reduction (M3 vs. M1); and a 49.3 % reduction in magnet material and a 57.7 % total weight reduction (M3 vs. M2).
Active Thermal Control for Electromagnetic Performance Improvement
At the moment, a particular conditioning design is under development, the layout of which has a direct link to the magnets, granting precise and dynamic temperature control.Given this background, the following electromagnetic simulation cases are considered.
For the magnet temperature ϑ magnet , a range between 60 °C and 160 °C is assumed.The resulting remanence used in the simulations is calculated in pre-processing according to the linear reversible temperature coefficient stated in the data sheet [37]; cf.[2].Demagnetization for increased temperature operation is considered; no demagnetization took place.For this investigation, the worst case is the operation with hot magnets (i.e., here, the defined maximum magnet temperature of 160 °C), and maximum field-weakening current.This operating point is identified and simulated in FEA for each investigated machine at the maximum speed and highest torque for each machine.The knee point of the magnets is the flux density below which irreversible damage to the magnets takes place.The knee point is identified as 0.31 T for machine M1, 0.14 T for machine M2, and 0.31 T for machine M3.The results of the FEA simulation for checking whether demagnetization occurs is shown in Figure 14.It can be seen that the flux density stays above the critical knee point for all machines.Therefore, the feasibility of this operating range has been checked.Since the loss reduction is more relevant in this investigation, the operating maps are represented as loss maps rather than efficiency maps, the losses type maps are given in Figure 15a,b.For a temperature distribution of ϑ winding = 160 °C and ϑ magnet = 60 °C, the loss map is given for machine M1 in Figure 16.The ranges of maximal loss, around 5 kW to 8 kW, are predictable in the high-torque regions in base speed as well as in field weakening.But also in low load at high speed, higher losses are observed, i.e., around 4 kW at 15,000 rpm.
For comparison, the loss maps are shown for the assumed highest difference in ϑ magnet of 100 °C, from the operation of 160 °C subtracted from 60 °C.In Figure 17a,b, the total loss difference in the operation of the hot magnet (160 °C) minus operation of the cold magnet (60 °C) is shown.
In general, regions with a loss difference ∆P L,tot < 0 W (marked as orange to green) represent a range where a higher magnet temperature is beneficial.For all cases, ranges which exhibit lower losses for hot magnets are identified in the field-weakening range at low torques.Ranges that show a significant performance deterioration are for high torque, especially in the base speed range.It can be observed for M1 that the region with the highest energy conversion for Artemis motorway 150 and the region for optimal operation with a hot magnet match.
In the investigated cases, the highest difference in losses is observed also for the highest temperature difference (∆ ϑ magnet = 100 °C), depicted in Figure 17a.The maximal loss reduction for a hot magnet of around 1600 W is achieved in the field-weakening range at 16,000 rpm.For cases with lower ∆ϑ magnet , the loss savings are also smaller, but the ranges where hot or cold magnets are beneficial stay nearly the same compared to the maximal ∆ϑ magnet .
Further, the loss distribution is examined in Figure 15a for machine M1, and for machine M3 in Figure 15b.Here, only the major loss shares, i.e., copper losses and iron losses, are shown, because magnet losses are contributing less than 2 % to the total losses and even less to the loss differences.It is clearly observable that the main loss difference results from lower copper losses in the field-weakening area and higher copper losses in the high-torque area.The iron losses only contribute a small amount to the total loss difference and show for both machines the highest increase in the beginning of the field-weakening area, and a reduction in the high-speed field-weakening area.In the base speed range, there is only a marginal change in the iron losses.The difference in the field-weakening area can be explained by a different operating condition with different control parameters, i.e., in particular, the different field-weakening current.When comparing the potential loss reduction in machine M1 and machine M3, it shows that M3, i.e., the smaller, right-sized machine, has a lower loss reduction potential in the field-weakening region, of only 400 W, whereas machine M1 shows a higher reduction, of around 1600 W. The consequent trend is also seen in the overload-torque region in the base speed range, where the oversized machine M1 only has a reduction potential for a cold magnet of around 400 W and machine M3 of 1000 W. This shows just the regions for which the machines are designed, and for which the amplitude of the rotor flux is chosen.For machine M1, this is for reaching high torques, and it exhibits its best efficiency in the mid-speed and high-torque ranges; c.f. Figure 7.For machine M3, this is for lower loads and mid-to higher-speed ranges and the mid-torque range; c.f. Figure 8.
Conclusions of Simulation Results
The three machines, M1, M2, and M3, are compared at steady operating points; in general, this shows that all the machines offer similar efficiencies, in the range of the highest efficiency, all machines reach above 96 %, though machine M2 has the highest efficiency, of over 97 %.Machine M3 shows higher efficiency over a wider speed range and up to deep field weakening, but has a significantly lower efficiency in the high-torque overload case at around corner speed compared to the other machines, due to a lower permanent magnet flux.The comparison of the ranges of the highest energy conversion of Artemis motorway 150 and the ranges of the highest efficiency are most overlaid for machine M3, which results in the lowest cumulative loss.
In the drive-cycle simulation machine, M3 shows the best overlay in the range of highest efficiency and the range of highest energy conversion.The thermal drive-cycle simulations reveal that for typical drive cycles the crucial magnet and winding temperatures of all three machines, although M3 is designed for a significantly lower maximum power, are at similar temperature ranges, which are far from the common temperature limits.In conclusion, a general loss reduction for hot magnets in field-weakening operation of IPMSM is proven.Also, it is shown that in most cases in the base speed range, a cold magnet is beneficial for loss reduction.Machine M1 shows a higher potential for loss reduction in the field-weakening range, and shows a lower impact of hot magnets in the high-torque base speed range.But M1 cannot change internal temperatures as fast as M3 (c.f.Figures 12 and 13), which then leads to higher losses in normal operation, especially in the field-weakening range.
Summary and Outlook
A detailed overview of three current state-of-the-art electrical machines is given.It is concluded that currently one major design goal is increasing the power density.Approaches to reaching this goal are discussed regarding their further potential.The component-wise power density loss for the complete operating range is analyzed using example operating points for the investigated machines.The thermal behavior of the three machines is discussed for the Artemis motorway 150 and WLTC drive cycles.The idea of altering the remanence of the permanent magnet by changing its temperature is presented.With electromagnetic simulations, the potential to operate more efficiently with active thermal-field weakening over the operating range is identified for the investigated IPMSM.The highest potential to reduce losses at high magnet temperatures is in the field-weakening and low-torque regions, for a selected difference in magnet temperature of 100 °C (60 °C to 160 °C); the maximum loss reduction of 1750 W(70 % decrease) is achieved for machine M1 at around its maximum speed, 16,000 rpm.The improvement for an intentionally cold magnet at maximum torque in the base speed range is around 400 W.
The relatively large machine M1 shows a higher loss reduction in this operating range compared to machine M3, which also shows reduced losses with hot magnets in the field-weakening range, but only has a maximum loss reduction of around 600 W at its maximum speed of 25,000 rpm and around 1000 W at maximum torque in the base speed range with cold magnets.But the comparable small machine, M3, shows the fastest thermal heat-up in the simulated drive cycles WLTC and Artemis motorway 150, even without any special cooling concept.This fast thermal behavior is beneficial for the dedicated thermal control to achieve the target magnet temperature in an acceptable time.By means of a newly developed conditioning concept, designed for dedicated magnet-temperature control, an adapted model is going to be built.For a holistic assessment, a coupled simulation (thermal and electromagnetic) representing the new concept will be carried out for transient, especially drive-cycle, investigations, with a focus on energy savings and overload capability.Furthermore, the models of the machines are going to be validated on the test bench.Fast changes in magnet temperatures can probably be achieved, and an optimal efficiency and power density over the complete operating range is going to be realized by a proper control strategy.For real-world applications, online magnet-temperature knowledge is needed, and the demagnetization risk always has to be considered.Additionally, the aging of magnets due to the often fast thermal cycles of magnets will be considered.
Figure 2 .
Figure 2. Radial cross-sectional views of the investigated machines.From left to right: M1, M2, and M3.
Figure 5 .
Figure 5. Efficiency maps and continuous torque curves of M1 to M3 for a magnet and winding temperature of 160 °C.
Figure 10 .Figure 11 .
Figure 10.Temperature profiles of magnet and winding in a five-time WLTC simulation.
Figure 12 .
Figure 12.Temperature profiles of magnet and winding at OPHT operation, starting temperatures are the end temperatures of Artemis motorway 150 simulation.
Figure 13 .
Figure 13.Temperature profiles of magnet and windings at OPHS operation; starting temperatures are 25 °C.
Figure 14 .
Electromagnetic check-up for demagnetization for maximum speed, maximum torque, and maximum ϑ magnet = 160 °C, for machines M1 (a) and M3 (b).
Figure A1 .
Figure A1.Example lumped parameter network from software tool from machine M3.
Table 1 .
Specifications of investigated machines.
Table 2 .
Weights of investigated machines.
Table 3 .
Material specifications in thermal model.Thermal effective resistance of the air gap of the investigated machines. | 2023-11-02T15:04:07.595Z | 2023-10-31T00:00:00.000 | {
"year": 2023,
"sha1": "e8ff99249910d9dd7cb4009dbf949148abe6070e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/16/21/7369/pdf?version=1698768540",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ead4b9a43e70ae9bfdd0d6980af9c185df7eddda",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
4676780 | pes2o/s2orc | v3-fos-license | Orbital Characters Determined from Fermi Surface Intensity Patterns using Angle-Resolved Photoemission Spectroscopy
In order to determine the orbital characters on the various Fermi surface pockets of the Fe-based superconductors Ba$_{0.6}$K$_{0.4}$Fe$_{2}$As$_{2}$ and FeSe$_{0.45}$Te$_{0.55}$, we introduce a method to calculate photoemission matrix elements. We compare our simulations to experimental data obtained with various experimental configurations of beam orientation and light polarization. We show that the photoemission intensity patterns revealed from angle-resolved photoemission spectroscopy measurements of Fermi surface mappings and energy-momentum plots along high-symmetry lines exhibit asymmetries carrying precious information on the nature of the states probed, information that is destroyed after the data symmetrization process often performed in the analysis of angle-resolved photoemission spectroscopy data. Our simulations are consistent with Fermi surfaces originating mainly from the $d_{xy}$, $d_{xz}$ and $d_{yz}$ orbitals in these materials.
I. INTRODUCTION
The spectral intensity measured by various experimental tools is modulated by matrix elements sensitive to the nature of the states probed, as well as to the experimental setup. For particular configurations, symmetry imposes some matrix elements to vanish or to reach maxima. Taking advantage of such selection rules, one can extract precious information on the probed states. For example, numerous textbooks describe how to use Raman and infrared selection rules to reveal the symmetry of phonons and other excitations. As with other probes, symmetry plays an important role in the photoemission process. It has been used often in the past to identify the nature of the electronic states of various systems [1][2][3][4][5][6][7].
Unfortunately, simple photoemission selection rules are restricted to a few configurations, which are not necessarily accessible with every experimental setups. Moreover, a slight sample misalignment may cause a misinterpretation of the data. In fact, the intensity variations in the momentum space often look strange and asymmetric, and they are usually neglected by ARPES experimentalists, which refer to them as the nebulous matrix element effects. In some cases, Fermi surface mappings are symmetrized to make them look more "natural". Despite several attempts reported previously to reproduce experimental data [7][8][9][10], the determination of the orbital characters in ARPES experiments is still not performed routinely, mainly due to the complexity of the calculations. A simpler and more practical approach is needed * dingh@iphy.ac.cn to extract useful information that is otherwise commonly sacrificed.
In this paper, we develop a systematic but simple approach to the calculation of photoemission matrix elements in Fermi surface mappings. We apply this technique to optimally-doped Ba 0.6 K 0.4 Fe 2 As 2 , a multi-band Fe-based superconductor for which plenty of data is available in literature [11], and to FeTe 0.55 S 0.45 , an Fechalcogenide superconductor. A precise knowledge of the determination of the orbital characters of the lowenergy bands is particularly crucial in these materials, for which superconducting pairing mechanisms involving orbital fluctuations have been proposed [12]. Our calculations show remarkable agreement with experimental data in multiple experimental configurations of polarization and beam orientation.
II. EXPERIMENT
In order to test our numerical approach, we performed ARPES experiments on high-quality single-crystals of Ba 0.6 K 0.4 Fe 2 As 2 and FeTe 0.55 Se 0.45 under various conditions. For each experimental setup, samples have been cleaved in situ and maintained in ultra-high vacuum conditions. ARPES Fermi surface mappings were performed at the Institute of Physics, CAS, in a weakly polarized πconfiguration using a MBS T1 microwave-driven helium source (hν = 21.2 eV) and a VG-Scienta R4000 electron analyzer. Synchrotron-based experiments were also performed at Swiss Light Source beamline SIS and at beamline UE112 PGM-2b of BESSY using a VG-Scienta R4000 electron analyzer mounted in p and s configura-tions, respectively. For these experiments, photons in the 20-138 eV range with different circular and linear polarizations were used. All measurements have been performed below 20 K.
III. DEFINITIONS AND CONVENTIONAL SELECTION RULES
Photoemission is a complex quantum problem which is far from easy to handle. For a simpler description, it is very convenient to decompose this process into the three steps of the so-called 3-step model [13]: (i) excitation of an electron of initial state | i into a bulk final state; (ii) travel of the excited electron towards the surface; (iii) transmission of the excited electron through the surface into a final state | f approximated by a plane wave. During the whole process, the relaxation of the remaining electrons and their interactions with the photoelectron are neglected. Within the 3-step model, the matrix element characterizing the photoemission process is given by: where A is the potential vector associated with the incoming photon and r is the position operator. For sake of clarity, we also disregarded a constant prefactor. We present in Figure 1(a) two commonly used ARPES configurations that simplify the analysis significantly. We call A π and A σ , respectively, the components of the potential vector parallel (π polarization) and perpendicular (σ polarization) to the emission plane defined by the vector k along which the photoemitted electron is ejected and the normal to the sample surface. Similarly, the incident plane is defined by the incident light vector and the normal to the sample surface. When dealing with unpolarized light, it is also useful to define two special configurations of ARPES setup. Hereafter, we call p and s the ARPES configurations for which the incident and emission planes are parallel and perpendicular, respectively. With θ l described as in Figure 1(a), the potential vector can be expressed in a more general configuration with linear polarized light as: Right-handed circular polarization C + and left-handed circular polarization C − are defined by A(C ± ) = A π ± iA σ , and thus for circular polarized light we have: A(C ± ) = (−A π cos θ l , ±iA σ , A π sin θ l ) Non-polarized light is treated by adding separately the contributions of π and σ linearly polarized photons to the photoemission intensity | M if | 2 . Since | M if | 2 is a scalar observable, it must necessarily transform under crystal symmetry operations like the fully symmetric irreducible representation Γ 1 of the corresponding group in order to be different from zero. In other words, the decomposition of the tensor product of Γ i , Γ f and Γ op , which are the representations associated to | i , | f and A · r, respectively, must contain Γ 1 , which is possible only if their total parity is even. The plane wave r | f = e ik·r is always an even state with respect to the emission plane. With respect to that same plane, the operator A · r has an even and a odd parity for light polarization parallel (A π ) and perpendicular (A σ ) to the emission plane, respectively. Knowing the parity of both A · r and the final state from the experimental configuration, one can deduce the parity of the initial state by choosing a proper set of coordinates. For a tetragonal system with d electrons like the Fe-based superconductors, the most natural orientations for ARPES experiments is to align the sample (i) with the Fe-Fe bonds parallel to the emission plane to probe the electronic states along the Γ-M direction (here defined in the 1-Fe/unit cell representation), or (ii) with the Fe-Fe bonds at 45 • from the emission plane for ARPES measurements along the Γ-X direction. In these simple cases, the five orbital wave functions d z 2 , d xz , d yz , d xy and d x 2 −y 2 form a convenient basis to describe the initial state. It is often preferable though to use linear combinations of d xz and d yz to construct the wave functions d o and d e , which are odd and even with respect to any emission plane, respectively, as shown in Figures 1(b) and (c). More specifically, we have where θ F S is the Fermi surface angle defined in Figures 1(b) and (c). Although such approach has been used already to study the Fe-based superconductors [14][15][16][17][18][19], the various interpretations are not always consistent, thus calling for alternative methods for determining the orbital characters.
IV. COMPUTATIONAL DETAILS
In this section, we explain briefly how to use ARPES intensity patterns to determine the orbital characters of the Fe 3d electronic states near the Fermi level of Febased superconductors. A more detailed calculation is given in Appendix A. Here we focus only on the main steps.
Within the 3-step model, as mentioned previously, we use the 3d atomic orbital wave functions {d xy , d xz , d yz , d z , d x 2 −y 2 } to characterize the initial state | i : where α m are coefficients, R 32 (r) ∝ r 2 e −r/3 is the longitudinal part of the 3d atomic wave functions with r given in Bohr radius units and Y m l (θ, φ) is the spherical harmonic with angular moment l and azimuthal moment m. The final state, here approximated by a plane wave function, can be expressed in terms of the spherical harmonics as: where j l (k f r) is the Bessel function. The photoemission matrix element M λ if associated with the different spherical harmonic Y λ 2 becomes where, The passage from these matrix elements to matrix elements involving the 3d orbital atomic wave functions is performed using the following relations: In Figures 2(a)-(e), we give the φ dependence of the x, y and z components of these matrix elements for a photon energy of 21.2 eV, which corresponds to the Iα line of conventional He discharge lamps, and for k f || = 0.3π/a, where a is the in-plane lattice parameter. We used the fact that the standard Gaunt coefficients f λ α (l, µ) are nonvanishing only for l = 1, 3. In addition, we found empirically that the coefficients ρ 3 (k f ) = 1 and ρ 1 (k f ) = −2/5 reproduce the experimental data very well over a wide range of photon energy. For a better comparison, all the matrix element weights are normalized by z −M (d e ). We note that matrix elements for the purely in-plane orbitals d xy and d x 2 −y 2 are smaller than the other ones by a factor of 5, even though d xy , d xz and d yz are equivalent orbitals under symmetry operations. This effect is caused by the smallness of the angle θ k when using a photo energy in this range. We also point out that the z component of the d z 2 matrix element is larger than any other, which indicates that the d z 2 matrix element is more sensitive than others to a A z polarization. Due to the fact that k z is not a good quantum number in photoemission experiments, we introduce a few empirical parameters to the formula describing the full matrix element M δ if and improve the agreement between simulations and experimental data. The full matrix element is now expressed as: where δ = d z 2 , d xz , d yz , d xy , d x 2 −y 2 . From direct comparison with experiments, we found out that the ratio between the matrix elements associated with the d xy and d yz orbitals is only 1/2 instead of 1/5. We thus introduced the weight factor w δ = {5(δ = d xy , d x 2 −y 2 ), 1 oth-erwise}. Within our semi-quantitative approach, these parameters are viewed as phenomenological parameters compensating for our simplified model. They have been fixed at the same values for all our simulations. For the study of Fe-based superconductors, w z and γ have been fixed to 4 and k z c+ 3π 2 , respectively, where c is the lattice parameter along the z direction.
Since ARPES allows only measurement of the intensity I δ if (φ k ) = |M δ if | 2 , and more precisely the relative distribution of intensity in the momentum space, several prefactors can be dropped in the calculations, including the imaginary prefactor. Assuming small θ k in Eq. (9), we can simplified the matrix elements as follows, keeping only the angular parts of the matrix element components: The different components of these simplified matrix elements are given in Figures 2(f)-(j). Although their precise absolute values differ from that of the components in Figures 2(a)-(e), they carry essentially the same orbital information while simplifying calculations significantly.
We now consider the effect of light polarization on the photoemission response, which is widely known by experimentalists to be important. We first start by the experimental observation of a difference, often called circular dichroism, between the photoemission response to left-handed and right-handed circular polarizations. This effect can have different origins [20]. For example, it has been attributed to spontaneous breaking of the timereversal symmetry in Bi 2 Sr 2 CaCu 2 O 8+δ [21]. This effect is quite different from the circular polarization used in YBa 2 Cu 3 O 7−δ , for which circular dichroism appears as a surface anomaly. It has been useful to separate the photoemission contributions of the bulk and of the highly polar surface resulting from the absence of natural cleaving plane in this material [22][23][24]. In this particular case, only a non-trivial combination of the photoemission responses to left-handed and right-handed circular polarized light can allow a full separation of these two components [25].
We note that besides these anoumalous circular dichroism effects, one should also expect asymmetric photoemission response to C + and C − light, depending on the geometry of the ARPES configuration. Indeed, it has been shown that unless the photoemitted momentum, the normal to the sample surface and the incident beam momentum are all coplanar (p ARPES configuration) in a mirror symmetry plane of the sample, circular dichroism can be observed [20]. Rather than searching an origin for circular dichroism in the Fe-based superconductors, which goes beyond the purpose of the current work, i.e. to extract useful information on the orbital characters of the bands and FSs observed by ARPES, here we simply try to describe its phenomenology and to add it as a tool to determine the orbital characters of bands.
By working out the details of Eq. (13), one can show that all the matrix elements M δ=d z 2 ,dxz,dyz,dxy,d x 2 −y 2 α=x,y,z have pure imaginary values. Assuming the form of the potential vector given in Eq. (3) for circular polarization, we deduce that the photoemission intensity for C ± polarized light is given by: The previous equation indicates that there should be no difference between the photoemission responses to C + and C − polarized light if all the matrix elements have pure imaginary values, which is supposed from Eq. (13). To account for the difference occurring in real experimental data, we add a phase to each matrix element. The photoemission intensity when circular polarized light is used thus becomes: The addition of phase factors to each matrix element also influences the photoemission intensity I π corresponding to π-polarized light and the photoemission intensity I σ associated to σ-polarized light, which are now respectively expressed as: (17) and These later considerations allow us to predict appropriate phenomenological forms for the photoemission intensity responses to circular and unpolarized light in the common p and s ARPES configurations. As illustrated in Figure 1 From these later equations, we can conclude that in the p configuration, there is no difference between C + and C − along the high symmetry line cut, a result valid for both odd and even orbital characters and consistent with a previous work [20]. This contrasts with the intensity predicted for a non-polarized light excitation. Since the photoemission intensity I non for a non-polarized light excitation can be described by the sum of I π and I σ , we have: These equations indicate that in the p ARPES configuration, even symmetry orbitals may lead to an intensity asymmetry along k x , but not the odd symmetry ones.
In the s configuration, electrons are collected in the k y − k z plane, and we should expect different selections rules. Indeed, we now have for the odd symmetry orbital characters: M δ=odd In contrast to the p configuration, the equations show that we can expect circular dichroism in the s configuration, in agreement with a previous work focused on core levels [20]. The use of circular polarized light is also a useful way to determine the symmetry of the band structure. As for the photoemission response to non-polarized light in the s ARPES configuration, we now have: Even though the comparison between the photoemission intensity recorded with linear π-polarized and σpolarized light give the strongest contrasts, some assumptions on the symmetry of bands can still be made based on data recorded with non-polarized light, such as a traditional He discharge lamp.
Following LDA band calculations indicating that the orbital weight around the Fermi level in Ba 0.6 K 0.4 Fe 2 As 2 is dominated by the Fe 3d orbitals d xz , d yz and d xy , we only considered the related matrix elements in our simulations. More specifically, LDA predicts that there are three holelike Fermi surfaces centered at the Γ point with d xy , d e and d o orbital characters. Previous ARPES results also show the existence of 3 holelike Fermi surface pockets centered at Γ, two of them being nearly degenerate [6,[26][27][28]. Following a previous notation, here we call β the outer Fermi surface, and α and α the two others, which will be considered degenerate in our simulations. At M = (π, 0), here defined in the 1 Fe/unit cell description, theoretical calculations predict a Fermi surface pattern formed by the hybridization of 2 ellipses. For k z = 0, the ellipse tips have a d xy orbital character while the inner part comes from d yz and d xz [29,30]. This orbital distribution around M is reversed for k z = π. Theoretical calculations also predict a non-negligible k z variation at the M point [29][30][31] that is not observed by ARPES [28]. While ARPES performed for several Fe-based materials with different cleaving surfaces reveal k z variations of the electronic band structure at the Γ point [11], the reasons behind this experiment vs theory discrepancy for the electronic band structure at the M point are still under intense debate. In our simulations, we use as M-centered electronlike Fermi surface pockets the k z -invariant hybridized functions determined from a three-band model [32]: where we imposed t 3 = t 2 = 1 and 0 xy = 0.1 for convenience, these parameters making the weight of different orbital characters similar to random phase approximation (RPA) results. As we show below, such functions are at least consistent with the ARPES observations. Figures 3(a)-(d) show the Fermi surface intensity patterns of Ba 0.6 K 0.4 Fe 2 As 2 in various configurations. For each experimental pattern, we give in the second column from the left the corresponding result from our calculations using the orbital configuration given above (Figures 3(e)-(h): Simulation A). The size of each Fermi surface used in the calculations is chosen to match approximately the size of the corresponding experimental Fermi surface. We note that small variations in the Fermi surface sizes do not have qualitative effect on the calculated patterns. In the first experimental configuration, light is σ-polarized along the x-axis direction. The experimental results indicate much stronger weight at the M point than for the Γ-centered Fermi surfaces. The intensity is even weaker for the β band, especially along the k y direction. Since the polarization is parallel to k x , this result suggests that the β band must have an odd symmetry along both k x and k y , and we thus tentatively associate the β band to a d xy orbital character, leaving d o and d e symmetries for the nearly degenerate α and α bands. In this configuration, our simulation shows a much stronger intensity at the M point than for the Γ-centered Fermi surfaces, in agreement with experimental data. Moreover, it predicts weaker intensity on the outer Γ-centered Fermi surface, with even weaker spectral intensity along the k y direction, which is also consistent with the experiment.
To test our approach and our orbital attributions further, we show in Figure 3(b) results obtained at 21.2 eV (near k z = 0 [28]) with light π-polarized along Γ-X(π/2, π/2). The Fermi surface mapping is quite counter-intuitive, with very strong intensity spots found on the Γ-centered Fermi surface pockets in the first quadrant. The result also indicates strong intensity on the tip of the ellipse that has been measured. Surprisingly, even such a peculiar Fermi surface pattern is qualitatively well reproduced by our simulation displayed in Figure 3(f), except perhaps for a weaker intensity on the inner Γ-centered bands than expected. This good agreement between simulation and experiment reinforces our initial orbital assignment.
In Figures 3(c) and (d), we present the Fermi surfaces obtained with non-polarized light from the Iα spectral line of a Helium discharge lamp (hν = 21.2 eV) for a beam incidence aligned along the Γ-X and Γ-M directions, respectively. Although both configurations give rise to much stronger intensity along the M-centered Fermi surface elongated along k x than the one elongated along k y , the Fermi surface patterns are quite different around the Brillouin zone center. While the map obtained with the Γ-X orientation of the light shows spectral intensity almost suppressed in the third quadrant, the intensity has a more symmetric distribution in the map recorded in the Γ-M configuration, albeit for an intensity slightly smaller below the k x = 0 line than above. Moreover, the β Fermi surface exhibits an additional suppression of intensity along k x and k y . Once more, our simulations, displayed in Figures 3(f) and (g), explain well the strange spectral weight intensity distribution found experimentally.
In the third column from the left in Figure 3, we illustrate the sensitivity of our approach to distinguish between two sets of simulations by displaying simulation results (Simulation B) using a wrong orbital assignment. As compared to Simulation A, we exchanged the orbital characters of the α (d o in Simulation A) and β (d xy in Simulation A) bands. We also switched the orbital characters around the M point, where now the tip is considered to have a d xz /d yz character as opposed to a d xy orbital character for the inner part. Although the results seem also good when using σ-polarized light, the agreement becomes much worst for other configurations. This observation is valid not only for the β band, but also for the Fermi surface intensity pattern at the M point, which is mainly aligned along k y rather than k x , in contrast to the experimental results and to Simulation A. For these reasons, we argue that the orbital configuration used in Simulation A is at least compatible with the experimental results, whereas the one used in Simulation B must be discarded.
Additional information can be obtained from the simulations away from the Fermi level. In Figure 4 we display the ARPES intensity plots of Ba 0.6 K 0.4 Fe 2 As 2 recorded along the M-Γ direction using 138 eV photons. This photon energy corresponds to k z = π, where the α and α bands have the largest separation and thus their apparent degeneracy is removed [28]. As expected, the intensity pattern is strongly polarization-dependent. While the M-centered bands have very high intensity for σ polarization as compared with the Γ-centered bands, the opposite is observed for π polarization. The spectrum obtained with circular polarization is more or less an hybrid of the two others. Interestingly, the spectral weight is strongly asymmetric with respect to the zone center when using π polarization, whereas it is almost symmetric for the spectrum recorded with σ polarization.
The dispersions and Fermi wave vectors of the various bands can be approximated from the intensity plots as well as from the corresponding curvature intensity plots [33], which are given in the second row of Figure 4. Using this information, we performed simulations for energies away from the Fermi level. The results are compared directly to the MDC profiles in Figure 4. To simplify, we attributed the same half-width at half-maximum to each band. Despite this simplification, the simulations allow a good understanding of the MDC profiles. For example, the simulations predict the relative symmetry and asymmetry of the photoemission intensity with respect to the zone center. More importantly, they allow us to pin down the orbital characters of the α and α bands. Symmetry imposes the intensity of the d e band to vanish when using σ polarization along that particular direction. Accordingly, only two bands are observed around Γ in this configuration. In contrast, both the d xy and d o bands should vanish around Γ when using π polarization. Accordingly, only one band is detected around Γ using π polarization. Since these bands have different Fermi wave vectors, their orbital characters appear clearly after superimposition of the MDC profiles of the spectra recorded with the σ and π polarizations, as illustrated in Figure 4(j). For instance, we conclude that while the innermost band, the α band, has a d o symmetry, the α band corresponds to the even combination of the d xz and d yz orbitals. Our simulations also confirm that the (Color online) First row: ARPES intensity plots of Ba0.6K0.4Fe2As2 recorded along the M-Γ direction using 138 eV photons and (a) σ, (b) π and (c) circular right polarizations. Second row: corresponding 1D curvature intensity plots [33] along the momentum direction. Third row: corresponding MDC profile integrated within 10 meV below EF , compared to profiles simulated using the same half-width at half-maximum for each band. (j) Direct comparison of the MDC profiles recorded with σ (red) and π (blue) polarizations.
β band carries a dominant d xy orbital character.
Even though the situation is a little more complicated around the M point due the weaker photoemission intensity with π-polarized light, our simulations reproduce qualitatively well the experimental MDC profiles given in Figures 4(g), 4(h) and 4(i), and suggest that the orbital character at the tip of the electronlike Fermi surface pockets with ellipsoidal shape is d xz (d yz ). This conclusion differs from a previous ARPES study on Co-doped BaFe 2 As 2 that rather attributed d x 2 −y 2 and d z 2 characters to the tip [19], which does not show up at the M point in LDA band calculations [31,34,35]. However, both ARPES studies indicate that the shape and orbital characters of the Fermi surfaces at the M point is preserved along k z , in contrast to LDA band calculations.
We now investigate circular dichroism for the band structure at the Γ point and demonstrate that it contains information on the orbital characters of the differ- Figures 5(a) and 5(b) show the experimental data obtained in the p ARPES configuration using C + and C − incoming light, respectively. As expected from the selection rules derived in the previous section for this particular setup and in agreement with our simulations displayed in Figures 5(d) and 5(e), we do not observe strong variations between the two sets of data. This is also confirmed by the near-E F MDCs shown in Figure 5(c) as well as the simulated ones given in Figure 5(f). Interestingly, the experimental data only show strong intensity for the degenerated inner band [α(odd symmetry) and/or α (even symmetry)], but not for the β(odd symmetry) band. This behavior is captured by our simulations and confirms that the β band has a odd symmetry orbital character. From our selection rules, we deduce that mainly the α band is observed in this configuration.
The situation becomes quite different for the data recorded in the s configuration, once again using C + and C − incoming light. The corresponding experimental data are illustrated in Figures 5(g) and 5(h), respectively, and the MDC profiles near E F are displayed in 5(i). When switching from C + to C − polarized light, the observed asymmetry in the intensity is qualitatively reversed with respect to the Γ point. As expected from our simulations given in Figures 5(j) and 5(k), the largest switch in the intensity asymmetry is found on the inner band that has an even symmetry orbital characters. Although observed, this effect is less pronounced for the intensity of the β band.
Circular dichroism is also very well illustrated by the Fermi surface intensity patterns recorded on Ba 0.6 K 0.4 Fe 2 As 2 using 60 eV circular polarized light, which are displayed in Figures 6(a)-(c). While the intensity on the right side is much stronger when using C + polarization, the situation is reversed when using C − light. This effect is also reproduced by our simulations given in Figures 6(d)-(f). Interestingly, a comparison of Figures 6(b) and (c) indicates that the pattern rotates when the beam incidence rotates as well. It is worth noting that with circular polarized light the minimum of intensity occurs always on one side of the incoming beam direction whereas it is observed away from the incoming beam side when non-polarized light is used, as suggested by Figures 3(c) and (d).
At this stage we would like to clarify how we determined the phase γ δ that appears in Eq. (14). Although we do not understand its complete meaning, which goes beyond the purpose of the current paper, we can intuitively relate this phase to the discontinuity along k z at the surface of the sample. To fix this parameter, we measured the electronic dispersion along k z , as we now explain. Despite the 3D nature of the crystal and electronic structures of materials measured in ARPES, this technique is so to speak essentially a 2D probe since the momentum perpendicular to the surface exposed is not a good quantum number. However, within the nearly-free electron approximation for the final state [36], access to the third dimension of momentum is often possible by varying the energy of the incident photons. The momentum along the z direction is then given by: The phase γ δ of Eq. (14) has been fixed to kzc + δ.
where θ is the angle between the emission direction and the normal to the surface, m is the free electron mass and V 0 is to inner potential, which is determined experimentally.
The photoemission intensity is expected to change with photon energy due to the photoemission cross section [37] and can even show resonances at particular photon energies. Photoemission measurements over a wide photon energy range can indeed be used to determine the elemental characters of the states probes [1, 3,5,6]. Experimentally, additional effects that cannot find a simple explanation in the photoemission cross section are observed. Figures 7(a)-(e) show such an interesting phenomenon: the energy-momentum photoemission intensity measured on Ba 0.6 K 0.4 Fe 2 As 2 samples with C + polarized light exhibits an asymmetry that varies with photon energy. At 38 eV, the left part of the spectrum has a much weaker intensity than the right part. This is no longer the case at 46 eV, where the two sides show almost equivalent intensity. The asymmetry is even reversed at 52 eV, with the left side of the spectrum being much stronger than the right side. The intensity on both becomes almost equal once more at 60 eV before recovering the initial pattern at 66 eV. After finding the k z correspondence of each photon energy using V 0 = 14.5 eV (similar to the value reported previously [28]), we can plot the normalized intensity difference between the left and right sides of the spectra as a function of k z . The results are displayed in Figure 7(f). Interestingly, the data can be fitted by a cosine function with k z c + 3π/2 as argument, where c = 6.6 A is the lattice parameter of the primitive unit cell, which is equivalent to the distance between Fe layers.
This strange behavior of the photon energy dependence of the intensity extends beyond the 38-66 eV range. Figure 7(g) reveals oscillations in the 30-90 eV range, as k z goes up and crosses different Brillouin zones. This range corresponds to k z variations between 7π/c and 11π/c. The Z positions coincide to k z values with the largest k F positions, i. e. k z = 7π/c, 9π/c and 11π/c, whereas the Γ positions coincide with k z = 8π/c and 10π/c. For each Brillouin zone, the signal on the left-hand side is quite strong as we increase k z from Γ to Z, while the signal is much weaker on the right-hand side. The situation is completely reversed with k z increasing from Z to Γ, with the spectral intensity switched from one side to the other. To obtain this effect in the simulations, the phase γ δ has to be fixed to k z c + 3π 2 over the whole range (see Figure 7(i)). A variation in the phase leads to simulated results completely inconsistent with the experimental data and justify our choice of phase. However, a deeper knowledge of the details of the photoemission process would be needed to provide an ab initio value for this parameter. ertheless, in both cases the Fermi surface patterns exhibit a suppression of intensity along the x-axis. Figure 8(e) shows the ARPES intensity cut along Γ-M for a light momentum aligned along the same direction (blue line in Figure 8(a)). Two bands are clearly observed, one of them not crossing or barely crossing the Fermi level. Actually, a fine study indicates the presence of the expected third band, which has a much weaker intensity and a k F only slightly larger than that of the other band crossing the Fermi level [38]. We display the results of our simulations in Figure 8 (f) for a cut in the same configuration, where we assume that the inner band carries a d e character while the weak outer one is dominated by d xy . The main observation is that the d e band exhibits a strong asymmetry with respect to Γ. This is indeed what is observed experimentally, reinforcing our assumption. We thus conclude that the outer band has a d xy orbital character.
VII. DISCUSSION
Prior to discuss further the method presented in this paper, we would like to comment on the results obtained for the orbital characterization of the Fermi surface of FeTe 0.55 Se 0.45 and Ba 0.6 K 0.4 Fe 2 As 2 . The summary of our orbital character attributions for the various electronic bands in these materials are displayed in Figure 9. For convenience, we spaced the α and α Fermi surfaces in Ba 0.6 K 0.4 Fe 2 As 2 , which are almost degenerate in the k z = 0 plane. Except for absolute and relative variations of the Fermi surface sizes at the Γ point, these patterns hold for all k z values. We stress once more that our experimental observation contrasts with the theoretical expectation of a switch in the orbital distribution of the M-centered Fermi surfaces at k z = π compare to k z = 0 [29,30], which may have important consequences for inter-pocket interactions [39].
The superconducting gap of Ba 0.6 K 0.4 Fe 2 As 2 is Fermisurface dependent [26,27,40]. More precisely, it is about 12 meV large for all Fermi surface sheets except for the 6 meV gap found on the β band, which carries a dominant d xy character. The 2∆/k B T c ratio indicates a pairing in the weak coupling limit for the β band. Gaps in the weak coupling regimes are also observed for the β band in overdoped Ba 0.3 K 0.7 Fe 2 As 2 [41] and underdoped Ba 0.75 K 0.25 Fe 2 As 2 [42]. Interestingly, the 2.5 meV gap size on the β band in FeTe 0.55 Se 0.45 (T c = 14.5 K) leads also to a similar ratio [38].
From these observations, one could be tempted to argue that superconducting pairing is controlled by the orbital character, which for some reason could be less efficient for the d xy orbital. However, this argument is in contradiction with the observation of a d xy orbital character at the M point. In reality, the two electronlike ellipses at the M point hybridize and form two distinct Fermi surfaces [40]. While the inner one is largely dominated by a d xy character, the outer one is formed by a combination of the d xz and d yz orbitals. Both of them show a gap size indicating a strong coupling regime [40]. A recent study suggests similar results in FeTe 0.55 Se 0.45 [38]. Therefore, we conclude that in the Fe-based superconductors there is no direct correlation between the orbital character of a Fermi surface and the gap size. Analyses of the gap size on various Fermi surfaces using gap functions derived from local antiferromagnetic exchange interactions rather suggest that the relative size of the superconducting gap on a particular Fermi surface is determined by its momentum position [28,38,43,44].
The method described in this paper is certainly a reliable and relatively simple way to obtain empirically the orbital characters of bands in the iron-based superconductors. With a proper choice of basis functions, it can be applied to other materials as well. Nevertheless, the model has its own limitations. For example, it remains quite difficult to determine the orbital characters from the Fermi surface patterns in the case of bands with mixed characters. Some theoretical assumptions are often necessary to guide the analysis. For example, we assumed a particular angular distribution for the orbital characters of the electronlike Fermi surfaces forming the Fermi surface at the M point of Fe-based superconductors in order to get a nice agreement between the experimental data and our simulations. However, the method is a powerful tool to discard some scenarios.
Another important limitation concerns the determination of unknown parameters, such as γ δ in Eq. (14). As explained in Section V, we imposed the phase γ δ by looking at the photon energy dependence of the Fermi surface pattern. It is clear though that the phase itself may carry some important information that is not accessible directly from our simplify model. From the experimental point of view, further ARPES studies on different materials, involving different electronic orbitals or even different transition metals, may help clarifying this issue.
VIII. SUMMARY
We introduced a simple method to obtain the orbital characters of the various sheets forming the Fermi surface of crystals. The method exploits the asymmetries obtained experimentally in the photoemission intensity patterns of Fermi surface mappings and energy-momentum plots revealed by ARPES in various experimental conditions of beam orientation and light polarization, including non-polarized light. Our method has been successfully applied to Ba 0.6 K 0.4 Fe 2 As 2 and FeTe 0.55 Se 0.45 , which are two Fe-based superconductors. We showed that the multi-sheet Fermi surface of these materials originates mainly from Fe 3d electrons with d xy , d xz and d yz orbital characters. Our results suggest that there is no direct relationship between the strength of the superconducting gap on the various Fermi surface sheets of these multi-band systems and the orbital characters from which they are mainly formed. 1 m1=−1 α m1 Y m1 1 . | 2012-04-20T07:58:38.000Z | 2012-01-17T00:00:00.000 | {
"year": 2012,
"sha1": "086244966b269abca93dff0ddef9d0eb8a60723e",
"oa_license": null,
"oa_url": "https://www.dora.lib4ri.ch/psi/islandora/object/psi:12536/datastream/PDF/Wang-2012-Orbital_characters_determined_from_Fermi-(published_version).pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "086244966b269abca93dff0ddef9d0eb8a60723e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52312240 | pes2o/s2orc | v3-fos-license | Th2 cytokine bias induced by silver nanoparticles in peripheral blood mononuclear cells of common bottlenose dolphins (Tursiops truncatus)
Background Silver nanoparticles (AgNPs) have been widely used in many commercial products due to their excellent antibacterial ability. The AgNPs are released into the environment, gradually accumulate in the ocean, and may affect animals at high trophic levels, such as cetaceans and humans, via the food chain. Hence, the negative health impacts caused by AgNPs in cetaceans are of concern. Cytokines play a major role in the modulation of immune system and can be classified into two types: Th1 and Th2. Th1/Th2 balance can be evaluated by the ratios of their polarizing cytokines (i.e., interferon [IFN]-γ/Interleukin [IL]-4), and animals with imbalanced Th1/Th2 response may become more susceptible to certain kinds of infection. Therefore, the present study evaluated the in vitro cytokine responses of cetacean peripheral blood mononuclear cells (cPBMCs) to 20 nm citrate-AgNPs (C-AgNP20) by quantitative reverse transcriptase polymerase chain reaction (qRT-PCR). Methods Blood samples were collected from six captive common bottlenose dolphins (Tursiops truncatus). The cPBMCs were isolated and utilized for evaluating the in vitro cytokine responses. The cytokines evaluated included IL-2, IL-4, IL-10, IL-12, interferon (IFN)-γ, and tumor necrosis factor (TNF)-α. The geometric means of two housekeeping genes (HKGs), glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and β2-microglobulin (B2M), of each sample were determined and used to normalize the mRNA expression levels of target genes. Results The ratio of late apoptotic/necrotic cells of cPBMCs significantly increased with or without concanavalin A (ConA) stimulation after 24 h of 10 µg/ml C-AgNP20 treatment. At 4 h of culture, the mRNA expression level of IL-10 was significantly decreased with 1 µg/ml C-AgNP20 treatment. At 24 h of culture with 1 µg/ml C-AgNP20, the mRNA expression levels of all cytokines were significantly decreased, with the exceptions of IL-4 and IL-10. The IFN-γ/IL-4 ratio was significantly decreased at 24 h of culture with 1 µg/ml C-AgNP20 treatment, and the IL-12/IL-4 ratio was significantly decreased at 4 or 24 h of culture with 0.1 or 1 µg/ml C-AgNP20 treatment, respectively. Furthermore, the mRNA expression level of TNF-α was significantly decreased by 1 µg/ml C-AgNP20 after 24 h of culture. Discussion The present study demonstrated that the sublethal dose of C-AgNP20 (≤1 µg/ml) had an inhibitory effect on the cytokine mRNA expression levels of cPBMCs with the evidence of Th2 cytokine bias and significantly decreased the mRNA expression level of TNF-α. Th2 cytokine bias is associated with enhanced immunity against parasites but decreased immunity to intracellular microorganisms. TNF-α is a contributing factor for the inflammatory response against the infection of intracellular pathogens. In summary, our data indicate that C-AgNP20 suppresses the cellular immune response and thereby increases the susceptibility of cetaceans to infection by intracellular microorganisms.
INTRODUCTION
The application of silver nanoparticles (AgNPs) in industry and in consumer products has increased, and the production of AgNPs and the number of AgNP-containing products will increase over time (Massarsky, Trudeau & Moon, 2014). AgNPs can be released during the production, transport, decay, use, and/or disposal of AgNP-containing products, subsequently draining into the surface water and then accumulating in the marine environment (Farre et al., 2009;Walters, Pool & Somerset, 2014). Therefore, the increasing use and growing production of AgNPs, as potential sources of Ag contamination, raise public concerns about the environmental toxicity of Ag (Li et al., 2018a). Previous research has demonstrated that AgNPs can precipitate in marine sediments, be ingested by benthic organisms (such as benthic invertebrate species), enter and be transferred from one trophic level to the next via the food chain, and thereby cause negative effects on the animals at different trophic levels, such as algae, invertebrates and fishes (Buffet et al., 2014;Farre et al., 2009;Gambardella et al., 2015;Huang, Cheng & Yi, 2016b;Wang et al., 2014). Previous studies have demonstrated that AgNPs are toxic to all tested marine organisms in a dosedependent manner, indicating that AgNPs may have negative effects on marine organisms at different trophic levels of the marine environment. Immunotoxic effects of AgNPs have been demonstrated in some aquatic animals such as Nile tilapia and mussel (Gagne et al., 2013;Thummabancha, Onparn & Srisapoome, 2016). Nevertheless, to date the potential toxicity of AgNPs on marine mammals such as cetaceans has not been sufficiently studied.
AgNPs have been demonstrated to cause several negative effects, such as hepatitis, nephritis, neuron cell apoptosis, and alteration of gene expression of the brain, on laboratory mammals (Espinosa-Cristobal et al., 2013;Sardari et al., 2012;Shahare & Yashpal, 2013). In vitro studies using different cell lines have also indicated that AgNPs can cause damage to DNA, cell membranes, and mitochondria through reactive oxygen species (ROS) dependent/independent pathways and further induce cytotoxicity and genotoxicity (Kim & Ryu, 2013). In addition, previous studies conducted in laboratory mammals, including mice and rats, have demonstrated that AgNPs can enter the blood circulation through alimentary tracts and then deposit in multiple organs (Espinosa-Cristobal et al., 2013;Lee et al., 2013;Shahare & Yashpal, 2013; Van der Zande et al., 2012). Considering the negative effects of AgNPs and the presence of AgNPs in blood circulation, the negative effects of AgNPs on leukocytes should be of concern. Previous studies have demonstrated that AgNPs can cause several negative effects on human polymorphonuclear leukocytes (PMNs) and peripheral blood mononuclear cells (PBMCs). These studies demonstrated that AgNPs can cause morphological alterations, cytotoxicity, atypical cell death, inhibition of de novo protein synthesis, increased production of the CXCL8 chemokine (IL-8), and impaired lysosomal activity in human neutrophils (Poirier et al., 2014;Poirier, Simard & Girard, 2016;Soares et al., 2016). Only a few studies have investigated the toxicity of AgNPs on human PBMCs, which have shown that AgNPs can cause cytotoxicity and functional perturbations, including inhibition of proliferative activity and cytokine production (Franco-Molina et al., 2016;Ghosh et al., 2012;Huang et al., 2016a;Orta-Garcia et al., 2015;Paino & Zucolotto, 2015;Shin et al., 2007).
The environmental contamination level of AgNPs is expected to increase greatly in the near future, and cetaceans, as the top predators in the ocean, will suffer the potentially negative impacts caused by AgNPs. Besides, immunotoxic effects of AgNPs have been demonstrated previously in humans and aquatic animals. Therefore, it is crucial to investigate the immunotoxic effects caused by AgNPs in cetaceans. Generally, in vivo experiments are rarely feasible, and the ethical issues concerning the study of immunotoxic effects caused by environmental contaminants in cetaceans are difficult to overcome, so in vitro study using blood samples from captive cetaceans would be a logical and crucial approach (Beineke et al., 2010;Desforges et al., 2016). Cytokines play a major role in the modulation of the immune system, including lymphocyte proliferation/differentiation, lymphoid development, cell trafficking, and inflammatory response through the interactions between the cytokines themselves and the surface receptors of many different cells (Owen et al., 2013;Tizard, 2013a). Previous studies have found that the sequence homology of cytokines among terrestrial and aquatic mammals is low, but conserved molecule regions can still be found on biologically active areas in marine mammals, such as the receptor binding sites of cytokines, suggesting a conserved biological activity of cytokines in both terrestrial and aquatic mammal species (Beineke et al., 2004;Beineke et al., 2010). Therefore, the functions of cytokines on the immune system in cetaceans may be similar to those in mice and humans. Cytokines can be classified into two groups, Th1 and Th2, and their secretion pattern is associated with the balance of Th1 and Th2 responses (Kidd, 2003;Owen et al., 2013;Tizard, 2013b). Th1 response promotes cell-mediated immune response and thus enhances the immunity against intracellular microorganisms, such as Toxoplasma gondii and Brucella spp., and a variety of viruses. In contrast, Th2 response is associated with enhanced immunity against parasites but decreased immunity to intracellular microorganisms (Owen et al., 2013;Tizard, 2013b). The Th1/Th2 balance can be evaluated by the ratios of their polarizing cytokines (i.e., interferon [IFN]-γ/interleukin [IL]-4), and animals with imbalanced Th1/Th2 response (Th1/Th2 polarization) may become more susceptible to certain kinds of infection (Owen et al., 2013;Raphael et al., 2015;Tizard, 2013b). Cytokine profiling is still a relatively new field of immunotoxicology in cetaceans, and thus the enzyme-linked immunosorbent assay (ELISA) kit is not widely used for cytokine profiling in cetaceans (Desforges et al., 2016). Hence, it is more feasible to study the cytokine profiling by molecular biology (i.e., quantitative reverse transcriptase polymerase chain reaction; qRT-PCR). Therefore, the present study evaluates the in vitro cytokine responses of cPBMCs to C-AgNP 20 by qRT-PCR. The cytokines measured were as follows: polarizing cytokines of Th1 (IL-12 and IFN-γ) and Th2 (IL-4), and some pro-and anti-inflammatory cytokines (IL-2, IL-10, and tumor necrosis factor [TNF]-α).
AgNPs characterization
Considering the extensive use of 20 nm citrate-AgNPs (C-AgNP 20 ) in recently reported studies of cetacean and human blood cells (Huang et al., 2016a;Li et al., 2018b;Poirier et al., 2014;Poirier, Simard & Girard, 2016), commercial C-AgNP 20 (Pelco R citrate Biopure TM silver; Ted Pella, Redding, CA, USA) was chosen. The C-AgNP 20 had been extensively washed (without centrifugation) so that the level of trace elements becomes less than 0.000001%. Transmission electron microscopy (TEM) for determining surface area and size/shape distributions, UV-visible spectroscopy for measuring the optical properties, particle hydrodynamic diameter and zeta potential, and dynamic light scattering (DLS) for determining the size distribution were performed according to the manufacturer's instructions and previous studies (Poirier et al., 2014;Poirier, Simard & Girard, 2016). The endotoxin level of C-AgNP 20 suspension was examined by ToxinSensor TM Single Test Kit (GenScript, Piscataway, NJ, USA), and it was lower than or equal to 0.015 EU/ml. For characterization, the C-AgNP 20 obtained from the manufacturer were suspended in complete RPMI-1640 medium (RPMI-1640 (Gibco, New York, NY, USA) with 10% fetal bovine serum, 2mM L-glutamine, 50 IU penicillin, and 50 µg streptomycin) at a concentration of 50 µg/ml, and then examined using a JEM-1400 (JEOL, Tokyo, Japan) TEM. The size distribution and zeta potential of the C-AgNP 20 were determined through Zetasizer Nano-ZS (Malvern Instruments Inc., Westborough, MA, USA) ( Table 1). Measurements were performed by using 100 and 500 µg/ml C-AgNP 20 in two mM citrate buffer (pH 7.4). The C-AgNP 20 were diluted to one, 10, and 100 µg/ml with two mM citrate buffer and instantly used for subsequent experiments. Two mM citrate buffer was used as a vehicle control (0 µg/ml C-AgNP 20 ).
Animals and blood sample collection
All procedures involving animals were conducted in accordance with international guidelines, and the protocol has been reviewed and approved by the Council of Agriculture of Taiwan (Approval number 1051700175). Voluntary blood samples were obtained from six clinically healthy bottlenose dolphins (Tursiops truncatus) with confirmation by physical examination, complete blood count, and biochemistry on a monthly basis from 2015 to 2017 at Farglory Ocean Park. Forty millilitres of heparin-anticoagulated whole blood were collected, stored, and shipped at four • C within 8 h for subsequent experiments.
Isolation of cPBMCs
Cetacean peripheral blood leukocytes (cPBLs) were collected by slow spin method with minor modifications (Bossart et al., 2008). The isolated cPBLs were resuspended in RPMI-1640 (Gibco, New York, NY, USA) with 10% ethylenediaminetetraacetic acid (EDTA) and subsequently used in the isolation of cPBMCs by density gradient centrifugation method. After centrifugation at 1,200× g for 30 min at 20 • C, the cPBMCs were collected from the cell layer between the RPMI-1640 (Gibco) and Ficoll-Paque PLUS (GE Healthcare, Uppsala, Sweden), washed with RPMI-1640 twice, resuspended to a final concentration of 1 × 10 7 cells/ml in complete RPMI-1640 media, and kept on ice until they were utilized in subsequent experiments. The cell viability of cPBMCs was determined by the trypan blue exclusion method using a hemacytometer, and the cell purity based on the cell size (forward-scattered light; FSC) and inner complexity (side-scattered light; SSC) of cPBMCs were determined by FACScalibur flow cytometry (BD, CA, USA). The cPBMCs with higher than 90% viability and 80% purity were used in this study.
Determination of the sub-lethal dose of C-AgNP 20 on cPBMCs with/without concanavalin A (ConA)
The cytotoxicity of C-AgNP 20 on cPBMCs was evaluated by the Annexin V-FITC/PI Apoptosis Detection Kit (Strong Biotech, Taipei, Taiwan) according to the manufacturer's instructions. Freshly-isolated cPBMCs were seeded in 96-well plates at a density of 5 × 10 5 cells/well and exposed to C-AgNP 20 at concentrations of 0, 0.1, 1.0 and 10 µg/ml with or without two µg/ml ConA (Sigma-Alderich, St. Louis, MO, USA). After 24 h of culture, cells were collected and resuspended in binding buffer for further analysis by FACScalibur flow cytometry (BD). The percentages of early apoptotic (PI-and Annexin +) and late apoptotic/necrotic cells (PI + and Annexin +) were determined. A total of 8,000 events/sample were acquired. The sub-lethal doses of C-AgNP 20 for cPBMCs were determined and subsequently used in the cytokine expression assay.
qRT-PCR efficiency of each primer sets
The primer sets used in cytokine expression assay are summarized in Table 2. The amplification efficiency (E) of qRT-PCR with each primer set was evaluated by the slope and R 2 of standard curves using the equation: E = 10 −(1/slope) − 1 (Svec et al., 2015). The standard templates for qRT-PCR with target primer sets were prepared by serial dilution of PCR products, which were amplified from cDNA samples of isolated cPBMCs with target primer sets. The PCR product was 500-fold diluted with subsequent six steps of serial Sitt et al. (2008) 10-fold dilutions, and subsequently used for qRT-PCR. The Cycle threshold (Ct) values of each dilution were evaluated by qRT-PCR with each primer set to generate the standard curves.
Extraction of RNA, synthesis of cDNA and qRT-PCR
Total RNA was extracted from blood samples by RNeasy R Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions. The RNA samples were treated with genomic DNA (gDNA) wipeout solution (Qiagen). Treated samples were then tested by qRT-PCR to confirm the absence of residue gDNA prior to cDNA synthesis. The QuantiTect R Reverse Transcription kit (Qiagen) was used for cDNA synthesis. The reverse transcription was conducted within 4 h after RNA extraction. The cDNA from each sample was stored at −20 • C for qRT-PCR. The qRT-PCR was performed on Mastercycler R ep realplex (Eppendorf, Hamburg, Germany). Each reaction contained 10 µl of SYBR R Advantage R qPCR Premix (Clontech, Mountain View, CA, USA), 7.2 µl of RNase/DNase-free sterile water, 0.4 µl of each 10 mM forward/reverse primers, and two µl of DNA template, and the final volume of each reaction was 20 µl. Two microliters of RNase/DNase-free sterile water was used as the non-template negative control. The thermocycle conditions were set as follows: initial denaturation at 95 • C for 30 s and 40 cycles of denaturation at 95 • C for 10 s, annealing at 60 • C for 20 s, and extension at 72 • C for 30 s with fluorescence detection. Furthermore, the melting curve analysis was performed at the end to identify non-specific amplification. All PCR protocols were performed in accordance with the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines (Bustin et al., 2009;Taylor & Mrkusich, 2014).
Time kinetics of mRNA expression levels of selected cytokines of cPBMCs
To evaluate the time kinetics of mRNA expression levels of selected cytokines in cPBMCs with ConA, the cytokine gene expression levels of cPBMCs with ConA (0.5 µg/ml) were determined by qRT-PCR ( N = 4). Freshly-isolated cPBMCs were seeded in 96-well plates at a density of 5 × 10 5 cells/well and incubated for 0, 4, 8, 12, 16, 20 and 24 h of culture in a humidified atmosphere of 5% CO 2 at 37 • C. Then the cPBMCs were collected for subsequent mRNA extraction, complementary DNA (cDNA) synthesis, and qRT-PCR. The cPBMCs with 0 h incubation were used as control for the calculation of cytokine expression level by CT method. In addition, the geometric means of two housekeeping genes (HKGs), glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and β2-microglobulin (B2M), of each sample were determined and used to normalize the expression levels of target genes (Hellemans et al., 2007;Vandesompele et al., 2002).
Effects of C-AgNP 20 on mRNA expression levels of selected cytokines of cPBMCs
The cPBMCs were seeded in 96-well plates at a density of 5 × 10 5 cells/well and exposed to sub-lethal doses of C-AgNP 20 with 0.5 µg/ml ConA for 4 and 24 h of culture in a humidified atmosphere of 5% CO 2 at 37 • C. After incubation, the cPBMCs were collected for subsequent mRNA extraction, cDNA synthesis, and qRT-PCR. PBMCs with 4 and 24 h incubation without C-AgNP 20 treatment were used as control for the calculation of cytokine expression level by CT method. In addition, the geometric means of two HKGs, GAPDH and B2M, of each sample were determined and used to normalize the expression levels of target genes (Hellemans et al., 2007;Vandesompele et al., 2002). The experiment was independently repeated twice in duplicate (N = 12).
Statistical analysis
In all experiments, the results from duplicates were averaged. To compensate for individual differences, the results at different concentrations of C-AgNP 20 for each individual were calculated as percentages of the results of the control (exposed to 0 µg/ml C-AgNP 20 ). In addition, Th1/Th2 ratios at different concentrations of C-AgNP 20 were determined by the cytokine mRNA ratios of Th1 (IL-12 or INF-γ) and Th2 (IL-4) polarizing cytokines and then compared to the control. Our data were first checked by Shapiro-Wilk normality test and Brown-Forsythe test, and the results indicated that the assumptions of normality and/or equal variance were violated. Therefore, the Kruskal-Wallis Test (post hoc test: Dunn's multiple comparison test) was subsequently performed on the data. A p value <0.05 was considered statistically significant, and the analysis was performed in Prism (GraphPad Software, La Jolla, CA, USA). All data were plotted on box-plot graphics. The bar in the middle of the box represented the second quartile (median), and the bottom and top of the box described the first and third quartiles. The whiskers showed that the 75th percentile plus 1.5 times IQR and 25th percentile minus 1.5 times IQR of all data, and any values that greater than these were defined as outliers and plotted as individual points. Asterisks above the boxplots indicated statistically significant differences compared to the control of each experiment.
Sub-lethal dose of C-AgNP 20 to the cPBMCs with or without ConA stimulation
The treatment of C-AgNP 20 at 10 µg/ml significantly increased the ratios of late apoptotic/necrotic cells in cPBMCs with or without ConA stimulation. The ratios of early apoptotic and late apoptotic/necrotic cells of cPBMCs with different concentrations of C-AgNP 20 as compared to the control are presented in Fig. 2. After 24 h of 10 µg/ml C-AgNP 20 treatment, the ratios of late apoptotic/necrotic cells of cPBMCs significantly increased with (median ± interquartile range (IQR): 3.55 ± 3.42; p = 0.0073) or without ConA stimulation (median ± IQR: 1.78 ± 2.24; p = 0.0103). In contrast, no statistically significant increases in the ratios of apoptotic and late apoptotic/necrotic cells in cPBMCs were found after 24 h culture with 0.1 and 1.0 µg/ml C-AgNP 20 treatments. Therefore, 0.1
Time kinetics of mRNA expression levels of selected cytokines of cPBMCs with ConA stimulation
The mRNA expression levels of IL-2 and TNF-α were significantly increased at 4 h of culture, gradually decreased from 8 to 20 h of culture, and then mildly but not significantly increased at 24 h of culture. The mRNA expression level of IFN-γ was significantly increased at 4 h of culture, gradually decreased at 8 and 12 h of culture, and then increased from 16 to 24 h of culture. In addition, IL-4, IL-10, and IL-12 were significantly increased at 4 h of culture and gradually decreased over time. Therefore, the time points chosen for the following experiments were 4 h and 24 h. All the results are illustrated in Fig. 3.
DISCUSSION
Our data indicated that the concentration of 10 µg/ml C-AgNP 20 was lethal dose for cPBMCs after 24 h of culture. Although previous studies of human PBMCs have used a variety of AgNPs (including different sizes and coatings), the lethal dose of AgNPs to human PBMCs is generally higher than 10 µg/ml (Ghosh et al., 2012;Greulich et al., 2011;Huang et al., 2016a;Orta-Garcia et al., 2015;Paino & Zucolotto, 2015;Shin et al., 2007). Therefore, our data suggest that cPBMCs may be more vulnerable than human PBMCs to the cytotoxic effects of C-AgNP 20 . However, previous studies have demonstrated that the toxicity and physicochemical characteristics of AgNPs are associated with their surface coating and size (Kim & Ryu, 2013), and thus further investigation using the same AgNPs from the same manufacturer is necessary to compare the differences of susceptibility between cetaceans and humans. In addition, the negative effects of AgNPs with different sizes and coatings on the cPBMCs are also worth to be further studied. It has been demonstrated that ConA (a selective T-cell mitogen) induces proliferative activity and gene expression of cytokines in bottlenose dolphins, but no information is available regarding the time course (Hofstetter et al., 2017;Segawa et al., 2013;Sitt et al., 2008). Previous studies on ConA-induced cytokine mRNA expression levels of cPBMCs only presented one or two time points (Segawa et al., 2013;Sitt et al., 2008). Sitt et al. (2008) quantified the ConA-induced cytokine mRNA expression levels of cPBMCs after 48 h of treatment, but the reason for choosing this time point was not explained. Their results showed that the mRNA expression levels of IL-2, IL-4, IL-12, and IFN-γ in cPBMCs are The bar in the middle of the box represents the median, and the bottom and top of the box describe the first and third quartiles. The whiskers show the 75th percentile plus 1.5 times IQR and 25th percentile minus 1.5 times IQR of all data, and any values that are greater than these are defined as outliers and plotted as individual points. Asterisks indicate statistically significant differences from the control (p < 0.05, Kruskal-Wallis Test).
The mRNA expression levels of IL-4 and IFN-γ were mildly increased and that of IL-12 was seemingly unaffected at 4 h of C-AgNP 20 treatment. IL-4, as a polarizing Th2 cytokine, is mainly produced by T cells (especially the Th2 subset) and mast cells, and it promotes the differentiation of naïve T cells to Th2 cells, stimulates the growth and differentiation of B cells, and induces class switching to IgE, which may promote allergic responses (Owen et al., 2013;Tizard, 2013b). IFN-γ, as a polarizing Th1 cytokine and a key mediator of cell-mediated immune response, is produced by Th1 cells, cytotoxic T cells, and NK cells. The major functions of IFN-γ are enhancement of Th1 differentiation, inhibition of Th2 differentiation, and activations of NK cells and macrophages (Owen et al., 2013;Tizard, 2013b). IL-12 is also a polarizing Th1 cytokine and is produced by dendritic cells, monocytes, macrophages and B cells. IL-12 induces differentiation of Th1 cells, increases IFN-γ production by T cells and NK cells, and enhances NK and cytotoxic T cell activity (Owen et al., 2013;Tizard, 2013b). This mixed pattern of Th1 and Th2 cytokines may be indicative of a mixed Th1/Th2 cytokine response of cPBMCs at 4 h of C-AgNP 20 treatment. However, considering the significant decrease in the IL-12/IL-4 ratio, Th2 cytokine response is still predominant in cPBMCs following 4 h of C-AgNP 20 treatment. The mRNA expression levels of IL-12 and IFN-γ were significantly decreased by 0.1 or one µg/ml C-AgNP 20 , and that of IL-4 was seemingly unaffected, in cPBMCs following 24 h of culture. The significantly decreased Th1/Th2 (i.e., IFN-γ/IL-4 and IL-12/IL-4) ratios suggested that the immune response of cPBMCs following 24 h of C-AgNP 20 treatment is Th2 biased. Furthermore, the mRNA expression level of TNF-α was significantly decreased by 1 µg/ml C-AgNP 20 after 24 h of culture. TNF-α is a cytokine specifically useful to measure the inflammatory state of an animal and it is primarily produced by macrophages and both Th1 and Th2 cells in response to both acute and chronic conditions (Eberle et al., 2018). Previous studies have demonstrated that TNF-α is a contributing factor in the inflammatory response against infection of intracellular micropathogens such as Plasmodium spp., T. gondii, Leishmania major, and Trypanosoma spp. (Korner et al., 2010). Hence, our data indicate that C-AgNP 20 induced a Th2 biased immune response and suppressed the mRNA expression level of TNF-α in cPBMCs, which may weaken the cellular immune response and further impair the immunity against intracellular organisms and virus. Similar Th2 immune response was observed in other studies that evaluated the expression of cytokines in different cetacean tissues (Jaber et al., 2010). A variety of infections caused by intracellular pathogens in cetaceans have been reported and may be associated with the mass stranding events of cetaceans (Cvetnic et al., 2016;Domingo et al., 1990;Domingo et al., 1992;Dubey et al., 2007;Dubey et al., 2008;Mazzariol et al., 2016;Mazzariol et al., 2017). In addition, previous studies suggested that Ag contamination exists in all aspects of the marine ecosystem, and cetaceans may have been negatively affected by Ag contamination (Becker et al., 1995;Caceres-Saez et al., 2013;Chen et al., 2017;Dehn et al., 2006;Kunito et al., 2004;Li et al., 2018a;Mendez-Fernandez et al., 2014;Reed et al., 2015;Rosa et al., 2008;Seixas et al., 2009;Woshner et al., 2001). The direct correlation between the infection of intracellular pathogens and the severity of Ag contamination in cetaceans is worth studying.
Following 4 h of 1 µg/ml C-AgNP 20 treatment, the mRNA expression level of IL-10 was significantly decreased and that of IL-2 was mildly increased. In other words, mRNA expression levels of IL-2 and IL-10 were respectively upregulated and downregulated by C-AgNP 20 in cPBMCs. Subsequently, the mRNA expression level of IL-2 was significantly decreased, and that of IL-10 seemingly unaffected, in cPBMCs following 24 h of treatment of 1 µg/ml C-AgNP 20 . IL-2, which is produced by activated T cells, can stimulate proliferation and differentiation of T and B cells and activates NK cells (Owen et al., 2013;Tizard, 2013b). However, a growing body of evidence has indicated that IL-2 is crucial for the development and function of regulatory T cells (Treg cells), which secrete effector cytokines, such as IL-10, to control and modulate the immunity to self, neoplasia, microorganisms, and grafts (Owen et al., 2013;Pérol & Piaggio, 2016). Considering the roles of IL-2 and IL-10 in immune tolerance, it is speculated that C-AgNP 20 may play a significant role in peripheral immune tolerance by regulating the balance between IL-2 and IL-10 (Pérol & Piaggio, 2016;Veiopoulou et al., 2004).
The effect of C-AgNP 20 on the ConA-induced mRNA expression levels of the selected cytokines in cPBMCs is mainly inhibitory. A previous study found that 25,40,45, and 110 nm in diameter) could bind to RNA polymerase, disturb the process of RNA transcription, and thus decreased the overall RNA synthesis in mouse erythroid progenitor cells (Wang et al., 2013). Although the down-regulation of mRNA expression levels may be associated with decreased RNA synthesis due to the direct interaction between C-AgNP 20 and RNA polymerase, it cannot fully explain the unaffected Th2 cytokines (IL-4 and IL-10) of cPBMCs in this study. On the other hand, the ConA-induced proliferative activity of cPBMCs is inhibited by 0.1 and 1.0 µg/ml C-AgNP 20 (Li et al., 2018b), and this phenomenon may be associated with the decreased mRNA expression levels of IL-2, IL-12, IFN-γ, and TNF-α and/or a suppressive effect on DNA/RNA synthesis induced by ConA. Further investigation on the underlying mechanism of AgNPs in cetacean leukocytes is important to ascertain the negative health impact caused by AgNPs on cetaceans, and such investigation would improve the understanding of the potential hazards of AgNPs to environmental condition and human health.
Furthermore, although the biodistribution of AgNPs or Ag in cetaceans is still undetermined, previous in vivo studies of AgNPs by oral exposure in laboratory rats demonstrated that the Ag concentration in the liver is approximately 10 times higher than that in the blood or plasma (Lee et al., 2013;Loeschner et al., 2011;Van der Zande et al., 2012). Based on these animal models, it is presumed that the Ag concentrations in the blood of cetaceans may range from 0.01 to 72.6 µg/ml (Chen et al., 2017;Li et al., 2018a). Although previous studies have indicated that the status of AgNPs in the aquatic environment is complicated and variable (i.e., the concentrations of AgNPs and other Ag/Ag compounds are still undetermined in cetaceans) (Levard et al., 2012;Massarsky, Trudeau & Moon, 2014), our data suggest that cetaceans may be negatively affected by AgNPs.
CONCLUSIONS
The present study has demonstrated: (1) the sublethal dose of C-AgNP 20 to cPBMCs (≤ 1 µg/ml), (2) the time kinetics of mRNA expression levels of selected cytokines in cPBMCs, and (3) the inhibitory effect of C-AgNP 20 (0.1 and 1 µg/ml) on the mRNA expression levels of selected cytokines of cPBMCs with evidence of Th2 cytokine bias. Taken together, C-AgNP 20 may suppress the cellular immune response and thus inhibit the immunity against intracellular microorganisms in cetaceans. | 2018-09-24T08:31:11.554Z | 2018-07-23T00:00:00.000 | {
"year": 2018,
"sha1": "821801d59cd51f5317b62b216d0f4958b0bda53a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.5432",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "821801d59cd51f5317b62b216d0f4958b0bda53a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
226583018 | pes2o/s2orc | v3-fos-license | A Model for Operationalizing the Information Technology Strategy Based on Structuration View
A R T I C L E I N F O A B S T R A C T Article history: Received: 10 March, 2020 Accepted: 24 March, 2020 Online: 11 June, 2020 Many organisations adopt and implement information technology (IT) but fail to operationalise it. As a result, the process of implementation is continually repeated without achieving the goals and objectives, which are often to gain competitive advantage and sustainability. This study employs structuration theory as lens to examine and understand the factors that influence operationalisation of IT strategy in an organisation. The case study approach was employed, and semi-structured interviews technique was used to collect data. The hermeneutics approach was used in the analysis, which was guided by the duality of structure from the perspective of structuration theory. From the analysis six factors were found to primarily influence the operationalisation of IT strategy in an organisation. Based on the factors, a model was developed, which is intended to guide both IT and business managers in the operationalisation of IT strategy.
Introduction
The pervasiveness of Information Technology (IT) has compelled organisations in different business spheres to adopt it. Thus, IT divisions are necessary to enable and support innovations in organisations. However, the deployment and use of IT in organisation has never been straightforward or as easy as sometimes proclaimed. Author [1] argues that developing technology is generally viewed as a variable and erratic undertaking. It is often complex to both individuals and organisations at large, with complexities attributable to both human and technology factors. Furthermore, [2] emphasise that interrelated factors, technical, social, and organisational make implementation of information technologies extremely complex.
Based on this some of the complexities on one hand, essentialities on another hand, the use of IT solutions clearly require strategy in fulfilling business needs and requirements over a period of time. The researcher [3] assert that whether or not an organisation intends to strive for any competitive advantage, Information Systems (IS) or IT will still require a strategy to manage it, if only to circumvent being disadvantaged by the conduct of others. Accordingly, [4] affirms that IT enables organisations to implement strategies and to realise objectives.
Additionally, the rapid changes in business and technological environments compel many organisations to adopt strategies in response to the ever-changing business needs and new opportunities. The objectives are often to increase their capability to escalate competitive and sustainable in line with the organisational vision and strategic intent. Thus, many organisations develop strategies. Some authors argue, though, that most strategic initiatives remain on paper, and are only as good as the paper they are written on [5]- [7].
The need to deliver heightened business value and streamlined processes through IT is greater than ever before. Thus, organisations put emphasis on IT strategy and operationalisation thereof to continually enable and support their processes and activities [8]. Moreover, operationalising IT strategy often assists an organisation to change in a more informed and systematic way, thereby managing challenges such as IT ineffectiveness, an IT approach that is vague or uncertain, business and IT plans that are not aligned, IT being reactive as opposed to proactive and inconsistency of IT practises with best practices. Scholar [9] states the greatest benefits of IT strategy seem to be realised when IT investment is linked with other aligned investments and strategies, and all new business processes seem to be important in realising the maximum benefit of IT.
Based on these and other factors, many organisations attempt to operationalise the IT strategies in order to realise their organisational goals and objectives. However, this has not been easy; instead, some organisations develop IT strategy year-in and year-out. Also, if only some human actors adopt, implement and operationalise the strategy, realising the goals and objectives may be hampered. Therefore, organisations constantly develop and implement IT strategies, unaware of the numerous challenges hindering the operationalisation of the IT strategy.
The remainder of paper is divided into five main sections. The first and second sections are a review of literature. This is followed by the research methodology that was employed in the study, and analysis of the data. The fifth section presents the results from the analysis, based on which the conclusions were drawn in the last section.
Information Technology Strategy and its implementation
Through the innovative use of IT, organisations are able to outperform their competitors [10]. Concurring, [11] are of the view that disruptive innovation is putting some organisations at the lead in a highly competitive environment. Innovations concerning IT have the potential to provide valuable opportunities for organisations [12]. Clearly, IT is not merely a support function. It has become embedded in the systems and processes of many organisations. With the rapid spread of IT and the increasing connectivity of the modern world, relying on an IT strategy is no longer a luxury for organisations; indeed, it has become necessary for survival.
The aim of the IT strategy is often to create a plan that manages investments on IT solutions. Accordingly, [13] assert IT strategy aims to create a medium to long-term plan for introducing information systems and to manage related IT investments. [14] state that IT strategy uses IS to support business strategy; it is the main plan of the IS function, and the collective view of the role of IT within the organisation. [15] affirm that an IT strategy concerns the use of IT to support business operation and strategy. However, [4] maintains that an IT strategy is a phrase that concerns a complex mix of concepts, ideas, visions, experiences, objectives, knowledge, recollections, views and opportunities that provide overall guidance for certain actions in the interest of specific outcomes within the computing environment.
Some studies indicate that while organisations develop comprehensive IT strategy plans, they are unable to implement them successfully, thereby leading to poor overall organisational performance [3,16,17]. It is much simpler to reflect on a good strategy than to implement it; thus, the interest in implementing strategies, in practice, has intensified, primarily because good strategies are not necessarily implemented successfully. Authors, [17,18] articulate a different view, stating that inadequately implementing a strategy may not be bad in an environment where strategies themselves may often be flaw; incorrect implementation may be a valuable source of bottom-up consideration for better strategies. As a result, even after more than a decade of research in the disciplines of information technology strategy, implementation and operationalisation are not fully understood.
A critical challenge within IT strategic implementation is that little has been investigated in terms of how to successfully implement strategic change linked to the use of it [19]. Despite the interest and the vital role of implementing the strategy, most strategy implementations fail. One challenge organisations experience is that of putting an implementation team in place to execute and operationalise the IT strategy [17].
Hence, [17] emphasises that as a result of the prominent deficiency, a conclusion can be drawn verifying a lack of expertise in implementing strategies in organisations. It is apparent that on one hand the implementation of IT strategy does not happen by default, and on the other hand, after the strategy is implemented, the operationalisation is normally left to happen by itself. A comprehensive, coherent IT strategy and implementation plan alone does not guarantee the success of IT. Authors [3] are of the view that a sustainable, strategic approach to support every aspect of IT is inclusive in the IT strategy. Thus, it is critical to operationalise this strategy in fulfilling the objectives. Regardless of the type and level of strategy in an organisation in the end management is faced with putting strategy into practice which is described as the implementation of tactics so that the organisation moves in the desired strategic direction [20]. Implementation of the IT strategy enables and ensures the use of systems, rendering IT solutions capable of supporting organisational practices [19]. These authors [21] explain that failure to put the implemented IT strategy to good use manifests into strategy blindness. Hence, [19] define strategy blindness as an organisation's inability to achieve the strategic intent of implementation of available IT abilities. While much attention is paid to the challenge of implementing an IT strategy that aligns organisational strategy to IT investment, there is a dearth of information pertaining to putting IT strategy into practice successfully [19].
Structuration View
Structuration theory's (ST) main emphasis is to understand how social practices are structured across time and space. The theory of structuration is a general sociological theory regarded as connecting multiple levels from society down to the individual [22]. Academic [23] postulate that although ST only infrequently refers to IT, it has been extensively used in IS studies because it is regarded as particularly useful to describe unexpected results of IT implementation. Structuration theory plays a significant role in this study in comprehending the social, organisational and personal contexts within which an IT strategy is implemented and operationalised. The theory draws a vital connection to comprehend an IT strategy, which on the one hand is constrained or enabled by the societal context in which it operates, and on the other hand, is a means for sustaining or amending that context. As far back as 2003 [24] explain ST has a significant role to play in the advancement of understanding how technological systems support human interaction in societal, organisational and personal contexts. It has been argued that without social interpretation, technology can be viewed as 'meaningless' [25].
Therefore, in this study, ST serves as a lens to understanding the meaning of the actions, rules and resources associated with the operationalisation of an IT strategy. The interaction between agency and structure signifies a mutually constituted duality. Explaining structuration [26] advocates that human agents (agency) continuously produce, reproduce and change social societies (structures). Similarly, an IT strategy operationalising recursively produces outcomes that mutually reproduce the social world, because the rules and resources available for formulation, development, implementation and dissemination are distinctive to every organisation. One of the main tenets of structuration is the dual relationship between agency and structure. Duality of structure is described as the repeated relation between human and structures, whereas structures shape human actions, and in turn, form the structure [4]. Whereas, [27] refers to duality of structure as the relationship between agency and structure which poses one of the most prevalent and challenging issues in social theory, asserting that structure exists only in and through the actions of human agents [28].These dynamics may adversely affect the thoroughness and validity of the processes, technologies and capabilities required to implement an IT strategy rendering it operational.
The role of structure can therefore be seen as both a constraining and enabling element for human action. Thus, [29] suggest that structures in organisations have these enabling and constraining aspects, enabling because they provide a valuable framework for social dealings, but also constraining as they afford little flexibility for how individuals conduct themselves and interact within the organisation's boundaries. While structuration theory assists in explaining communication practices within organisations and helps in clarifying how employees understand their organisational rules, structures can be useful as well as adverse for organisations and employees. Therefore, structuration theory expresses the power of agency and structure over time in a social system [30].
Research Approach
The study employed the qualitative method because views and opinions of participants were required in achieving the objectives. According to [31] qualitative method, assist researchers to get hold of the thoughts and beliefs of participants, which enables comprehending the meaning that people attribute to their experiences. Qualitative method undertakes to enhance the understanding of why things are the way they are in social world and why people act the way that they do [32]. The case study approach was employed primarily because real-life setting was the focus. The approach enables an in-depth exploration of a real-life phenomenon in its natural setting [33]. An organisation from the private sector was selected as a case. Private organisations are companies that are owned by private investors. At the time of the study, the organisation needs to operationalise its IT strategy, therefore making it interesting and appropriate to examine the factors influencing the operationalising of its IT strategy. Triumph Technologies, was selected for the study, to gain an empirical understanding of how IT strategy can be operationalised in a reallife setting.
Triumph Technologies (TT) is a multinational privately-owned organisation in the telecommunication industry. The organisation is wholly owned by the employees. The organisation operates in over 170 countries and regions, including South Africa. The head office referred, to as 'headquarters' (HQ) is in Asia and the regional office is situated in South Africa. The organisation was selected as a case for study based on specific criteria, including: (i) a developed IT strategy in the organisation; (ii) a willingness to participant in the study; and (iii) previously established distinct foci for the organisation.
According to [34] the purpose of qualitative interviewing is to express and clarify individuals' real-world life as they live it, feel it, experience it and make sense of accomplishments. Semistructured interview was used for the data collection. [this author [35] describe semi-structured interviews as starting with defined questions, however the interviewer has the autonomy to adapt the questions to a specific directions response in an effort to allow for more spontaneous and instinctive conversations between the interviewer and interviewee. Sixteen (16) participants were interviewed until a point of saturation was reached. Generally, researchers should carry on the interview participants until the field of interest is saturated, meaning until anything new is said by the all participants [35]. Through the hermeneutic approach form the perspective of the interpretivist approach the data was analysed using the lens of the structuration theory as a guide. The results from the case study give a deeper understanding to how IT strategy is operationalised in an organisation, making a case for generalisation. According to [36:1452], "the end product of qualitative analysis is a generalization, regardless of the language used to describe it".
Data analysis through duality of structure
The data collected from the case was analysed with a hermeneutic approach from the interpretive paradigm. The analysis was guided by structuration theory employing duality of structure as a lens to guide the analysis. A summary of the analysis is depicted in Table, below. Through duality of structure, the research examined how rules and resources enable and constrain agencies to operationalise the IT strategy, producing and reproducing events, processes and activities in the organisation.
Agencies at TT were divided into two categories: technical and non-technical. Technical agencies comprised proprietary technologies and IT systems; playing an integral part in developing and implementing systems and innovations to operationalise the IT strategy. At TT, the IT specialists and business users are the nontechnical agents. Most of the non-technical agents, in particular, the IT specialists and IT management representatives such as the CIO, report into the international structures. Although the regional office has IT specialists, the headquarters IT support team based in Asia remotely supports the regional office in South Africa.
At the organisation, structure was classified under rules as IT policies, and resources were the IT and business people and the processes. These IT policies are the guidelines followed to operationalise the IT strategy. Resources include IT and business people and processes. The IT people are IT specialists and IT management teams that implement and operationalise the IT strategy. The business people are non-IT employees who are participating in operationalising the IT strategy. Processes are used by IT and business employees to perform business activities and actions.
Signification Domination Legitimation
At the organisation, some employees considered the IT strategy and its operationalisation as very critical and significant in achieving efficiency and effectiveness. This consideration was based on factors, such as people, technologies processes, and continuous learning.
Owing to the IT strategy impacting business processes, activities and events, IT staff put emphasises on operationalising it. The IT staff where the main role players driving the implementation and operationalise the IT strategy. Due to their involvement as main role players, some were aware of the IT strategy and had the skills to operationalise it. Because knowledge and skills were critical, some employees used it to dominate the environment.
The organisation had rules, policies, processes, frameworks and controls that guided and managed carrying out of the IT strategy activities and actions that authorised some employees' actions and behaviour, legitimating it.
Interpretive scheme Facility Norm
At Triumph Technologies, based on different views and interpretations from various employees across the organisation, the IT strategy and operationalisation of it, on the one hand, enabled and on the other hand constrained business activities, events, processes and policies.
In an effort to operationalise the IT strategy different resources (people) such as people in the headquarters in Asia were employed in various roles as enablers and at the same at time constrained some processes. An example is the remote support provided by the people in the headquarters. This constrained some business events, as they had to wait for assistance from someone who is based far and remote.
It was the organisational culture to that the people, processes and technologies involved and through which the IT strategy was operationalised were allocated and managed from a central point, the headquarters in Asia. This included skilled people, new processes and advanced technologies and training, roles, responsibilities.
Communication Power Sanction
Different means and ways were used to communicate, share knowledge and information about IT strategy and how it can be operationalised in the organisation. This included teleconference, video conference, meetings, workshops and electronic training. The CIOs were accountable and responsible in sharing information about IT strategy its operationalisation with the stakeholders.
The roles some employees occupy in the organisation delegate authority, in particular CIO and the heads of the different IT divisions to manage and control how to operationalise the IT strategy. This type of control, where power played a role, increases the interest and contribution from some of the employees, because they respect the authority bestowed on the person.
The work ethics and culture of some employees in the organisation was to adhere to instructions and go beyond the call of duty to operationalising the IT strategy. This was not only based on their employment agreement with the organisation, also on the work-centric and customer-centric attitude and believes.
The discussion that follows should be read with the Table to gain better understanding of the data analysis.
Signification/Interpretive scheme/Communication
In the organisation, the IT strategy was the roadmap defining what and how solutions should be deployed. This includes planning, IT systems development and management of telecommunication devices. In addition, through the IT strategy, synergy and consolidation of artefacts and systems were carried out. From this viewpoint, some employees considered the IT strategy significant in that it simplifies the numerous activities that were carried out within the environment. In operationalising the IT strategy, the consolidation approach reduced the numerous systems, some of which were redundant and others duplications. This ensures that the solutions selected and deployed were unified, enhancing consistency, standardisation, and reducing complexity, promoting efficiency and effectiveness, and advancing the organisation's competitiveness.
Another important aspect of the IT strategy was that its operationalisation enabled a seamless link between the branches of the organisation across the world, between the Asian and African continents. This enabled Triumph Technologies to achieve the organisational goals and objectives by reducing operational cost and increasing competitiveness. This was important to both the management of the organisation including some employees. However, many employees did not fully grasp the significance of consolidating and unifying technology solutions in the organisation, as these employees thought it was fanciful, or nice to have. Others, however, understood the cost implications as well as the efficiency and effectiveness that such initiatives contribute to the environment. This diverse understanding was based on individual and group interpretation of the activities undertaken within the business units as enabled and supported by the IT unit through operationalisation of its strategy. The interpretation was influenced by communication.
In Triumph Technologies, electronic mail (email) and mobile electronics applications (apps) were the primary methods of communication. Video conferencing and teleconferencing were secondary methods of communication within the organisation, because both video conferencing and teleconferencing were often used for meetings and clarifications of subjects that had previously been communicated through the email. Spoken language was a challenge during communication, whether management-toemployee or employee-to-employee. This critically influenced the interpretation of contents during operationalisation of the IT strategy within the organisation. Occasionally language had to be translated for other employees or stakeholders. In the process of language translation, some of the meanings or contexts are misconstrued.
Operationalisation of the IT strategy was influenced, enabled and constrained within Triumph Technologies by the significance associated to it. Moreover, interaction was of mixed feelings because some employees were privileged and others were not in terms of sharing organisational information. Thus, meanings which individuals and groups make of the technology solutions and artefacts affected the operations. In addition, communication was not always straightforward, which often influenced employee interpretations and the value they associated with the IT strategy.
Domination/Facility/Power
At Triumph Technologies, there were imbalances from various perspectives, such as allocation of tasks and information sharing. The imbalances enabled and sometimes constrained events, processes and activities, consciously or unconsciously. These were actions that reproduced themselves during the operationalisation of the IT strategy in the organisation. During operationalisation of the IT strategy, various facilities were employed, including processes and spoken languages. The facilities were employed from two viewpoints, personal and organisational. At the organisational front, processes were followed in the operationalisation of the IT strategy towards achieving the goals and objectives of the organisation. From a personal perspective, some employees spoke in the language that friends among colleagues understood, excluding others from participating in discussions.
A South African language (Sesotho) and an Asian language (Hakka) were commonly spoken divisively to exclude colleagues from discussions. In addition, some employees having close or have personal relationships with their managers preferred to speak in the language only both understand instead of the generally accepted language of the environment, which was English at the time of this study. The use of the English language was mainly because they, the promoters, did not have a choice but to employ an inclusive approach for tasks to be carried out. The reliance on a particular spoken language to exclude certain colleagues was at some point a hindrance to the operationalisation of the strategy. This was so because many of the interested employees, or those with the necessary skill-sets, found it difficult to participate in discussions, affecting their overall execution of tasks. This worsened as the exclusion approach was also practiced in formal meetings.
Through the preferred spoken language, employees unconsciously created networks within the organisation, meaning that networks were formed along language lines. Consciously or unconsciously, the networks regulated activities of the IT strategy during operationalisation. This was primarily because some of the employees were more loyal to their networks than the organisational objectives. Another reason for loyalty was attributed to the fact that some employees admitted to receiving more information from their networks than from the formal hierarchal structure within the organisation. In the operationalisation of the IT strategy, there were also factors of power at personal levels and from organisational hierarchical levels (positions). This factor caused imbalance in the organisation during the operationalisation of the IT strategy. At a personal level, the source of power came from knowledge, which some employees acquired through continuous learning and the privilege to information.
Although there was power associated with knowledge, skills and understanding of the IT strategy and its operationalisation in the organisation, there was also power that bestowed on the positions. The staff in the headquarters (HQ) had power to approve or reject activities relating to the operationalisation of the IT strategy in the organisation. The HQ team includes the Chief Information Officer (CIO) and the IT specialists in Asia. Business initiatives were discussed with the HQ team who have the ultimate decision-making power. It is clear that during the operationalisation of the IT strategy, there were imbalances, which means that some employees were dominant over their colleagues. This dominance was based on levels of access to facilities that were sources of power. Power was enacted by the facilities, enabling and simultaneously constraining activities in the operations of the IT strategy
Legitimation/Norms/Sanction
At Triumph Technologies, operationalisation of the IT strategy entails various activities through different processes, rules and regulations to fulfil organisational requirements, goals and objectives. These actions were assessed and deemed eligible for use within the organisation. Thereafter, actions were executed by humans using facilities such as technology solutions (devices), spoken language and face-to-face meetings to operationalise the IT strategy.
In operationalising the IT strategy, micro and macro approaches were employed at middle management and lower management, respectively. The different management approaches, micro and macro, were employed because of the hierarchical structured nature of the environment. The macro focuses on strategic intent, while the micro was operational. Thus, the approaches were purposely followed to enforce the different types of instructions, rules and regulations through the hierarchy, for different events and activities during operationalisation of the IT strategy.
At both micro and macro levels, long working hours (beyond the prescribed eight working hours) and late-night meetings were held. Although some employees were initially not accustomed to this culture, with time, they became acclimated to this as it gradually became the norm as operationalisation of the IT strategy continued within the organisation. Few other actions, such the use of certain spoken languages for exclusivity, were also norm. This occurred even though they were consciously or unconsciously used to enable or constraint in one way or the other, the activities involved in the operationalising of the IT strategy.
Even though the facilities were approved for organisational purposes, some of the actions that manifested were not entirely geared towards achieving the goals and objectives of the IT strategy. For example, Hakka was spoken for exclusivity purposes. Despite its negative connotation, it became a culture, a way of conducting the business of operationalising the IT strategy, practiced over a period of time within the organisation. Some employees accepted this practice, not because they liked or agreed with it, but because they felt that had no choice.
This was because the senior organisational management sanctioned the practice. Management and even some employees sanctioned some of the actions, such as the long hours of meetings, meetings at late hours, and the use of the Hakka language for exclusivity. This was not because they wanted to, but because it facilitated productivity in the operationalisation of the IT strategy in the organisation. These actions were practiced, and eventually became the norm, mainly because they were first sanctioned by the management at the HQ, the decision-making authority in Asia.
At Triumph Technologies, as an initiative to educate aspiring IT specialists to address the different spoken language imbalances, learning materials were presented. The intention of this initiative was to make operationalisation of IT strategy easier and more efficient, creating a culture of learning and inclusion. The learning culture was sanctioned by everyone who wanted to acquire skills and knowledge and participate in operationalising the IT strategy. The culture of learning encouraged employee awareness of the IT strategy, learning and understanding why and how to operationalise it.
In operationalising the IT strategy, many of the human actions as well as the technological solutions were reproductive. Even though the actions and technological solutions were eligible (legitimate) within the frame of the organisation, they were not always to promote organisational interest. In addition, some of the actions and activities that were considered as the norm were not generally agreed upon by the many of the employees. For example, only a few of the employees agreed to the abnormal working hours to protect their jobs. The management sanctioned activities and actions intended for the benefit of the organisation, but with little regard for the consequences to the employee.
Discussions and findings
Six factors were identified from the analysis that enabled and simultaneously constrained the operationalisation of the IT strategy at Triumph Technologies (TT): hierarchical consciousness; technology solutions; network of people; training and skill-set; exclusivity vs inclusivity; and language differentiation (Figure). The figure needs to be perused with the discussion in mind to ascertain exactly how the factors shape IT strategy and its operationalisation.
The model depicted in Figure 1, present factors that are interrelated. Thus, the factors influence and are being influenced by others. In other words, these factors enable and constrain each other during the IT strategy operationalization process at Triumph Technologies.
Hierarchical consciousness
Hierarchical levels are necessary in an environment to steer information appropriately [37], such as IT strategy solution. Author [38] suggests that processing information or tasks that involve many behavioural options require consciousness. This is to avoid potential disintegration of solutions such as the operationalisation of the IT strategy within an environment. Furthermore, [39] explain that consciousness can play a role in enabling tasks within and environment. To the contrary [37] argue that some users often lose consciousness of their tasks as they navigate within hierarchy. But successful integration of artefacts or solutions requires clear consciousness of the people that are involved [39]. At Triumph Technologies, adherence to organisational structure was considered an important influencing factor in operationalising the IT strategy in the organisation. During the operationalising of IT solutions, approval was sought from senior management and structures in Asia, a practice accepted by both IT specialists and business users, irrespective of whether or not they agreed with the strategy and its processes. This enabled smoothness of the processes and various activities as well as employee inclusiveness in the operationalisation of the IT strategy. Additionally, the approval of the strategy ensures that the solutions operationalised are in alignment with the organisation's universal strategy.
As organisational structure allows strategy and its process to circumvent duplication of IT solutions, promoters of the IT strategy verified and validated each innovation with senior management. The verification and validation processes occurred by way of interaction among stakeholders involved in operationalising the strategy. Without approval through the organisational structures, activities and events involving operationalisation would potentially be delayed, with some activities even facing termination or rejection. Thus, IT specialists and business users were intentionally conscious, aware of the significance of the organisational structures in carrying out their responsibilities related to the operationalisation of the IT strategy.
Technology solutions
Technological solutions refer to information systems and technological tools or artefacts used to enable and support activities [40]. IT strategy defines the solutions and arranges them in priority perspective for more efficient organisational use. This evolves over time, gradually addressing the changing needs of an organisation [41]. Technological solutions do not operate in and of themselves, but require human expertise [42].
Technology solutions were defined by the IT strategy, including standard deployment, management and use of the solutions for best organisational purposes. The IT systems, IT infrastructures and telecommunication devices were the main aspects of the IT strategy, with IT systems involving mobile applications, applications, electronic flows (e-flows) and tools. At Triumph Technologies, IT infrastructures consist of servers, laptops, desktops and notebooks used by the employees to manage processes and activities. Telecommunication devices were employed for teleconference and videoconference meetings with the headquarters and other branches globally.
The IT strategy was operationalised to enable deployment of the technology solutions, with the intent of improving organisational efficiencies. During operationalisation of the technology solutions, processes and activities were managed attentively to ensure appropriateness and suitability in accordance with organisational purposes. This was because technology solutions both influence and are influenced by other factors such as hierarchical consciousness, skill-sets and networks of people ( Figure). The process of operationalisation required legitimisation that happens through hierarchical consciousness of management. Also required were appropriate skill-sets and the deliberate involvement of various personnel. Above all, interaction and relationships among stakeholders were of critical importance.
Network of People
Network of people refers to conscious or unconscious groupings of employees within an organisation. According to [43], people engage in networks for various purposes, both personal and organisational. These authors [44] explain how the interaction that occurs within networks of people influence technology deployment within an organisation. The success or failure of operationalisation of IT strategy can be influenced by the interactions and actions within networks of people. Scholar [45] argue that in recent years, traditional hierarchical approaches are struggling against challenges of an emerging relational set-up in which decisions cannot be imposed but must emerge from the interactions among actors.
Alignment of various agencies played a significant role in operationalising the technology solutions as the agencies formed a homogenous network of people, consciously or unconsciously, intended to achieve business objectives of the organisation. The networks, formed based on spoken languages, skills and competencies, were enabling as well as constraining in the operations of personnel. From the enabling front through the networks, deliverables were fostered, primarily because employees were either acquaintances or friends, and based on strength of relationships, they offered various levels of support to each other. From the constraining perspective, collaboration between various networks were challenging because of factors such as language differences, which, while including some, often excluded others.
In the operationalisation of IT strategy, it is important that the different networks of people involved have not only the skill-set and understanding of various processes but work collaboratively with one another to achieve organisational business objectives. Thus, skill-sets and collaboration of various people were significant in operationalising the IT strategy. Skill-set deficiencies and lack of training surrounding various processes involved in operationalisation meant inefficiency and ineffectiveness of the IT strategy.
Training and skill-set
The roles of employees are not as easily ascertained as believed; otherwise, the operationalisation of technology solutions will be even more complex due to human actions [44]. The different standards and levels of employee actions, based on knowledge and skill, determine the success of activities within an environment [45] so it is critical that organisations involve employees with the right of skill-sets as that is critical for competitive advantage [46]. Thus, it is essential to train and develop employees appropriately about operationalising the IT strategy in the organisation.
Training and development meant that employees in the organisation were equipped with vital knowledge and skills for understanding the processes and activities involved in operationalising technological solutions as defined by the IT strategy. Training and development were often conducted through different methods and mediums such as electronic learning (elearning), which gave employees the convenience of accessing training material and course participation in the operations of the IT strategy in the organisation. The training enabled some of the employees to carry out their responsibilities from anywhere, and at any time, through their mobile devices.
Through training and development, knowledge about the technology solutions was acquired. Therefore, networks of people had the capabilities and knowledge to operationalise the IT strategy. The importance of training and development during operationalisation was for the network of people to generate a common understanding about the processes and activities when interacting during operationalisation; however, employees were also eligible to interact in different languages, which created a language barrier in the organisation.
Language Differentiation
Understanding of activities and tasks is mediated by language of instruction and engagement through facilitating communication among team members [47]. Thus, devising effective strategy is necessary to bridge the language barrier and manage significantly negative activity [48]. Even though training programs are carried out, they do not always consider language barriers, an oversight that can engender additional complexities in an environment [49]. This needs more attention in that through language the communicating of thoughts, ideals and knowledge is manifest, so language is clearly an influence in terms of how IT strategy is operationalised.
The spoken language was used, whether consciously or unconsciously, to enable and occasionally constrain operationalisation of the IT strategy in the organisation. On the one hand, when employees of the same network communicate using a preferred language, such as Hakka, in sharing knowledge and ideas to ease understanding about technology solutions and processes, smooth operationalisation is heightened; but on the other hand, when employees who are unfamiliar with the network language become difficult, communication challenges escalate. This is a constraining barrier during the operationalisation of the IT strategy. This situation was reproduced time and again in operationalising the IT strategy in the organisation.
The language differentiation influenced and was influenced by networks of people, by the exclusivity or inclusivity of employees, and by the use of technology solutions. This was both enabling and constraining in operationalising the IT strategy, as explained above. The most important thing is that language differentiation has been identified as an essential influencing factor when operationalising IT strategy in an environment as it creates division among employees in operationalising the IT strategy in the organisation.
Exclusivity and Inclusivity
Inclusiveness aims to enrol as many as possible participants while exclusiveness is about access by only a privilege few. According to [50] inclusivity is a process that genuinely and legitimately allows broader participation in an activity. However, deceptive actors tend to use more cognition, inclusivity and exclusivity in words when interacting with groups within environment [51]. Postulated by [52] an understanding of information system continuance for information-oriented mobile applications requires a dramatic shift from exclusivity to inclusivity to influence operationalisation of the IT strategy.
In operationalising the IT strategy in an organisation, exclusivity and inclusivity of employees were both enabling and at the same time constraining. Exclusivity minimises too many opinions and options, mineralising complications inherent in decision-making. However, the same factor of exclusivity deprived certain employees from participating in processes and activities tasked for the execution of IT strategy. The concept of inclusiveness was beneficial to both the business and IT units of the organisation, from an alignment viewpoint, as alignment between business and IT units was instrumental in operationalising the IT strategy in the organisation. Despite the positive aspect of inclusivity, it was also constraining. For example, too many people could not be involved in certain decisions, especially those requiring technical expertise.
In the environment and during operationalisation of the IT strategy, exclusivity or inclusivity of a group of employees was sometimes consciously and sometimes unconsciously created. This happened at various levels, from senior management to technical expertise. Some employees were privileged, granted exclusive access to information pertaining to IT strategy operationalisation. Both exclusivity and inclusivity of employees influenced and was influenced by the relationship and interactions during operationalisation of IT strategy in the organisation, impacting how some employees were nominated for skills development, but not others, how processes were defined, and how tasks were assigned to certain distinct individuals in the operationalising the IT strategy in the organisation.
Conclusion
This paper provides a clear distinction between IT strategy implementation and operationalisation. This is a confusion that has contributed to the misunderstanding, and negativity which the IT strategy has received for many years. The study reveals, and makes it possible to gain better understanding of the factors that influence operationalisation of an IT strategy in an organisation, which were not empirically known. The factors are critical as they assist in achieving business goals and objectives. Thus, the research intended to benefit academics and professionals alike that focus on operationalising IT strategies in organisations. The academics domain gain from this research through its addition to existing literature in the subject areas of information technology strategy, implementation and operationalisation. Professionals in the business sphere, the benefits come from gaining better understanding of the influential factors involve in operationalising IT strategies in organisations. | 2020-07-02T10:31:11.991Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "3d00a0e982289d8557c38860f81313ef2b77debe",
"oa_license": null,
"oa_url": "https://doi.org/10.25046/aj050348",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ff328f77cca2038d12002f34228845f0580e616b",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
269624469 | pes2o/s2orc | v3-fos-license | Comparison of preventive effects of combined furosemide and mannitol versus single diuretics, furosemide or mannitol, on cisplatin-induced nephrotoxicity
Cisplatin (CDDP)-induced nephrotoxicity is a common dose-limiting toxicity, and diuretics are often administered to prevent nephrotoxicity. However, the efficacy and optimal administration of diuretics in preventing CDDP-induced nephrotoxicity remain to be established. This study aimed to evaluate the efficacy of combining furosemide and mannitol to prevent CDDP-induced nephrotoxicity. This was a post-hoc analysis of pooled data from a multicenter, retrospective, observational study, including 396 patients who received one or two diuretics for CDDP-based chemotherapy, compared using propensity score matching. Multivariate logistic regression analyses were used to identify risk factors for nephrotoxicity. There was no significant difference in the incidence of nephrotoxicity between the two groups (22.2% vs. 28.3%, P = 0.416). Hypertension, CDDP dose ≥ 75 mg/m2, and no magnesium supplementation were identified as risk factors for nephrotoxicity, whereas the use of diuretics was not found to be a risk factor. The combination of furosemide and mannitol showed no advantage over a single diuretic in preventing CDDP-induced nephrotoxicity. The renal function of patients receiving CDDP-based chemotherapy (≥ 75 mg/m2) and that of those with hypertension should be carefully monitored. Magnesium supplementation is important for these patients.
reducing CDDP concentration in the kidneys.In recent years, a short-term low-volume hydration method has been developed and used in routine clinical practice, along with conventional high-volume hydration 6,7 .
Additionally, hypomagnesemia induces the saturation of active transport mechanisms in renal tubular cells, leading to excessive CDDP levels in renal tubular cells and subsequent cell necrosis.Therefore, magnesium (Mg) supplementation has been used to prevent nephrotoxicity, and its preventive effects have been described in several studies [7][8][9][10][11] .
Although controversial, diuretics have been reported to limit CDDP-induced nephrotoxicity.Diuretics decrease urinary CDDP concentrations by increasing water excretion and blocking chloride reabsorption, thereby decreasing the rate of CDDP activation by aquation 5,12,13 .
Commonly used diuretics in clinical practice include the osmotic diuretic mannitol and loop diuretic furosemide.Several studies have reported the protective effects of mannitol 4,[14][15][16][17] .However, mannitol may contribute to hypomagnesemia by increasing Mg excretion 18 , and there is insufficient evidence to support the use of mannitol in forced diuresis.
Studies evaluating the role of furosemide in reducing CDDP-induced nephrotoxicity have reported conflicting results.Increased nephrotoxicity has been reported in rodents treated with furosemide 19 .Another in vivo study demonstrated the protective effect of reduced urinary platinum levels after furosemide administration prior to CDDP administration in rats 20 .Santoso et al. reported that hydration with saline or saline plus furosemide was associated with reduced CDDP-induced nephrotoxicity 21 .Although there is some consensus regarding the use of diuretics to prevent nephrotoxicity, the evidence is insufficient as there are many unknown aspects regarding the effects of diuretics.Furthermore, the efficacy and optimal administration of diuretics to prevent CDDP-induced nephrotoxicity are yet to be established.
Furosemide and mannitol are currently used in clinical practice, and the two can be used in combination.A previous study reported that approximately 30% of the patients undergoing CDDP-based chemotherapy received a combination of two diuretics for forced diuresis 22 .Whether the administration of dual diuretics is more effective than that of a single diuretic in preventing nephrotoxicity remains unclear.Therefore, this study aimed to investigate the efficacy of combining furosemide and mannitol in preventing CDDP-induced nephrotoxicity.
Setting and patients
This study was a post-hoc analysis of pooled data from a multicenter, retrospective observational study conducted in five hospitals affiliated with the National Hospital Organization in Kyushu, Japan 22 .All participants were treated in accordance with the principles outlined in the Declaration of Helsinki.The Ethics Committee of Beppu Medical Center waived the requirement for informed consent owing to the retrospective nature of the study.Patient data were used after allowing patients to refuse to participate using an opt-out form.In this study, we analyzed the pooled data of 657 patients with cancer with Eastern Cooperative Oncology Group (ECOG) performance status (PS) of 0 to 2, creatinine clearance (CCr) ≥ 60 mL/min, and no history of CDDP administration and had received 20 mg of furosemide and/or 300 mL of 20% mannitol as forced diuresis with conventional high-volume hydration for each chemotherapy cycle.Furosemide and mannitol were given sequentially rather than concurrently when administering the two diuretics.In this study, patients with a short hydration method were excluded to ensure comparable hydration conditions for evaluating the effects of diuretics.Information on the cancer types and chemotherapy regimens of eligiblepatients is presented in Supplementary Tables 1 and 2.
Data collection
Data on the following patient characteristics were collected: Mg dose, sex, age, primary cancer site, cancer stage, ECOG PS, presence of cardiac disease, presence of diabetes, presence of hypertension, chemotherapy regimen, CDDP dose, presence of short hydration, regular use of nonsteroidal anti-inflammatory drugs (NSAIDs), diuretic type, number of chemotherapy courses administered, serum creatinine (SCr) level and changes therein, CCr and changes therein, occurrence of renal failure, and Common Terminology Criteria for Adverse Events (CTCAE) ver.5.0 grade.Cardiac disease was defined as angina pectoris, myocardial infarction, atrial fibrillation, arrhythmia, or valvular disease.SCr was measured using an enzymatic method at least 2 weeks after the start of CDDP administration and was used to determine the presence of renal impairment.CCr was calculated using the Cockcroft-Gault formula.The CTCAE [6][7][8]10,11,14,16 and the Cockcroft-Gault formula [6][7][8]10,11 are widely used for the assessment of renal function in the setting of cancer chemotherapy. Based n the CTCAE ver.5.0 grades for creatinine increase, the development of renal impairment was defined as an increase in the SCr after CDDP administration of at least one grade higher than that before CDDP administration. All patiets received conventional high-volume hydration, and none of them received short hydration.
Statistical analysis
Patient characteristics and incidence of nephrotoxicity were summarized using descriptive statistics or contingency tables and were compared using the Mann-Whitney U test and Chi-square test.Propensity score matching was used to reduce bias and balance patient characteristics between the one-and two-diuretic groups.A propensity score calculated using logistic regression analysis was used for this purpose (covariates: age > 63 years, male sex, cardiac disease, diabetes, hypertension, CDDP dose > 75 mg/m 2 , Mg supplementation, regular use of NSAIDs, ECOG PS, and the first cycle of chemotherapy).The cutoff values for age (63 years) and cisplatin dose (75 mg/m 2 ) were those obtained in a previous study 22 .For confirmation, these cutoff values were also calculated for the present study population with similar results.Patients were matched for variables at a 1:1 ratio using a caliper width of 0.2 of the standard deviation from the propensity score logit.
In the matched cohort of 396 patients, we compared the incidence of nephrotoxicity between the two groups using the Chi-square test.Furthermore, we evaluated the rate of CCr or SCr change by comparing whether the two-diuretic group was superior to the one-diuretic group by comparing the indices of nephrotoxicity after CDDP administration.
The rates of CCr and SCr change were calculated using the following formula: We assessed the independent risk factors for nephrotoxicity using logistic regression analysis to control for the following potential risk factors: age > 63 years, heart disease, hypertension, diabetes, CDDP dose > 75 mg/ m 2 , male sex, concomitant NSAIDs, Mg supplementation, and two diuretics.Statistical significance was set at P < 0.05.All statistical analyses were performed using JMP 14.3.0software (SAS Institute, Cary, NC, USA).
Research involving human participants
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committees and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards.
Consent to participate
The requirement for informed consent was waived due to the retrospective nature of the study.
Patient characteristics
We analyzed the data of 396 matched patients: 198 received one diuretic (furosemide or mannitol) and 198 received two diuretics (furosemide mannitol) (Fig. 1).The patient backgrounds before and after adjustment for propensity score matching are shown in Table 1.There were no significant differences in these characteristics between the two groups after propensity score matching.There was no difference in baseline CCr (mL/min) values between the two groups (85.4 ± 16.1 vs. 84.7 ± 17.3, P = 0.507).
Incidence of nephrotoxicity
The incidence of nephrotoxicity in each group after adjustment is presented in Table 2.There were no significant differences according to CTCAE ver.5.0 grading between the two groups (P = 0.416).
Changes in SCr and CCr in all subsequent cycles
There were no significant differences in the rates of SCr and CCr change between the two groups in all subsequent cycles (P = 0.683 and P = 0.764, respectively) (Fig. 2).www.nature.com/scientificreports/
Risk factors for nephrotoxicity
The results of univariate and multivariate logistic regression analyses of the risk factors for nephrotoxicity are shown in Table 3. Hypertension (P = 0.003), CDDP dose ≥ 75 mg/m 2 (P = 0.018), and no Mg supplementation (P = 0.002) were identified as independent risk factors for CDDP-induced nephrotoxicity.
Discussion
In this study, we compared the efficacy of two diuretics (furosemide and mannitol) versus one diuretic alone (furosemide or mannitol) in preventing CDDP-induced nephrotoxicity.There were no significant differences in the incidence of nephrotoxicity or changes in SCr or CCr levels between the two groups.Hypertension, CDDP dose ≥ 75 mg/m 2 , and no Mg supplementation were identified as risk factors for nephrotoxicity, whereas the number of diuretics was not.www.nature.com/scientificreports/CDDP-induced nephrotoxicity is a dose-dependent toxicity, and in this study, CDDP dose (≥ 75 mg/m 2 ) was a risk factor for nephrotoxicity, consistent with the results of previous reports 23,24 .Regarding hypertension, chronic systemic hypertension accelerates renal aging 25 .Furthermore, renal atherosclerosis is more common in patients with hypertension, and hypertensive nephrosclerosis is associated with chronic ischemic damage to the tubulointerstitium, a major site of CDDP-induced nephrotoxicity 26,27 .These results suggest that the nephrotoxicity due to high-dose CDDP is exacerbated in patients with hypertension, and antihypertensive drugs may also affect nephrotoxicity in patients with a history of hypertension.Regarding Mg supplementation, hypomagnesemia is a well-known side effect of CDDP-based chemotherapy.Several studies have reported that Mg supplementation reduces CDDP-induced nephrotoxicity by preventing hypomagnesemia [6][7][8]10,11 . The rsults of the present study were consistent with those of previous reports.In addition, Mg supplementation may be more important in patients with hypertension because hypertension is reported to be a risk factor for hypomagnesemia 5 .
In this study, the combination of two diuretics did not reduce nephrotoxicity compared with a single diuretic, indicating that the number of diuretics plays a less important role in renal protection against CDDP-induced toxicity than other interventions do, such as hydration or Mg supplementation.In fact, the concomitant use of two diuretics may have resulted in excessive water excretion, leading to increased plasma concentrations of CDDP that may have offset the preventive effect of hydration on nephrotoxicity.Furosemide and mannitol have been reported to prevent nephrotoxicity in vivo; however, the evidence in humans remains unclear.Although there is insufficient robust evidence regarding the efficacy of diuretics in preventing CDDP-induced nephrotoxicity, diuretics are administered with every course of CDDP unless serious side effects or allergic reactions to diuretics develop in clinical practice.If diuretics cannot be administered, a possible approach is to monitor the patient's urine output and adjust the amount of hydration and/or Mg administration, but situations in which none of the diuretics can be used are considered rare.The role of diuretics among various preventive methods for CDDP-induced nephrotoxicity, such as hydration and Mg administration, is unclear.Further research is needed on the role of diuretics in the prevention of CDDP-induced nephrotoxicity.Considering the effect of diuretics in preventing kidney damage, the side effects of diuretics, and the risk of polypharmacy, we could not find a benefit of the use of two diuretics.
This study had several limitations.First, this was a retrospective observational study rather than a randomized or prospective study.Second, individual quantifiable data on heart disease (e.g., cardiac output and ejection fraction) were not available; therefore, we defined heart disease only based on a history of heart disease, such as angina or myocardial infarction.Third, data on serum Mg levels, blood glucose and blood pressure, urine dipsticks for hematuria or proteinuria, and urine volume were not available, and it was not possible to adjust for the time of blood creatinine measurement because of the observational nature of the study.Fourth, data on potential risk factors such as use of H2-receptor inhibitors, metformin, contrast agents, angiotensin-converting enzyme inhibitors, and angiotensin II receptor blockers were not available.Fifth, the safety profile could not be determined in this pooled analysis because data on adverse events were not available in the medical records.Sixth, clinical testing was performed in all cases immediately before each cycle of chemotherapy, whereas testing during the treatment cycle varied from case to case.As a result, the KDIGO criteria for AKI could not be used to assess nephrotoxicity in this study.Although we recognize the importance of the KDIGO criteria in assessing the details of the development of renal injury, the CTCAE is a standard measure of chemotherapy-induced toxicity in clinical oncology, and we consider it has some relevance in this study.Finally, the inclusion of only conventional high-volume hydration and not the short hydration method as a method for preventing nephrotoxicity other than forced diuresis limits the generalizability of the study results.
In conclusion, we did not find any advantage in combining furosemide and mannitol over the use of a single diuretic (furosemide or mannitol) in preventing CDDP-induced nephrotoxicity.Patients receiving high-dose CDDP-based chemotherapy (≥ 75 mg/m 2 ) or those with hypertension should be monitored carefully for renal function, and Mg supplementation should be prescribed.Further randomized trials are needed to determine the optimal use of diuretics to prevent CDDP-induced nephrotoxicity.
Figure 2 .
Figure 2. Comparison of the rate of change in SCr and CCr between the one-and two-diuretic groups in all subsequent cycles.Box-and-whisker plots show the relationship between one-and two-diuretic groups and the median rate of change in SCr (a) and CCr (b) during subsequent cycles of CDDP-based chemotherapy.Differences between the two groups were analyzed using the Mann-Whitney U test.SCr serum creatinine level, CCr creatinine clearance, CDDP cisplatin. | 2024-05-09T06:16:34.048Z | 2024-05-07T00:00:00.000 | {
"year": 2024,
"sha1": "371f7a0a3473eb597af3e47dcbcdc2eec1b180f2",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-61245-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c7ef6dbf56175805a83c2fd3993a7e0625a5287",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267904156 | pes2o/s2orc | v3-fos-license | Arabic Language for the Indonesian Migrant Workers in Arabic Countries
The Arabic language came to Indonesia at the same time as the arrival of Islamic teachings (Suryanegara, 1995). Like its entry, Arabic also spread together with the spread of Islam. Because it is directly related to religion, Arabic is the most widely studied foreign language by Indonesian people in formal and informal institutions. Formally, this language is studied by kindergarten to university students. Meanwhile, in informal institutions, Arabic is also studied in mosques, majlis ta'lim, and even in the family
Introduction
The Arabic language came to Indonesia at the same time as the arrival of Islamic teachings (Suryanegara, 1995).Like its entry, Arabic also spread together with the spread of Islam.Because it is directly related to religion, Arabic is the most widely studied foreign language by Indonesian people in formal and informal institutions.Formally, this language is studied by kindergarten to university students.Meanwhile, in informal institutions, Arabic is also studied in mosques, majlis ta'lim, and even in the family.
In terms of its objectives, language learning has two main types.First, the language is learned for general purposes or daily activities (Ṭuaimah, 1989) Although there is an academic nuance to studying Islamic teachings from their sources, Arabic learning in our madrasahs is more orientated towards this general goal.The aim is to provide students with the four language skills (istima ', kalam, qira'ah, and kitabah) in an integrative manner which are useful for communicating in Arabic receptively and productively.
Second, the language is used for certain intentions based on the needs of the learners (Ṭuaimah, 1989).This category includes learning for academic purposes and other pragmatic orientations.The intended academic goal is to study Islamic sciences and Arabic linguistics itself.Another pragmatic goal is for the benefit of certain professions such as translators, diplomacy, and working in various fields that require Arabic like working in various regions of Arab countries.This paper will discuss this last purpose
Method
This research uses a qualitative research method that is descriptive and analytic.This type describes the causes and effects of a particular phenomenon, idea, or symptom.The data collection technique includes taking or searching for secondary data from books, journals, theses, articles, and reports.The data are taken from references or sources that are related to the problem to be examined.The research stage is to identify the problem and analyze the data and facts.
Working in a foreign country: a profession or a compulsion?
Indonesia is one of the countries in the Southeast Asian region that sends a large number of workers abroad.On the one hand, the sending of Indonesian migrant workers is a source of state revenue, apart from tourism and other economic sectors.From the internal side of workers, many people want to become migrant workers because they have fewer opportunities to work in their own country.However, deciding to become a migrant worker who leaves their family for a distant place and for a long time is a tough choice that one must take.
Becoming a migrant worker, especially a less-skilled person, is the last option out of all other ones.The difficulty of finding a job in the country is generally the reason for some uneducated workers to try their luck in other countries.In addition, working as a farmer as a traditional livelihood is increasingly dragging them into poverty.This poverty issue is not caused by a lazy attitude as suspected by some parties, but rather due to the slow responses of relevant bureaucracy to overcome it (Valentine, 1986).
Sending workers abroad is one way to solve this unemployment issue.It is undeniable that sending migrant workers abroad has opened up opportunities for domestic job seekers to find the best jobs.Meanwhile, for the government, this program will trigger opportunities to empower human resources and an effort to reduce poverty by opening jobs abroad.Thus, the Indonesian government always takes various ways to tackle the unemployment problem in Indonesia.One of which is sending migrant workers to Saudi Arabia.It is one of the countries for placing Indonesian migrant workers in the Middle East region because Saudi Arabia is very dependent on foreign workers to work in informal sectors.
Saudi Arabia is continuously experiencing rapid economic development and progress after the discovery of petroleum as the newest source of income.This requires many skilled workforces.Also, economic progress and democratization in Saudi Arabia have had an impact on its people's social life and lifestyle, so domestic work is seen as a low-level job.Saudi Arabian people prefer to employ workers from abroad to do domestic jobs, whereas Indonesian workers dominate this sector.
The large number of migrant workers coming to Saudi Arabia is also facilitated by the existence of close religious ties and good bilateral relations between the two countries.Therefore, they can adapt more easily when working there.The bilateral relations between the two countries are also being further enhanced through various fields of cooperation, including employment cooperation.The workforce field is mostly occupied by Indonesian workers and is most needed in Saudi Arabia.Specifically, most of them work as household assistants because it can accommodate workers with low educational levels so this sector contributes the most foreign exchange to the country.
Those working in this field generally only hold an elementary or junior high school diploma and do not get decent jobs in the country.This is the reason why many workers are placed as household assistants in Saudi Arabia (Ismail, 2019).Although migrant workers are considered Indonesia's biggest foreign exchange earners, they often face problems in the forms of violence, being accused of murder, physical abuse from employers, the death penalty, and even problems related to the illegal status for those who do not have a residence permit (Overstayers).
This problem of migrant workers then becomes homework for the Government of Indonesia to find a solution so that this problem of migrant workers can be resolved.Migrant workers, whether they have high or low educational status, must still be given the same protection.The 1945 Constitution of the Republic of Indonesia, Article 27 paragraph (2) states that "every citizen has the right to work and a decent living for humanity" (Al Hasmi, et.al 2022) The Indonesian government has made many efforts to solve the problems faced by Indonesian migrant workers in Saudi Arabia, one of which is protection diplomacy.Protection diplomacy is a method to protect its citizens through negotiating or non-violent means.
Analyzing the Migrant Workers' Needs
Today, many Indonesian citizens try their luck abroad to find the best jobs.Most of them get good jobs and treatment from the people around them.However, some get inappropriate jobs and treatment from their surroundings.Many migrant workers experience this unfair situation, whether it is published in the media or not.Many heartbreaking cases that befell Indonesian workers abroad often adorn our newspapers.
The problems faced by Indonesian workers abroad are due to several ignored things related to their jobs.Many of them only possess makeshift supplies without adequate knowledge and skills regarding their field of work.Therefore, to prevent other similar cases in the future, prospective migrant workers must prepare the best possible provisions independently supported by related parties in terms of their departure to another country.For those who want to work in Arab countries, there are two crucial things to consider: 1) understanding Arabic culture, and; 2) being able to speak Arabic.Of course, they must not forget the most important matter, namely having adequate skills to perform the best performance out in the country they want to go to.However, it is not my concern to discuss this last factor in this limited working paper.
Provision of Arabic Cultural Knowledge
For many Indonesians, 'Arab' is always associated with wealth and violence.For Arabs, 'Indonesia' has always been associated with overpopulation and poverty.On both sides, there is prejudice, ignorance, and misinformation.Nonetheless, contacts between the two nations are increasingly developing in all aspects along with better progress in communication and greater cooperation efforts based on various interests.Based on the facts above, every Indonesian who wants to interact with Arab people for any business, including for work, should pay attention to their culture and traditions to avoid misunderstandings that often lead to unwanted things.Some Arabic traditions that they must pay attention to include: -Arabic style of communication; the communication style of the Arabs is different from Western people who speak directly and clearly.Generally, Arabs like to talk excessively and have a lot of small talks (mujamalah).For example, when an Arab meets his friend, he will not just ask how he is doing.It is not enough to just express one phrase.The word `la` (no) told by the guest is not enough to stop the extra food and drink.To make sure that the guest is full, the guest must repeat `la` several times, adding an oath (Wallah) if necessary.-Arabic non-verbal gestures; in speaking, Arabs don't just use their mouths, but also their hands.There are many typical Arabic sign languages that Indonesians need to understand.For example, as a substitute for or accompanying the words, `Wait a moment!` or `Be patient!` when called or crossing the road (while a vehicle is approaching), Arabs will place all their fingers together facing upwards.If someone does not understand this sign language, there might be a misunderstanding or even an accident.-Since childhood, Arabs are trained to express feelings as they are, for example by crying or screaming.
They are used to loud voices to express something.However, their loud voices might be interpreted as anger by someone who doesn't understand this style of speech.Many Indonesian workers in Saudi Arabia who do not understand this thing may identify their employer's loud voice with anger, even though they are not angry.On the other hand, the smiles of female migrant workers towards Arab men, which are intended as an expression of politeness, may be considered a 'temptation'.This kind of intercultural misunderstanding can lead to something more serious.-The traffic rules or signs that apply in Saudi Arabia are different from those that apply in our country.
In Indonesia, every public road user must be on the left lane of the road.This is different from what applies in Saudi Arabia, where they ride their vehicles on the right side.The high frequency of traffic accidents by Indonesian drivers is allegedly due to their misunderstanding of these traffic signs.-For Arabs, the house is truly a part of privacy that not everyone can access easily.The design of the house, which is generally in the form of a rectangular terraced building, describes the building of a fortress that is difficult to penetrate.Every house is always closed with a high wall fence and the gate can be multi-layered.What's behind the wall is privacy that should not be consumed by the public.Therefore, one cannot look around and observe the front doors of Saudi houses or just look at the top of the building.Someone who breaches their house can be accused of `harami/ali baba` alias `thief` or kidnapper who is stalking his prey.Even more, so if the perpetrator does not speak Arabic, he will explain the real reasons for his actions.
Provision of Arabic Language
Many negative cases experienced by migrant workers in Arab countries are mostly because they cannot speak Arabic well to communicate.Because of the deadlock in communication between the two parties (employers and workers), there are often unexpected things happen.Therefore, it is important for prospective workers who want to work in Arab countries to master Arabic for building good communication.
Those working in Arab countries have different needs from madrasahs or pesantren students for the Arabic language.Thus, current learning materials applied by formal and informal educational institutions have so far not been following the needs of Indonesian migrant workers.What they need is a language of communication, not theories.Because most migrant workers in Arabia usually work in informal sectors, they also need to learn daily Arabic language (amiyah), and it is often different from the formal one (fusha).
Arabic is one of the major languages in the world spoken by more than 200 million people.It is used officially in more than 22 countries that are members of the Arab League.In general, Arabic has two varieties.The first is Fusha (standard Arabic) and the second is Amiyyah (informal Arabic).The first is generally used in official communications like in schools, offices, seminars, diplomas, news, books, magazines, official documents, and so on.However, this kind of language is also sometimes used to communicate in daily activities.This variety is much studied in Indonesia as the language of the Qoran and hadith, the sources of the Islamic religion embraced by the majority of the Indonesian population (Musgamy, 2014).Meanwhile, the second type is often used for communication purposes or daily conversations by most citizens, even if they are educated or illiterate.Amiyah Arabic is inseparable from the official one (fusha) that we learn in madrasahs.It's just that it doesn't fully comply with Arabic rules or grammar.Almost all Arab countries have their own Amiyah language (slang).In other words, the amiyah language used by Egyptians is slightly different from those in Saudi Arabia, Lebanon, Algeria, Morocco, and others.The language of 'amiyah even varies within the same country.However, they can be quickly adapted, because these dialects are not completely different from Arabic which they both understand and use.In other words, they understand each other, even though they cannot use other nations' dialects.
The 'amiyah language is usually a pronunciation of the official Arabic language with certain patterns that are relatively fixed.Speech lightening in informal communication is a common phenomenon that occurs in all languages in the world.Such a way of speaking is normal for language owners because it is part of pragmatic principles and efficiency in communicating (Hindun, 2012).However, this will become strange for foreigners who study it for the first time.
To find out where a friend is going, the Javanese can simply ask "ngondi"?Foreigners who learn the official variety of Javanese will not understand this question.There are also no Javanese dictionaries that contain it, so it seems as if it does not come from the original Javanese language.This expression is an abbreviation and lightening of the pronunciation of the Javanese fusha "arep lungo menyang endi?" The same thing also applies in Arabic.The amiyah language, which is considered strange for Indonesians who have studied Arabic fusha for decades, can usually be traced to its fusha roots.The Egyptians, for example, pronounce "q" with an "a" sound, and j with a "g" sound.Algerians value the hamza as half of a consonant so that when it comes to life after another letter that dies, the two vowels are swapped.Some phrases that seem strange also usually come from the fusha language where the pronunciation is lightened and accelerated.Elli gabak hina? in Egyptian amiyah, can be traced from ma al-ladzi jabaka huna (What brought you here)?.Wasy rack? in Algerian dialect is from wa ayyu syai araka (how are you)?.Sylunkum?that is commonly spoken by Syrians and Lebanese from ayyu syai launukum (how are you)?.andDahin in the Saudi Arabian dialect stands for hadza al-hin (this time).Because the two varieties of Arabic belong to the same family, Indonesian workers must master both of them, even if with a portion that fits their work needs.Sentences in Arabic are not a pile of words, but a series of words arranged systematically with certain rules to express meanings according to the speaker' will.Thus, knowledge of the fusha language will accelerate people learning amiyah.Amiyah Arabic is a complementary requirement for Indonesian migrant workers in Arabia as well as Indonesian pilgrims who live in Saudi Arabia for a limited period of time (Maknun, 2016).
Arabic curriculum for prospective migrant workers
As has been described previously, migrant workers in Arab countries must prepare two things to be able to communicate at work.They relate to the language and communication culture of the Arabs which are often different from the Indonesian culture.To accurately determine what specific competencies they must master in learning Arabic, there must be a proper assessment need.
Hutchinson and Waters proposed three ways to find out the target language learning needs for specific purposes (Nation & Macalister, 2010), namely: 1. necessities, including what must be mastered by students to be able to use the language learned in certain situations that they will experience 2. Gaps, namely the gaps between what should be mastered and what the learner has mastered in terms of the language being studied 3. Wants, namely what the learner of a foreign language wants to learn The prediction of the basic needs of Indonesian migrant workers is determined by experts based on their research and experience by taking an inventory of the vocabulary, structures, and expressions needed by Indonesian migrant workers to communicate in places and linguistic situations that they may encounter while working in other countries.Thus, the Arabic language learning curriculum for migrant workers can be developed using a notional-situational model.Even though there are many drawbacks like insufficient knowledge of the language being studied, this model curriculum is the best for foreign language learning with special purposes (Richard, 2001).The reason is that this curriculum presents language in context and teaches its use in a practical, communicative, and fast way. | 2024-02-26T16:03:51.312Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "81757a8a1808dbf84eba31ad3c5c3d6068894104",
"oa_license": "CCBYNC",
"oa_url": "https://zienjournals.com/index.php/tjms/article/download/3783/3136",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ace7b7c932adfe427f70a5df5414795fc5d67c60",
"s2fieldsofstudy": [
"Linguistics",
"Sociology"
],
"extfieldsofstudy": []
} |
203720835 | pes2o/s2orc | v3-fos-license | Babesia divergens glycosylphosphatidylinositols modulate blood coagulation and induce Th2-biased cytokine profiles in antigen presenting cells
Glycosylphosphatidylinositols (GPIs) are glycolipids described as toxins of protozoan parasites due to their inflammatory properties in mammalian hosts characterized by the production of interleukin (IL)-1, IL-12 and tumor necrosis factor (TNF)-α. In the present work, we studied the cytokines produced by antigen presenting cells in response to ten different GPI species extracted from Babesia divergens, responsible for babesiosis. Interestingly, B. divergens GPIs induced the production of anti-inflammatory cytokines (IL-2, IL-5) and of the regulatory cytokine IL-10 by macrophages and dendritic cells. In contrast to all protozoan GPIs studied until now, GPIs from B. divergens did not stimulate the production of TNF-α and IL-12, leading to a unique Th1/Th2 profile. Analysis of the carbohydrate composition of the B. divergens GPIs indicated that the di-mannose structure was different from the evolutionary conserved tri-mannose structure, which might explain the particular cytokine profile they induce. Expression of major histocompatibility complex (MHC) molecules on dendritic cells and apoptosis of mouse peritoneal cells were also analysed. B. divergens GPIs did not change expression of MHC class I, but decreased expression of MHC class II at the cell surface, while GPIs slightly increased the percentages of apoptotic cells. During pathogenesis of babesiosis, the inflammation-coagulation auto-amplification loop can lead to thrombosis and the effect of GPIs on coagulation parameters was investigated. Incubation of B. divergens GPIs with rat plasma ex vivo led to increase of fibrinogen levels and to prolonged activated partial thromboplastin time, suggesting a direct modulation of the extrinsic coagulation pathway by GPIs.
Introduction
Babesiosis caused by Babesia divergens, a protozoan parasite of the Apicomplexa phylum transmitted by the Ixodes ricinus tick, is an emerging disease in both human beings and animals [1]. Symptomatic patients present malaria-like febrile illness, but as babesiosis can be asymptomatic, it represents a major transfusion threat [2e4]. Only two standard antimicrobial combinations currently exist to treat human babesiosis: atovaquone and azithromycin, effective and well tolerated, or clindamycin and quinine, especially useful in severe cases, but unfortunately poorly tolerated [1]. Erythrocyte exchange apheresis is required to complete the treatment [5].
During pathogenesis due to B. divergens, progression of the inflammation-coagulation auto-amplification loop leads to thrombosis, ischemia and in some rare cases, to death [6]. This might be due to Disseminated Intravascular Coagulation syndrome, an imbalance in haemostasis defined by an elevation of procoagulant factors (thrombin-antithrombin [TAT] and fibrin) and a decrease in antithrombin levels, as observed in dogs naturally infected with B. canis [7]. In animals experimentally infected with B. canis, both activated partial thromboplastin time (APTT) and level of fibrinogen in plasma increased in correlation with lower number of platelets [8].
Regulation of pro-inflammatory Th1 and anti-inflammatory Th2 cytokine production by Babesia has been studied in vitro and in vivo. B. bovis merozoites (extracellular form) increased NO production and IL-1b, IL-12p40, TNF-a and IL-10 mRNA expression in bovine monocytes, but not in dendritic cells [9]. IFN-g, but no IL-10 was produced by blood mononuclear cells from B. divergens-infected sheep stimulated in vitro with merozoite protein extract [10]. High levels of the regulatory cytokine IL-10 were detected in the serum of B. microti-infected mice and of B. rossi-infected dogs [11,12]. In B. microti-infected mice, the progressive fall in parasitemia from peak values at 14 days post infection was inversely proportional to the rise of IL-10 and parasite-specific IgG production, suggesting a role for IL-10-linked antibody responses in the reduction and clearance of the parasite [13].
To determine which parasite fraction is able to stimulate cells, a membrane-enriched fraction of B. bovis merozoites or supernatants from B. bovis-stimulated CD4 þ T-cell lines containing IFN-g and TNF-a have been tested on bovine macrophages and both induced production of NO, partially responsible for parasite replication inhibition and phagocytosis of infected erythrocytes in vitro [14,15]. Phosphatidic acid from a B. bovis attenuated strain and the combination of phosphatidylserine-phosphatidylinositol from attenuated and virulent strains were able to increase Th1 (TNF-a, IL-6), but not Th2 (IL-4) and regulatory (IL-10) cytokine production by mouse peritoneal macrophages in a TLR (Toll-Like Receptor)2dependent pathway [16]. Glycosylphosphatidylinositols (GPIs) are abundant glycolipids in the membranes of all apicomplexan parasites. GPIs have been determined as parasite toxins participating in pathogeny due to their pro-inflammatory properties [17]. In the present study, we have investigated the role of B. divergens GPIs in the modulation of antigen presenting cells in terms of cytokine production, major histocompatibility molecule expression and apoptosis. In addition, direct effect of GPIs on the regulation of the coagulation system was explored ex vivo.
Metabolic labelling of B. divergens GPIs
Merozoites of B. divergens strain Rouen 1987 were maintained in vitro in human erythrocytes (5% packed cell volume in Roswell Park Memorial Institute [RPMI] 1640 medium with 10% human serum). Metabolic labelling of B. divergens was performed in 20 mL glucose-free RPMI 1640 medium (Sigma) supplemented with 20 mM fructose, 25 mM Hepes and 0.5 mCi D-[6-3 H]-glucosamine hydrochloride (Hartmann Analytic GmbH) for 4 h at 37 C. After centrifugation, erythrocytes were lysed with a solution of NaCl at 0.2%, neutralized by the addition of same volume of NaCl at 1.6%. After centrifugation, the pellet was frozen at À80 C and washed in phosphate buffered saline (PBS). This step permitted the merozoites to detach from residual erythrocyte membranes. Glycolipids of free merozoites were extracted with chloroformmethanol-water (10:10:3, by volume) by sonication (ultrasound bath Branson 3200, 47 MHz) and recovered in the n-butyl alcohol phase after partitioning between water-saturated n-butyl alcohol and water (1:1, by volume) by centrifugation. Samples were separated by TLC on Silica Gel 60 plate (Merck) using a chloroformmethanol-water (4:4:1, by volume) solvent system. Silica plates were scanned for radioactivity using a Berthold LB 2842 linear analyser.
Purification of individual GPIs of B. divergens
Large amounts (1 Â 10 10 ) of non-labelled merozoites were collected and GPIs not linked to proteins were extracted with chloroform-methanol-water (10:10:3, by volume) by sonication, dried under a nitrogen stream and recovered in the n-butyl alcohol phase after partitioning between water-saturated n-butyl alcohol and water (1:1, by volume) by centrifugation. GPIs were precipitated under a stream of nitrogen to remove contaminating phospholipids [18]. GPIs were then separated by TLC on 0.5 mm silica gel 60 plate (Merck, GPIs from 5 Â 10 9 parasites/plate) using a chloroform-methanol-water (4:4:1, by volume) solvent system, with spots of labelled GPIs used as tracers. TLC plates were scanned for radioactivity using a Berthold LB 2842 linear analyser and areas corresponding to individual GPIs were scraped off the plate, reextracted with chloroform-methanol-water (10:10:3, by volume) by sonication (only half of the material is estimated to be recovered) and residual silica was removed by water-saturated n-butyl alcohol/water partition. GPIs were stored at À20 C in n-butyl alcohol until use. Absence of endotoxin in each GPI was checked with the Pierce® Limulus Amebocyte Lysate Chromogenic Endotoxin Quantitation kit according to the manufacturer's instructions (Thermo Scientific).
Quantification and composition analysis of carbohydrates of B. divergens GPIs
The method is based upon quantification of the GlcN residues of the GPIs being converted to Man 3 -anhydromannitol (AHM) as described elsewhere [19].
Composition analysis of phosphatidylinositol moieties of B. divergens GPIs
Individual B. divergens GPIs were dried and dissolved in sodium acetate followed by the addition of sodium nitrite. The PI moieties released by deamination were partitioned into n-butyl alcohol. The n-butyl alcohol extracts were dried, suspended in chloroformmethanol and analysed by negative ion electrospray mass spectrometry (ES-MS, ABsciex 4000 QTrap). Daughter ion ES-MS-MS spectra were obtained with a collision voltage of 35e50 V [20]. were stimulated at 37 C in 5% CO 2 atmosphere for 24 h with individual GPIs purified from 5 Â 10 8 (10 8 for HEK-Blue™ cells) merozoites of B. divergens or with 200 ng/mL of lipopolysaccharide (LPS from Escherichia coli serotype 055:B5, Sigma). The amount of GPIs needed for one experiment was dried under a nitrogen stream to remove the solvent n-butyl alcohol. GPIs were suspended in culture medium (in 100 or 300 mL/well for RAW 264.7 and HEK-Blue™ or PECs and SRDC, respectively) by sonication. For the negative control, cells were incubated with same volume of n-butyl alcohol dried and suspended in medium by sonication. Levels of cytokines were quantified in the cell culture supernatants by using specific sandwich enzyme-linked immunosorbent assay (ELISA) from Affymetrix eBioscience or MACSPlex Cytokine 10 Kit, mouse from Miltenyi Biotec GmbH, following the manufacturer's instructions. SEAP (secreted embryonic alkaline phosphatase) reporter gene activity of HEK-Blue™ cells was measured at 630 nm after addition of QUANTI-Blue™ detection medium (InvivoGen) to supernatant.
Measurement of MHC expression and apoptosis
SRDC and PECs were stimulated as described above (2.5.). Supernatant was centrifuged at 300Âg to pool floating cells and attached cells after their detachment using accutase (Affymetrix eBioscience). After centrifugation at 300Âg, supernatant was removed for quantification of cytokines (section 2.5.) and pelleted SRDC were suspended and saturated for 30 min on ice in PBS containing 1% bovine serum albumin, 2% mouse serum and 0.1% azide (PBS-BSA-azide). After centrifugation, 3 Â
Measurement of coagulation parameters
Blood was taken from non-infected adult male Wistar rat anesthetized with 60 mg/kg pentobarbital. Nine volumes were mixed with one volume of trisodium citrate at 0.109 M (32%). Plasma was separated by centrifugation during 15 min at 2500Âg. Half of the plasma sample was incubated for 30 min at room temperature with GPIs of B. divergens (from 5 Â 10 8 merozoites for 200 mL plasma). Before adding GPIs, the solvent n-butyl alcohol was dried under a nitrogen stream and GPIs were suspended in 10 mL of Owren-koller buffer by sonication. Half of the plasma sample was incubated for 30 min at room temperature with 10 mL of Owrenkoller buffer containing dried n-butyl alcohol alone. Negative and positive controls of coagulation, all reagents and Star 4 coagulation analyser were from Stago (France). Fibrinogen level was measured on sample diluted at 1/10 and 1/20 (final volume of 100 mL) with Fibri-Prest® Automate 2 (thrombin þ heparine inhibitor) reagent, PT was measured on 50 mL sample with Neoplastine® CI Plus (thromboplastin þ calcium) reagent and APTT was measured on 50 mL sample with C.K. Prest® (cephalin þ kaolin activator) or Cephascreen® (cephalin þ polyphenol activator) reagent, according to the manufacturer's instructions. All results are expressed in time of coagulation and given in seconds.
Statistics
The non-parametric one-way ANOVA test followed by the Dunnett's multiple comparison test (for not significantly different variances determined with Bartlett's test), the non-parametric Kruskal-Wallis test or the Friedman test followed by the Dunn's multiple comparison test (for significantly different variances determined with Bartlett's test) and the Wilcoxon test were used for statistical evaluation (GraphPad Prism 7).
Analysis of carbohydrate and lipid composition of B. divergens GPIs
To detect the GPIs of B. divergens for the first time, merozoites were metabolically labelled with [ 3 H]-glucosamine during in vitro culture within erythrocytes. Merozoites were separated from erythrocyte membrane by osmotic lysis and freezing. After their extraction with organic solvents, the different GPI species were separated by TLC and detected with a Berthold analyser. As shown in Fig. 1, two GPIs (3 and 8) were predominantly labelled, while eight GPIs were less expressed in B. divergens merozoites. In order to study biological effects of GPIs on mammalian cells and to determine their carbohydrate and lipid contents, GPIs were extracted from large amounts of merozoites. Contaminant phospholipids were completely eliminated by precipitation of GPIs [18]. After the separation of GPIs on preparative TLC, silica was scraped off according to the full width at half maximum of each peak. By this way, individual GPIs were highly purified. Based upon AHM and inositol quantification, GPI3 was the most abundant GPI species with more than 400e500 Â 10 4 copies per merozoite (Table 1). In contrast, GPI10 was the lowest represented GPI species with less than 3 Â 10 4 copies per merozoite. Except for GPI8, quantification was consistent with the counts of radioactivity on Fig. 1. The carbohydrate composition analysis by GC-MS of B. divergens GPIs revealed the presence of a maximum of two mannoses ( Table 2). Galactose was detected on GPI1 and GPI2, but not on the other GPI species (Table 2). Mass spectroscopy analysis of the PI moieties showed that each GPI species exists in several forms with different fatty acid methyl esters (from 16:0 to 24:0) or diacyls (from 34:1 to Table 3: results for all GPI species). Altogether, composition analysis gives the proposed GPI-pathway illustrated in Each GPI was submitted to base hydrolysis, deamination/reduction, methanolysis and TMS derivatisation for its subsequent analysis by GC-MS. Single ion monitoring of m/z 273 was selected to detect AHM and m/z 318 to detect both scyllo-and myo-inositol. The peak areas of the corresponding standards were used to calculate the molar relative response, allowing quantification of both AHM and myo-inositol in the samples. Values are means of molar ratios ± standard error of two replicates (equivalent to 2 Â 10 8 parasites, each). AHM: anhydromannitol. Table 2 Carbohydrate composition analysis of individual GPIs of B. divergens.
Each GPI was submitted to base hydrolysis, deamination/reduction, methanolysis and derivatisation for its analysis by GC-MS. The peak areas of the corresponding standards were used to calculate the molar relative response, allowing determination of monosaccharide content. GlcNAc levels were calculated from AHM generated during deamination.
exclusive palmitoylation on the inositol, as there was no fragment ion at either 612 or 668 m/z, which would correspond to myristoyl or stearoyl, respectively (GPI10 spectra on Fig. 2B).
Production of cytokines
In order to study the role of B. divergens GPIs in cytokine production, macrophages and dendritic cells were stimulated in vitro. For this, macrophages of the RAW 264.7 cell line, non-elicited peritoneal exudate cells (PECs, rich in macrophages), or dendritic cells of the SRDC cell line were stimulated during 24 h with the ten individual GPIs extracted from 5 Â 10 8 merozoites of B. divergens and cytokine levels were quantified in the cell culture supernatant by sandwich ELISA. B. divergens GPIs induced the production of the anti-inflammatory cytokines IL-5 and the regulatory cytokine IL-10 ( Fig. 4). The range of produced cytokines was different between the three cell types and varied according to the GPI species. After their stimulation with B. divergens GPIs, PECs produced significantly higher levels of IL-5 ( Fig. 4A upper histogram, p ¼ 0.003 Kruskal-Wallis test), but not of IL-10 ( Fig. 4A lower histogram, p ¼ 0.109 Kruskal-Wallis test) compared to the control condition, certainly due to important variation. In the presence of B. divergens GPIs, RAW 264.7 macrophages showed the lowest increases in IL-5 levels ( Fig. 4B upper histogram, p ¼ 0.015 one-way ANOVA test), but the most significant global increase in IL-10 levels (Fig. 4B lower histogram, p ¼ 0.0003 Kruskal-Wallis test). Increases of IL-5 levels produced by SRDC in response to the GPIs were highly significant ( Fig. 4C upper histogram, p ¼ 0.0001 Kruskal-Wallis test), while increases of IL-10 production by this cell type were modest (Fig. 4C lower histogram, p ¼ 0.0196 Kruskal-Wallis test). Regarding individual GPIs, PECs were significantly stimulated by GPI7 to GPI10 to produce IL-5 and by GPI1, GPI3 and GPI9 to produce IL-10; RAW 264.7 macrophages were significantly stimulated by GPI3 to GPI8 to produce IL-5 and by all GPIs except GPI7 to produce IL-10; SRDC were significantly stimulated by GPI3 to GPI6, GPI9 and GPI10 to produce IL-5 and by GPI7 to produce IL-10 (Figs. 4A, B, C). Similar levels of IL-5 and IL-10 were obtained with higher amounts of GPIs (data not shown). In contrast to GPIs of other protozoan parasites studied until now, GPIs of B. divergens did not enhance the production of IL-1b, IL-12 and TNF-a by any of the cells tested, whereas bacterial lipopolysaccharide used as positive control did (IL-1b: 35.1 ± 1.7 pg/mL with butanol, from 30.4 ± 1.6 to 34.3 ± 1.7 pg/mL with GPIs, 128.1 ± 1.7 pg/mL with LPS; IL-12: 1.9 ± 2.8 pg/mL with butanol, from 1.7 ± 0.6 to 5.5 ± 1.2 pg/mL with GPIs, 51.7 ± 2.3 pg/ mL with LPS; TNF-a: Fig. 4D). To further characterise the cytokine profile induced by B. divergens GPIs, a larger panel of cytokines was quantified by flow cytometry in the supernatant of RAW 264.7 macrophages. Levels of IFN-g (Th1), IL-4 (Th2), GM-CSF and IL-17A (Th17) were under the detection threshold (data not shown). Compared to the control condition, the GPIs induced very low levels of IL-2, but significantly higher levels of IL-23, representative of a Th17 profile (Fig. 4E). Altogether, these results showed that the GPIs of B. divergens orientate antigen presenting cells towards a unique Th1/Th2/Th17 profile compared to the pro-inflammatory profile induced by GPIs of all other protozoan parasites explored until now.
TLR signalling
To determine whether B. divergens GPIs are ligands of TLRs, HEK293T cells modified to express alkaline phosphatase reporter gene after TLR2 or TLR4 ligation were stimulated with B. divergens GPIs. Alkaline phosphatase activity was detected in supernatants of HEK-TLR2 cells stimulated with GPI2 to GPI9 (Fig. 5A, p ¼ 0.024 Kruskal-Wallis test) and in supernatants of HEK-TLR4 cells stimulated with GPI2 to GPI10 (Fig. 5B, p ¼ 0.0483 Kruskal-Wallis test) with higher activities of the last cell line, suggesting a predominant signalling through TLR4.
Modulation of MHC expression and apoptosis
Expression of MHC molecules has been explored on SRDC stimulated with B. divergens GPIs by flow cytometry after labelling with specific antibodies. No difference in MHC class I molecule expression was observed in the presence or absence of GPIs ( Fig. 6A), whereas B. divergens GPIs decreased MHC class II molecule expression at the surface of the cells (Fig. 6B). Although the difference was significant for GPI3 and GPI6, the global statistical analysis is not significant (p ¼ 0.22 Kruskal-Wallis test). Because apoptosis is difficult to evaluate in immortalized cell lines, PECs primary cells have been chosen for the study of apoptosis in response to B. divergens GPIs, determined by flow cytometry after labelling with annexin V-FITC. The percentage of necrotic cells (annexin V-FITC and propidium iodide double positive cells) was less than 2% in all conditions (data not shown). The percentage of apoptotic PECs (annexin V-FITC positive cells and propidium iodide negative cells) was relatively elevated in the control culture (Fig. 6C), certainly due to the culture conditions (no serum to avoid interference with GPIs). All GPIs of B. divergens increased the basal percentage of apoptotic cells, but not significantly (p ¼ 0.09 Friedman test). Individually, apoptosis was significantly increased by GPI1, GPI2, GPI4 and GPI8 (Fig. 6C).
Ex vivo regulation of coagulation parameters by B. divergens GPIs
We finally asked us whether GPIs could regulate the coagulation system. Due to the difficulty to produce large amounts of GPIs, they were tested ex vivo and not administered in vivo. Blood was taken from rats to get enough volume of plasma and levels of fibrinogen, prothrombin time (PT) and APTT were measured after 30 min incubation with GPIs of B. divergens. As shown in Fig. 7A, the coagulation times reflecting the levels of fibrinogen were slightly increased in the presence of GPIs compared to the control (p ¼ 0.02 Wilcoxon test), whereas PT (Fig. 7B) was not increased by the GPIs of B. divergens (p ¼ 0.06 Wilcoxon test). APTT (Fig. 7C) was significantly increased by B. divergens GPIs ex vivo when tested with two different activators (polyphenol activator: p ¼ 0.002 Wilcoxon test, kaolin activator: p ¼ 0.004 Wilcoxon test). These results suggest a direct effect of GPIs on coagulation factors of the intrinsic pathway.
Discussion
By using metabolic labelling, ten different GPIs, including biosynthetic intermediates of B. divergens have been detected for the first time in parasites cultivated in erythrocytes. Only five GPIs have been detected in B. bovis, but the method used to visualize them after TLC (iodine vapors) was not sensitive enough to detect minor spots [21]. The GPI-profile of P. falciparum showed eight distinct peaks identified and all GPI species except one (Pfz) carry a fatty acid on the inositol ring [22]. Amongst other apicomplexan parasites, the GPI of the 17-kDa antigen from Cryptosporidium parvum contains an acylated inositol [23], whereas the inositol ring is not substituted in any of the mature GPIs of Toxoplasma gondii [24]. Myristic acid was the predominant modification of the inositol of GPIs purified from merozoite surface proteins-1 and -2 of the FCBR strain of P. falciparum [25], while palmitic acid and myristic acid represent 90% and 10%, respectively, of the acyl chain on inositol of GPIs from the FCR-3 strain of P. falciparum [26]. In a previous study on B. divergens, the GPI anchor of the Bd37 major surface antigen was identified to have a palmitic acid substitution on the inositol [27,28]. Our results confirm that palmitic acid is exclusively present on GPI4, GPI5, GPI6, GPI7 and GPI10 intermediates in B. divergens. In a previous work, we could find the sequences coding for only two mannosyltransferases (PIGM and PIGV, but not PIGB, the third mannosyltransferase) in the genome of B. microti, suggesting a di-mannose structure [28,29]. We could not rule out that one of the two mannosyltransferases or another enzyme could add a third mannose. However, a glycan structure of Man 2 -GlcN has been identified in the main GPI species isolated from merozoites of B. bovis [21]. Here, we found the same particular structure, confirming that Babesia does not have the conserved GPI core glycan observed in all other eukaryotes. As no homolog of PIGB gene could be found in any piroplasmida genomes sequenced so far, we suggest that the presence of only two mannoses in the core glycan of the GPIs is a key feature of these parasites. Galactose is present in GPIs of Trypanosoma sp. and of Entamoeba histolytica [30,31], but this is the first time that this hexose is identified in GPIs of an Apicomplexa. Further analyses are needed to confirm the carbohydrate structure and to identify the final GPI anchor with ethanolamine phosphate, removed here through hydrolysis. The lipid moiety of GPI4 to GPI10 have palmitic, stearic, eicosanoic, docosanoic and tetracosanoic saturated fatty acids and oleic unsaturated fatty acid, whereas GPI1, GPI2 and GPI3 have only the longer chains (eicosanoic, docosanoic and tetracosanoic fatty acids) and no unsaturated fatty acids. In B. bovis, the structure of the main GPI also contains predominantly docosanoic and tetracosanoic fatty acids [21].
After infection with protozoan parasites, cells of the immune system produce Th1 cytokines playing an important role in the innate immune response and in the development of adaptive immunity. On the other hand, excess of Th1 cytokines is deleterious for the host. GPIs of P. falciparum have been defined as toxins eliciting hypoglycemia and excess of TNF-a production related to pyrexia and cachexia in mice [32]. GPIs of P. falciparum induced the production of IL-1b, TNF-a and NO by thioglycollate-elicited peritoneal macrophages [32,33]. Other studies have shown that GPIs of T. gondii [19,34], T. brucei [35] and T. cruzi [36] also induce Th1 cytokines (IL-12, TNF-a) by macrophages. Surprisingly, no Th1, but Th2/Th17 cytokines were secreted by antigen presenting cells stimulated by B. divergens. It would be interesting to expand the panel of cytokines analysed (i.e. families of IL-6 and TGF-b) to refine the profile. Studies on GPIs of T. brucei and P. falciparum have shown a dose-dependent effect on macrophages [35,37]. In the case of T. gondii GPIs, we have calculated that addition of individual GPI purified from 10 8 tachyzoites to a final volume of 200 mL of medium corresponds to a concentration of 1 mM [38]. Compared to LPS, 100fold higher concentrations of T. gondii GPIs were required to induce similar TNF-a levels by RAW 264.7 macrophages (personal observation). Furthermore, the dose-response was not linear: around 500 pg/mL were produced with GPIs from 10 6 and 10 7 tachyzoites to jump to about 2000 pg/mL with GPIs from 10 8 and 10 9 tachyzoites, suggesting an all-or-nothing threshold to be passed for optimal cell response [34]. The same phenomenon seems to occur with B. divergens GPIs, since increased amounts did not lead to higher cytokine levels (data not shown).
Cytokine production was dependent on the complex B. divergens GPI species/target cell/cytokine. In a recent study, we have also shown that the cytokine pattern produced in response to GPIs of N. caninum depends on the GPI species, the type of cells (cell lines vs. primary cells) and the origin of the cells (murine vs. bovine) [39]. Thus, cells from natural host of B. divergens should be tested to further characterise their biological effects. Concerning the GPI structure, the present results contrast with those obtained with T. gondii and P. falciparum. Indeed, the six different T. gondii GPIs (GPI I to GPI VI) and the 8 different P. falciparum GPI species (Pfa to Pfq) were all able to induce TNF-a production by RAW 264.7 macrophages [34,40]. However, induction of IL-5 or IL-10 production could involve different signalling pathways than TNF-a production. In T. brucei, it has been demonstrated that the galactose side chain of the glycosyl-inositol-phosphate (GIP) moiety is responsible for TNF-a production by macrophages [41]. B. divergens GPI1, with two Gal residues was unable to induce IL-5 in any cell type studied. This is maybe related to the absence of TLR2/4 signalling triggered by this GPI. However, GPI2 with one Gal residue was able to signal through TLR2/4, but also did not induce IL-5 production. TNF-a production induced by P. falciparum GPIs is mediated mainly through TLR2 and to a lesser extent through TLR4 [42]. On the contrary, whole GPIs of T. gondii activated TLR4 in the CHO cell model, but their core glycans and diacylglycerols potentially separated by macrophage phospholipases, activated both TLR2 and TLR4 in macrophages [43]. Complementary experiments are required to determine whether B. divergens GPIs stimulate antigen presenting cells only after their cleavage by specific enzymes. In the past, we demonstrated that GPIs not linked to proteins are expressed at the surface of T. gondii tachyzoites and clustered in lipid rafts [44]. Furthermore, GPIs not linked to proteins have been detected in the culture supernatant of the apicomplexan parasite Neospora caninum, suggesting their secretion by the tachyzoites [39]. Both surface and secreted GPIs not linked to proteins might also exist in B. divergens and, via their recognition by TLRs, contribute to the global response occurring at the parasite synapse when the merozoites are released from the erythrocytes.
The fatty acid moieties could also be responsible for distinct cell responses, but GPI1 and GPI2 have same chains in very close proportions. Lipids composition is the only difference between GPI4, GPI5 and GPI6. The three species induced similar cytokine profiles, but GPI4 significantly increases apoptosis and GPI6 significantly decreased MHC class II expression. A study on the effect of GPIs on MHC expression has demonstrated that GPIs of T. gondii increased expression of MHC class I and II molecules at the surface of bone marrow-derived macrophages [45]. The reduction of MHC class II expression confirms the opposite effect of GPIs of B. divergens on antigen presenting cells. Apoptosis of non-infected cells participates in pathogenesis of diseases due to virus, bacteria, or parasites. For example, host cell exosomes containing HIV Negative Factor or Ebola VP40, caused apoptosis of bystander lymphocytes, contributing to dysregulation of the immune system and higher viral replication [46,47]. In precedent studies, we have demonstrated that GPIs of P. falciparum increased apoptosis of primary rat cardiomyocytes after 48-h incubation [48], whereas GPIs of T. gondii were not able to induce apoptosis of HL60 cells [49]. In the present work, GPIs of B. divergens increased apoptosis. Soluble factors secreted in culture supernatants of macrophages infected with Mycobacterium tuberculosis were responsible for TNF-a-independent apoptosis of bystander Jurkat T cells [50]. In another study, the 38-kDa antigen secreted by M. tuberculosis induced macrophage ER-stress-induced apoptosis through activation of the TLR2/4-MAPK pathway and the production of reactive oxygen species [51]. Since B. divergens GPIs activated TLR2/4, but did not induce TNF-a production, apoptosis of PECs might be due to ER stress.
GPIs of B. divergens prolonged the activated partial thromboplastin time, suggesting a direct effect of GPIs on coagulation factors of the intrinsic pathway. In the same way, GPIs of other Babesia species could be responsible for the increase of APTT observed in vivo in serum of infected animals [8]. GPIs slightly increased coagulation time when the assay used to quantify fibrinogen levels was applied. Since an increase in these levels is not possible ex vivo, we can only conclude that GPIs does not degrade or interfere with fibrinogen. In vivo, GPIs might regulate other coagulation parameters through cell activation, but this could not be evaluated here.
Altogether, our study highlighted a unique biological profile of GPIs of B. divergens on antigen presenting cells compared to GPIs from all other protozoan parasites studied until now. This first study lays the foundations for understanding the roles played by GPIs in inflammation and coagulation during babesiosis. Th1/Th2 balance controls the fate (survival or death) of animals and human beings infected with protozoan parasites, especially the intracellular ones. The Th2 polarization of antigen presenting cells associated with the decrease in MHC class II expression induced by the GPIs of B. divergens are in favour of tolerance for the pathogen. Nevertheless, other molecules from Babesia or from the host might act in synergy with or inhibit the GPIs, as shown for fatty acids [40,52], leading to the production of higher Th1 cytokines during babesiosis.
Financial support
This work was supported by the University of Tours, France (to IDP and FDG), the University of Montpellier, France (to SD and EC), the Deutsche Forschungsgemeinschaft, Germany, project grant SCHW 296/18-2 (to RTS), the Wellcome Trust, United Kingdom, project grant 093228 (to TKS) and the Campus France, France/ Deutscher Akademischer Austauschdienst, Germany, PHC PROCOPE 24931RE (to RTS and EC). The funding source has no involvement in the conduct of the research and preparation of the article.
Author contributions
FDG, RTS and EC designed the research; FDG, TKS, SD, JS, VB and EC performed purification and biochemical studies; FDG, CD performed biological studies; TKS, SD, IDP, RTS and EC contributed to reagents/materials/analysis tools; FDG and TKS wrote the paper and all authors revised its final version.
Compliance with ethical standards
The experimental protocol, carried out in accordance with the European Union Directive (2010/63/EU), was approved by the Valde-Loire Ethics Committee for Animal Experimentation and the French Ministry for Research (permit number: APAFIS#6649-2016090711251954-v2).
Declaration of competing interest
The authors declare that they have no conflicts of interest and no competing interests including employment, consultancies, stock ownership, honoraria, paid expert testimony, patent applications/ registrations and grants or other funding in relation with the contents of this article. | 2019-07-26T08:41:33.078Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "a3ba2610016b20286b93f5cd3b59ec6cd4a989e4",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.biochi.2019.09.007",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d34f81de3a3c37fba523dd7501deff7e9f11a8d4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
18347740 | pes2o/s2orc | v3-fos-license | Accuracy Analysis Mechanism for Agriculture Data Using the Ensemble Neural Network Method
: With the rise and development of information technology (IT) services, the amount of data generated is rapidly increasing. Data from many different places are inconsistent. Data capture, storage and analysis have major challenges. Most data analysis methods are unable to handle such large amounts of data. Many studies employ neural networks, mostly specifying the number of hidden layers and neurons according to experience or formula. Different sets of network topologies have different results, and the best network model is selected. This investigation proposes a system based on the ensemble neural network (ENN). It creates multiple network models, each with different numbers of hidden layers and neurons. A model that does not achieve the accuracy rate is discarded. The proposed system derives the weighted average of all remaining network models to improve the accuracy of the prediction. This study applies the proposed method to generate agricultural yield predictions. The agricultural production process in Taiwan is more complex than those of manufacturing or other industries. The Council of Agriculture provides agricultural forecasting primarily based on the planted area and experience to predict the yield, but without consideration of the overall planting environment. This work applies the proposed data analysis method to agriculture. The method based on ENN has a much lower error rate than traditional back-propagation neural networks, while multiple regression analysis has an error rate of 12.4%. Experimental results reveal that the ENN method is better than traditional back-propagation neural networks and multiple regression analysis.
Introduction
Crop production is important for people in Taiwan, while manufacturing industries face more issues than agricultural production.Issues in agricultural production include climatic factors, pests, diseases and the treatment process.Hence, farmers engaged in production, or those indirectly related to agricultural agencies, need to predict their crop yield accurately to avoid imbalances in market supply and demand caused or hastened by harvest crop quality and poor results.The agricultural forecasting provided by the Council of Agriculture is mainly based on the planted area and experience to predict the yield, but does not consider the impact of the plant environment on yield.
To understand the effect of important meteorological parameters, and to predict crop yields effectively, this work adopts stepwise regression and an ensemble neural network (ENN) method for analysis with the aim of improving the accuracy of crop yield prediction.
The rest of this study is organized as follows.The research backgrounds and related works of data mining methods, agricultural production forecasting, stepwise regression, and back-propagation neural networks (BPNs) are presented in Section 2. Section 3 proposes an ENN method to analyze agriculture data.The experimental results and discussions are illustrated in Section 4. Section 5 gives conclusions and future work.
Research Backgrounds and Related Works
The literature review of data mining methods, agricultural production forecasting, stepwise regression, and BPNs is discussed in the following subsections.
Data Mining Methods
Data mining is a part of database knowledge discovery.As the name suggests, it involves accumulating large amounts of data and extracting useful information from them.However, with the current development of information technology, the increasing amount of data, and different data types and sources and diversification, big data has become a major research topic in recent years for governments and industries.Big data technology is still based on traditional data mining methods.The objective of data mining or big data analysis is to identify implicit information from data, and thus enhance the value of information.Data analysis can be conducted using many approaches, such as cluster analysis, classification and statistical analysis.
Cluster Analysis
Fahad et al. [1] divided cluster analysis methods into five types, namely segmentation-based, hierarchical-based, density-based, grid-based and model-based methods, as listed in Table 1.
Classification
A classification model is generated from property values of existing data, then employed to predict the category of new data.The main goal of classification is to analyze the influence of each factor or variable on forecast data values.The result is a supervised learning network, containing neural networks and decision trees [2][3][4].
Statistical Analysis
This is based on mathematical principles, and can be categorized as descriptive statistics and inferential statistics [5][6][7].
Data mining creates high value for enterprises in sectors such as health and medical care, personal location information, retail and manufacturing [8].The proportion of US health care spending is very high.Analyzing the massive amount of health care data would significantly reduce capital costs.The retail sector has employed data mining analysis techniques for a long time: customer purchase records are applied to predict a future purchases list, and to adjust marketing strategies or merchandise display modes.The manufacturing sector, which is the backbone of the global trading industry, has a complex and widely dispersed value chain.Analyzing the available data would enable increased productivity, process improvements and reduced product delivery times.
Agricultural Production Forecasting
Many factors, mainly meteorological and environmental factors, influence crop yield.The variables covering changes in the weather include temperature, the amount of sunlight, and rain.Some studies concluded that temperatures and rainfall affect the growth of crops, thus affecting the final yield.Environmental factors that affect crop growth include latitude and soil.Chen et al. (2008) [9] accumulated data about crop damage, the economic growth rate, pesticide sales, the rate of change in agricultural production, the index of agricultural production and the gross national product to determine the effect of these variables on the amount of fresh fruits and vegetables in the market output as a factor of economic variables.Other investigations have observed that the usage of fertilizer and the mechanization of production are factors that affect crop yield.Some agricultural prediction algorithms utilize neural networks.Zhang et al. (2010) [10] accumulated meteorological and crop growth data, and employed these to compare the performance of artificial neural networks, the k-nearest neighbors algorithm (kNN) and regression methods to predict soybean growth and flowering stages in the schedule model.Their results show that artificial neural networks predicted the soybean growth and flowering stages more accurately than the two other models.Tsai et al. (2004) [11] constructed a production forecast model based on meteorological factors and growth traits factors, and analyzed it using the back-propagation network and other methods.Their analytical results demonstrated that the BPN forecasting performed better than others.Ma et al. employed regression analysis, the genetic algorithm, the back propagation neural network, and regression analysis combined with genetic algorithms to predict sales of pineapple, grapes and wax apples.According to their experimental data, the BPN best predicted wax apple sales, while regression analysis combined with genetic algorithms was most accurate for predicting pineapple and grape sales.
Stepwise Regression
Regression analysis by one or more independent variables is performed on the degree of correlation of a dependent variable to understand the influence of each independent variable.The methods of regression analysis are entering, forward, backward and stepwise regression.
Stepwise regression analysis combines the forward and backward regression return characteristics, beginning with the selected independent and dependent variables with the largest number of relationships.The dependent variables are successively removed from the regression equation, then added back to determine whether they should be included in the equation.Thus, forward and backward regression is utilized to obtain the best regression model [12,13].
Back-Propagation Neural Network
An artificial neural network (ANN) simulates messaging between neurons in a biological neural network.It comprises a plurality of neurons, as depicted in Figure 1. Figure 2 illustrates the network structure, also called the network topology [14][15][16].The traditional BPN which is a supervised learning network can be used for classification and prediction.In the learning stage, the BPN can update the weights among neurons in accordance with the error rate between the predicted output and the actual output in each iteration, and the error rate can be minimized after several iterations.The steps of the BPN method are described in the following [14][15][16].
(3) Setting the input neurons (e.g., Xi in Figure 1) and the output neurons (e.g., Yj in Figure 1).( 4) Calculating the output value of each neuron in the hidden layer in accordance with inputs and the output value of the neuron (e.g., Yj in Figure 1) in the output layer.(5) Evaluating the error rate between the predicted output and actual output.(6) Evaluating the error rate among the value of the output neuron, the output value of each neuron in the hidden layer, and the value of the input neurons.( 7) Updating the weights of neurons in accordance with error rates.(8) Repeating Steps ( 4)-( 7) until convergence.
While the BPN can analyze data and optimize the weights of the neural network, a local optimal solution may be performed by the BPN.Therefore, this study proposes an ENN to combine multiple BPNs with several compositions of data.
Materials and Methods
This investigation designs an accuracy analysis mechanism for agriculture data using the ENN method.The designed mechanism is employed for agricultural applications.Figure 3 shows the architecture of this mechanism.
Data Collection Mechanism
This is the underlying data analysis layer.It accumulates meteorological factors (e.g., relative humidity, precipitation, and air temperature), environmental factors (e.g., planting area, harvested area, harvest and harvest per unit volume), and economic factors (e.g., the cost of production and The traditional BPN which is a supervised learning network can be used for classification and prediction.In the learning stage, the BPN can update the weights among neurons in accordance with the error rate between the predicted output and the actual output in each iteration, and the error rate can be minimized after several iterations.The steps of the BPN method are described in the following [14][15][16]. (1) Setting the parameters (e.g., neural network structure, learning rate, etc.) of the BPN.
(3) Setting the input neurons (e.g., Xi in Figure 1) and the output neurons (e.g., Yj in Figure 1).( 4) Calculating the output value of each neuron in the hidden layer in accordance with inputs and the output value of the neuron (e.g., Yj in Figure 1) in the output layer.(5) Evaluating the error rate between the predicted output and actual output.(6) Evaluating the error rate among the value of the output neuron, the output value of each neuron in the hidden layer, and the value of the input neurons.(7) Updating the weights of neurons in accordance with error rates.(8) Repeating Steps ( 4)-( 7) until convergence.
While the BPN can analyze data and optimize the weights of the neural network, a local optimal solution may be performed by the BPN.Therefore, this study proposes an ENN to combine multiple BPNs with several compositions of data.
Materials and Methods
This investigation designs an accuracy analysis mechanism for agriculture data using the ENN method.The designed mechanism is employed for agricultural applications.Figure 3 shows the architecture of this mechanism.
Data Collection Mechanism
This is the underlying data analysis layer.It accumulates meteorological factors (e.g., relative humidity, precipitation, and air temperature), environmental factors (e.g., planting area, harvested area, harvest and harvest per unit volume), and economic factors (e.g., the cost of production and The traditional BPN which is a supervised learning network can be used for classification and prediction.In the learning stage, the BPN can update the weights among neurons in accordance with the error rate between the predicted output and the actual output in each iteration, and the error rate can be minimized after several iterations.The steps of the BPN method are described in the following [14][15][16]. (1) Setting the parameters (e.g., neural network structure, learning rate, etc.) of the BPN.
(2) Setting the weights (e.g., W i,j in Figure 1) among neurons in the BPN.
(3) Setting the input neurons (e.g., X i in Figure 1) and the output neurons (e.g., Y j in Figure 1).(4) Calculating the output value of each neuron in the hidden layer in accordance with inputs and the output value of the neuron (e.g., Y j in Figure 1) in the output layer.(5) Evaluating the error rate between the predicted output and actual output.(6) Evaluating the error rate among the value of the output neuron, the output value of each neuron in the hidden layer, and the value of the input neurons.(7) Updating the weights of neurons in accordance with error rates.(8) Repeating Steps ( 4)-( 7) until convergence.
While the BPN can analyze data and optimize the weights of the neural network, a local optimal solution may be performed by the BPN.Therefore, this study proposes an ENN to combine multiple BPNs with several compositions of data.
Materials and Methods
This investigation designs an accuracy analysis mechanism for agriculture data using the ENN method.The designed mechanism is employed for agricultural applications.Figure 3 shows the architecture of this mechanism.
Data Collection Mechanism
This is the underlying data analysis layer.It accumulates meteorological factors (e.g., relative humidity, precipitation, and air temperature), environmental factors (e.g., planting area, harvested area, harvest and harvest per unit volume), and economic factors (e.g., the cost of production and the market trading price) which are shown in Table 2 from many different open data sources.Figure 4 illustrates the data preprocessing stage, which involves data integration, data cleaning and data transformation.Each step is presented in the following paragraph.(1) Data integration The data from different databases (e.g., the Agriculture and Food Agency of Council of Agriculture in Taiwan) are collected and stored into a database.
(2) Data cleaning (1) Data integration The data from different databases (e.g., the Agriculture and Food Agency of Council of Agriculture in Taiwan) are collected and stored into a database.
(2) Data cleaning Due to the wide range of sources of information, information may be incomplete, non-conformant or noisy.Therefore, the data are cleaned to ensure the integrity and accuracy of the information (3) Data transformation For data normalization, data transformation is performed to normalize the data by using Equations ( 1)-(3).For instance, the average of the relative humidity during the j-th month can be defined as a 1,j , and the mean and standard deviation of the relative humidity in the historical dataset can be calculated by Equations ( 1) and ( 2), respectively.Then the normalized average of the relative humidity during the j-th month can be expressed as x 1,j by Equation (3).
x i,j " a i,j ´ai
Stepwise Multiple Regression Mechanism
Selecting the input variables of the neural network is a very important issue.Irrelevant input variables may lead to high network error, and indirectly reduce the network model reliability.To discover the relationship between meteorological factors and yields, this work derives a dependent variable from the monthly average temperature, relative humidity, sunshine and precipitation as independent variables.
Ensemble Neural Network Analysis Mechanism
The ENN method is based on BPNs.The ENN mechanism randomly generates a plurality of neural networks, each with a different architecture.For instance, the numbers of hidden layers and hidden layer neurons are generated randomly.Figure 5 illustrates the main process, which is divided into three stages, namely learning, recall and forecast.
Learning Stage
This algorithm generates M neural networks, each with different numbers of hidden layers and neurons in each hidden layer.In the learning stage, the learning data set is input into the networks.Input parameters, including meteorological data entry, contain the previous stage of the regional yield important parameters, environmental factors and economic factors.A neural network is a supervised learning network.In the learning stage, the input layer of the target maps to a known state in the output layer.Table 3 depicts the group summary.Hence, the main objective of this investigation is to construct a neuron coupling model between neurons for the learning stage, by constantly modifying the weights of neurons, in order to establish a correspondence between the input and output data in the study sample through learning.4 presents the group summary.The actual output value is then obtained.This is then compared with the target output value to obtain the accuracy for each network model.This accuracy is reused as the weight in the prediction stage.Furthermore, a threshold is considered and adopted for heuristic design.Any model that does not reach the accuracy threshold is eliminated.
Learning Stage
This algorithm generates M neural networks, each with different numbers of hidden layers and neurons in each hidden layer.In the learning stage, the learning data set is input into the networks.Input parameters, including meteorological data entry, contain the previous stage of the regional yield important parameters, environmental factors and economic factors.A neural network is a supervised learning network.In the learning stage, the input layer of the target maps to a known state in the output layer.Table 3 depicts the group summary.Hence, the main objective of this investigation is to construct a neuron coupling model between neurons for the learning stage, by constantly modifying the weights of neurons, in order to establish a correspondence between the input and output data in the study sample through learning.
Parameter Status
Neurons of input layer Known (learning data set) Weight Unknown (learned through constant learning and revision) Neurons of output layer Known
Recall Stage
Each network model constructs its network architecture model based on the preceding learning stage.The testing data set are entered for each network model which is then reconstructed based on the best correspondence.Table 4 presents the group summary.The actual output value is then obtained.This is then compared with the target output value to obtain the accuracy for each network model.This accuracy is reused as the weight in the prediction stage.Furthermore, a threshold is considered and adopted for heuristic design.Any model that does not reach the accuracy threshold is eliminated.
Parameter Status
Neurons of input layer Known (testing data set) Weight Known (learned through learning stage) Neurons of output layer Unknown (to verify the accuracy of the model output)
Prediction Stage
Any new data to be analyzed are entered into the remaining network models.Each network model determines the output based on the learning results and predictions.These network models which more accurately predict the overall results have a greater impact on the overall result.
Analyses of Experimental Results
This section presents the experimental environments and performs traditional BPNs and ENN to predict agricultural production.
Experimental Environments
All tomato data, meteorology data, environment data and economic data were accumulated.The total data set had 9953 records from the Agriculture and Food Agency of the Council of Agriculture in Taiwan from 1997 to 2014.The meteorological factors included the average air temperature, relative humidity, and precipitation; the environmental factors included the planting area, harvested area, harvest and harvest per unit volume; and the economic factors included the cost of production and the market trading price.In this study, the input parameters include the average air temperature, relative humidity, precipitation, planting area, cost of production, and market trading price; the output is harvest.The tools used in the experimental environments are listed in Table 5.
Experimental Results and Discussions
This study randomly generated five neural network models.Each network model generated up to five random hidden layers, and up to five neurons.Each network model used 60% of the available data for the learning data set, and the remaining 40% for the testing data set.The accuracy threshold was set as 90%, and the learning rate of each neural network was set as 0.1.That is, any model with accuracy below 90% was eliminated.Five tests were run in the learning stage.Table 6 shows the network model and network infrastructure for each test run.
In the first experiment, the accuracy rates of network models 1-5 were 90.81%, 86.70%, 88.10%, 89.87%, 93.30%, respectively.Only network models 1 and 5 had accuracy above 90%.In back-propagation neural technology research and analysis, a network model is only adopted if the accuracy rate of the network model has reached a threshold value.In this case, the model is used for later analysis to verify its prediction accuracy.The experimental conditions and parameters are fixed Sustainability 2016, 8, 735 9 of 11 in this stage.A datum is randomly selected from the data cluster.The traditional BPN model is then run to predict the results of multiple regression analysis and comparison.The regression equation based on regression analysis is defined as Equation (4).Each parameter in Table 2 was adopted into the regression model to predict harvests.This study used the root mean squared error (RMSE) to evaluate the error rate of the prediction method.The error rate of this method is about 12.4% which is higher than the error rates of traditional BPNs and ENNs.
Experimental Results of Traditional Back-Propagation Neural Network Analysis
The same consideration threshold of 90% of the model was compared to the first experiments.The actual production forecast was obtained by Model 1 which includes the neural network structure {1,3,1,2,1}.The output of Model 1 was 179,582 kg, and the actual yield was 191,500 kg.The result from Model 1 was thus 11,918 kg, or 6.64%, less than the actual production.The production forecast with Model 5 (i.e., neural network structure {1,3,2}) was 202,587 kg, which is 11,087 kg greater than the actual yield, giving a network model error of 5.47%.
Ensemble Neural Network Analysis of Experimental Results
The merit of this method is that it also considers the accuracy of the threshold through the network model.In the first experiment, ENNs were run to obtain the output value of Models 1 and 5.The weighted average yield was found to be 191,240 kg.The error rate of the ENN in Experiment 1 was 1.30% which is smaller than the error rate of traditional BPNs.The error rate was under 2% in Experiments 1, 3 and 4. The error rates of the models in Experiments 2 and 5 were higher, so Experiments 2 and 5 had high error rates.However, considering the weighted average significantly reduced the error rate.Figure 6 depicts the error rate for each experiment, and Figure 7 shows the error rate comparisons of BPNs and the ENN.
Conclusions and Future Work
With the advancement of information technology in various fields and the daily growth rate in data, neural networks are being widely adopted in industry, business, science and finance.However, the optimal number of hidden layers and neurons is mostly determined by experience or a formula.Considering a variety of analytical models is not possible.This study utilized stepwise regression analysis and ENN for the design guidelines to use in agriculture forecast analysis.The ENN method randomly creates a plurality of networks for analysis and forecasting and analyzes the results of all network models in order to improve the accuracy of the analysis.Experimental results reveal that the ENNs have the lowest error rate and highest accuracy, followed by traditional BPNs and multiple regression analysis.
Conclusions and Future Work
With the advancement of information technology in various fields and the daily growth rate in data, neural networks are being widely adopted in industry, business, science and finance.However, the optimal number of hidden layers and neurons is mostly determined by experience or a formula.Considering a variety of analytical models is not possible.This study utilized stepwise regression analysis and ENN for the design guidelines to use in agriculture forecast analysis.The ENN method randomly creates a plurality of networks for analysis and forecasting and analyzes the results of all network models in order to improve the accuracy of the analysis.Experimental results reveal that the ENNs have the lowest error rate and highest accuracy, followed by traditional BPNs and multiple regression analysis.
Conclusions and Future Work
With the advancement of information technology in various fields and the daily growth rate in data, neural networks are being widely adopted in industry, business, science and finance.However, the optimal number of hidden layers and neurons is mostly determined by experience or a formula.Considering a variety of analytical models is not possible.This study utilized stepwise regression analysis and ENN for the design guidelines to use in agriculture forecast analysis.The ENN method randomly creates a plurality of networks for analysis and forecasting and analyzes the results of all network models in order to improve the accuracy of the analysis.Experimental results reveal that the ENNs have the lowest error rate and highest accuracy, followed by traditional BPNs and multiple regression analysis.
Figure 3 .
Figure 3. Architecture of accuracy analysis mechanism for agricultural data based on the ENN method.Figure 3. Architecture of accuracy analysis mechanism for agricultural data based on the ENN method.
Figure 3 .
Figure 3. Architecture of accuracy analysis mechanism for agricultural data based on the ENN method.Figure 3. Architecture of accuracy analysis mechanism for agricultural data based on the ENN method.
Figure 4 .
Figure 4.The process of data preprocessing.
Figure 4 .
Figure 4.The process of data preprocessing.
rate.Figure6depicts the error rate for each experiment, and Figure7shows the error rate comparisons of BPNs and the ENN.
Figure 6 .
Figure 6.ENN results compared to experimental results of each experiment.
Figure 7 .
Figure 7.The error rates from BPNs and the ENN.
Figure 6 .
Figure 6.ENN results compared to experimental results of each experiment.
Figure 6 .
Figure 6.ENN results compared to experimental results of each experiment.
Figure 7 .
Figure 7.The error rates from BPNs and the ENN.
Figure 7 .
Figure 7.The error rates from BPNs and the ENN.
Table 3 .
ENN learning stage group summary.Each network model constructs its network architecture model based on the preceding learning stage.The testing data set are entered for each network model which is then reconstructed based on the best correspondence.Table
Table 3 .
ENN learning stage group summary.
Table 4 .
Summary of recall stage group of ENN.
Table 5 .
Tools in experimental environments. | 2016-08-24T23:09:51.855Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "112a9f68fa0c614c7eeec9da6f24acf3aab17aa8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/8/8/735/pdf?version=1470041264",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "112a9f68fa0c614c7eeec9da6f24acf3aab17aa8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
15744091 | pes2o/s2orc | v3-fos-license | Koszul duality for extension algebras of standard modules
We define and investigate a class of Koszul quasi-hereditary algebras for which there is a natural equivalence between the bounded derived category of graded modules and the bounded derived category of graded modules over (a proper version of) the extension algebra of standard modules. Examples of such algebras include, in particular, the multiplicity free blocks of the BGG category $\mathcal{O}$, and some quasi-hereditary algebras with Cartan decomposition in the sense of K{\"o}nig.
Introduction
For a finite-dimensional Koszul algebra, A, of finite global dimension there is a natural equivalence between the bounded derived category D b (A−gmod) of graded A-modules and the bounded derived category of graded modules over the Yoneda extension algebra E(A) of A, see [BGS]. This equivalence is produced by the so-called Koszul duality functor. If A is quasi-hereditary and satisfies some natural assumptions on the resolutions of standard and costandard modules (see [ADL]), then the algebra E(A) is also quasi-hereditary and the Koszul duality functor behaves well with respect to this structure. Some time ago S. Ovsienko in a private communication expressed a hope that for (some) graded Koszul quasi-hereditary algebras it might be possible that D b (A−gmod) is equivalent to the bounded derived category of graded modules for the extension algebra Ext * A (∆, ∆) of the direct sum ∆ of all standard modules for A. The reason for this hope is the fact that every quasi-hereditary algebra has two natural families of homologically orthogonal modules, namely standard and costandard modules. Both these families generate D b (A−gmod) as a triangulated category. The idea of Ovsienko was to organize the equivalence between the derived categories such that the standard A-modules become projective objects and the corresponding costandard A-modules become simple objects. In particular, it should follow automatically that Ext * A (∆, ∆) is Koszul, and its Koszul dual should be isomorphic to the extension algebra Ext * A (∇, ∇) of the costandard module ∇ for A. In the present paper we define and investigate a big family of graded quasi-hereditary algebras for which Ovsienko's idea works. However, the passage from A to Ext * A (∆, ∆) is not painless. There is of course a trivial case, when A is directed. In this case we have either A ∼ = Ext * A (∆, ∆) or E(A) ∼ = Ext * A (∆, ∆). In all other cases one quickly comes to the problem that the "natural" gradation induced on Ext * A (∆, ∆) from D b (A−gmod) is a Z 2 -gradation and not a Z-gradation. In fact, we were not able to find any "natural" copy of the category of graded Ext * A (∆, ∆)-modules inside D b (A−gmod). However, under special conditions (I)-(IV) see Subsection 2.5, which we impose on the algebras we consider in this paper, we single out inside D b (A−gmod) a subcategory of graded modules over certain Zgraded category (not algebra!), B, whose bounded derived category is naturally equivalent to D b (A−gmod). Additionally, the category B carries a natural free action of Z. The quotient modulo this action happens to be exactly Ext * A (∆, ∆) with the induced Z 2 -gradation. As a consequence, we have to extend our setup and consider modules over categories rather than those over algebras. This forces us to reformulate and extend many classical notions and results (like Koszul algebras, quasi-hereditary algebras, Rickard-Morita Theorem etc.) in our more general setup.
The paper is organized as follows: in Section 2 we collect all necessary preliminaries about the categories and algebras we consider. In Section 3 we get some preliminary information about the quasi-hereditary categories satisfying (I)-(IV). In Section 4 we formulate and prove our main result. We finish the paper with a discussion on several applications of our result in Section 5.
Modules over categories and Rickard-Morita Theorem
Let k be an algebraically closed field and D = Hom k ( − , k) denote the usual duality. Since we need not only modules over algebras, but also those over categories, we include the main definitions concerning them. All categories under consideration will be linear k-categories. It means that all sets of morphisms A (x, y) in such a category, A , are k-vector spaces and the multiplication is k-bilinear. Moreover, we suppose that these categories are small, i.e. their classes of objects are sets. All functors are supposed to be k-linear. An Amodule is, by definition, a functor M : A → Vec (the category of k-vector spaces). For an element, m ∈ M(x), and a morphism, α : x → y, we write αm instead of M(α)m, etc. We denote by A -Mod the category of all A -modules. A representable module is one isomorphic to A x = A (x, ) for some object x. Such functors are projective objects in the category A -Mod and every projective object in this category is a direct summand of a direct sum (maybe infinite) of representable functors. Just in the same way, the functors A x = A ( , x) are projective objects in the category of A op -modules, where A op denotes the opposite category. A set of generators of an A -module, M, is a subset, S ⊆ x∈ob A M(x), such that any element m ∈ M can be expressed as u∈S α u u, where all α u ∈ mor A and only finitely many of these morphisms are nonzero. Especially, { 1 x } is a set of generators of A x , as well as of A x .
Recall that, if a category, C , has infinite direct sums, an object, C, is called compact if the functor C C preserves arbitrary direct sums. For instance, finitely generated modules are compact objects of A -Mod. Suppose now that A is basic, i.e. different objects of A are non-isomorphic and there are no nontrivial idempotents in all algebras A (x, x), x ∈ ob A . We denote by A -mod the category of finite dimensional A -modules., that is those modules M for which all spaces M(x) are finite dimensional and M(x) = 0 for all but a finite number of objects x. Equivalently, x∈ob A M(x) is finite dimensional. If all modules A x and A x are finite dimensional, we call A a bounded category.
We denote by DA the derived category of the category A -Mod; by D + A , D − A and D b A , respectively, its full subcategories consiting of right bounded, left bounded and (twosided) bounded complexes. The shift in DA will be denoted by C • → C • [1]; actually C n [1] = C n+1 . By D per A we denote the full subcategory of DA consisting of perfect complexes, i.e. those isomorphic (in DA ) to bounded complexes of finitely generated projective modules. The perfect complexes are just the compact objects of DA . The category D per A can be identified with the bounded homotopy category H b (A -proj), i.e. the factorcategory of the category of finite complexes of finitely generated projective Amodules modulo homotopy. The projective modules (or rather their canonical images) generate D per A as a triangulated category. We recall the following theorem by Rickard [Ric], which we present in a slightly more general context of k-linear categories, see for example [Ke,Corollary 9.2].
Theorem 2.1 (Rickard-Morita Theorem). Let A and B be two small k-linear categories. Then the following conditions are equivalent: 2. There is a triangle equivalence D * A ∼ → D * B , where * can be replaced by any of the symbols +, −, b or per .
3. There is a full subcategory X ⊂ D per A such that (a) X ≃ B op ; (b) Hom DA (X, X ′ [k]) = 0 for any X, X ′ ∈ X and any k = 0; (c) X generates D per A as a triangulated category.
Moreover, in this case, any equivalence T : X ∼ → B op can be extended to a triangle equivalence F : DA In fact, given an equivalence, Φ : DB Note that, since B T X is a finitely generated projective B -module, then one also has RHom A (X, ) ≃ RHom B (B T X , F ). Thus, for every complex C • of A -modules we have The set of objects of a full subcategory X ⊆ DA satisfying conditions (3b) and (3c) will be called a tilting subset in DA .
Graded categories, graded modules and group actions
Let G be a semigroup. A G-grading of a category, A , consists of decompositions A (x, y) = σ∈G A (x, y) σ given for any objects x, y ∈ A , such that, for every x, y, z ∈ ob A and for every σ, τ ∈ G, A (y, z) τ A (x, y) σ ⊆ A (x, z) στ . A category, A , with a fixed G-grading is called a G-graded category. The morphisms α ∈ A (x, y) σ are called homogeneous of degree σ, and we shall write deg α = σ. If A is a G-graded category, a G-graded module (or simply a graded module) over A is an A -module, M, with fixed decompositions M(x) = σ∈G M(x) σ , given for all objects x ∈ A , such that A (x, y) τ M(x) σ ⊆ M(y) στ for any x, y, σ, τ . We denote by A -GMod the category of graded A -modules and by A -gmod the category of finite dimensional graded A -modules. Again we call elements u ∈ M(x) σ homogeneous elements of degree σ and write deg u = σ. For any graded A -module M and an element, τ ∈ G, we define the shifted graded module M τ , which coincide with M as A -module, but the grading is given by the rule: M τ σ = M τ σ . Obviously, the shift M → M τ is an autoequivalence of the category A -GMod.
We shall usually consider the case, when G is a group (mainly Z or Z 2 ). Such group gradings are closely related to the group actions. We say that a group, G, acts on a category, A , if a map T : G → Func(A , A ) is given such that T 1 = Id, where 1 is the unit of G, and T (τ σ) = T (τ )T (σ) for all σ, τ ∈ G. We do not consider here more general actions with systems of factors, when in the last formula the equality is replaced by an isomorphism of functors. We shall write σx instead of T (σ)x both for objects and for morphisms from A . Given such an action, we can define the quotient category A /G as follows: • The objects of A /G are the orbits of G on ob A .
• The product of morphisms is defined in the obvious way using representatives (one can easily check that their choice does not affect the result).
The action is called free if σx = x for every object x ∈ A and any σ = 1 from G. In this case it is easy to see that This allows us to define a G-grading of A /G. Namely, we fix a representative x in every orbit x and consider morphisms x → σ y as homogeneous morphisms x → y of degree σ. One can verify that, whenever the action is free, the quotient category A /G is equivalent to the skew group category A * G as defined, for instance, in [RR].
Moreover, if the action is free, there is a good correspondence between A -modules and graded A /G-modules. Given an A -module, M, we define the graded A /G-module GM putting GM(x) σ = M(σ x) and, for u ∈ GM(x) σ and α ∈ (A /G) τ (x, y), defining their product as (σα)u. It gives a functor, G : A -Mod → A /G-GMod. Conversely, given a graded A /G-module N, we define the A -module G ′ N putting G ′ N(x) = N(Gx) σ , where x = σ Gx. One immediately checks that G and G ′ are mutually inverse equivalences between A -Mod and A /G-GMod (cf. [RR]). Moreover, the restrictions of these functors to the categories A -mod and A /G-gmod induce an equivalence of these categories as well.
If the category A has already been H-graded with a grading semigroup H and the action of G preserves this grading, the factorcategory A /G becomes H × G graded, and the functors G, G ′ above induce an equivalence of the categories of H-graded A -modules and of H × G-graded A /G-modules.
Actually, any group grading can be obtained as the result of a free group action. Namely, given a G-graded category A , define a new category A with a G-action as follows: • The objects of A are pairs (x, σ) with x ∈ ob A , σ ∈ G.
Obviously, this action is free and A can be identified with A /G as a graded category. Just in the same way, given any graded A -module M, we turn it into an A -module, denoted by M , setting This correspondence gives the same equivalence A -Mod ∼ → A -GMod as above. This allows us to extend all results about module categories to the categories of graded modules. Especially, we can apply the Rickard-Morita Theorem to the category A -GMod (note that the category B from this theorem remains ungraded). We denote by D gr A the derived category of A -GMod. The grading shift M → M σ naturally extends to the category D gr A and commutes with the triangle shift There is an important class of gradings, defined as follows.
Definition 2.2. Let A be a G-graded category. We say that it is naturally graded if the category A defined above contains a full subcategory A Actually, it means that one can prescribe a degree, deg x ∈ G, to every object x ∈ A so that A (x, y) = A (x, y) σ −1 τ whenever deg x = σ, deg y = τ . In this case also A -GMod ≃ σ∈G σ( A 0 )-Mod and every component of this coproduct is equivalent to A -Mod. Finally, if A is a Z 2 -graded category, there are many ways to make A into a Z-graded category, taking a kind of "total" grading. Let ϕ : Z 2 → Z be any epimorphism. We can Al such induces Z-gradings will be called total.
Yoneda categories
For any triangulated category C and any set X ⊆ ob C we define the Yoneda category E = E (X), which is a Z-graded category, as follows: Note that if C = DA and X ⊆ A -Mod, then Hom C (X, Y [n]) = Ext n A (X, Y ) and the product βα defined above coincides with the Yoneda product Ext m If A is a G-graded category and C = D gr A , we also define the graded Yoneda category E gr (X), which is a (Z×G)-graded category, setting E gr (X, Y ) (n,σ) = Hom DgrA (X, Y σ [n]), which coincide with Ext n A -GMod (X, Y σ ) if X and Y are graded A -modules. The product of the elements α : Thus E gr (P) ≃ A op as graded categories.
Koszul categories
In this subsection we consider Z-graded categories A . Moreover, we suppose that A is basic, bounded and positively graded, i.e. A (x, y) n = 0 if either n < 0 or n = 0 and x = y, while A (x, x) 0 = k. In particular, the objects of A are pairwise non-isomorphic and their endomorphism algebras contain no nontrivial idempotents. Then the modules S(x) 0 = top A x = A (x, ) 0 and their shifts S(x) m are the only simple graded Amodules. If we consider them as A -modules without grading, we write S(x) for them. Let S = { S(x) } and S gr = { S(x) m }. We call the Yoneda category E (S) and the graded Yoneda category E gr (S gr ) respectively the Yoneda category and the graded Yoneda category of the positively graded category A and denote them by E (A ) and by E gr (A ) respectively .
Let A + be the ideal of A consisting of morphisms of positive degree, i.e.
, which is a semisimple gradable A -module, hence it splits into a direct sum of copies of S(y) for y ∈ ob A . We denote by ν(x, y) the multiplicity of S(y) in V (x) and define the species (or the Gabriel quiver ) of A as the graph Γ(A ) such that its set of vertices is ob A and there are ν(x, y) arrows from a vertex x to a vertex y. Equivalently, Evidently, the Yoneda category E (A ) is always positively graded. Therefore, the coefficients ν(S(x), S(y)) defining its species are not smaller than dim k E (S(x), S(y)) 1 = dim k Ext 1 A (S(x), S(y)). Thus the species of A naturally embed into those of E (A ).
Then the following properties are equivalent: (ii) For each object x ∈ ob A there is a projective resolution P • (x) of S(x) 0 such that, for every integer n, P −n (x) is a direct sum of modules A y −n , or, the same, is generated in degree −n (such resolution will be called linear).
(iii) For each object x ∈ ob A there is an injective resolution I • (x) of S(x) 0 such that, for every integer n, I n (x) is a direct sum of modules DA y n .
(iv) For all x, y, l, m, and n the inequality Ext n A -GMod (S(x) l , S(y) m ) = 0 implies n = l − m.
Proof. The equivalence of the properties (i)-(v) is straightforward and well known (cf. [BGS, ADL]), at least if A contains finitely many objects (i.e. arises from a graded kalgebra). In the general case the arguments are the same. The equivalence of (v) and (vi) follows immediately from the fact that Γ(A ) embeds into Γ(E (A )) and the last one embeds into Γ(E (E (A ))). It must also be well known, but we have not found any reference for it.
Remark 2.4. We call A weakly bounded if dim V (x, y) < ∞, and both sets {z : V (x, z) = 0} and {z : V (z, y) = 0} are finite for all x, y. If the category A is not bounded, the modules Obviously, there is a natural isomorphism, DDM ≃ M. Especially, the dual modules I x = DA x are just indecomposable injective modules over A if A is weakly bounded. It is easy to see that Proposition 2.3 extends, without any changes, to weakly bounded categories.
The condition Proposition 2.3(vi) is even more powerful than the other conditions in Proposition 2.3. Namely, we have the following (compare with [BGS,Lemma 3.9.2]): Proposition 2.5. Let A be a basic, bounded and positively graded category such that E (E (A )) ≃ A as graded categories. Then A is generated in degree 1.
Proof. For an
in the natural way and hence the latter notation makes sense). Analogous arguments applied to E (A ) give and hence all the inequalities above must be in fact equalities. This means that dim A A 1 = dim A A + /A 2 + and thus A is generated in degree 1. A category, A , satisfying one of the equivalent conditions of Proposition 2.3 (and hence all of them), will be called Koszul category, and the category E (A ) will be called the Koszul dual of A (the word "dual" is justified by the property (vi)). The equivalence of (ii) and (iii) implies that A is Koszul if and only if so is A op .
Let A be a Koszul category of finite global dimension and S(x, l) = S(x) l [−l], where x ∈ ob A , l ∈ Z. The property (iv) shows that the set { S(x, l) } is a tilting subset in D gr A . Hence Rickard-Morita Theorem can be applied to the full subcategory S consiting of these objects. The group Z acts on S : T n S(x, l) = S(x, l + n), and the set { S(x, 0) } can be chosen as a set of representatives of the orbits of Z on ob S . Moreover, This implies the following result (mostly also well known).
Quasi-hereditary categories
Let now A be a bounded category and let a function, ht : ob A → N ∪ {0}, be given. For every object x define the standard module ∆(x) as the quotient of A x modulo the trace of all A y with ht(y) > ht(x), and the costandard module For a set, X, of A -modules, we denote by F (X), the full subcategory of A -mod consiting of the modules which have a filtration with subfactors from X (an X-filtration). We call the category A quasi-hereditary (with respect to the function ht) if End A (∆(x)) = k, all composition subquotients of Rad(∆(x)) have the form S(y), ht(y) < ht(x), and A x ∈ F (∆); or, equivalently, if End A (∇(x)) = k, all composition subquotients of ∇(x)/Soc(∇(x)) have the form S(y), ht(y) < ht(x), and Obviously, in this case both ∆ and ∇ form a set of generators for D per A . The notion of a quasi-hereditary category is a natural generalization to this setup of the notion of a quasi-hereditary algebra, [DR1]. One should not confuse it with the notion of a highest weight category from [CPS]. A highest weight category is the category of modules over a quasi-hereditary algebra (or category). Assume now that A is a quasi-hereditary category. The arguments of [Rin] can be easily extended to show that for each x ∈ ob A there exists a unique (up to isomorphism) indecomposable module T (x) ∈ F (∆) ∩ F (∇), called tilting module, whose arbitrary standard filtration starts with ∆(x).
Assume further that A is positively graded. Following [MO,Section 5] one shows that in this case all simple, projective, standard, injective, costandard, and tilting modules admit graded lifts. For indecomposable modules such lift is unique up to isomorphism and a shift of grading. The grading on A gives natural graded lifts for projective, standard and simple modules such that we have natural projections A x ։ ∆(x) 0 ։ S(x) 0 in A −gmod. Let x ∈ ob A . We fix the grading on I x and on ∇(x) such that the natural inclusions S(x) 0 ֒→ ∇(x) 0 ֒→ I x 0 are in A −gmod. Finally we fix a grading on T (x) such that the natural inclusion ∆(x) ֒→ T (x) is in A −gmod and remark that it follows that the natural projection T (x) ։ ∇(x) is in A −gmod.
We have to remark that the lifts above are not coordinated with the isomorphism classes of modules. For example it might happen that some indecomposable A -module is projective, injective and tilting at the same time. If it is not simple, this module will have different graded lifts when considered as projective module (having the top in degree 0), as injective module (having the socle in degree 0), and as tilting module (having the top in a negative degree and the socle in a positive degree).
The Ringel dual R is defined as a full subcategory of A −Mod whose objects are the T (x), x ∈ ob A . Since all T (x), x ∈ ob A , admit graded lifts, the category R has a natural structure of a graded category (morphism of degree k from T (x) to T (y) are homogeneous morphisms of degree 0 from T (x) to T (y) k ). If A has finitely many objects, we have the characteristic tilting module T = ⊕ x∈ob A T (x) and the category R corresponds to the (graded) algebra End A (T ). In the present paper we will always consider R as a graded category with respect to the above grading. Now we are ready to formulate the principal assumption for the algebras we consider. They are motivated by the study of the category of linear complexes of tilting modules, associated with a graded quasi-hereditary algebra, see [MO]. From now on we assume that (I) for all x ∈ ob A the minimal graded tilting coresolution T • (∆(x)) of ∆(x) 0 satisfies T k (∆(x)) ∈ add ⊕ y:ht(y)=ht(x)−k T (y) k for all k ≥ 0; (II) for all x ∈ ob A the minimal graded tilting resolution T • (∇(x)) of ∇(x) 0 satisfies T k (∇(x)) ∈ add ⊕ y:ht(y)=ht(x)+k T (y) k for all k ≤ 0.
Because of [ADL, Theorem 1], the conditions (III) and (IV) are enough to guarantee that the category A is Koszul, in particular, that it is generated in degree 1.
Basic properties of graded quasi-hereditary categories satisfying (I)-(IV)
During this section we always assume that A is a bounded graded quasi-hereditary category and that (I)-(IV) are satisfied. Proof. We prove (i) using T • (∆(x)) and (I), and the arguments for (ii) are similar (using T • (∇(x)) and (II)). Proceed by induction in ht(x). If ht(x) = 0, then T (x) 0 is a standard module and the statement is obvious. Now assume that the statement is proved for all y with ht(y) = l − 1, and let ht(x) = l. Denote by C the cokernel of the graded inclusion ∆(x) 0 ֒→ T (x) 0 . By (I), C embeds into a direct sum of several T (y) 1 with ht(y) = l − 1, such that the cokernel of this embedding has a standard filtration. From the inductive assumption it follows that every subquotient of every standard filtration of such T (y) 1 has the form ∆(z) k + 1 , where k ≥ 0 and ht(z) = ht(y) − k. Since ht(y) = ht(x) − 1, the statement follows.
Corollary 3.2. The grading on R , induced from the category A -gmod, is positive and R satisfies (I)-(IV). In particular, the category R 0 with the same objects as R and whose morphisms are homogeneous morphisms from R of degree 0, is semi-simple.
Proof. Since each T (x) has both a standard and a costandard filtration, from [DR2, Section 1] it follows that every morphism from T (x) to T (y) is a linear combination of morphisms, each of which corresponds to a map from a subquotient of a standard filtration of T (x) to a subquotient of a costandard filtration of T (y). By Proposition 3.1 all subquotients in all standard filtrations of T (x) 0 live in non-positive degrees and all subquotients in all costandard filtrations of T (y) 0 live in non-negative degrees. This implies that the grading on R , induced from A -gmod, is non-negative. Moreover, from Proposition 3.1 it also follows that the only non-zero graded maps from T (x) 0 to T (x) 0 are scalar multiplications, while there are no non-zero graded maps from T (x) 0 to T (y) 0 if x = y. This implies that the zero component of the grading is semi-simple and hence that the grading is in fact positive. That R satisfies (I)-(I) follows from the fact that (I) and (II) are Ringel dual to (III) and (IV).
(i) The canonical inclusion ∆(x) ֒→ T (x) induces the following isomorphism:
(ii) The canonical projection T (x) ։ ∇(x) induces the following isomorphism: Proof. Again we will prove (i) and (ii) is proved by similar arguments. The inclusion ∆(x) ֒→ T (x) induces the inclusion Hom A −mod (∆(y), ∆(x)) ֒→ Hom A −mod (∆(y), T (x)), and we have only to verify that the latter inclusion is surjective. Set k = ht(x) − ht(y). Any map from ∆(y) to T (x) is induced by the unique (up to scalar) map from ∆(y) to some subquotient of the form ∇(y) of some costandard filtration of T (x). Hence by Proposition 3.1 the inequality Hom A -gmod (∆(y) i , T (x) 0 ) = 0 implies i = −k and k ≥ 0.
Proof. Analogous to that of Proposition 3.1 using (III) and (IV).
Proof. Since A is quasi-hereditary, Ext 1 A (S(x) 0 , S(y) k ) = 0, in particular, implies ht(x) = ht(y). Let us first assume that ht(x) < ht(y). Then Ext 1 A (S(x) 0 , S(y) k ) = 0 implies that S(y) k occurs in the top of the kernel K of the canonical projection A x ։ ∆(x) 0 since all composition subquotients of ∆(x) 0 have the form S(z) m with ht(z) < ht(x). From (III) it follows that the top of K consists of modules of the form S(z) −1 with ht(z) = ht(x) + 1. This proves the necessary statement.
In the case ht(x) > ht(y) one uses the dual arguments with injective resolutions.
Proposition 3.6. Both A and R are standard Koszul in the sense of [ADL], in particular, they both are Koszul.
Proof. Follows from (III), (IV), [ADL,Theorem 1], and Corollary 3.2. Proof. To prove the first statement let us first show that [∆(x) 0 l : S(y) −l ] = 0 implies ht(y) ≤ ht(x) − l. Indeed, let l be maximal such this statement fails for ∆(x) l , that is [∆(x) 0 l : S(y) −l ] = 0 for some y such that ht(y) > ht(x) − l. Using Corollary 3.5 we obtain that Ext 1 A -gmod (S(y) −l , ∆(x) 0 l+1 ) = 0, that is S(y) −l is in the socle of ∆(x) 0 . This implies the existence of a non-zero homomorphism from ∆(y) −l to ∆(x) and hence to T (x) via the canonical inclusion ∆(x) ֒→ T (x). Thus T (x) must contain ∇(y) −l as a subquotient of some costandard filtration. Since ht(y) > ht(x) − l, this contradicts Proposition 3.1. Now let us show that [∆(x) 0 l : S(y) −l ] = 0 implies ht(y) ≥ ht(x) − l. From the definition of ∆(x) it follows that ∆(x) 0 is obtained by a sequence of universal extensions, which starts from S(x) 0 , and where we are allowed to extend with modules S(z) m for ht(z) ≤ ht(x). Applying recursively Corollary 3.5 we see that all simple subquotients, which can be obtained after at most l steps must have the form S(z) m , where −l ≤ m ≤ 0 and ht(x) − l ≤ ht(z) ≤ ht(x). This gives the necessary inequality.
To prove the second statement we observe that dim k Hom A -gmod (A y −l , ∆(x) 0 ) = [∆(x) 0 l : S(y) −l ] for all y ∈ Λ. Because of (i) the image of any homomorphism f ∈ Hom A -gmod (A y −l , ∆(x) 0 ) does not contain simple subquotients S(z) t with ht(z) ≥ ht(y). Hence f factors through ∆(y) −l and the statement follows.
Main theorem
Throughout this section we suppose that A is a bounded graded category, which is quasihereditary with respect to some function ht : ob A → N ∪ {0} and satisfies conditions (I)-(IV). We will use the following notation: We use the same symbols B and B ′ for the full subcategories of D gr A with the sets of objects B and B ′ . We also denote by K the ideal of B consisting of all morphisms ∆(x, l) → ∆(y, m) with κ(x, l) = κ(y, m) and B ver = B /K. (c) F T (x, l) ≃ B ver ∆(x,l) ; (ii) Setting deg∆(x, l) = κ(x, l) defines a natural Z-grading on B , in other words we have B (∆(x, l), ∆(y, m)) = B (∆(x, l), ∆(y, m)) κ(y,m)−κ(x,l) .
(iii) The group Z acts on B in the following way: T n ∆(x, l) = ∆(x, l + n), in particular, B /Z becomes a Z 2 -graded category. Moreover, B /Z ≃ E gr (∆) as Z 2 -graded categories.
(v) There exist total Z-gradings, associated with the Z 2 -gradings from (iii) and (iv) respectively, with respect to which the categories B /Z and B ′ /Z are Koszul.
(vi) The Koszul dual of the Koszul category B /Z is isomorphic to the category B ′ /Z and vice versa.
The proof of this theorem includes several propositions, which will be stated separately. Most of them consist of some statements about the category B (especially, the modules ∆(x, l)) and analogous statements about the category B ′ (especially, the modules ∇(x, l)). We shall always prove the statements about B ; those about B ′ follow by duality (or can be proved quite in the same way).
Proof. Certainly, we may suppose that Since A is quasi-hereditary, the sets of objects B and B ′ generate D gr A per as a triangulated category. We denote by D even and D odd the triangulated subcategories of D gr A generated by { ∆(x, l) | δ(x, l) = 0} and { ∆(x, l) | δ(x, l) = 1} respectively. which coincides with E gr (∆(x), ∆(y)) (n,m) . Thus we have proved statement (iii). Certainly, the statement (iv) follows from (i)-(iii) by duality.
Observe that both B /Z and B ′ /Z are naturally Z 2 -graded and not Z-graded. Since the formula (1) is compatable with the Z 2 -grading above, the isomorphisms (2) between the Yoneda category of B and B ′ and vice versa as ungraded categories give rise to isomorphisms between the graded Yoneda category of B /Z and B ′ /Z and vice versa as Z 2 -graded categories. Now we would like to make this Z 2 -grading into a positive Z-grading. We will do this for B /Z and for B ′ /Z one uses analogous construction: the elements of degree 1 will be non-zero morphisms Hom DgrA (∆(x, l), ∆(y, m)), where ht(x) = ht(y) − 1 and l = m ± 1 (it is easy to see that this grading is given by assigning to ∆(x, l) the degree ht x). This uniquely determines a total Z-grading, induced from the original Z 2 -grading.
Using the positivity of the grading on A it is straightforward to verify that the grading, defined in this way, is positive. Moreover, it is also easy to see the above isomorphisms between the Yoneda category of B /Z and B ′ /Z and vice versa are compatable with this construction (this also follows from the ext-hom duality for standard and constandard modules over quasi-hereditary algebras, see [MO,Theorem 1] and [MO,Theorem 6]). Therefore B ′ /Z is isomorphic to the Yoneda category of B /Z and vice versa, now as Zgraded categories. Applying now Proposition 2.5 and Proposition 2.3, we get both (v) and (vi). This completes the proof of the Main Theorem.
5 Applications of the main result 5.1 Multiplicity free blocks of the BGG category O Let g be a semi-simple finite-dimensional complex Lie algebra with a fixed triangular decomposition g = n − ⊕ h ⊕ n + and λ ∈ h * be an integral dominant weight. Denote by W λ the stabilizer of λ with respect to the dot-action of the Weyl group W of g on h * . Let A λ be the basic associative algebra, whose module category is equivalent to the block O λ of the BGG-category O, which corresponds to λ, see [BGG,So1]. Let ∆ denote the direct sum of all Verma modules in O λ . Let further S denote the set of simple roots associated with W λ and O S denote the corresponding S-parabolic subcategory of O 0 (see [RC, BGS]). Let∆ denote the direct sum of all generalized Verma modules in O S . Finally, let us denote by B λ the basic associative algebra, associated with O S . In [So1,BGS] it was shown that the algebras A λ and B λ are Koszul and even Koszul dual to each other. A quasi-hereditary algebra (or the corresponding highest weight category) is said to be multiplicity free if all indecomposable standard modules are multiplicity free. (iii) The algebra Ext * O λ (∆, ∆) is Koszul and even Koszul self-dual. Proof. The primitive idempotents of A λ are indexed by the highest weights of Verma modules in O λ , which are w · λ, where w is a representative of a cosets W/W λ . For the antidominant µ = w 0 · λ (here w 0 is the longest element of W ) we set ht(µ) = 0 and for all other ν = w · λ we define ht(ν) and the smallest k such that there exist simple reflections s 1 , . . . , s k in W such that ν = s k . . . s 1 · µ.
The primitive idempotents of B λ are indexed by the highest weights of generalized Verma modules in O S , which are w · 0, where w is the shortest representative of a cosets W λ \W . Let w λ 0 be the longest element of W λ . For the weight µ = w λ 0 w 0 · λ we set ht(µ) = 0 and for all other ν = w · λ as above we define ht(ν) and the smallest k such that there exist simple reflections s 1 , . . . , s k in W such that ν = s k . . . s 1 · µ.
Assume now that A λ is multiplicity free. Then the condition (I) for B λ follows from the known structure of usual Verma modules (see for example [Di,Section 7]). Using the usual duality ⋆ on B λ (and on A λ ) we also obtain (II). The conditions (III) and (IV) follow from (I) and (II) since B λ is Ringel self-dual by [So2]. Now Theorem 4.1 implies that Ext * O S ∆ ,∆ is Koszul with Koszul dual Ext * O S ∇ ,∇ , where∇ is the direct sum of all costandard modules in O S . Applying ⋆ induces an isomorphism of these two algebras, which proves (ii).
Further, from the above proof of (ii) and Proposition 3.7 it follows that∆ is directed in the sense of Proposition 3.7. Now [Di,Section 7] implies that B λ is multiplicity free, which gives (i).
Finally, let us prove (iii). Again it is enough to prove (I) for A λ (as A λ has a duality and is Ringel self-dual by [So2]). If (I) is not satisfied, going to the Koszul dual B λ we obtain a "wrong" occurrence of a simple in some standard B λ -module∆(ν). This implies that the original Verma module ∆(ν), which surjects onto∆(ν) must have higher multiplicities. Using the Kazhdan-Lusztig Theorem and induction in ht(ν), we can further assume that the "wrong" occurrence of a simple in∆(ν) 0 is in degree 1. This, in turn, would mean that for some standard A λ -module the condition (I) fails already on the first step. However, in the multiplicity-free case all standard A λ -modules are directed in the sense of Proposition 3.7 by [Di,Section 7]. Further from the Kazhdan-Lusztig Theorem it follows that on the first step of the construction of the tilting module T (ν) we extend ∆(ν) with ∆(ξ) for all ξ such that S(ξ) −1 is a subquotient of ∆(ν) 0 . The directness of the standard modules and the already mentioned fact that all standard A λ -modules have linear tilting coresolutions now imply that the first step of the tilting coresolution of every standard A λ -module is always correct. A contradiction. This completes the proof of (iii) and of the whole theorem.
Remark 5.2. The Koszul grading on both Ext * O S ∆ ,∆ and Ext * O λ (∆, ∆) is given by Theorem 4.1 and can be described as follows: Both algebras are generated by elements of degree 0 and 1, and the elements of degree 0 are just scalar automorphisms of generalized Verma and Verma modules respectively. Let l denote the length function on W . Then for w, w ′ ∈ W the elements of degree 1 are homomorphisms Hom O (∆(w · λ), ∆(w ′ · λ) 1 ) and extensions Ext 1 O (∆(w · λ), ∆(w ′ · λ) −1 ) under the additional condition l(w) = l(w ′ ) + 1. Analogously for generalized Verma modules.
For more information on multiplicity free blocks of O and O S (in particular for classification in the case of maximal stabilizer) we refer the reader to [BC]. Proof. By [So1] we have A λ ∼ = B λ in this case and the statement follows from Theorem 5.1.
We would like to emphasize that the algebras Ext * O S ∆ ,∆ and Ext * O λ (∆, ∆) in Theorem 5.1 are not Koszul dual to each other in general, though the algebras A λ and B λ are.
Some Koszul quasi-hereditary algebras with Cartan decomposition
Let A be a basic quasi-hereditary algebra over k with duality and a fixed Cartan decomposition A = B ⊗ S B op , where B is a strong exact Borel subalgebra of A, see [Ko]. Let Λ be the indexing set of simple A-(and hence also of simple B-) modules.
Proposition 5.4. Assume in the above situation that (1) B is Koszul; (2) there is a function, ht : Λ → {0} ∪ N, such that the l-th term of the minimal injective resolution of the simple B-module L(x), x ∈ Λ, contains only indecomposable injective modules I(y) such that ht(y) = ht(x) − l; (3) A⊗ B − sends indecomposable injective B-modules to indecomposable tilting A-modules.
Then A satisfies (I)-(IV). In particular, for the direct sum ∆ of all standard A-modules we have that Ext * A (∆, ∆) is Koszul and even Koszul self-dual. Proof. Since B is an exact Borel subalgebra of A, the functor A ⊗ B − sends simple Bmodules to standard A-modules and is exact. This implies that the linear injective coresolution of any simple B-module is sent by A ⊗ B − to a linear tilting coresolution of the corresponding standard A-module. This shows that A satisfies (I) and (II) follows by duality. Since B is an exact Borel subalgebra of A, the functor A ⊗ B − sends indecomposable projective B-modules to indecomposable projective A-modules (see [Ko,Page 408]). Thus the linear projective resolution of any simple B-module is sent by A ⊗ B − to a linear projective resolution of the corresponding standard A-module. This shows that A satisfies (III) and (IV) follows by duality. Now Theorem 4.1 implies that Ext * A (∆, ∆) is Koszul with Koszul dual Ext * A (∇, ∇), where ∇ is a direct sum of all costandard A-modules. Koszul self-duality of Ext * A (∇, ∇) follows by applying the duality for A.
We note that the condition (2) is satisfied for example for incidence algebras, associated with a regular cell decomposition of the sphere S n , where ht(x) denotes the dimension of the cell x, see [KM]. All such algebras are also Koszul, see [KM], so the condition (1) is also satisfied. However, the condition (3) for such algebras fails in the general case. | 2014-10-01T00:00:00.000Z | 2004-11-24T00:00:00.000 | {
"year": 2004,
"sha1": "7a29aaf53087550ccfa8351e79d8ed86cade8977",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jpaa.2007.01.014",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "7a29aaf53087550ccfa8351e79d8ed86cade8977",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
239885357 | pes2o/s2orc | v3-fos-license | Anomalous Lattice Thermal Conductivity in Rocksalt IIA-VIA Compounds
Materials with an intrinsic (ultra)low lattice thermal conductivity (k$_L$) are critically important for the development of efficient energy conversion devices. In the present work, we have investigated microscopic origins of low k$_L$ behavior in BaO, BaS and MgTe by exploring lattice dynamics and phonon transport of 16 iso-structural MX (Mg, Ca, Sr, Ba and X = O, S, Se and Te) compounds in the rocksalt (NaCl)-type structure by comparing their lattice transport properties with the champion thermoeletric iso-structural material, PbTe. Anomalous trends are observed for k$_L$ in MX compounds except the MgX series in contrast to the expected trend from their atomic mass. The underlying mechanisms for such low k$_L$ behavior in relatively low atomic mass systems namely BaO, BaS and MgTe compounds are thoroughly analyzed. We propose the following dominant factors that might be responsible for low k$_L$ behavior in these materials: 1) softening of transverse acoustic (TA) phonon modes despite low atomic mass, 2) low lying optic (LLO) phonon modes fall deep into acoustic mode region which enhances overlap between longitudinal acoustic (LA) and LLO phonon modes which increases scattering phase space, 3) short phonon lifetimes and high scattering rates, 4) relatively high density (\r{ho}) and large Gr\"uneisen parameter. Moreover, tensile strain also causes a further reduction in k$_L$ for BaO, BaS and MgTe through phonon softening and near ferroelectric instability. Our comprehensive study on 16 binary MX compounds might provide a pathway for designing (ultra)low k$_L$ materials even with simple crystal systems through phonon engineering.
Introduction
Discovery of materials with low lattice thermal conductivity gained tremendous interest due to their potential applications in thermoelectrics, [1][2][3][4][5] thermal barrier coatings, 6-9 thermal insulation 10,11 and thermal energy management. 12 Extensive efforts have been put forward by researchers in this direction to develop suitable materials for energy conversion applications during the last decade. Recently, binary alkaline-earth chalcogenides, MX (Mg, Ca, Sr, Ba and X = O, S, Se and Te) are being received considerable attention due to their potential applications in multifarious fields such as catalysis, microelectronics, 13 optoelectronics (light emitting, laser and magneto optical devices) [14][15][16] and thermoelectrics [17][18][19] despite their simple crystal structure. Bulk MX & their 2D counterparts, 17-20 p-type PbTe & MTe nanocrystals, 21 CaTe-SnTe, 22 heavily doped SrTe with PbTe 23,24 and BaTe-PbTe 25 are found to have excellent thermoelectric figure of merit (zT) in the range of 0.5-1. 32. 19,26 This shows the viability of these materials as power generating thermoelectric (TE) devices and are capable of converting waste heat into electricity. In general, the conversion efficiency is characterized by a dimensionless quantity i.e., zT = Sσ 2 T/k, where k = k e +k L ; S, σ, T are the Seebeck coefficient, electrical conductivity, and absolute temperature, and k e and k L are the electronic and lattice thermal conductivities, respectively. The complex interdependence among these S, σ and k parameters becomes challenging to discover the high zT materials. Therefore, materials with an intrinsic ultralow k (especially k L ) provide a pathway for discovering high zT materials without degrading its charge transport. The MX compounds are extensively studied from theoretical perspective, which mainly focused on exploring structural phase transitions, 27 46 and MTe (M = Mg, Ca, Sr, Ba and Pb) 47 compounds. Therefore, a detailed and comparative study on phonon transport of MX compounds provide insights to achieve (ultra)low k L materials through phonon engineering, which is essential for discovery of high zT materials. In the present work, we shed more light on understanding lattice dynamics, phonon transport and mechanical properties of 16 MX compounds at ambient conditions. Interestingly, we observe anomalous trends in k L for CaX (CaS > CaO > CaSe > CaTe), SrX (SrSe > SrO > SrS > SrTe) and BaX (BaTe > BaSe > BaS > BaO) series.
Especially, the observed anomalous 48 trend in BaX (partly in SrX and CaX 46 ) series is in contrast to the expected trend from their atomic mass. Overall, among 16 compounds, we found BaO, BaS and MgTe exhibit low k L behavior over the studied temperature range of 300-800 K despite their low atomic mass in rocksalt NaCl-type (B1) structure. The underlying mechanisms for such abnormal trends and low k L behavior are extensively discussed through the computed lattice dynamics, phonon lifetimes, scattering rates, phonon group velocities at 300 K and mechanical properties. We also have investigated the effect of tensile strain on lattice dynamics and phonon transport of BaO, BaS and MgTe compounds are discussed extensively.
The rest of the paper is organized as follows: In the next section, we briefly describe the computational details, methodology, various parameters used to perform the computation and crystal structure. Results and discussions concerning anharmonic lattice dynamics, lattice thermal conductivity and mechanical properties of the 16 MX compounds. Finally, we propose important observations that will be helpful to achieve (ultra)low k L in general and in particular for the MX compounds and finally summarized major outcomes of the present study.
Computational details, methodology and crystal struc- Lattice dynamics and thermal conductivity of MX compounds are calculated by considering harmonic (2 nd ) and anharmonic (3 rd ) inter atomic force constants (IFCs) using the temperature dependent effective potential (TDEP) 50-52 method. In the present work, we have considered the expansion of inter atomic force constants (IFCs) up to 3 rd order and the corresponding model Hamiltonian is given as follows: where p i and u i are momentum and displacement of atom i, respectively. Φ αβ ij and ψ αβγ ijk are 2 nd and 3 rd order force constant matrices, respectively. To compute these harmonic (2 nd ) and anharmonic (3 rd ) inter atomic force constants (IFCs), we have performed ab initio molecular dynamics (AIMD) simulations as implemented in the VASP at 300 K. The AIMD calculations were run for 5000 MD steps with time-step of 1 fs (i.e., 5 ps) with 128 atoms (4×4×4) supercell for all the MX compounds. For 2 nd and 3 rd order IFCs, interactions up to 9 th nearest neighbors were included to ensure the convergence of calculated lattice dynamics and phonon transport properties. The temperature was controlled with a Nosé-Hoover thermostat. 53,54 The lattice thermal conductivity is calculated by iteratively solving the full Boltzmann transport equation (BTE), including three-phonon and isotope scatterings from the natural distribution on a 25×25×25 q-point grid.
The thermal conductivity tensor is given by where C λ is the contribution per mode λ = (s, q) to specific heat, α and β are Cartesian components, v β and τ β are phonon velocity and scattering time, respectively.
The scattering rates are calculated from a full inelastic phonon Boltzmann equation which is given by The left-hand side represents the phonon difussion induced by the thermal gradient ∇T and n 0λ is the equilibrium phonon distribution function. While the right-hand side corresponds to the collision term for three-phonon interactions. v λ is the phonon velocity in mode λ, P + λλ ′ λ ′′ and P − λλ ′ λ ′′ are three phonon scattering rates for absorption (λ + λ ′ → λ ′′ ) and emission (λ → λ ′ + λ ′′ ) processes, respectively.
Binary alkaline-earth chalcogenides, MX (Mg, Ca, Sr, Ba and X = O, S, Se and Te) compounds except MgSe and MgTe crystallize in the face centred cubic (FCC) rocksalt NaCl (B1)-type structure (see Figure 1) having space group F m3m with Z = 4 formula units (f.u.) per unit cell at ambient conditions. 55-61 MgSe and MgTe exhibit rich polymorphism and they crystallize in rocksalt (B1), zinc-blende (B3), wurtzite (B4) and NiAs (B8)-type structures. The X-ray diffraction measurements reveal that MgTe crystallizes in B3 62 and B8 63 structures at ambient conditions. First principles calculations disclose that B3 38 phase for MgSe and both B3 38 and B8 64-66 for MgTe are thermodynamically stable structures at ambient conditions. Moreover, rocksalt-type B1 structure is dynamically stable (meta-stable) for both MgSe and MgTe compounds. Therefore, in the present work, we have considered B1 structure for all the MX compounds which allow us for direct comparison of the calculated properties among these 16 systems under investigation. Table 1 presents the calculated ground state equilibrium lattice constant for MX compounds in comparison with reported X-ray diffraction measurements 55-60,67-70 and previous first principles calculations 17,19,38,[71][72][73][74][75][76][77][78] and there is a good agreement among them. In addition, we also calculated the electron localization function (ELF) for MgO, BaO, MgTe and PbTe compounds. As shown in Figure 1 The distinct chemical bonding nature strongly influences k L of these materials.
Results and Discussion
Anharmonic lattice dynamics and thermal conductivity Exploring lattice dynamics including anharmonic effects is crucial for understanding phonon transport in materials. As a first step, we have computed phonon dispersion curves ( Figure 2) of MX compounds at 300 K and thoroughly analyzed them. As shown in Figure 2, no imaginary frequencies are found along high symmetry directions of the Brillouin zone indicating that all the investigated materials are dynamically stable. MX materials consist of 2 atoms per primitive cell resulting in (3N; N = number of atoms per primitive cell) 6 vibrational modes of which 3 are acoustic and 3 are optic modes. Dipole-dipole interactions are crucial for polar materials to describe phonon spectra correctly. These interactions are incorporated into dynamical matrix through calculated Born effective charges and high frequency dielectric constants which in turn produces a splitting between longitudinal optic (LO) and transverse optic (TO) phonon modes ( Figure 2). Due to this LO-TO splitting, the three phonon optic modes split into two degenerate TO (ω T O ) and one LO (ω LO ) modes along Γ-direction. Large LO-TO splitting is observed in particular for MO compounds and it is increasing from MgO < CaO < SrO < BaO, while it is decreasing from MO > MS > MSe > MTe compounds (see Figure 2 & Table S1). The MX compounds exhibit similar phonon band features and showed a significant phonon softening with increasing atomic mass from Mg → Ca → Sr → Ba and O → S → Se → Te and these features are consistent with previous first principles lattice dynamical calculations. [17][18][19]47 The calculated lattice thermal conductivity (k L ) as function of temperature (300-800 K) is presented in Figure 3 Table 1), the obtained k L values are overestimated compared to the ones obtained at the experimental lattice constant over the studied temperature range and this clearly demonstrates sensitivity of k L towards lattice constant(s) (see Figure S3). The predicted anomalous trends are originated from phonon softening observed from phonon dispersion curves (see Figure 2). As shown in Figure 2c Table 2). Overall, we observed two important aspects from phonon dispersion curves, which might Figure S4. For all the MX compounds, a large portion of the phonon MFPs fall above the minimum inter-atomic distance or so called Ioffe-Regel limit. Therefore, phonon Boltzmann transport theory is good enough to describe thermal transport in MX compounds. In crystalline materials, the heat transport can be understood as the propagation of phonons and their scatterings among themselves. Since k L ∝ τ (ω) and v(ω), therefore, materials with low τ (ω) and v(ω) expected to have low k L .
As illustrated in Figure 4, phonon lifetime decreases from MgO > MgS > MgSe > MgTe over the entire frequency range and the same trend is followed for k L in MgX. While CaO has relatively shorter phonon lifetimes than CaS in the frequency range of ∼ 2-8 THz (see Figure 4b), which might be reason for low k L of CaO thus results in the anomalous trend (CaS > CaO > CaSe > CaTe) for k L in CaX series. This trend is consistent with the previous lattice thermal conductivity study 46 on CaX compounds using ShengBTE. SrSe and SrO possess relatively highest and shorter phonon lifetimes in the frequency range of ∼ 1-4 THz and As shown in Figure S5, total scattering rates are obtained through summation of absorption, emission and isotope scattering rates from the three phonon processes for all the 16 MX compounds. The absorption scattering rates are largely dominated in the low frequency region (for instance, below 3 THz for BaO). In the low frequency region, phonon scattering processes probably occur through conversion of low energy phonon to a high energy phonon with an absorption of a phonon. The contribution of emission scattering rates gradually increase with frequency and are largely dominated at high frequency region, where phonon scattering processes probably occur through conversion of high energy phonon to a low energy phonon with an emission of a phonon. Finally, a moderate contribution from isotope scattering rates are observed over the entire frequency range, for instance, BaX compounds (see Figure S5).
Effect of tensile strain on lattice thermal conductivity
Out of 16 MX compounds, three of them (BaO, BaS and MgTe) found to have low k L (< 6 Wm −1 K −1 ) over the studied temperature range of 300-800 K. As illustrated in Figure 5, we compared phonon dispersion curves, phonon lifetimes, scattering rates and k L of BaO, BaS and MgTe compounds with PbTe. Softening of acoustic modes due to its high atomic mass (see Figure 5a), high scattering rates (see Figure 5b) and short phonon lifetimes (see Figure 5c) of PbTe responsible for its low k L behavior over BaO, BaS and MgTe compounds.
The obtained k L values follow exactly the decreasing order of phonon lifetimes for these four compounds, which is given as follows: BaS > BaO > MgTe > PbTe (see Figure 5c & d).
Based on this trend and observed trends for other MX compounds (see Figure 4) clearly show that phonon lifetime (τ ) is a dominating factor to determine k L behavior in iso-structural compounds with the same crystal symmetry.
We then considered these three BaO, BaS and MgTe compounds to investigate the effect of tensile strain on lattice dynamics and phonon transport. Further to lower k L , we applied tensile strain, which is an effective strategy to achieve (ultra)low k L in materials. We have systematically increased the obtained equilibrium lattice constant up to 6% but we observed soft phonon modes with tensile strain ≥ 5% of the equilibrium lattice constant for BaO, therefore, we studied the effect of tensile strain up to 4% for these three compounds. As illustrated in Figures 6a, 7a & 8a, with increasing strain, acoustic and TO phonon modes are getting softened, which increases the coupling strength between acoustic and TO phonon modes. This eventually increases phonon-phonon scattering rates with increasing strain (see Figure 6b, 7b, 8b), which causes a reduction in k L over the studied temperature range.
The phonon lifetime decreases (see Figure 6c, 7c, 8c) significantly due to high scattering rates for both acoustic and low lying TO modes with increase in tensile strain which is responsible for further lowering of k L (see Figure 6d, 7d, 8d). The obtained k L values for 4 % of tensile strain at 300 K are ∼ 2.06, ∼ 2.38, ∼ 1.05 Wm −1 K −1 for BaO, BaS and MgTe, respectively. The (ultra)low k L of strained MgTe might be a better candidate for energy conversion applications. From the present and previous studies, 46 one can expect a similar behavior for other MX compounds with application of tensile strain.
Elastic constants and mechanical properties
To explore inter atomic bonding strength, lattice anharmonicity and mechanical stability of MX compounds, we have calculated second order elastic constants (C ij ). Since all the studied MX compounds crystallize in the cubic (F m3m) structure, they have three independent elastic constants such as longitudinal (C 11 ), transverse (C 12 ) and shear (C 44 ) due to symmetry constraints (C 11 = C 22 = C 33 , C 12 = C 13 = C 23 , C 44 = C 55 = C 66 and C ij = C ji ). The calculated second order elastic constants are given in Table S2 and are consistent with the available ultrasonic pulse echo 80-82 and Brillouin scattering measurements 83 as well as with previous first principles calculations. 19,42,78,79,[84][85][86][87][88] The obtained elastic constants satisfy the Born stability criteria 89,90 indicating the mechanical stability of all these MX compounds.
We then computed bulk (B) and shear (G) moduli from the calculated elastic constants with Voigt-Reuss-Hill (VRH) approximation using equations 5 and 6, respectively. Later, B and G values are used to calculate Young's modulus (E) using equation 7. Since MgO has the highest E value, it is the stiffest material among the 16 MX compounds.
The calculated C ij 's, E, B, G moduli decrease from MO to MTe (M = Mg, Ca, Sr, Ba), which indicates the weak electrostatic/interatomic interactions in the lattice with increase in atomic size i.e., from Mg to Ba and O to Te. Therefore, the materials with higher atomic size can be easily deformed under mechanical stress thus results in low elastic moduli or soft lattice for systems with higher atomic mass.
The typical values of Poisson's ratio (σ) are 0.1 and 0.25 for covalent and ionic materials, respectively. 91 The obtained σ values span in the range of 0.18-0.28, which infer a strong ionic contribution in the interatomic bonding for these MX compounds (see Figure 1). MgO Figure 9, BaO has the highest γ σ , which indicates relatively high anharmonicity of BaO over other MX compounds which in turn leads to low k L . We then calculated sound velocities (v l , v t , v m ) and Debye temperature (Θ D ) using the following relationships: Here for BaX compounds (see Figure S8) also follow their atomic mass trend consistent with sound velocities. This results strongly suggest that phonon lifetime is the dominant factor over group velocity which is mainly responsible for the observed anomalous trends in MX compounds along with LLO and low frequency acoustic modes.
Conclusions
In summary, we have systematically investigated lattice dynamics, phonon transport and mechanical properties of 16 binary systems with rocksalt-type structure and compared their properties with an efficient thermometric material, PbTe. We predicted anomalous trends for
Lattice thermal conductivity of MX compounds
We have also calculated lattice thermal conductivity (k L ) for MX compounds at the experimental lattice constant (see Table 2) by varying metal cation with fixed chalcogen atoms (see Figure S1) and vice versa (see Figure S2). As shown in Figure S1, Table S3: Calculated Young's modulus (E, in GPa), Bulk modulus (B, in GPa), Shear modulus (G, in GPa), density (ρ, in gr/cc), sound velocities (v l , v t and v m , in km/s), Debye temperature (Θ D , in K), Poisson's ratio (σ) and Grüneisen parameter (γ) for MX compounds. S-14 Table S3 continued | 2021-12-31T16:07:24.013Z | 0001-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "bf8c70c217e322559a175090661508d9c650f4ac",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "938e88a7fc977b3233d85b927cf2e20adbe66d56",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
258986053 | pes2o/s2orc | v3-fos-license | The Level of Breeding Value of Cattle of the Auliekol Breed, Calculated by the Blup Method
: Currently, the assessment of the breeding value of livestock provides for new approaches to determine the genetic qualities of animals that sustainably transmit to offspring. This is achieved by using the phenotypic data of the ancestors, the closest lateral relatives, as well as the animals themselves and their descendants, which, according to all canons, is facilitated by the so-called BLUP methodology. The purpose of the study is to establish the estimated breeding values of Auliekol cattle by productive indicators. For the Auliekol cattle breed, the estimated breeding values in terms of live weight at birth varied in the range: The first year from -3.47 to ++9.03; the second year -5.18 to +12.60; the third year -5.22 to +12.52; in 2021-5.18 to +12.52. The percentile distribution of the calculated EBV of the live weight of the Auliekol cattle breed at weaning in 2018 varied in the range from -19.76 to +53.63, in 2019-23.88 to +63.65, in 2020 -24.33 to +63.01, in 2021 -23.58 to +62.53. The EBV in terms of live weight at one year of age in 2018 varied from -39.37 to +71.86, in 2019 -34.44 to +90.83, in 2020 -33.34 to +90.24, in 2021 -34.21 to +90.81. The estimated breeding value in terms of live weight for adult animals in 2018 ranged from -43.11 to +57.72, in 2019 -97.40 to +105.45, in 2020 -95.57 to +106.99, in 2021 -176.40 to +110.53. The EBV for the indicator of the milking capacity of mothers of the Auliekol breed in 2018 ranged widely from -28.89 to +65.89, in 2019 -26.18 to +52.71, in 2020 -24.01 to +51.55, in 2021 -23.67 to +52.75. For the Auliekol cattle breed, the EBV in terms of the average daily gain from birth to 12 months of age is in the range of the minimum indicator from -198.29 to -242.68, the maximum indicator from +235.88 to +348.71 .
Introduction
Increasing the production of high-quality beef is one of the most important and difficult tasks of agrarian science and practice, resolving which requires increased efficiency of using available breeding resources of beef cattle.The relevance of increasing meat production in the shortest possible time and therefore intensification of specialized meat cattle breeding is dictated by the need to expand the meat export potential of the country in order to ensure its food safety.Meat cattle breeding in Kazakhstan is based mainly on breeding the animals of the Kazakh whiteheaded breed; thus, its improvement largely depends on the volume of high-quality beef production (Nassambaev et al., 2018).
For Kazakhstan, the Auliekol breed is of particular interest in the development of domestic beef cattle breeding.
The aim of breeding work in cattle breeding, in particular when breeding Auliekol cattle, is to change the genetic fund of animals and improve their traits.
The Auliekol breed of cattle was bred in 1992 in the Kostanay region of the Republic of Kazakhstan by a complex, reproductive crossing of three meat breeds, such as the Aberdeen Angus, Sharolese and Kazakh whiteheaded breeds, which differed in characteristics such as precocity, large body weight, and ease of calving.
The means of changing the gene pool is selection, which uses productivity as the main indicator for changing this trait at the genetic level (Spanov et al., 2019).
The breeding value of livestock is one of the links in the implementation of the breeding program in herd populations to form targeted hereditary traits in animals and select desirable individuals when determining the breeding value of bulls (Spanov et al., 2019).
In this regard, the improvement and application of modern methods for evaluating bulls, taking into account the increase in the share of high-productive beef cattle in Kazakhstan, is an acute issue for science and practice (Kuliev et al., 2020).
In the Republic of Kazakhstan, selection and breeding work is aimed at breeding animals with specified zootechnical parameters, adapted to modern technology (Oraz et al., 2022).
The main factor in the manifestation of the genetic potential of the meat productivity of cattle is rational feeding rates that affect the metabolism, namely, the average daily gain (Musaeva et al., 2020).
In the beef cattle breeding of Kazakhstan, the Kazakh white-headed breed has been well-studied in the breeding plan (Bozymov et al., 2019).
The need for increased production of high-quality beef is primarily dictated by the requirements of the external market.In this aspect, improving the genetic processes in the population extends the real possibility of intensifying the breeding process and allows the development of new science-based programs for improving the stud and productive qualities of meat animals.In improving cattle of the Kazakh whiteheaded breed, the main method of improving the hereditary qualities today is purebred breeding by stud lines.The importance of the method of breeding by lines is in the fact that it allows fast fixation and development of the desired traits, which are characteristic of individual animals, in many descendants.This requirement primarily arises when a new breed or a new stud type of cattle appears (Nassambaev et al., 2018).
The knowledge of genotypic processes occurring in populations broadens the real possibility of intensifying the selection process and allows the development of new, scientifically-based programs to improve the breeding and productive qualities of beef cattle.Nowadays, molecular genetic methods of analysis are used to establish the origin of breeding animals (Nugmanova et al., 2020).
At this stage, the evaluation of animals by the BLUP method becomes more important, the effectiveness of which indicates the level of their breeding value by phenotypic indicators and the distribution of their degree of genetic potential.
Currently, in the Republic of Kazakhstan, the calculation of the breeding value of cattle of the Auliekol breed by the method of indices has not been carried out.Thus, all of the above determined the relevance of the research.
The Aim of the Research
This is to establish the estimated breeding values of the Auliekol cattle breed according to productive indicators.
Materials and Methods
The research was carried out at the population level of Auliekol breeding cattle bred in the Republic of Kazakhstan.Formation of data for analysis, the materials of the "republican livestock system" database of the information and analytical system were used.
Assessment of genetic qualities-an index assessment of the genetic breeding value of beef cattle was carried out using the method of the best linear unbiased prediction-BLUP.
For this, mixed linear biometric Animal Models (AM/MME) were built for each estimated productive trait: Live weight at birth, live weight at weaning, the milking capacity of cows at weaning of the calf, and live weight at one year of age.These models took into account the contributions of many factors and effects to the estimated productive trait: Fixed and genetic effects, environmental factors, seasonal factors, and random and unaccounted effects.The influence of all factors included in the model was taken into account simultaneously in the course of calculations.
The BLUP method was carried out based on data on productivity and zootechnical events of breeding cattle of beef breeds from farms registered in the Database of the Information Analytical System (DB-IAS).Initial indicators of the productivity of cattle of the studied breed for evaluation by the BLUP method: Live weight at birth, live weight at weaning and live weight at one year of age.Fixed effects of influence took into account: Differences in the content of individuals on farms; years and seasons of calving; sex and age group of calves; mother's age; type of birth (single, twin).
The biometric model of the animal considered additive genetic effects due to parental qualities in generations taken up to three ancestors, the sex of the animal, the effects of the herd, and the effects of the year and season of birth (Abdelmanova et al., 2021;Nikonova et al., 2021).
The recommendations of the international nongovernmental non-profit organization FAO regarding the assessment of the breeding value of livestock have been studied (Henderson, 1975).
When evaluating servicing bulls, statistical approaches and methods are mainly used: Assessment of the genetic breeding value of an animal according to a mixed biometric model Animal Model/Mixed Model Equation (AM/MME) using the classical method of the Best Linear Unbiased Prediction (BLUP).
The initial indicators of the live weight of young animals at birth, and at weaning were adjusted in accordance with the age of the mothers, which affect the studied indicators.Table 1 shows adjusted values for live weights at birth and weaning.
Similarly, live weight at weaning was adjusted to 210 days of age and body weight per year by 365 days of age.Initial data adjustments were made according to formulas (1-3): The Estimated Breeding Value (EBV) of the productive indicators of animals of the Auliekol breed was determined for 2018-2022.The index values were further interpreted as an assessment of the own genetic productivity of each evaluated animal relative to the corresponding average values.
Auliekol cattle are characterized by a harmonious and proportional physique and a strong and dense constitution.The suit is predominantly light and ashy (gray) in different shades.The lumpy head is short with a broad forehead.The upper line (back and loin) is straight and wide, the backbone is of medium massiveness, and a welldeveloped posterior third of the trunk with sufficient muscularity contributes to a higher yield of valuable cuts in the carcass.Strong and upright limbs have an average length, and the udder is rounded and full.The absence of horns and a calm disposition ensure good adaptability of animals of the new genotype and contribute to its suitability for the technology of loose keeping in large groups with the mechanization of the main production processes for the care of livestock, which increases labor productivity in the industry.Bulls-producers of the Auliekol breed at the age of 5 years and older reach a live weight of 950-1050 kg, which is 15-28% higher than the breed standard (class I) in Kazakh whiteheaded bulls (820 kg).The live weight of full-aged cows of the Auliekol breed is 500-600 kg, which exceeds the indicators of the Kazakh white-headed breed in the best breeding farms by 4-6% and in some years the superiority reaches 19% and higher.Auliekol cows are characterized by good maternal qualities and high milk production (live weight of a calf at 205 days of age), which exceeds the standard (class I) of the Kazakh white-headed breed by 12.3%.
Auliekol cows have sufficiently high reproductive qualities, which ensures the yield of calves in the range of 90-97 calves per 100 queens.Heifers are mostly inseminated at 17-20 months.At the same time, calving in first born heifers and cows takes place without childbirth.
Results
One of the ways to effectively improve the breeding and productive qualities of the Auliekol breed of beef cattle is to determine the genetic value of breeding bulls, selection, on this basis, the best, and their widespread use in breeding and commodity herds.The theoretical basis for the selection of the Auliekol breed of beef cattle in terms of growth intensity is population genetics, which allowed us to identify sufficiently high: Genetic variability, the heritability of this trait, and the correlation between the growth rate at a young age of the producer bull itself and its descendants.
Highly productive European breeds of cattle are widely used in various natural and climatic zones.Animals are brought into areas with similar climates and environmental conditions that are more or less different from the climate where the imported breed was formed.Animals are forced to adapt to new conditions of existence.The main natural and climatic factors acting on the body are air temperature, humidity, atmospheric pressure, etc. Mostly, these factors act in the form of a complex, but some become dominant in certain conditions (Kayumov et al., 2021).
Based on the research results, a methodology was developed for calculating the index score using the BLUP AM statistical method with the construction of a genetic model of the animal, and predicted breeding values were calculated for 5 productive indicators: Live weight at birth, at weaning, and at 12 months.age, at the age of 5 years, the milking capacity of cows.
For the Auliekol cattle breed, the estimated breeding values in terms of live weight at birth varied in the range: The first year from -3.47 to +9.03; the second year from -5.18 to +12.60; the third year -5.22 to +12.52; in 2021 -5.18 to +12.52 (Table 2).
The general conclusion on further work with the breed should be considered a priority for increasing milking capacity.Its positive result will significantly increase the live weight of young stock when weaned from mothers, that in the future will significantly affect the energy of their growth and increase in live weight at 12 and 18 months of age.This is confirmed by the high positive correlation between these traits, both in bulls and in heifers.In the second stage of the research, in the course of the breeding experiment using a common database and the BRBCB software program, we tested the proposed methodology for assessing bulls in the quality of offspring based on the selection index.When breeding meat cattle breeds, it is necessary to use animals in reproduction, which inherit high growth energy and the ability to actively convert the nutrients of plant foods to the development of muscle tissue.To identify them we should use the multi-year database of reliable data and the electronic operating system that could quickly analyze a large amount of information.This is connected with the fact that the manifestation of quantitative traits is due to the interaction of genetic and paratypical factors.If at this interaction between relatives, there is a similarity in quantitative traits, it indicates a significant genetic influence, and such animals are the most desirable for breeding (Asylbekovich et al., 2019).The distribution by percentiles of the calculated EBV values of the live weight of animals of the Auliekol breed at weaning in 2018 varied in the range from -19.76 to +53.63, in 2019 -23.88 to +63.65, in 2020 -24.33 to +63.01, in 2021 -23.58 to +62.53 (Table 3).
Currently, there is a need to study the dynamics of spermatological indicators of the semen of stud bulls and to determine the importance of stud bulls' origin within each breed, as well as to study the possibility of predicting their sperm productivity (Nassambaev et al., 2019).
In the process of calculating the EBV of animals of the Auliekol breed in 2021-2022, their accuracies were obtained (Table 7).
For the Auliekol cattle breed, the values of the calculated EBVs in terms of average daily gain from birth to 12 months of age lie in the range for the minimum indicator from -198.29 to -242.68, for the maximum indicator from +235.88 to +348.71 (Table 8).
Table 9 shows the values of the breeding value indices for three indicators (live weight at birth, at weaning, and at 12 months of age) for 10 heads of bulls and 10 heads of heifers of the Auliekol breed.
It was found that bulls significantly outperform heifers in live weight at birth, at weaning, and at one-year-old age.
At birth, bulls have a live weight of 24-30 kg, at weaning 207-235 kg, and at one-year-old age 35-335 kg.In heifers, the live weight was 22-24 kg at birth, 185-210 kg at weaning, and 255-280 kg at one-year-old age.
The breeding value index was -0.62 and 2.46 for live weight at birth in bulls, and -1.26 and 1.63 for heifers.
The breeding value index was -3.33 and 4.40 for weaning in bulls, and -1.86 and 2.42 for heifers.
At one year of age, the breeding value index was, respectively: Bulls -3.79 and 18.78; heifers 4.30 and 11.65.
Discussion
In the conditions of Kazakhstan, on the basis of the conducted research, the values of the indices of the breeding value of cattle of the Auliekol breed according to the main productive indicators have been established at the population level.The obtained data on the live weight of young animals at birth, and at weaning were adjusted in accordance with the age of the mothers, which affect the studied indicators.Previously, the indices of the breeding value of cattle of the Hereford breed of the Kazakh population were determined using generally accepted research methods.At the same time, it was found that the share distribution of accuracy for the calculated breeding value index in 2021 according to the productive indicators of Hereford breed animals contains zero values for the most part for the indicators of dairy cows (Bissembayev et al., 2022).Similar data were obtained by us for the Auliekolsky breed of cattle.Based on the research results, a methodology was developed for calculating the index score using the BLUP AM statistical method with the construction of a genetic model of the animal, and predicted breeding values were calculated for 5 productive indicators: Live weight at birth, at weaning, and at 12 months age, at the age of 5 years, the milking capacity of cows.An increase in the proportion of non-zero accuracy values of the EBV of Auliekol cattle indicates a better filling of the database on productive indicators over the past 5 years (2018)(2019)(2020)(2021)(2022).The obtained results provide an opportunity to analyze and rank the studied individuals of the Auliekol cattle breed according to the level of their breeding value of genetic qualities, with a purposeful selection of parental pairs.It is proposed to use the data obtained in the large-scale assessment of cattle of the Auliekol breed of cattle according to the indices of breeding value.When working with herds of the Auliekol breed of beef cattle, a comprehensive assessment of animals is used, 234 which is possible when a certain age is reached.The use of an assessment based on the indices of breeding value makes it possible to evaluate an animal at an early age.According to the results of the conducted research, it was found that the breeding value of the Auliekol breed of beef cattle in different age periods is manifested differently and this must be taken into account.The established index of breeding value of the Auliekol cattle breed is recommended to be used as an addition to traditional methods of breeding and management of breeding work.
Conclusion
The conducted studies on the evaluation of the breeding value index of cattle of the Auliekol breed allowed for the first time to calculate the breeding value indices: According to the indicators of live weight at birth, at weaning, at 12 months of age and adult livestock; according to the milk productivity of mothers; according to the average daily increase in live weight from birth to 12 months of age.The accuracy is calculated for the index of breeding value of productive indicators of the studied breed of cattle.Based on the conducted research, taking into account the novelty of the results obtained, it is proposed to continue research on the study of cattle of the Auliekol breed, covering the entire available breeding stock.This will increase the reliability of the evaluation of animals according to the indices of breeding value, and minimize the receipt of negative results when conducting breeding and genetic work.
Table 1 )
СМг = Adjusted live weight at one year of age, kg Мг = Live weight at one year of age, kg Вг = Animal age when weighed at one-year-old, days
Table 1 :
Adjusted values for indicators of live weights of the calf, taking into account the age of the mother Correction for live weight at weaning, kg -
Table 3 :
Percentile of calculated EBV values of live weight of Auliekol cattle breed at weaning Year of observation -
Table 4 :
Percentile distribution of calculated EBV values of live weight of animals of the Auliekol breed at one year of age Year of observation -
Table 5 :
Percentile distribution of calculated EBV values of live weight of Auliekol breed animals aged 5 years and older Year of observation -
Table 6 :
Percentile distribution of calculated EBV values of the milking capacity of Auliekol breed mothers Year of observation -
Table 7 :
Percentile distribution of calculated accuracy for EBV of productive indicators of Auliekol breed animals, according to 2021-2022 data Accuracy of EBV of live weight, kg -
Table 8 :
Percentile distribution of calculated EBV values of average daily growth of Auliekol breed animals EBV, g/day
Table 9 :
The results of the index evaluation of the live masses of animals of the Auliekolsky breed based on the results of their own calculations Live weight, kg Identification
Table 10 :
Average values of productivity indicators of Auliekol cattle | 2023-05-31T15:14:16.231Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "7499600e4714a4bbf363104c452ff7f282383cbd",
"oa_license": "CCBY",
"oa_url": "https://thescipub.com/pdf/ojbsci.2023.226.235.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6827d856638d564d0112bc292b744bcd1cdcb6e8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
14803359 | pes2o/s2orc | v3-fos-license | Exploitation of an Arabic Language Resource for Machine Translation Evaluation: using Buckwalter-based Lookup Tool to Augment CMU Alignment Algorithm
Voss et al. (2006) analyzed newswire translations of three DARPA GALE Arabic-English MT systems at the segment level in terms of subjective judgmen+F925t scores, automated metric scores, and correlations among these different score types. At this level of granularity, the correlations are weak. In this paper, we begin to reconcile the subjective and automated scores that underlie these correlations by explicitly grounding MT output with its Reference Translation (RT) prior to subjective or automated evaluation. The first two phases of our approach annotate {MT, RT} pairs with the same types of textual comparisons that subjects intuitively apply, while the third phase (not presented here) entails scoring the pairs: (i) automated calculation of MT-RT hits using CMU aligner from METEOR, (ii) an extension phase where our Buckwalter-based Lookup Tool serves to generate six other textual comparison categories on items in the MT output that the CMU aligner does not identify, and (iii) given the fully categorized RT & MT pair, a final adequacy score is assigned to the MT output, either by an automated metric based on weighted category counts and segment length, or by a trained human judge.
Introduction
analyzed the newswire translations of three DARPA GALE Arabic-English machine translation (MT) systems at the segment level in terms of subjective judgment scores, automated metric scores, and the correlations among these different score types. At this level of granularity, while one automated metric 1 clearly correlated better than the other automated metrics with the subjective judgment scores, overall the correlations were weak. In this paper, we begin to reconcile the subjective and automated scores that underlie these correlations by explicitly "grounding" MT output segments with their Reference Translation (RT) prior to subjective or automated evaluation The first section of the paper introduces our approach to tackling MT evaluation at the segment level where we exploit our Buckwalter-based Lookup Tool (BBLT) to augment the "search space" of a reference translation (RT) with BBLT translations of the original source segment. The full approach consists of three stages: (i) an automated calculation of "MT-RT hits" using the CMU aligner from METEOR, followed by (ii) an extension phase where the BBLT serves to help identify six other categories of matches and non-matches on items in the MT output that the CMU aligner did not handle, and then (iii) given the fully category-annotated {RT, MT} pair, a final adequacy score is assigned to the MT output, either by an automated metric based on weighted category counts and segment length, or by a trained human judge. We describe the first two stages of our approach and the six annotated categories as they apply to the {RT, MT} pair for one Arabic MT input segment. In the Results and Ongoing Work section, we 1 METEOR Lavie and Agarawal 2007) show how these two stages yield various combinations of annotation categories on the outputs of six different current Arabic-English MT engines. We conclude the paper by reviewing the weak correlation results from Voss et al. (2006) as they relate to our plans to test for correlations on subjective judgments collected on color-coded annotated {RT, MT output} pairs with various automated metrics run on these pairs.
Approach
Before describing the software and computational steps for phases (i) and (ii) of our approach, we describe the color-coded annotations that are generated during these phases to document various types of textual comparisons that subjects intuitively apply to {RT, MT output} segment pairs when scoring them for translation adequacy.
Category Annotations
The categories are described below for the omniscient annotator who is annotating text in {RT, MT output} pairs as shown in Figure 1. We expect, as in the development and application of all annotation schemes, that these category definitions will require iterative refinement after being assessed for inter-annotator reliability. In phase (iii) categories are weighted in text-based automated metric alternatives that correlate with subjective judgments.
SOURCE:
اﻟﺒﻮﻳﻨﻎ ﻃﺎﺋﺮات وﺗﺤﺘﺎج 003 -737 ﺑﻄﻮل ﻣﺪرج اﻟﻰ 0022 ﻣﺘﺮا اﻻﻗﻼع او ﻟﻠﻬﺒﻮط اﻻﻗﻞ .ﻋﻠﻰ RT: A Boeing 737-300 requires a runway that is at least 2200 meters long for take off and landing MT: The Boeing-737-300 aircraft to included the length of at least 2200 metres landing or take off. Automated metrics such as BLEU, NIST, and METEOR identify the correct translations in terms of (i) "exact hits" and "synonym/stemmed hits" where the MT output correctly matches the RT text. In Figure 2. below, category (i) tokens are annotated in green in the RT text and MT output, after being aligned by the CMU software, matching literally or on synonym from WordNet or by stemmed matching of literal or synonym.
But these metrics do not give credit for other types of correctly translated items in the MT output: (ii) "RT gaps" where the MT engines correctly output text content that the human reference translator did not capture, either by mistake or by intentionally opting to omit content believed to be obvious to an English speaker. In Figure 3, category (ii) tokens are annotated in blue, as occurs with the word "aircraft" in the MT output, that is missing in the RT. The BBLT analysis identifies this inconsistency because it displays all tokens in SL with their own column, as can be seen in Table 2, second column from the left, for the "aircraft" token.
(iii) "paraphrase hits" where the MT correctly outputs content equivalent semantically to the RT, but not literally identical. In Figure 3. the RT phrase "at least 2200 meters long" corresponds semantically to the MT output phase "the length of at least 2200 metres". The non-literal, but semantic correspondence is annotated in blue in the RT and the MT output. BBLT together with WordNet can identify correspondences such the "long"/"length" in the example. 2 We expect that ultimately a source of monolingual paraphrases and alternative equivalent multi-word expressions can be added to this identification task (Ellsworth and Janin, 2007).
(iv) "RT-MT dual divergences"
where the MT is literally correct, but does not match the RT term even though the MT and RT terms correspond in this context without distorting the meaning due to colloquial or idiomatic expressions. The terms are annotated in purple, for example in Figure 3 with "and" in the RT and "or" in the MT output. BBLT provides the terms for spotting these divergences.
(v) "NFW transliterations"
where the MT correctly retains terms in its output for which it has no translation, typically new names. These out-of-vocabulary (OOV) terms should not be discarded by MT engines even though they are not fully correct, because they may be adequate spell-outs of names that MT user will work with. The OOV, transliterated term "Alam" in the MT output and its translated term in the RT "found out" are annotated in purple, for example in Figure 5c. The BBLT can be run with its transliteration feature on, enabling a non-Arabic reader to see transliterations aligned with their translations.
2 WordNet defines synset with "length (a section of something that is long and narrow)" Furthermore automated metrics do not explicitly identify two types of MT errors (vi) "MT gaps" where the MT output is incorrect by failing to contain content corresponding to content word(s) in RT. These terms in the RT are annotated in yellow and do not have a corresponding term in the MT.
The BBLT analysis will identify these since all content words in SL will have a column in the output and this term will not show. For example, in Figure 3. the RT verb "requires" does not have a corresponding term in the MT output. (Note that some MT systems, in being optimized for a particular automated metric, end up dropping NFWs to boost their score. This pattern can be detected by the BBLT analysis that finds RT items that match in the BBLT table, but fail to match MT items, to identify "MT drops."
(vii) "MT lexical selection errors"
where particular word translated is incorrect for the context. BBLT may identify these since alternate translations of a word may be in other rows of the column of the SL token and share no terms in WordNet synsets. (The BBLT analysis enables us to distinguish such errors from the "MT hallucinations" of statistical MTs, where the lexical selection driven by training data does not correspond to any Buckwalter or dictionary translation of any of the SL words.) These forms of incorrect terms are annotated in red in both the RT and the MT outpur. For example in Figure 3., the MT "included" is a mistranslation of the RT "runway" as can be seen in BBLT. When the mistranslations are close with some shared semantics, then the annotation is in gray as shown in Figure 5b., where the RT "found out" and the MT output "aware of" are both in gray.
Annotation Algorithm
The process for annotating the {RT, MT output} pairs starts with (i) the CMU alignment phase and then proceeds to (ii) a BBLT analysis phase.
CMU Alignment
We start by inputting a pair of RT and MT segments into the automatic word aligner from CMU's METEOR (also used within CMU's MEMT algorithm), for a first-pass analysis of the exact hits and synonym/stemmed hits, in category (i) above. The results of this phase for the RT and MT from Figure 1 are shown in Table 1 below. 1 1 artificial 2 2 exact 3 3 exact 9 10 exact 11 12 exact 12 13 wn_synonymy 15 16 exact 16 17 exact 18 14 exact Figure 1.
RT MT CMU category
The numbers in Table 1. stand for word positions in the RT and MT. For example, RT 18 and MT 14 (in last row) correspond the match terms "landing". To illustrate the hits found in this way, the corresponding items in the RT and MT segments are annotated in green in Figure 2. We can also see that token 12 for "meters" in the RT column of Table 1 matches token 13 in the MT for "metres:" the algorithm reconciles the different spellings via a WordNet synonymy check.
RT: A Boeing 737-300 requires a runway that is at least 2200 meters long for take off and landing
MT:
The Boeing 737-300 aircraft to included the length of at least 2200 metres landing or take off Keep the CMU results as generated by the aligner, now in the fifth column. For the exact matches and WordNet synonym/stem matches, the sixth, seventh, and eight columns are filled with "accept", blank, and "MT correct". For the "artificial" matches, the sixth column is marked "Review", since a human needs to compare the RT and MT items of that row for scoring. Typically the "artificial" matches are pairs of closed class words that are not translations of each other. We allow the human reviewer to assign partial credit if it is clear that words correspond to each other, as in the "a" and "the" in the given example. Only the human review and credit allocation in the sixth column of "artificial" rows need be done manually.
BBLT Analysis
In the second phase, the source language sentence is input to the BBLT (available both as web service and as GUI) and a GUI table result appears, see Table 2 for the source segment in Figure 1. The analysis that follows extends the matched alignment that occurs between the RT and BBLT, and between the MT and BBLT. The BBLT Results Screen shows the English meanings in a table where each column corresponds to an Arabic token in the input sequence, but presented in reverse order. That is, the right-to-left Arabic order of the original input sequence is transformed in the Results Screen table into a left-to-right order.
For each such "BBLT match alignment" of the CMU non-matched words in the RT or the MT for which there is also a corresponding column in the BBLT GUI Each new row is binned into one of the categories (ii)-(vii) identified above. The algorithm for filling the first/second and third/fourth column pairs of these new rows is based on content inspection of the corresponding BBLT column. For example, the word "aircraft" shows up in BBLT as well as in position 4 of the MT, but no equivalent is present in the RT, so the first//second RT pair is left blank and the third/fourth pair is filled with "4" and "aircraft". This is categorized as (ii) RT gap and colored blue, since the word is a correct translation but the human reference translator opted not to include it. The (iii) MT paraphrase case is illustrated in augmented CMU+BBLT Table 3 in the RT "13 long" and the MT pair "8 length".
RT:
A Boeing 737-300 requires a runway that is at least 2200 meters long for take off and landing.
MT:
The Boeing-737-300 aircraft to included the length of at least 2200 metres landing or take off.
Results and Ongoing Work
We now show how these two stages result in a range of different categories on the outputs of six different current Arabic-English MT engines. Figure 4. presents source language segment, a reference translation, and then the machine translation outputs for that same input segment. Figures 5a through 5f show the pairwise color-coded annotation of the MT and RT pairs.
While all the MTs translated the subject of the sentence correctly, only MT 1 is successful in situating the full subject NP at the front of the sentence. MT2 selected a partially correct translation for the leading verb and MT5 found the correct translation, but both left it in sentence-initial position. MT3 transliterated the leading verb and MT 6 mistranslated it, and again both also left it in sentence-initial position. MT4 found the verb but appears to have moved the sentence-final termporal expression to the front of the sentence, leaving the verb-subject order unchanged. Given the preponderance of verb-initial sentences in Arabic, it is quite surprising that only one MT engine handled this construction correctly.
Similarly while all the MTs indicate a start date of a battle and a time of Wednesday, only MT 1 and MT 4 are successful in moving the time out of sentence-final position to get the correct verb-event reading where the time modifies the knowing/finding out, not noun phrase-event of the start date of the battle. The sequence of RT-MT shared color coding for adequacy (recall that green and blue indicate correct matches and gray indicates a partial match) and the fluency of the text within a singly-colored sequence indicate that MT 1 should be subjectively judged the best translation and MT 6 the worst (recall that red is error and yellow is missed terms).
Given the ease with which we can "see" and rank MT outputs for their translation adequacy with this color-coded annotation, the next challenge in our phase (iii) research is to identify the set of annotated textual comparisons that subjects use in judging annotated MT output so that these can be incorporated into automated evaluation metrics. We will know that we have made progress in reconciling the subjective and automated scores when we can revisit the translations from the scatterplots in Figure 6 (from Voss et al. 2006) and show that these weak correlations can be improved with annotated text comparisons relevant both to subjects judging adequacy and to MT developers in need of sensitive, well-calibrated automated metrics for training and optimizing their MT engines.
Source Language Text:
ا ﺑﺘﺎرﻳﺦ اﻟﺠﻴﺶ ﻗﺎﺋﺪ ﻋﻠﻢ اﻻرﺑﻌﺎء ﻳﻮم اﻟﻤﻌﺮآﺔ ﺑﺘﺪاء . Reference Translation: The Army commander found out about the start date of the battle on Wednesday.
MT 1: the army leader knew on Wednesday in the clash beginning date.. MT 2: Aware of the army commander on the battle beginning on Wednesday. MT 3: Alam, the army commander by the battle beginning history on Wednesday MT 4: Day of Wednesday knew commander of the army in date of start the battle. MT 5: know leader army with date starting the battle wednesday. MT 6: the flag of the Army Commander on beginning the battle on Wednesday Table 4. BBLT Results Screen for input sequence in Fig. 4, where the original input is right-to-left on the input line, but the results table reverses words into left-to-right order. | 2014-07-01T00:00:00.000Z | 2008-05-01T00:00:00.000 | {
"year": 2008,
"sha1": "7f508d27ac25124a65c249905df5b22783ba1ec7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "7f508d27ac25124a65c249905df5b22783ba1ec7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
234128426 | pes2o/s2orc | v3-fos-license | Vibration based Data Analysis of Single Acting Compressor through Condition Monitoring and Multilayer Perceptron – A Machine Learning Classifier
The air compressor is one of the desired mechanical equipment used for producing compressed air, which is utilized for performing various industrial and domestic functions. Its operation involves several rotating and fluctuating members which fail due to several miscellaneous reasons as the members prone to dynamic working environment quite frequently. The deficiencies create huge impact over the overall performance and thus leads to economic losses associated with system seizure. It is now essential to predict the occurrence of faults at earlier stages in order to avoid major shutdowns. Hence, in this article, a data modelling study using a machine learning algorithm is proposed. Initially, the vibration signals are measured as physical parameters from the compressor test rig as it contains critical information regarding the system working conditions instantly. The statistical features were extracted from the acquired signals and by using the J48 algorithm the most prominent features were selected. These selected features were classified using Multilayer Perceptron and its performance in fault classification was presented
Introduction
Air compressor is one of the important mechanical equipment which are widely used in critical industrial and domestic applications. The faults associated with such crucial systems indulge in process malfunctions which may also lead to severe critical causalities. Hence, monitoring of the system conditions continuously has gained importance and which is done through condition monitoring that helps to detect, analyze and diagnose the roots of the failures in advance. The system conditions are monitored through the real time physical parameters associated with the systems, where a change in signal pattern insists upon a considerable development of system faults. Over the years, many researchers have conducted various fault diagnosis study using different techniques and methodologies to predict the occurrence of faults in advance. Some of the literatures pertaining to the compressor fault condition monitoring are discussed through the section below. IOP Publishing doi: 10.1088/1757-899X/1012/1/012032 2 Data capturing is the heart of fault diagnosis where the physical parameters are measured to effectively diagnose the condition of the machine [1]. The compressor system was divided into four phases namely, piston head, non-return valve (NVH), opposite of non-return valve and opposite of flywheel side. On each phases, six sensor positions are considered. A statistical approach was carried out on those 24 positions and by the values of peak amplitude, standard deviation, variance and root mean squared (RMS). The results suggested that, the collection of data is found effective when the sensor is positioned over the piston head [2].
The valves are considered as the frangible part of reciprocating compressor where the periodic failure happens. Therefore, the valve fault diagnosis is crucial to avoid major causalities and downtime losses [3]. The valve cracks and valve breaks are the most common valve fault conditions, Kurt Pitchler et al.., presented a diagnostic approach to detect the faults through vibration signals. At first, the obtained information was changed into multi-dimensional vector space and thereby metric data was defined. They have utilized data to calculate the variation in gap between the testing compressor and an actual compressor. Higher the variation indicates an abnormality in the working of compressor. [4]. Yuefei Wang conducted an experiment to diagnose the typical compressor valve faults. Leakage, valve flutter, delayed closing and improper fit was the conditions investigated through acoustic emission and simulated valve motion [5]. S Meenakshi Sundaram conducted a fault diagnosis study to diagnose rotating machine faults. A total of 24 fault classes was investigated through acoustic and vibratory parameters. Here, the decision tree of C4.5 algorithm was utilized to narrow down the most contributing features for classification and those features was fed into Ada-boost algorithm for classification. The results indicate that, 90% automation was achieved in determining rotating machine fault diagnosis [6]. Milad Golmoradi presented a fault diagnosis study on air compressor. Daubechies wavelet transform was used for the analysis of the compressor conditions and through J48 algorithm, the 93.33% accuracy was obtained in classifying compressor faults [7]. Kotha prashanth et al.., conducted a study on vibration based fault monitoring in compressor system. A total of 4 test conditions was investigated under all tree based classifiers and it was found that Random Forest Tree classifier showed the highest fault classification result of 86% accuracy in classifying compressor faults [8]. Sumit Kumar Sar et al.., conducted a fault diagnosis study on a rotating machinery through vibration signals. This study has considered room mean square (RMS), kurtosis and crest factor as an input features and those features effectiveness in fault classification was compared with the machine learning classifiers like; Probabilistic Neural Network (PNN), the Decision tree, K-nearest neighbour classifier and Radial Basis Network (RBN) classifier. They concludes that, the decision tree performs better as compared with the other classifiers and they have suggested it as an important tool for the problems that dealt with non-linear data classification [9]. Abdenour Soualhi conducted a study on bearing health monitoring. Hilbert -Huang Transform technique was implemented along with Support Vector Machines (SVM) and Regression classifiers. The experimental results gives the clear indication that, the degradation state of bearings was detected efficiently through the developed technique [10] W S Yang et al.., conducted a fault diagnosis study on air compressor using Probabilistic Neural Network (PNN) as classifier. At first, the features were extracted using the Wavelet packets (WPD) and the Continuous wavelet transform (CWT) as the feature extraction tool and the results indicates that, the lifting wavelet transform produces a comprehensive result in reflecting the fault conditions and this method is suggested as a tool for online fault diagnosis of air compressor [11]. Joshuva et al.., conducted a study on wind turbine blade condition monitoring through statistical features. Here, total of six different blade faults was investigated through vibration signals. The statistical feature extraction technique was used and the J48 decision tree algorithm is used for the selection of features and Rough Set Theory (RST) feature classification tool is used as a classifier. The classification accuracy was found to 75.5% and since it produces a minimal mean absolute error, they suggested that RST can be employed to identify the blade conditions [12]. The step by step process of the data modeling study are displayed through the Figure 1.
Experimental studies
A data modeling study on the compressor system is developed to monitor the dynamic characteristics while operating under five different fault conditions. This section explains in detail about the experimental setup and the procedure adopted for the effective conduct of the experiment.
Experimental setup
A single-stage reciprocating air compressor illustrated in Figure 2 is taken as an experimental setup. The overall experimental arrangement includes; a piezoelectric transducer, a data acquisition (DAQ) module, signal processing setup with NI Lab View. The vibration signals are acquired for six different test conditions of the compressor using an accelerometer sensor at 500g range, 100mV/g sensitivity and with a resonant frequency of 40 Hz. [13]. Figure 3 shows the pictorial representations of the faults created.
Statistical feature extraction process
Once the vibration signals were collected from the experimental test rig, these signals are processed to extract meaningful information through the number of features present internally. Here, in this article, the statistical features are extracted using descriptive statistics [14]. This extraction process is effective enough to detect the changes in vibration signals for any kind of mechanical failures and thus it has been used to detect the fault occurrence of an air compressor system. [15]. Standard deviation, skewness, standard error, kurtosis, sum, mean, range, sample variance, mode, medium, minimum and maximum were the features computed out of this extraction process. Out of this extracted features, the most contributing ones are identified through the feature selection process and those features will be served as an input to the classifier in order to determine the fault classification performance.
Decision tree based feature selection
Several data mining process are handled nowadays to retrieve collectible information from the available data set. One such technique is the decision tree. Its structure comprises of the root, a number of nodes, and leaves and it follows the "Top-Down Induction on Decision Tree" system where top nodal feature contribution towards fault classification is high when compared with the other subsequent nodal positions. The existence of an attribute in a decision tree structure gives information about the importance of an associated attributes during classification. The C 4.5 algorithm follows two important phases during feature selection namely; a building phase and a pruning phase. The following subsection provides a detailed explanation of these phases.
Building phase
This phase explains in detail about the construction of the decision tree [16]. Here, the top nodal position showcases the feature with a maximum contribution towards fault classification. Now, the root node is partitioned into two internal nodes or decision nodes (binary decision tree) based on the trail on an attribute, and those internal nodes are connected through the branches. These attributes are selected through the estimated entropy-based information gain. For each and every partitioning, an additional node is attached to the existing structure and this process continues until an identical class appear over the partitioning.
Pruning phase
The features that present in the structure of the decision tree don't exhibit equal contribution in classification. Therefore the features which exhibit the low or negligible contribution are to be pruned for an effective fault classification outcome. Here, the J48 algorithm follows an error-based pruning process where the error rate is calculated at each nodal position of a decision tree based on the overall aggregate of misclassification. From the calculated error rate, the features with the lowest error rate values were selected for further classification, and the features that exhibit the maximum error rate are pruned out of the decision tree. Figure 4 shows the post pruned decision tree structure where only 7 out of 12 input features are present.
Feature classification process
It is a class of feed-forward artificial neural networks (ANN) and it is often called as a neural network or a multilayer perceptron (MLP). This classifier can be trained to approximate virtually any smooth and measurable functions. And moreover it doesn't make prior assumptions as in case of other classifiers concerning to the data distribution. This classifier is suitable for processing a non-linear functions which can also be trained to accurately generalize when given with a new and unseen data for a non-linear applications [17]. The structure of a MLP possess more than one hidden layer of the perceptron. Each perceptron is a single neuron model (Figure 5a) which accepts the weighted input for further activation. The multilayer perceptron neural network model possesses three important layers namely; input, hidden and output layers (Figure 5b). The multilayer perceptron neural network model possesses three important layers namely; input, hidden and output layers (Figure 5b). The hidden and output layer uses an activation function for its operation. Here the data sets are trained through back-propagation; a supervised learning technique. Where a weighted biased sum is added to the given inputs and the overall activation function used as a transfer function for producing the output [18]. Initially, the neural networks are trained on the datasets and the procedure for the preparation of data for the training is as follows; Input data should be numerical The network process the input and upon activating the activation function, the neurons finally produces the desired output value The obtained results are compared with the expected outcome and the deviation or error in calculation were determined The network model is trained again and again to have an effective outcome
Result and discussion
The vibration data were acquired for six different test conditions of air compressor and the statistical features associated with the signals were extracted using descriptive statistics. And the decision tree (J48 algorithm) is used for the selection of the most contributing features for fault classification from the computed features. And it is clear that, minimum, standard error, range, mean, skewness, maximum and kurtosis were the features present out of the given input features (Figure 3).
Effect of the number of input features
The standard rule of decision tree is that, the feature which contributes to the maximum in fault classification present at the top nodal position and the subsequent positions contribution reduces as compared to the previous nodal position of a decision tree. Therefore, it is essential to study the effect on the number of input features of decision tree in fault classification. Hence the features are combined from top nodal position towards the bottom nodal position of the decision tree (top to bottom hierarchy) and the classification accuracy for each and every combination were determined through J48 algorithm. Table 1 shows the suitable combinations and their corresponding classification accuracy Figure 6 gives the graphical plot generated for the variation in learning rate at momentum value as 0.2. Table 2. The Table 3 shows the detailed class wise accuracy for each classes which indicates the classifiers performance with different rate of global measures [18]. From the confusion matrix (Table 2), it is clear that, 575/600 samples were classified correctly to their respective classes and 25/600 samples were misclassified. The diagonal elements shows the instances that are correctly classified out of the given input instances and the non-diagonal elements shows the instances that are misclassified. This misclassification in confusion matrix indicates that during the machine learning process, 25 samples were misclassified or identified as other faulty condition of the study. In condition outer valve fluttering (OVF), the total input samples were correctly classified to their respective classes, but in condition inlet & outlet valve fluttering (IOVF), 10/100 samples were misclassified into valve plate fluttering (VPL -9) and the remaining one as GOOD condition. In condition valve plate leakage (VPL) and pressure relief valve (PRV), 5/100 samples were misclassified. In PRV, 5 samples are misclassified as the condition GOOD and in VPL, 4 samples are misclassified as the condition IOVF and the remaining one sample is misclassified as the condition OVF.
Conclusion
A vibration based fault diagnosis on air compressor system have been performed in this article. A total of six different compressor test conditions were investigated using 100 samples of vibration data. From the acquired signals, the statistical features have been extracted and the decision tree of J48 algorithm have been utilized as a feature selection tool and the best contributing features were selected and served as an input to the classifier Multilayer perceptron to measure its effectiveness towards fault classification outcomes. The important highlights of the obtained results are listed below, Out of 12 statistical features; minimum, standard error, mean, range and skewness alone are 10 sufficient enough to produce a better classification outcomes The algorithm produces 95.83% as the classification accuracy with a computational time of 0.56 seconds Hence, the results clearly indicates that, the compressor fault classification under statistical features exhibits the substantial result and performance. And it can be implemented in real time applications for an effective multi fault classification outcomes. | 2021-05-11T00:04:32.860Z | 2021-01-08T00:00:00.000 | {
"year": 2021,
"sha1": "128a93c4d43e0d68112606d7b5a7a650c9eed44c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1012/1/012032",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fd30ea90adc3a4aa6eec5891915ff5369b44c22f",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
269135667 | pes2o/s2orc | v3-fos-license | The Inhibition of Fibrosis and Inflammation in Obstructive Kidney Injury via the miR-122-5p/SOX2 Axis Using USC-Exos
Background: Fibrosis and inflammation due to ureteropelvic junction obstruction substantially contributes to poor renal function. Urine-derived stem-cell-derived exosomes (USC-Exos) have therapeutic effects through paracrine. Methods: In vitro, the effects of USC-Exos on the biological functions of HK-2 and human umbilical vein endothelial cells were tested. Cell inflammation and fibrosis were induced by transforming growth factor-β1 and interleukin-1β, and their anti-inflammatory and antifibrotic effects were observed after exogenous addition of USC-Exos. Through high-throughput sequencing of microRNA in USC-Exos, the pathways and key microRNAs were selected. Then, the antifibrotic and anti-inflammatory effects of exosomal miR-122-5p and target genes were verified. The role of the miR-122-5p/SOX2 axis in anti-inflammatory and antifibrotic effects was verified. In vivo, a rabbit model of partial unilateral ureteral obstruction (PUUO) was established. Magnetic resonance imaging recorded the volume of the renal pelvis after modeling, and renal tissue was pathologically analyzed. Results: We examined the role of USC-Exos and their miR-122-5p content in obstructive kidney injury. These Exos exhibit antifibrotic and anti-inflammatory activities. SOX2 is the hub gene in PUUO and negatively related to renal function. We confirmed the binding relationship between miR-122-5p and SOX2. The anti-inflammatory and antifibrotic effects of miR-122-5p were inhibited, indicating that miR-122-5p has anti-inflammatory and antifibrotic effects by inhibiting SOX2 expression. In vivo, the PUUO group showed typical obstructive kidney injury after modeling. After USC-Exo treatment, the shape of the renal pelvis shown a remarkable improvement, and inflammation and fibrosis decreased. Conclusions: We confirmed that miR-122-5p from USC-Exos targeting SOX2 is a new molecular target for postoperative recovery treatment of obstructive kidney injury.
Introduction
Congenital obstructive nephropathy is caused by structural abnormalities of the urinary system [1] and is the main cause of chronic kidney disease (CKD) in children [2].The most common lesion location is the ureteropelvic junction (UPJ).The progression of the lesion involves renal interstitial inflammation and fibrosis, both of which can damage kidney function.Moreover, because congenital obstructive kidney disease is mostly caused by incomplete unilateral ureteral obstruction (UUO), many patients do not have typical symptoms in the early stages, leading to some older children who have already experienced a certain degree of renal interstitial fibrosis and renal dysfunction when seeking medical treatment.Given the unique nature of the child population, this study focuses more on the growth and long-term function optimization of the kidneys.Even after surgical removal of the obstruction [3], new treatment methods are needed to reduce the progression of renal injury.Although relieving obstruction has remarkable therapeutic effects on both acute kidney injury (AKI) and CKD caused by obstructive kidney disease, long-term renal sequelae after relieving obstruction may eventually result in progression to end-stage kidney disease [4].
miRNAs are often regarded as key factors regulating intercellular communication, and on the basis of the inhibitory effect of miRNAs on target mRNA gene expression, exosomes are used to deliver specific miRNAs, targeting damage or disease-related target genes to achieve precise molecular-targeted therapy for damage repair and disease [12,13].Bone marrow mesenchymal stem cells (BMSCs) overexpressing miR-let-7c selectively localized in the damaged kidney and up-regulated the expression of miR-let-7c.Compared with the control treatment, miR-let-7c-BMSC treatment alleviated renal injury and remarkably down-regulated type IV collagen, matrix metalloproteinase-9, and transforming growth factor-β1 (TGF-β1), and TGF-β1 receptor expression in kidneys with UUO [14].Adipose-derived mesenchymal-stem-cell-derived exosomes improved diabetic nephropathy by transferring miR-26a-5p to high-glucose-induced mouse glomerular podocytes (MPC5), improving the viability of MPC5 cells, while inhibiting their apoptosis [15].
Urine-derived stem cells (USCs) are a newly discovered type of stem cell that was first reported by Zhang et al. [16] in 2008.They are a subset of cells with mesenchymal stem cell characteristics isolated from urine.Compared to other mesenchymal stem cells, USCs exhibit better potential to differentiate into urinary system tissue [17][18][19].USCs can serve as an ideal source of seed cells for tissue damage repair, mainly due to their simple, low-cost, and noninvasive acquisition procedures [20].With continuous in-depth research, miRNAs have been found to be key molecules in the therapeutic e ffect of USC-Exos.USC-Exos can protect against AKI through exosomal miR-146a-5p, which targets the 3′ untranslated region (UTR) of interleukin-1 (IL-1) receptor-associated kinase 1 and subsequently inhibits nuclear factor κB signaling and infiltration of inflammatory cells to protect renal function [21].The application of USC-Exos in obstructive kidney injury has not yet been reported.
However, neither the UUO or ischemia-reperfusion injury models used in AKI studies nor the drug-induced (gentamicin or streptozotocin) models used in CKD studies are fully suitable for the pathological changes of UPJ obstruction (UPJO) [22,23].Therefore, this study found that a partial UUO model (PUUO model) shows more similarities to the etiology and pathological changes of UPJO.This study aims to investigate whether USC-Exos and exosomal miR-122-5p have an active therapeutic effect on renal fibrosis and inflammation induced by PUUO/TGFβ1 + IL-1β in vivo/in vitro.Through screening data from the Nephroseq V5/GEO (Gene Expression Omnibus) database and predicting the binding sites of miR-122-5p target genes in the miRTarBase database, as well as experimental validation in this study, we found that SOX2 may be a negative regulatory factor related to obstructive kidney injury, and miR-122-5p can specifically bind to SOX2, exerting therapeutic effects by targeting SOX2 to inhibit its expression.miRNA delivery based on exosomes may provide a new molecular targeted therapeutic approach for the treatment of obstructive kidney injury.
Cell culture and identification of USCs
Fresh clean middle urine samples were obtained from 10 healthy volunteers.A total of 200 ml of urine from each donor was obtained in one experiment.The sample was centrifuged at 400g for 10 min and then washed once with phosphate-buffered saline (PBS).The cells were seeded in 6-well plates and allowed to grow for 7 d.By this time, the cell colonies could be observed, and USCs were passaged and expanded after another week.The cells were used for subsequent experiments when they were subcultured to passages 3 to 5 (P3 to P5).USCs were cultured in high-glucose Dulbecco's modified Eagle medium and renal epithelial cell growth medium with 10% fetal bovine serum and 1% penicillin/streptomycin at 37 °C and 5% CO 2 .HK-2 cells and human umbilical vein endothelial cells (HUVECs) were purchased from the National Collection of Authenticated Cell Cultures (Shanghai, China).HK-2 cells were cultured in Dulbecco's modified Eagle's medium/ F12 (Gibco, USA) with 10% fetal bovine serum and 1% penicillin/ streptomycin at 37 °C and 5% CO 2 .HUVECs were cultured in RPMI 1640 medium with 10% fetal bovine serum and 1% penicillin/streptomycin at 37 °C and 5% CO 2 .The identification of surface markers, which involved fluorescein isothiocyanate (FITC)-conjugated antibodies against CD73, CD90, CD34, and CD45, phycoerythrin (PE)-conjugated antibody against CD146, and Alexa-Fluor-488-conjugated antibody against human leukocyte antigen (HLA)-DR (BioLegend, San Diego, USA), was performed by flow cytometry.Pluripotency markers and renal markers, including Nano G (monoclonal; 1:1,000; ab109250, Abcam, Cambridge, UK), anti-Wilm's tumor-1 (anti-WT-1; monoclonal; 1:1,000; ab267377, Abcam), and nephrin (anti-nephrin; polyclonal; 1:1,000; ab235903, Abcam), were detected by Western blotting.The multilineage differentiation of USCs was determined using adipogenic, osteogenic, and chondrogenic medium and examined by Oil red O staining, Alizarin red staining, and toluidine blue staining.
Isolation, characterization, and tracing of exosomes
The isolation of exosomes was performed as in a previous study [24].Exosomes were resuspended in PBS after removing the supernatant.Nanoparticle tracking analysis, transmission electron microscopy (TEM), and Western blotting were used for the identification of exosomes.For confirmation that exosomes can be absorbed by target cells/tissues in vitro/in vivo.USC-Exos were incubated with 1 μM phycoerythrin-conjugated hexadecylamine 26/1,1'-dioctadecyl-3,3,3',3'-tetramethylindotricarbocyanine iodide (PKH26/DiR) (Sigma-Aldrich, St. Louis, MO, USA) in Diluent C (Sigma-Aldrich) for 5 min, and excess dye was removed.The PKH26/DiR fluorescently labeled USC-Exos were subsequently added to the serum-free medium of HK-2 cultures and incubated overnight, and the supernatants were used to isolate PKH26/DiRlabeled USC-Exos in the same procedure as above.The nuclei were labeled with Hoechst 33342 (UE, China).The DiR-labeled USC-Exos were intravenously administered to experimental animals.In vitro tracing images were taken with an inverted fluorescence microscope (Leica, Wetzlar, Germany).Tracing in vivo was performed by an animal imaging system (NightOWL LB 983, Berthold Technologies Bioanalytics, Germany).
Cell proliferation assay
The thymine nucleoside analog 5-ethynyl-2′-deoxyuridine (EdU) can penetrate into replicating DNA during cell proliferation.Cell proliferation can be accurately reflected by detecting the combination of EdU and fluorescent dyes.HK-2 cells and HUVECs were incubated with 50 μM EdU from an EdU assay kit (UE) for 2 h.HK-2 cells and HUVECs were fixed with 4% paraformaldehyde.Then, the cells were stained and labeled with Click T mixture and Hoechst 33342 in an EdU assay kit.Images were taken with an inverted fluorescence microscope.
Scratch wound assay
HK-2 cells and HUVECs at 2 × 10 5 cells per well were seeded in a 6-well plate.When the cells grew to be fused into a monolayer, a straight-line wound was made on the fused monolayer cells using a sterile 200-μl pipette tip.Serum-free medium with different concentrations of USC-Exos and/or pathway inhibitors was then added to each well.Photos were taken and recorded at 2 time points of 0 and 24 h using an inverted microscope with an Axiocam 305 color digital camera and ZEN 2011 software (Carl Zeiss, Oberkochen, Germany).
Transwell assay
HK-2 cells and HUVECs at 1 × 10 5 cells per well were seeded in the transwell upper chamber, and USC-Exos and/or pathway inhibitors were added to the lower chamber.After culture for 24 h, HK-2 cells and HUVECs were fixed with 4% paraformaldehyde and stained with crystal violet.Images were obtained under a light taken with an Axiocam 305 color digital camera and ZEN 2011 software.
Tube formation assay
HUVECs (1 × 10 5 cells per well) were seeded on a 6-well plate containing Matrigel matrix (Corning, USA).This matrix was stored on ice at all times.Then, HUVECs were treated with different concentrations of USC-Exos (as mentioned before).After incubation at 37 °C with 5% CO 2 for 2 h, images were acquired with an inverted microscope with an Axiocam 305 color digital camera and ZEN 2011 software.The total tube length and branch points were calculated by ImageJ software.
miRNA isolation and high-throughput sequencing
miRNA high-throughput sequencing was performed by GENESKY Company (Shanghai, China).Exosomal total RNA was extracted using the miRNeasy Mini Kit (QIAGEN) for miRNA sequencing analysis.The quality and purity of the extracted RNA were determined using an Agilent 2100 Bioanalyzer (Agilent Technologies, USA).Total input of a single sample of ≥20 ng of total RNA, raw data of ≥10 M reads per sample, and a base ratio of Q30 > 80% meeting the above standards were sequenced and analyzed.The miRNA sequencing library was constructed using the TruSeq Small RNA sample Preparation Kit (Illumina, USA).We analyzed the expression of miRNAs using Illumina HiSeq 2500 (Illumina, USA) in the final step.
Functional enrichment analysis
Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of miRNAs identified through high-throughput sequencing were performed using the Database for Annotation, Visualization, and Integrated Discovery.The enrichment analysis was visualized by using R software.
PUUO animal model
The PUUO model has been proven to be a reasonable model to study hydronephrosis caused by UPJO [25].It was established using Japanese long-eared white rabbits (male; weight, 3,000 to 3,500 g) purchased from the animal experiment center of Harbin Medical University.After lying prone, the rabbits were anesthetized with intraperitoneal pentobarbital (4 mg/kg).A straight incision was made along the left side of the spine, the muscle space was passively separated, the ureter was found along the psoas major muscle, and a sterilized polyethylene plastic tube with a length of approximately 1 cm and an inner diameter of approximately 0.8 mm was cut longitudinally.Then, the ureter was inserted approximately 1 cm below the UPJ, ligated, and fixed at both ends with surgical sutures.Suturing after reduction of the ureter was performed.Antibiotics were injected intramuscularly for 3 d after surgery.After 2 weeks, we determined the formation of hydronephrosis by magnetic resonance imaging (MRI), and the cannula was removed.Then, the animals were randomly divided into 4 groups: (a) sham group: Only the abdomen was opened and closed without modeling (n = 6).(b) PUUO group: Examination should be conducted directly after 2 weeks of modeling (n = 6).(c) PBS treatment group: PBS with the same volume as USC-Exos was injected through the ear margin vein after removal of the cannula (n = 6).(d) USC-Exo treatment group: USC-Exos (400 μg/kg•d) were injected through the vein of ear margin after removal of the cannula (n = 6).In the latter 2 groups, the obstruction was relieved 2 weeks after modeling, and the animals were then treated for 1 week.MRI was performed, and euthanasia drugs were injected intravenously.Renal histopathology was assessed.Renal damage (glomerular atrophy and tubular dilatation) was scored according to the following criteria: 0 = damage area < 25%, 1 = damage area of 25% to 50%, 2 = damage area of 50% to 75%, and 3 = damage area > 75% [23].
MRI examination
All rabbits were scanned by MRI at 2 weeks following the treatment to assess the presence of hydronephrosis and then scanned at 1 week after relieving obstruction.On the basis of the MRI images (sagittal, coronal, and transverse sections), the left renal pelvis volume (RPV) of each rabbit was quantified in accordance with a previously described method, following the calculation formula "maximum anteroposterior diameter" × "maximum length diameter" × "maximum transverse diameter" × 0.523 (Table 1) [26].An Achieva 3.0T TX (Philips) MRI machine was used.
Data processing of differentially expressed genes
The gene expression datasets analyzed in this study were from the GEO database.Only 2 datasets in the GEO database were carefully searched.The gene expression profiles are as follows: GSE45304 was based on the Agilent GPL7202 platform (Agilent-014868 Whole Mouse Genome Microarray 4x44K G4122F), and GSE96102 was based on the Agilent GPL4134 platform (Agilent-014868 Whole Mouse Genome Microarray 4x44K G4122F).All of the data are freely available online, and the miRNA expression profile in this study has already been uploaded to the Sequence Read Archive database (project no.PRJNA871972).The GEO2R online analysis tool was used to detect the differentially expressed genes (DEGs) between PUUO and sham/normal samples, and the adjusted P value and |logFC| were calculated.Genes that met the cutoff criteria, adjusted P < 0.05 and |logFC| ≥ 0.3, were considered DEGs.Statistical analysis was carried out for each dataset, and the intersecting part was identified using the Venn diagram webtool.
Clinicopathological correlation analysis
Clinical characteristic data related to SOX2 and kidney disease were obtained from the Nephroseq V5 database.Pearson's correlation analysis between SOX2 and glomerular filtration rate (GFR), urinary protein, and serum creatinine in patients with CKD was performed using the Nephroseq v5 online database.
The statistical analysis was carried out using GraphPad Prism 9.0 (GraphPad Software Inc., La Jolla, CA, USA).
Protein-protein interaction network
The protein-protein interaction (PPI) data of miRNA with the top 20 target genes and intersecting genes between GSE45304 and GSE96102 were analyzed by STRING (The Search Tool for the Retrieval of Interacting Genes).The screening of miRNA target genes was based on the miRDB, miRTarBase, and TargetScan databases by running Perl script.The PPI network and relationship between miRNAs and target genes were drawn by Cytoscape software and sorted by degree value.Nodes with a higher degree of connectivity tend to be more essential in maintaining the stability of the entire network.CytoHubba, a plugin in Cytoscape, was used to calculate the degree of each protein node.In this study, the top 50 miRNA target genes and top 10 genes of GSE45304 and GSE96102 were identified as hub genes.
Dual luciferase reporter assay
Human embryonic kidney 293T cells (1 × 10 5 cells per well) were seeded on a 24-well plate, and transfection was performed when the cells achieved 70% to 80% confluency.Thirty minutes before transfection, the complete medium was replaced with serum-free medium.Lipofectamine 3000 (CN2507605, Invitrogen by Thermo Fisher Scientific) was used to transfect the cells.The cells were lysed 48 h after transfection, and miRNA expression was detected by chemiluminescence using the Dual-Luciferase Reporter Assay System (Beyotime).With Renilla luciferase as the internal reference, the fluorescence ratio was calculated by dividing the relative light units for firefly luciferase by the relative light units for Renilla luciferase.The inhibitory effect of the miRNA on target gene expression was compared according to the obtained ratio.
Quantitative real-time polymerase chain reaction
Total RNA was extracted from cells/tissues using a fast total RNA extraction kit (Seven Biotech, China).Quantitative real-time polymerase chain reaction (qRT-PCR) was performed using 2× SYBR Green qPCR Master Mix (Seven Biotech) according to the manufacturer's instructions.The reaction volumes contained 1 μl of diluted cDNA solution, 5 μl of SYBR Green, and 0.5 μl each of forward, reverse and RT primer, and double-distilled H 2 O was added to replenish the volume to 10 μl.qRT-PCR was performed on a CFX96 touch (Bio-Rad) with the following cycling scheme: 5 min at 95 °C followed by 40 cycles of 15 s at 95 °C, 25 s at 60 °C, 1 s at 60 °C, and 1 s at 95 °C.C t values were calculated with automatically set thresholds and baselines, and those higher than 30 were excluded from the analysis.The primers used for qRT-PCR are listed in Table 2.
Statistical analysis
All data are expressed as the means ± SD.The mean value of each group was analyzed and compared by ordinary one-way analysis of variance (ANOVA) and Tukey's multiple comparisons test.*P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001 indicate statistical significance.
Characterization of USCs
Flow cytometry was used to detect common surface markers of mesenchymal stem cells, which were CD73-, CD146-, and CD90-positive and CD34-and CD45-negative cells (Fig. 1A).HLA-DR, a marker of low immunogenicity, was also found (Fig. 1A).USCs could differentiate into adipocytes, osteoblasts, and chondroblasts in vitro (Fig. 1B).Primary USCs were cultured for 5 to 7 d, and a few tiny colonies could be observed, with spindle-shaped cells (Fig. 1C).Pluripotency markers of USCs included Nano G (Fig. 1D).Nano G is an endogenous transcription factor that plays an important role in maintaining stem cell pluripotency [27,28].Western blot analysis proved that USCs expressed special renal markers, including WT-1 and nephrin (Fig. 1D), suggesting a possible renal source of USCs.
Characterization and tracing of USC-Exos
Nanoparticle tracking analysis revealed the mean diameter of USC-Exos to be 112.7 nm (Fig. 1E).TEM revealed that USC-Exos were round or elliptical vesicular structures (Fig. 1F).
USC-Exos promote proliferation and migration of HK-2 cells
First, we confirmed that HK-2 cells are epithelial cells using CK-19 fluorescence labeling (Fig. 2A).We investigated the functional role of USC-Exos in cell proliferation and migration.The EdU assay confirmed that USC-Exos remarkably enhanced the proliferation of HK-2 cells at increasing concentrations (Fig. 2B).The scratch and transwell assays confirmed that USC-Exos also promoted the migration of HK-2 cells at increasing concentrations (Fig. 2C and D).The results were statistically analyzed, and a statistical chart was drawn (Fig. 2E).
USC-Exos promote proliferation, migration, and angiogenesis of HUVECs
To determine whether USC-Exos have a functional role in angiogenesis in vitro, we used Matrigel matrix culture medium to assess the effects of USC-Exos on the tube formation of HUVECs.The results demonstrated that USC-Exos remarkably enhanced tube formation at increasing concentrations compared to that of the group (0 μg/ml) in vitro (Fig. 2F).The EdU assay showed that USC-Exos remarkably enhanced the proliferation of HUVECs at increasing concentrations (Fig. 2G).The scratch and transwell assays showed that USC-Exos also promoted HUVEC migration at increasing concentrations (Fig. 2H and I).The above results were statistically significant, and the specific P value is indicated in the statistical chart (Fig. 2J and K).
USC-Exos inhibit fibrosis and inflammation in HK-2 cells
Then, we investigated whether USC-Exos affected fibrosis and inflammation in HK-2 cells.First, in vitro fibrosis and inflammation models were established by TGF-β1 and IL-1β.Western blot analysis showed that the combined use of TGF-β1 and IL-1β had a more negative effect than either alone (Fig. 3A and B).This finding is more consistent with the reality that 2 adverse factors always exist at the same time when tissue injury occurs.TGF-β1 and IL-1β remarkably increased the protein expression of COX-2, α-SMA, and vimentin and decreased the protein expression of ECAD.In subsequent experiments, Western blot analysis showed that USC-Exos protected against fibrosis and inflammation induced by TGF-β1 and IL-1β at increasing concentrations (Fig. 3C and D).
miRNA sequencing and bioinformatics analysis
We screened out all miRNAs in USC-Exos through miRNA sequencing and sorted them according to the expression level.
In this study, we selected the top 20 highly expressed miRNAs for display, which are shown in Fig. 4A.Among the top 20 highly expressed miRNAs, 7 miRNAs that play a therapeutic role in renal diseases were selected by consulting relevant literature with the name of each miRNA, renal diseases, nephropathy, renal fibrosis, and kidney injury as keywords.The miRNAs were as follows: miR-122-5p, miR-26a, miR-30, miR-10a, miR-10b, let-7a, and miR-200b.Related research is shown in Table S1.In addition, this study verified the expression of these 7 miR-NAs with potential therapeutic effects in USC-Exos through qRT-PCR in vitro.The results were consistent with the results of high-throughput sequencing.The highest expression of miR-122-5p was found in USC-Exos, but miR-200b-3p with a high expression ranking in high-throughput sequencing was not found to have remarkably increased expression in this experiment (Fig. 4B).The relationship between miR-122-5p and its target genes was drawn by Cytoscape and is shown in Fig. 4C.
A functional enrichment analysis was carried out and included biological processes, cellular components, and molecular functions from the GO analysis and KEGG pathways.The biological processes were mainly enriched in "mesenchyme development" and "mesenchymal cell differentiation", suggesting that USC exosomal miRNAs may play a positive role in the development and differentiation of mesenchymal stem cells.The main enriched cellular component was "RNA polymerase II transcription regulator complex", and the main enriched molecular function was "DNA binding transcription factor binding".The KEGG analysis suggested that the "PI3K-AKT signaling pathway" and "MAPK signaling pathway" were the main pathways (Fig. 4D and E).Consequently, we investigated whether USC-Exos play a role through these pathways in the following experiments.
USC-Exos regulate the biological function of HK-2 cells via the PI3K-AKT and MAPK pathways
On the basis of the results of KEGG enrichment analysis, the PI3K-AKT, MAPK-ERK1/2, and MAPK-p38 pathways were selected for verification.We pretreated HK-2 cells with LY294002, PD98059, and SB203580 for 30 min.Then, USC-Exos (100 μg/ ml) were added for the EdU, scratch, and transwell assays.The ability of USC-Exos to promote cell proliferation and migration decreased remarkably with inhibition of these pathways (Fig. 5A to D).Western blot analysis showed that TGF-β1 and IL-1β decreased the expression of p-AKT and p-ERK1/2.USC-Exos increased the expression of p-AKT and p-ERK1/2 in HK-2 cells.This result indicated that USC-Exos activated the PI3K-AKT and MAPK-ERK1/2 signaling pathways after being absorbed by HK-2 cells.Interestingly, we found that both USC-Exos and TGF-β1 + IL-1β increased the expression of p-p38, while USC-Exos inhibited the overactivation of p-p38 induced by TGF-β1 and IL-1β.In addition, we pretreated HK-2 cells with the PI3K-AKT inhibitor LY294002, the MAPK-ERK1/2 inhibitor PD98059, and the MAPK-p38 inhibitor SB203580 for 30 min.Western blot analysis showed that LY294002, PD98059, and SB203580 inhibited the phosphorylation of AKT, ERK1/2, and p38, respectively (Fig. 5E and G).
We also investigated the protein expression of fibrosis and inflammation with inhibition of various pathways.The results showed that the fibrosis-related proteins α-SMA and vimentin were increased and ECAD was decreased upon inhibition of the PI3K-AKT and MAPK signaling pathways.The expression of the inflammation-related protein COX-2 was also remarkably increased with inhibition of the p38 and MAPK-ERK1/2 signaling pathways (Fig. 5F and H).When these pathways were blocked, the antifibrotic and anti-inflammatory effects of USC-Exos were remarkably reduced.Blocking the p38 signaling pathway has the most significant effect on inflammation, and blocking the PI3K-AKT signaling pathway has the most significant effect on fibrosis or epithelial interstitial transformation.
USC-Exos promote renal tissue repair and inhibit fibrosis and inflammation in vivo
First, we determined the distribution of USC-Exos in vivo.DiRlabeled USC-Exos (400 μg/kg) were injected intravenously into experimental rabbits.Since rabbits are large experimental animals, organs, including the heart, lung, liver, spleen, and kidneys, were removed 2 and 24 h after the addition of DiR-labeled USC-Exos and imaged by an animal imaging system under the same conditions.Two hours after injection, the DiR-labeled USC-Exos were mainly distributed in the spleen, liver, and lungs, and a small amount was also distributed in the kidney.Twenty-four hours after injection, the tissue exchange and inactivation of USC-Exos in the spleen, liver, and lung ended, while the distribution and release of USC-Exos in the kidney were relatively stable (Fig. 6A).
The MRI results showed that the PBS group and USC-Exo group displayed less hydronephrosis than the PUUO group after relieving the obstruction.The effect of USC-Exos in promoting morphological recovery of the renal pelvis was more significant.The left RPV of the sham group was the lowest, followed by the USC-Exo group, whereas the PUUO group had the highest RPV.This finding is similar to the clinical situation.The renal pelvis retracts slightly by simply removing the obstruction, but the effect is not significant.In contrast, the USC-Exo group had better alleviation of hydronephrosis than the PBS group (Fig. 6B and Table 1).The ability of USC-Exos to promote the morphological recovery of the renal pelvis was significant.
We used anti-CD31 to label angiogenesis in vivo.The angiogenesis in the PUUO group decreased remarkably, and that in the PBS group and USC-Exo treatment group increased, among which the effect of the USC-Exo treatment group was greater (Fig. 6C and D).Compared with the PUUO group, the latter 2 groups showed an increasing trend, but there was no statistically significant difference.
In addition, we tested the representative indicators of inflammation through immunofluorescence.Through immunofluorescence, we found that the expression of IL-6 increased in the PUUO group, while the expression of IL-10 was remarkably decreased.IL-6 is a recognized proinflammatory factor, and IL-10 is an anti-inflammatory factor.However, USC-Exos inhibited the expression of IL-6 and promoted the expression of IL-10 compared to those of the PUUO group.Although the PBS group also had similar results, they were not as obvious as those in the USC-Exo group (Fig. 6C and D).These findings fully demonstrated the anti-inflammatory capacity of USC-Exos in vivo.
To further investigate whether USC-Exos attenuated renal damage and inhibited fibrosis and inflammation, we performed an in vivo study.There were significant pathological changes in the glomeruli and tubules in the PUUO group.H&E staining was used to assess the number of nephrons and the degree of damage to glomeruli and tubules.The results of the PUUO group showed that glomerular atrophy was obvious, many renal tubules were seriously dilated, the number of nephrons was remarkably reduced, the average injury area percentage was close to 50%, and the pathological score was the highest (Fig. 6E).Compared with the PUUO group, the PBS group and USC-Exo group showed improved glomerular morphology, degree of renal tubular expansion, pathological score, and number of nephrons.However, the pathological score and number of nephrons in the USC-Exo group were better than those in the PBS group.The number of nephrons is generally considered directly related to renal function.Masson staining showed that a large amount of collagen was deposited around the renal tubules in the PUUO group, suggesting that severe renal interstitial fibrosis occurred.Furthermore, the renal interstitial fibrosis area of the USC-Exo group was remarkably lower than that of the PUUO group (Fig. 6F).
In the immunohistochemistry experiment, we found that TGF-β1 and IL-1β were highly expressed in the PUUO group.USC-Exos inhibited PUUO-induced TGF-β1 and IL-1β production (Fig. 6G and I).The occurrence of renal interstitial fibrosis is largely due to epithelial-mesenchymal transformation.Therefore, we tested representative indicators of epithelial-mesenchymal transformation.The immunohistochemistry results showed that the expression of ECAD (renal tubular epithelial marker) decreased in the PUUO group, while the expression of α-SMA (fibroblast marker) was remarkably increased.Compared with the PUUO group, the USC-Exo group showed remarkably inhibited expression of α-SMA and enhanced expression of ECAD (Fig. 6H and I).These results suggested the potential role of USC-Exos in inhibiting renal interstitial fibrosis in UPJO.We also found that the expression of COX-2 was increased in the PUUO group.The activity of COX-2 in normal cells is very low.When cells are stimulated by inflammation, its expression level in inflammatory cells can rise to 10 to 80 times the normal level, resulting in an inflammatory response and tissue damage.USC-Exos also inhibited the expression of COX-2 after PUUO (Fig. 6H and I).
Exosomal miR-122-5p promotes proliferation and migration of HK-2 cells
According to our miRNA high-throughput sequencing and qRT-PCR analysis, miR-122-5p was the most highly expressed miRNA in USC-Exos.Through relevant literature, we found that miR-122-5p is a potential therapeutic miRNA in renal disease [29][30][31][32].The EdU assay confirmed that miR-122-5p remarkably enhanced the proliferation of HK-2 cells, and when miR-122-5p was inhibited, cell proliferation remarkably decreased (Fig. 7A).The scratch and transwell assays confirmed that miR-122-5p also promoted the migration of HK-2 cells, and when miR-122-5p was inhibited, cell migration remarkably decreased (Fig. 7B and C).This study found that in the renal tissue of the PUUO modeling group and inflammation and fibrosis induced by TGF-β1 + IL-1β of HK-2, the expression level of miR-122-5p was remarkably lower than that of the sham group and control group, indicating the potential therapeutic effect of exogenous addition of miR-122-5p on damaged renal tissue (Fig. 7D).Therefore, we successfully transfected HK-2 cells with mimics to overexpress miR-122-5p (Fig. 7E).The results were statistically analyzed, and a statistical chart was drawn (Fig. 7F).
Exosomal miR-122-5p inhibits fibrosis and inflammation and activates related pathways
Then, we investigated the protein expression of fibrosis and inflammation after transfection with miR-122-5p mimics.The results showed that the fibrosis-related protein vimentin was decreased in the miR-122-5p mimic group.The inflammation-related protein COX-2 was also decreased (Fig. 7G).After transfection with miR-122-5p mimics, the protein levels of p-AKT and p-ERK1/2 were remarkably increased.This result indicated that miR-122-5p mimics can also activate the PI3K-AKT and MAPK-ERK1/2 signaling pathways (Fig. 7H).
SOX2 may be a negative regulatory factor in obstructive kidney injury
To further determine the functions of miR-122-5p, we searched the miRNA target prediction analysis database MiRTarBase and found 532 genes as potential targets of miR-122-5p as described before.Then, we found 2 gene expression files (GSE45304 and GSE96102) from the GEO database that were related to PUUO [33,34].GSE45304 contained 3 PUUO samples and 3 sham/ normal samples.GSE96102 contained 36 PUUO samples and 39 sham/normal samples (Table S2).On the basis of P < 0.05 and |logFC| ≥ 0.3, a total of 1,232 DEGs were selected from GSE45304: 427 up-regulated genes and 805 down-regulated genes.In GSE96102, 714 DEGs were selected: 164 up-regulated genes and 550 down-regulated genes.All DEGs were identified by comparing PUUO samples with sham/normal samples.Then, we intersected their up-regulated DEGs and down-regulated DEGs in GSE45304 and GSE96102 and target genes of miR-122-5p separately.A Venn diagram was generated to show the intersection of the DEG profiles (Fig. 8A).SOX2 is an intersecting gene between up-regulated genes in PUUO and miR-122-5p target genes.
In addition, protein interactions among the DEGs of GSE45304 and GSE96102 were predicted with STRING tools.A total of 22 nodes and 28 edges were involved in the PPI network, as presented in Fig. S1.The top 10 genes evaluated by connectivity degree in the PPI network were identified (Table S3).Among the intersecting genes of GSE45304 and GSE96102, SOX2 has the highest degree, which means that the difference is the most significant (Fig. S1).
To verify the potential roles of SOX2 in renal disease, we conducted correlation analysis and subgroup analysis between SOX2 and clinical features using the Nephroseq v5 online tool.The results showed that the mRNA expression of SOX2 was positively correlated with renal disease.SOX2 was highly expressed in patients with CKD and had greater distribution in the tubulointerstitium than in the glomeruli (Fig. 8E).Thus, SOX2 may be a factor leading to renal tubulointerstitial fibrosis.In addition, SOX2 was negatively correlated with the GFR and positively correlated with serum creatinine and urine protein (Fig. 8F).This study found that in the renal tissue of the PUUO modeling group and inflammation and fibrosis induced by TGF-β1 + IL-1β of HK-2 cells, the expression level of SOX2 was remarkably higher than that of the sham group and control group, indicating the negative regulatory effect of SOX2 in obstructive kidney injury.
To further confirm this conclusion, we transfected the SOX2 overexpression plasmid into the HK-2 cell line.The transfection efficiency of overexpression was verified by transfection with green fluorescent protein (GFP)-labeled plasmids, qRT-PCR, and Western blotting (Fig. 8H and J).The results showed that SOX2 overexpression remarkably increased the expression of inflammation-and fibrosis-related proteins (Fig. 8J).The qRT-PCR results of COX-2 and α-SMA were consistent with the Western blot results (Fig. 8I).Overexpression of SOX2 remarkably decreased the expression of p-AKT and p-ERK1/2 (Fig. 8K).
USC-Exos inhibit fibrosis and inflammation through exosomal miR-122-5p/SOX2
In addition to the evidence from bioinformatics analysis demonstrating the relationship between miR-122-5p and SOX2, we also found 3 predicted targets of miR-122-5p in the 3′UTR of the SOX2 transcript in miRTarBase.We selected a prediction target with the smallest minimum free energy.To verify the direct binding of miR-122-5p to the 3′UTR of the SOX2 gene, we then cloned the wild-type and mutant 3′UTR of SOX2 downstream of a firefly luciferase cassette in a luciferase reporter vector (Fig. 8B).Cotransfection of the miR-122-5p mimic with the wildtype reporter plasmids in human embryonic kidney 293T cells remarkably reduced the luciferase activity, and this effect was remarkably reversed by cotransfection with mutant reporter plasmids (Fig. 8C).These results indicated that miR-122-5p from USC-Exos may bind the SOX2 mRNA 3′UTR and thereby inhibit SOX2 expression via posttranslational repression.To further confirm the regulatory relationship between miR-122-5p and SOX2 in vitro, we transfected HK-2 cells with the miR-122-5p mimic.Then, decreased SOX2 expression was confirmed by qRT-PCR and Western blotting (Figs.7G and 8D).Furthermore, Western blot analysis showed that cotransfection with the miR-122-5p mimic and SOX2 overexpression plasmid antagonized the anti-inflammatory and antifibrotic effects of miR-122-5p (Fig. 8L).We also tested the expression levels of p-AKT and p-ERK1/2 after cotransfection with the miR-122-5p mimic and SOX2 overexpression plasmid.Western blot analysis also showed that the miR-122-5p mimic could increase the protein expression of p-AKT and p-ERK1/2, and the addition of the SOX2 overexpression plasmid remarkably inhibited this effect (Fig. 8M).This result indicated that miR122-5p plays an anti-inflammatory and antifibrotic role and activated PI3K-AKT and MAPK signaling pathways by targeting SOX2.
Discussion
UPJO is the most common cause of hydronephrosis [35] and the most common cause of obstructive kidney injury in children.Children with hydronephrosis often do not have specific clinical symptoms, which also leads to varying degrees of renal function damage and changes in renal fibrosis in some patients, especially older children.Even after the obstruction is relieved by surgery, the renal fibrosis and perioperative inflammation caused by long-term obstruction may still affect the renal function of the affected side.Therefore, after the surgical removal of the obstruction, additional intervention measures are needed to prevent the long-term effects of renal fibrosis and inflammation on the renal tissue.Research has shown that USCs play a positive role in tissue damage repair and have good effects on acute and chronic kidney injury diseases [36][37][38].The tissue damage repair effect of USCs is mainly mediated by the paracrine release of USC-Exos.The effect of USC-Exos is mainly attributed to extracellular-derived miRNAs.To demonstrate whether USC-Exos inhibit fibrosis and inflammation caused by obstructive kidney injury and alleviate kidney injury, we performed in vitro and in vivo experiments, with the following results.
In vitro, USCs express pluripotent stem cell markers such as Nano G. USCs also express surface markers of mesenchymal stem cells, including CD73 and CD90, but do not express the hematopoietic stem cell markers, CD34 and CD45.The flow cytometry results showed that USCs expressed CD146, which is a podocyte marker.CD146 was expressed in parietal cells and podocytes of glomerular tissue and blood vessels of the human renal cortex but not in renal tubular epithelial cells or ureteral mucosa [39].This study also investigated specific renal biomarkers, including WT-1 and nephrin.These positive renal biomarkers suggest a possible source of USCs.
USC-Exos were isolated from USC culture medium and identified as exosomes based on the expression of exosomespecific markers (such as TSG101, HSP70, and CD9) and the size of exosomes.PKH26 fluorescently labeled USC-Exos surround the target cells, confirming that USC-Exos can be effectively absorbed by the target cells.When ureteral obstruction occurs, renal parenchyma compression leads to reduced blood perfusion, and dilation of the collecting duct and distal tubules leads to interstitial fibrosis.By promoting the proliferation and migration of HK-2 and HUVECs, USC-Exos can effectively promote angiogenesis and reduce fibrosis.It was found that the higher the concentration of USC-Exos used, the stronger the effect.TGF-β1 and IL-1β were used to establish fibrosis and inflammation models in vitro and PUUO models in vivo, and the anti-inflammatory and antifibrotic effects of USC-Exos were validated at both the cellular and tissue levels.
This study aimed to determine the functional mechanism of USC-Exos.Therefore, we determined the type and expression of miRNAs in USC-Exos through high-throughput sequencing.We selected the top 20 miRNAs to predict their target genes and used these target genes for GO and KEGG enrichment analysis.We found that the main enriched biological processes were "mesenchymal development" and "mesenchymal differentiation", indicating that USC-Exos and exosomal miRNAs may play a positive role in the development and differentiation of mesenchymal stem cells.KEGG enrichment analysis indicated that the main pathways included the PI3K-AKT and MAPK pathways.In subsequent pathway validation, this study found that USC-Exos can activate the PI3K-AKT and MAPK-ERK1/2 pathways and inhibit overactivation of the p38-MAPK pathway.In a study on acute lung injury induced by lipopolysaccharide, it was found that exosomal miR-150 can inhibit the excessive activation of phosphorylated p38 induced by lipopolysaccharide, thereby treating acute lung injury [40].
Although this study knows the downstream pathway of USC-Exos, it is still unclear which components in USC-Exos exert therapeutic effects.There are many components in exosomes, including proteins, bioactive lipids, and RNA.In particular, miRNA is abundant in exosomes and has been proven to be the main component and key molecule in exosomes.The role of exosomes may be the result of the synergistic effect of multiple miRNAs and their target genes.Through sequencing analysis, this study identified some miRNAs that are highly expressed in USC-Exos, some of which have a positive impact on the disease, while others have a negative impact.Therefore, this study reviewed previous relevant studies based on the obtained miRNA results.Through high-throughput sequencing, the most abundant miRNA was determined to be miR-122-5p.Among the top 20 highly expressed miRNAs, 7 may play therapeutic roles in kidney diseases, including miR-122-5p [29][30][31][32], miR-26a [41,42], miR-30 [43,44], miR-10a [45], miR-10b [45], let-7a [46], and miR-200b [47].Some of their target genes may be involved in inhibiting renal fibrosis and promoting cell proliferation.Examples include pyruvate kinase M (PKM), TGFBR, zinc finger E-box binding homeobox 1 (ZEB1), and E2F Transcription Factor 2 (E2F2).Representative miRNAs with negative effects are also expressed in USC-Exos, such as the overexpression of miRNA-21-5p, which is believed to be closely related to end-stage renal disease with vascular calcification [48].Therefore, although miRNA-21 is expressed in USC-Exos, it is a typical negative regulatory factor in kidney disease.Among the potential therapeutic exosomal miRNAs in this study, miR-122-5p was the most abundant miRNA.This finding was confirmed by high-throughput sequencing and qRT-PCR.In addition, this study found that the expression of miR-122-5p in renal tissue of the PUUO modeling group was remarkably lower than that of the sham group.In in vitro experiments, when inflammation and fibrosis were induced by TGF-β1 and IL-1β in renal tubular epithelial cells, the expression of miR-122-5p was also remarkably lower than that of the blank control group.Moreover, the role of miR-122-5p in kidney diseases and its ability to promote cell proliferation and migration have been reported [29][30][31][32].In this study, miR-122-5p was also observed to promote the proliferation and migration of HK-2 cells and HUVECs.In in vitro experiments, after transfection with miR-122-5p mimics, the expression of inflammation-and fibrosisrelated proteins remarkably decreased, while the expression of the pathway proteins phosphorylated AKT and phosphorylated ERK1/2 remarkably increased.This result suggests that the antiinflammatory and antifibrotic effects of USC-Exos, as well as the activation of the PI3K-AKT and MAPK-ERK1/2 pathways, may be mainly achieved through miR-122-5p.
Therefore, this study further explored the downstream mechanism of miR-122-5p derived from USC-Exos and found that it can down-regulate the expression of SOX2.Previous studies have found that miR-122-5p can bind to SOX2, and miR-122-5p-mediated down-regulation of SOX2 is associated with cervical cancer [49].SOX2 is a member of the SRY-related HMG box (SOX) family of transcription factors involved in regulating embryonic development and determining cell fate.The expression level of SOX2 is correlated with the degree of renal tubulointerstitial fibrosis and renal tubular cell damage [50,51].In previous studies, it has been confirmed that SOX2 is highly expressed in patients with CKD and negatively correlated with renal function (such as GFR, creatinine/urea nitrogen, proteinuria, etc.) [52][53][54].This study also confirmed that the expression of SOX2 in the kidney tissue of the PUUO modeling group was remarkably higher than that of the sham group.In in vitro experiments, when inflammation and fibrosis were induced by TGF-β1 and IL-1β in HK-2 cells, the expression of SOX2 was also remarkably increased.The possible negative regulatory role of SOX2 in obstructive kidney injury has been strongly supported.This study also verified that miR-122-5p can directly bind to the 3′UTR of SOX2 through a double luciferase reporter gene experiment.After transfection with the miR-122-5p mimic, the expression of SOX2 was also remarkably reduced.To verify the therapeutic effect of miR-122-5p targeting SOX2, we constructed a SOX2 overexpression plasmid.After overexpression of SOX2 in vitro, inflammation and fibrosis were remarkably enhanced, while the pathway proteins p-AKT and p-ERK1/2 were suppressed, indicating that SOX2 is positively correlated with inflammation and fibrosis and plays a negative regulatory role in obstructive kidney injury.
When both the SOX2 overexpression plasmid and the miR-122-5p mimics were transfected simultaneously, the inhibitory effects of miR-122-5p on inflammation and fibrosis and activation of related pathways were both antagonized by SOX2.The mechanism of the miR-122-5p/SOX2 axis derived from USC-Exos in inhibiting inflammation and fibrosis after obstructive kidney injury discovered in this study may provide new molecular targets for the treatment of obstructive kidney injury.
Recent in vivo studies have described the therapeutic effects of USC-Exos on AKI and CKD.There is currently no report on the effect of USC-Exos on renal fibrosis and inflammation in UPJO postoperative hydronephrosis models.Thus, this experiment established a PUUO animal model representing the pathological changes of UPJO.The USC-Exos labeled with DiR can reach the kidneys and be absorbed by the affected kidney tissue, maintaining stability within 24 h.This study found that the CD31 fluorescence intensity of the PUUO group remarkably decreased and increased in the USC-Exo group.CD31 is mainly used to evaluate angiogenesis in tissues.This finding demonstrates the ability of USC-Exos to promote angiogenesis.IL-10 has the same fluorescence results as CD31.However, IL-6 showed the opposite result.IL-6 is a recognized proinflammatory factor, while IL-10 is an anti-inflammatory factor.From the pathological results, we found that the damage to the glomerulus and renal tubules after PUUO modeling is significant, and the number of nephrons is also remarkably reduced, with an increase in collagen deposition.Although there was some recovery after the obstruction was relieved, the effect was better in the USC-Exo group.The MRI results showed that the renal pelvis was remarkably dilated after PUUO modeling.Even if the obstruction was relieved, the shape of the renal pelvis was not effectively restored, and the intervention of USC-Exos remarkably promoted the restoration of renal pelvis morphology.The changes in inflammation and fibrotic indicators detected by immunohistochemistry were also consistent with the in vitro experiments.Especially after PUUO, the expression of TGF-β1 and IL-1β was increased, which is consistent with the use of TGF-β1 and IL-1β as a cell model in vitro.The expression of TGF-β1 and IL-1β in vivo was also inhibited by USC-Exos.
Limitation of this study is that it only analyzed and discussed the top 20 expressed miRNAs in USC-Exos.It is currently unclear whether there are poorly expressed but important miRNAs.In addition, this study did not experimentally validate all 7 miRNAs with potential therapeutic effects but only selected miR-122-5p with the highest expression level and therapeutic potential for validation.Moreover, the therapeutic effect of miR-122-5p was not verified in vivo.In subsequent studies, we will aim to validate these miRNAs one by one and explain which aspects of kidney disease are closely related to them.
Conclusion
The results of this study indicate that in the PUUO model and TGF-β1 + IL-1β joint use in vitro and in vivo, USC-Exos can effectively internalize into HK-2 cells and HUVECs, promoting their proliferation, migration, and angiogenesis.USC-Exos reduce renal fibrosis and inflammation by activating the PI3K-AKT and MAPK pathways through the miR-122-5p/SOX2 axis derived from USC-Exos.These findings provide molecular therapeutic targets for patients with obstructive kidney injury.
Fig. 2 .
Fig. 2. USC-Exos promoted proliferation, migration, and angiogenesis with increasing concentrations.(A) CK-19-labeled HK-2.(B and G) Effect of different concentrations of USC-Exos on the proliferation of HK-2 cells and HUVECs by EdU assays.(C, D, H, and I) Effect of different concentrations of USC-Exos on the migration of HK-2 cells and HUVECs by transwell assays and scratch assays.(E and K) Columnar statistics for proliferation and migration.(F) Effect of different concentrations of USC-Exos on the angiogenesis of HUVECs by tube formation assay.(J) Columnar statistics for tube formation.Scale bars, 100 μm.Data are represented as the means ± SD. *P < 0.05, **P < 0.01, ***P < 0.001, and ****P < 0.0001.ns, not significant.
Table 2 .
Specific primers used for qRT-PCR analysis | 2024-04-15T05:18:35.358Z | 2024-04-10T00:00:00.000 | {
"year": 2024,
"sha1": "937217bb2947ce3de71d84353be366371a577bc3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.34133/bmr.0013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "937217bb2947ce3de71d84353be366371a577bc3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52807041 | pes2o/s2orc | v3-fos-license | Long Non-coding RNAs Expression Profile in HepG2 Cells Reveals the Potential Role of Long Non-coding RNAs in the Cholesterol Metabolism
Background: Green tea has been shown to improve cholesterol metabolism in animal studies, but the molecular mechanisms underlying this function have not been fully understood. Long non-coding RNAs (lncRNAs) have recently emerged as a major class of regulatory molecules involved in a broad range of biological processes and complex diseases. Our aim was to identify important lncRNAs that might play an important role in contributing to the benefits of epigallocatechin-3-gallate (EGCG) on cholesterol metabolism. Methods: Microarrays was used to reveal the lncRNA and mRNA profiles in green tea polyphenol(-)-epigallocatechin gallate in cultured human liver (HepG2) hepatocytes treated with EGCG and bioinformatic analyses of the predicted target genes were performed to identify lncRNA-mRNA targeting relationships. RNA interference was used to investigate the role of lncRNAs in cholesterol metabolism. Results: The expression levels of 15 genes related to cholesterol metabolism and 285 lncRNAs were changed by EGCG treatment. Bioinformatic analysis found five matched lncRNA-mRNA pairs for five differentially expressed lncRNAs and four differentially expressed mRNA. In particular, the lncRNA AT102202 and its potential targets mRNA-3-hydroxy-3-methylglutaryl coenzyme A reductase (HMGCR) were identified. Using a real-time polymerase chain reaction technique, we confirmed that EGCG down-regulated mRNA expression level of the HMGCR and up-regulated expression of AT102202. After AT102202 knockdown in HepG2, we observed that the level of HMGCR expression was significantly increased relative to the scrambled small interfering RNA control (P < 0.05). Conclusions: Our results indicated that EGCG improved cholesterol metabolism and meanwhile changed the lncRNAs expression profile in HepG2 cells. LncRNAs may play an important role in the cholesterol metabolism.
transcripts over 200 nucleotides long, have a variety of important physiological processes, including X-chromosome inactivation, genomic imprinting, and embryonic stem cell differentiation. [11] Recently, lncRNAs have been recognized to play an important role in cardiac development [12,13] and be associated with susceptibility to coronary artery disease. [14,15] Thus, we hypothesize that lncRNAs may also be involved in the regulation of cholesterol metabolism. In the present study, we performed microarray analysis using green tea polyphenol(-)-epigallocatechin gallate in cultured human liver (HepG2) cells treated with EGCG to clarify the effects of EGCG on hepatic cholesterol metabolism and explore the role of lncRNAs in the regulation of cholesterol metabolism.
Long non-coding RNA and mRNA microarray
Total RNA was extracted using TRIZOL Reagent (Life Technologies) following the manufacturer's instructions and quality assessment was conducted using Bioanalyzer 2100 (Agilent technologies, Santa Clara, CA, US) and Total RNA integrity number values were between 8.9 and 9.8 (average: 9.3). Total RNA were amplified, labeled and purified using Affymetrix Amplication and labeling kits followed the manufacturer's instructions to obtain the biotin labeled complementary DNA. One hundred ng of total RNA were processed in parallel with an external microarray quality control A RNA to control robustness of data. Labeled DNA mean yield was 7.19 mg (min: 6.27 mg; max: 7.57 mg). Affymetrix GeneChip ® Human Transcriptome 2.0 ST microarrays were hybridized with 4.7 mg of labeled DNA. Arrays were hybridized in the Gene Chip Hybridization Oven 640 (Affymetrix) for 16 hours at 45°C rotating at 60 rpm. Washing, staining, and scanning of the arrays were done with the GeneChip Expression Wash, Stain, and Scan Kit (Affymetrix) and the Gene Chip Fluidics Station 450 (Affymetrix). Quantile normalization and subsequent data processing were performed using the Command Console Software 3.1 (Affymetrix) with default settings. Differentially expressed mRNA and lncRNAs were considered significantly differentially expressed when fold-change was >1.5 or < −1.5, and P < 0.05.
Long non-coding RNAs targets prediction
Differentially expressed lncRNAs were selected for target prediction. Two independent algorithms were used. The first algorithm searches for target genes acting in cis. With the help of gene annotations at University of California, Santa Cruz (UCSC) (http://genome.ucsc.edu/), lncRNAs and potential target genes were paired and visualized using UCSC genome browser. The genes transcribed within a 10 kbp window upstream or downstream of lncRNAs were considered as cis target genes. [16] The second algorithm is based on mRNA sequence complementarity and RNA duplex energy prediction, assessing the impact of lncRNA binding on complete mRNA molecules. It uses the BLAST software for first round screening. Finally, RNAplex in contrast to similar programs, can recover short, highly stable interactions between two RNAs, by introducing a per nucleotide penalty and thus was used to choose trans-acting target genes. [17] RNAplex parameters were set as -e −20. Then we integrated the predicted potential lncRNA targets above with the differently expressed mRNAs in the profile.
Quantitative real-time polymerase chain reaction validation
Quantitative real-time polymerase chain reaction (qRT-PCR) was carried out for verification of the microarray data. The qRT-PCR was performed on a real-time detection instrument ABI PRISM7900 system (Applied Biosystems, Foster City, CA, USA) using 2 × PCR master mix (SuperArray Bioscience, Frederick, MD, USA) at the following conditions: 3 minutes at 95°C, 40 cycles: 30 seconds at 95°C and 40 seconds at 60°C. They were used to quantitate relative amounts of product using glyceraldehyde 3-phosphate dehydrogenase as an endogenous control. Expression ratios were subjected to a log 2 transform to produce fold change data. Student's t-test was used to test for significant differences between control and EGCG-treated groups. (Statistical Package for the Social Sciences (SPSS) version 17, SPSS Inc., Chicago, IL, USA). The primers used are listed in Table 1.
Small interfering RNA to knockdown long non-coding RNA AT102202
Three different Small is interfering RNAs (siRNAs) that targeted AT102202 RNA, and a scrambled siRNA control were purchased from Life Technologies. The siRNA molecules are 21 base-pair double-stranded RNA oligonucleotides with proprietary chemical modifications. The BLOCK-iT RNA interference (RNAi) designer was used to find gene-specific 21 nucleotide siRNA molecules. It uses gene specific targets for RNAi analysis and reports up to 10 top scoring siRNA targets. The freeze-dried siRNAs were dissolved in RNase free-water and stored as aliquots at 20ºC. These siRNAs were respectively transfected into HepG2 cells with lipofectamine 2000 (Invitrogen) according to the manufacturer's instruction. Knockdown efficiency was tested 24 hours after transfection.
The siRNA with the sequence cucuuguugaaugucuugutt (siRNA 124) with the optimal concentration of 18 nmol/L yielded the highest degree of AT102202 knockdown and thus was selected for subsequent functional studies. Briefly, a total of 250,000 cells were cultured in serum free medium (without antibiotics) for 24 hours. siRNA 124 was transfected at 18 nmol/L concentration into HepG2 cells with lipofectamine 2000 (Invitrogen). Cells were incubated for 24 hours at 37°C in a CO 2 incubator, and the transfected HepG2 cells were treated for 24 hours with EGCG (10, 25 mM). The level of expression of predicted target gene was assessed by qRT-PCR.
Effects of epigallocatechin-3-gallate on hepatic cholesterol metabolism and long non-coding RNA expression
To comprehensively investigate the effects of EGCG on hepatic cholesterol metabolism and lncRNA expression, we performed microarray analysis using HepG2 cells treated with 25 mmol/L EGCG. The microarrays contain probes to target nearly 40,000 ncRNAs and 240,000 mRNAs. In total, we identified 2737 differentially expressed transcripts with a ±1.5-fold. As shown in Table 2, the expression levels of 15 genes categorised in sterol metabolic process were changed by EGCG treatment, among which, the highest expression level was LDL receptor (2.6-fold) and the lowest expression level was 3-hydroxy-3-methyl glutaryl coenzyme A reductase (HMGCR) (−3.5-fold). The results suggest that EGCG directly affects cholesterol metabolism in hepatocytes. In addition, a total of 285 lncRNAs were differentially expressed after EGCG treatment, among which 29 were changed with a ±2-fold [ Table 3].
Potential targets of the differentially expressed long non-coding RNAs
Since lncRNAs regulate the expression of its target genes; the next step is to construct a relationship between the expression profile of the mRNA involved in the cholesterol metabolism and the differentially expressed lncRNAs via target prediction programs. As a result, we found five matched lncRNA-mRNA pairs for five differentially expressed lncRNAs and four differentially expressed mRNA [ Table 4]. In particular, the lncRNA AT102202 and its potential targets mRNA-HMGCR were identified. AT102202 is a length of 303 nucleotides lncRNA, containing four exons which three exons highly overlap with the HMGCR gene exons 4-6 (from UCSC genome database), indicating that HMGCR is potential cis-regulated by AT102202.
Quantitative real-time polymerase chain reaction validation
In the present study, we focused on investigating the effect of EGCG on HMGCR, AT102202 and LDL receptor expression, and qRT-PCR was carried out to confirm the effect of EGCG on expression levels of these genes. As expected, the addition of 10 and 25 mM of EGCG significantly increased the level of expression of AT102202 and LDL receptor, meanwhile decreased HMGCR expression [ Figure 1]. Furthermore, we confirmed that the expression of AT102202 and its predicted target gene-HMGCR was linked.
Knockdown of long non-coding RNA AT102202 in green tea polyphenol(-)-epigallocatechin gallate in cultured human liver cells
To investigate the functional role of AT102202, we used siRNA to downregulate AT102202 expression in HepG2 cells. Three different siRNA molecules were tested for their knockdown efficiency, the most efficient of which (siRNA 124) was selected for subsequent functional studies [ Figure 2]. To determine the optimal concentration for knockdown, several different concentrations of siRNA were examined. When these cells were transfected with 18 nmol/L of siRNA, at least 60% AT102202 silencing was observed. Therefore, subsequent functional studies were performed with a maximum of 18 nmol/L siRNA.
Given the correlated expression of AT102202 and HMGCR, we next aimed to determine the effect of AT102202 knockdown on HMGCR expression in HepG2 cells treated with or without EGCG. Using qRT-PCR, we determined the expression of HMGCR following siRNA-mediated knockdown of AT102202. As a result, we found that the level of HMGCR expression was significantly increased following AT102202 knockdown relative to the scrambled siRNA control [ Figure 3]. These results suggest that AT102202 regulates HMGCR expression, and EGCG inhibits the HMGCR expression partially through by AT102202.
dIscussIon
In this study, the microarray analysis reveals that EGCG improves cholesterol metabolism directly through up-or down-regulate multiple genes involved in cholesterol biosynthesis and uptake. In addition, many lncRNAs were differentially expressed after EGCG treatment, and we identified one such transcript, AT102202, which mapped within the HMGCR gene. Knockdown of AT102202 resulted in a markedly increase of HMGCR expression. These findings suggest that lncRNAs play an important role in the regulation of cholesterol metabolism.
In the present study, EGCG has been shown to greatly decrease HMGCR expression and increase LDL receptor expression. HMGCR is the rate-regulating enzyme in the cholesterol biosynthetic pathway and the primary site of cholesterol feedback regulation. [18] LDL receptor is important for mediating cellular LDL uptake and mainly regulated by cholesterol feedback. [19] When the levels of hepatocellular sterols drop, the key transcription factors-sterol regulatory element-binding proteins enters the nucleus, where it activates the expression of LDL receptor. [20] Thus, the decrease in HMGCR expression, leading to LDL receptor up-regulation and subsequently cholesterol uptake by hepatic cell, is an important contributor to the efficacy of EGCG on improvement of LDL-cholesterol. Due to the central role in cholesterol synthesis, HMGCR is the target of several hypocholesterolemic drugs, of which statins are the most extensively studied and among the most widely prescribed drugs worldwide. [21,22] Recently, in efforts to identify nonconventional treatments for hypercholesterolemia, tea catechins have been tested successfully both in vitro and vivo as cholesterol-lowering agents. [7][8][9][10] EGCG, the most pharmacologically active molecule of green tea catechins, was found to potently inhibit the in vitro activity of HMGCR by competitively binding to the nicotinamide adenine dinucleotide phosphate binding site of the enzyme [23] and decrease hepatic HMGCR expression at transcriptional levels. [24] However, the mechanism by which EGCG regulates this rate-limiting enzyme in cholesterol synthesis remains unclear.
Increasing evidence has confirmed lncRNAs to be one of the most important factors controlling gene expression. [25] Therefore, we evaluated the lncRNA expression profile in HepG2 to reveal the potential role of lncRNAs in cholesterol metabolism. Microarray techniques revealed a set of differentially expressed lncRNAs in HepG2 cells, indicating that EGCG may potentially regulate gene expression through by lncRNAs other than microRNAs. [26] ↑ TUCP lncRNAs: Long non-coding RNAs; EGCG: Epigallocatechin gallate. Recent studies demonstrated that lncRNAs can guide changes in gene expression either in cis (on neighboring genes) or in trans (distantly located genes) manner that is not easily predicted based on lncRNA sequence. [27,28] By target prediction programs, we constructed the relationship between the lncRNA and mRNA involved in cholesterol metabolism and found five matched lncRNA-mRNA pairs for five differentially expressed lncRNAs and four differentially expressed mRNA.
Noteworthily, we identified one lncRNA-AT102202 and its potential targets mRNA-HMGCR. AT102202 is mapped within the HMGCR gene locus, which has prompted the hypothesis that lncRNAs AT102202 may have cis-acting effects within HMGCR gene locus. Following siRNA-mediated knockdown of AT102202 in HepG2 cells, a significant increase in HMGCR expression level was observed, confirming that HMGCR was cis-regulated by AT102202. Since lncRNAs regulate gene expression by a variety of mechanisms, including chromatin modification, transcription, post-transcriptional processing, [11] the mechanism by which AT102202 regulate HMGCR expression remain unclear. Recently, the long intergenic RNA HOTAIR was shown to regulate metastatic progression in human breast cancer. This RNA recruits Polycomb Repressive Complex 2 to specific target genes in the genome that lead to histone H3 lysine 27 trimethylation and epigenetic silencing of metastatic suppressor genes. [29] In addition, a number of studied lncRNAs, at transcriptional level, influenced the expression (either positively or negatively) of the local protein-coding gene by RNAi, recruiting and modulating the activities of RNA-binding protein, or recruitment of activator and repressor proteins (transcription factors). [30][31][32] Thus, whether AT102202 regulate HMGCR expression through by epigenetic effects or transcriptional regulation needed to be further explored in future.
In contrast to the group of cis-regulatory lncRNAs, there are a couple of examples of lncRNAs that exert their transcriptional effects across chromosomes in trans. Here we have introduced the program RNAplex algorithms which reduces the time needed to localize putative hybridization sites, mainly by neglecting intramolecular interactions and by using a slightly simplified energy model. [17] As a consequence, we found that acetyl-CoA acetyltransferase 2, which is involved in absorbing dietary cholesterol and in storing cholesteryl esters as lipid droplets, [33] may also be regulated by lncRNA (N342928) in trans, but confirmation and elucidation of this relationship requires further study.
The present study indicates that EGCG improves cholesterol metabolism through decrease in HMGCR expression and up-regulation of LDL receptor. In addition, we found that lncRNA AT102202 may play an important role in the regulation of HMGCR expression. Furthermore, the roles of other differentially expressed lncRNAs in cholesterol metabolism from the array data need further verification and analysis. references Figure 3: The expression of 3-hydroxy-3-methylglutaryl-CoA reductase (HMGCR) mRNA expression following AT102202 knockdown (siRNA124 at 18 nmol/L) in HepG2 cells with EGCG (10 or 25 mmol/L) treatment for 24 hours. The level of HMGCR expression was measured using quantitative real-time PCR and error bars indicate the standard error of the mean for 6 technical replicates and expression values are normalized to scramble siRNA controls. | 2018-04-03T00:31:44.510Z | 2015-01-05T00:00:00.000 | {
"year": 2015,
"sha1": "f2747b60e68a01e9a48f40cb6565d0ae80eedad7",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0366-6999.147824",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f2747b60e68a01e9a48f40cb6565d0ae80eedad7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
257371557 | pes2o/s2orc | v3-fos-license | Maximizing Women's Motivation in Domains Dominated by Men: Personally Known Versus Famous Role Models
Two studies (n = 1,522) examined the impact of role models in sport and science, technology, engineering, and mathematics (STEM) domains where gender discrimination has resulted in a lack of high-profile women. We examined the role of gender matching of personally known and famous exemplars on women's and men's motivation. Participants nominated a woman or man in sport (Study 1) or STEM (Study 2) who was either famous or known to them personally; they then indicated the extent to which they perceived this individual to be a motivating role model. Women and men were more motivated by personally known (vs. famous) role models. For famous exemplars, both women and men were most motivated by same-gender models (Studies 1 and 2). For personally known exemplars, men were similarly motivated by same- and other-gender models (Studies 1 and 2), but women were more motivated by same-gender models in sport (Study 1). Mediation analyses indicated that personally known (vs. famous) exemplars and, for women, same- (vs. other-) gender exemplars, were perceived as more attainable future selves and consequently were more motivating (Study 2). Given that there are fewer famous women in domains dominated by men, it is important to know if women can be inspired by personally known rather than famous individuals. These studies provide insight into the kinds of exemplars that are most motivating for women and may serve as a guide for educators and other practitioners seeking to provide the best role models for girls and women in domains dominated by men. Additional online materials for this article are available on PWQ's website athttp://journals.sagepub.com/doi/suppl/10.1177/03616843231156165.
In 2018, Donna Strickland became the first woman to win the Nobel Prize for Physics in over 60 years; media stories highlighted the significance of her success, noting that she would serve as a valuable role model for future generations of women in science (Casey, 2018;Nusca, 2018). The fact that this achievement was so unusual, however, points to a challenge for women seeking role models in fields dominated by men. Given that women's successes are often undervalued or underreported (Biscomb & Matheson, 2019;Coche, 2015;Cooky et al., 2015;Sherry et al., 2016), it may be difficult for aspiring women to find inspirational same-gender examples in such domains. When famous same-gender exemplars are unavailable, women may instead seek inspiration from exemplars of success whom they know personally-a chemistry teacher rather than a Nobel prize winner; a local baseball coach rather than a major league baseball superstar. It is unclear, however, whether such personally known exemplars will be as motivating as celebrities. In the present research, we examined this directly, assessing the importance of gender matching in determining the motivating impact of both personally known and famous exemplars.
Importance of Same-Gender Role Models
In their motivational theory of role modeling, Morgenroth et al. (2015) define role models as "individuals who influence role aspirants' achievements, motivation, and goals by acting as behavioral models, representations of the possible, and/or inspirations" (p. 4); they argue that role aspirants will be inspired by role models to the extent that they perceive those models to represent goal embodiment, attainable achievements, and desirable outcomes. For women in fields dominated by men, who see few examples of same-gender others in high-level positions, it may be especially important to have women role models who embody their goals and serve as inspirational representations of possible future success. These models provide valuable evidence that such achievements are attainable for women in domains traditionally dominated by men (Midgley et al., 2021) and may provide a useful template guiding women's behavior as they seek similar success (Lockwood, 2006). Further, women who have achieved a high level of success may illustrate that such achievements are not only possible, but also desirable; women may be more likely to admire same-gender exemplars of success, and consequently be more motivated to adopt behaviors aimed at pursuing similar accomplishments (Morgenroth et al., 2015). Thus, same-gender role models may be particularly motivating for women in fields dominated by men, by representing attainable success, providing guides to behavior, and activating inspiration.
Indeed, a growing body of research suggests that women benefit from exposure to successful same-gender role models in fields traditionally dominated by men. For example, in one set of studies examining athletic role models, men were similarly motivated by women and men, but women were more motivated by women than men (Midgley et al., 2021). Role models who are women boost other women's motivation and performance (Herrmann et al., 2016;Marx et al., 2013;Midgley et al., 2021) and enhance their interest in pursuing science, technology, engineering, and mathematics (STEM) fields, in particular (Olsson & Martiny, 2018;Pietri et al., 2020;Shin et al., 2016;Stout et al., 2011). Matching role models on gender may be particularly effective because women are more likely to identify with successful women than men (Lockwood, 2006;Midgley et al., 2021) and because successful women provide compelling evidence refuting traditional stereotypes that men are superior in STEM fields (Midgley et al., 2021;Van Camp et al., 2019;Young et al., 2013). Thus, same-gender role models may serve as embodiments of women's goals and provide valuable representations of what is possible, which in turn may encourage women to acquire similar skills, reinforce their goals, and motivate them (Morgenroth et al., 2015).
Famous Versus Personally Known Role Models
Despite clear evidence that same-gender role models are more motivating than different-gender role models, it is less clear whether such same-gender exemplars are most motivating when they are famous or whether they may also be motivating when they are personally known. In fields that are heavily dominated by men, this distinction is important because women may find it difficult to identify relevant high-profile examples of other women's success stories. Given that there are no women's professional teams at the highest levels of many sports, for example, it may not be possible for aspiring women athletes to identify same-gender celebrity role models who are comparable to role models who are men; there are no examples of famous women who have played on winning teams in the World Series, Stanley Cup, or Superbowl. Even when professional women's teams exist, as with the WNBA, they are not afforded the same media prominence as men's leagues (Spruill, 2017;Yip, 2018), and players are consequently less likely to be household names. Indeed, when asked to nominate a role model in sports, both men and women participants were most likely to generate names of athletes who are men (Giuliano et al., 2007;Midgley et al., 2021), even though women found samegender examples to be most motivating (Midgley et al., 2021).
Similarly, whereas individuals may find it easy to generate examples of well-known men in STEM fields (e.g., Albert Einstein, Stephen Hawking), they likely have access to fewer such prominent examples of women (e.g., Marie Curie, Katherine Johnson). Although women's share of prestigious international awards in science and engineering has increased in the past 20 years, women received only 19% of these awards from 2016 to 2020 (Meho, 2021). As of 2019, out of the 210 Nobel Prizes in science and economic fields since 1901, only 22 had gone to women (Guterman, 2019), and in 2021, all seven of the Nobel science prize (Physics, Chemistry, and Medicine) winners were men (Nobel Prizes 2021, n.d.). Thus, although women may benefit from same-gender role models, a lack of support for women's achievements may make it difficult for them to identify well-known examples of women who have achieved success in domains traditionally dominated by men.
Given the dearth of publicly lauded women in domains dominated by men, it is important to consider whether personally known role models would be as motivating as famous role models. It is possible that, if their achievements are less spectacular than those of celebrity examples, nonfamous role models may be less inspirational; a college professor's modest success may be less motivating than that of a Nobel laureate. To the extent that a famous exemplar's achievements are more publicly praised, they may seem more admirable and desirable; a desirable role model may in turn be more inspiring (Morgenroth et al., 2015). Alternatively, it may be that the success of a personally known role model appears to be more attainable, and thus may actually be more motivating than a celebrity's success; past research suggests that role models are most inspiring when their success appears within reach (Diel et al., 2021;Lockwood & Kunda, 1997). In one study, for example, participants were more successful in a math task after they were exposed to a nonfamous than a famous scientist (Hu et al., 2020, Study 2). Moreover, when asked to nominate a role model, participants were more likely to generate the name of someone they knew personally than someone famous (Lockwood, 2006;Midgley et al., 2021). Accordingly, although personally known role models may receive less public adulation for their success, we expected that they would nevertheless be more inspirational than would famous role models, for both women and men.
To date, research has not directly compared the impact of famous and personally known role models. A number of studies have examined the impact of role models unknown to participants, including successful exemplars who are famous (Giuliano et al., 2007;Hoyt, 2013;Hoyt & Simon, 2011;Hu et al., 2020, Study 1;Latu et al., 2019, Study 1;Midgley et al., 2021), nonfamous and created for the purpose of the study (Betz & Sekaquaptewa, 2012;Herrmann et al., 2016;Hu et al., 2020, Studies 2 and 3;Lockwood & Kunda, 1997Marx et al., 2013;Marx & Ko, 2012;Shin et al., 2016), nonfamous confederates (Cheryan et al., 2011(Cheryan et al., , 2013Stout et al., 2011, Studies 1 and 2), or real but not famous (Pietri et al., 2020). These studies suggest that same-gender role models who are not personally known can have a positive impact on women. For example, women who read about a (fictional) highly successful woman in their own field rated themselves more positively than those who read about a man (Lockwood, 2006). Additionally, exposure to a positive famous (Latu et al., 2019;Stout et al., 2011, Study 2) or nonfamous (Marx & Ko, 2012) same-gender (vs. other-gender) role model has been associated with better performance on subsequent tasks.
Other studies focusing on the impact of personally known models also suggest that gender is important (Stout et al., 2011, Study 3;Young et al., 2013). For example, women in science and engineering courses who viewed their science professor, who was a woman, as a role model showed a decrease in their implicit endorsement of gendered science stereotypes, and were more likely to show increased implicit science identities (Young et al., 2013). Similarly, women students identified more with math and expected to receive higher grades when their professor was a woman than a man (Stout et al., 2011, Study 3). These studies did not, however, compare the impact of personally known relative to famous role models, nor did they measure motivation. In one study that did include both famous and personally known examples (Wohlford et al., 2004), participants reported that their personally known role models had a greater influence on them than their famous role models; however, this study did not examine whether gender matching of these personally known role models was more important for women than men. Indeed, because participants were not randomly assigned to consider role models who were women or men, the value of matching personally known and famous role models on gender remains unclear. In addition, because participants were undergraduate students from a range of disciplines, this study did not examine how the impact of personally known or famous role models, either same-or othergender, might differ for women and men in fields traditionally dominated by men.
Gender Matching and Whether Role Models Are Famous Versus Personally Known
In domains dominated by men, whether a role model is personally known or famous might interact with whether that model is same-or other-gender in determining the model's motivational impact. Past research suggests that, in domains traditionally dominated by men, such as athletics, women are more motivated by same-gender role models than men (Midgley et al. 2021). However, the bulk of this research (Midgley et al., 2021; Studies 1, 2, and 4) did not distinguish between role models who were personally known and those who were famous; rather, participants were simply asked to nominate an athlete and rate the extent to which they were motivated by that exemplar. In one study that did distinguish between personally known and famous exemplars (Midgley et al., 2021; Study 3), varsity and recreational athletes were asked to describe an athletic role model who motivated them and indicate whether or not they knew the role model personally; women (vs. men) and varsity (vs. recreational) athletes were more likely to describe a role model whom they knew personally. Because this study did not directly compare participants' motivation by personally known and famous role models, however, it is unclear what the relative impact of these role models might be on women and men.
In the case of famous role models, both women and men may be most likely to be motivated by same-gender examples. Indeed, when individuals believe that gender is related to performance, they are most likely to compare themselves to and be influenced by same-gender others (Zanna et al., 1975). Gender matching may also be associated with greater motivation for women even in domains where famous role models who are women are scarce. Indeed, because women in such domains are minority group members, gender may be especially salient to them (Abrams et al., 1990;Hogg & Turner, 1987;McGuire et al., 1978), such that matching on gender may be an important determinant of the impact of a role model. Further, in contexts dominated by men, such as STEM, women will be particularly likely to identify with their gender (van Veelen et al., 2019), and so may be most inspired by a role model who is the same gender. Consistent with this possibility, past research on social comparison suggests that women are especially likely to compare themselves to and be influenced by same-gender others (Crocker & Blanton, 1999;Martinot et al., 2002). In sum, for famous role models, gender matching may be important for both men and women.
In the case of personally known role models, gender matching may be less important for men; to the extent that they share a personal connection with the model, they may be able to draw other parallels between themselves and the role model, and so be motivated regardless of the model's gender. Indeed, past research on social comparison indicates that individuals may be influenced by others with whom they share similarities on a variety of attributes related to performance (Goethals & Darley, 1987) or even similarities unrelated to performance, such as a shared birthday (Brown et al., 1992). When one knows a potential role model personally, one may be more aware of such similarities than would be the case for a famous role model. For women, however, gender matching may remain important even for personally known role models, over and above other possible similarities. Given women's minority group status, gender will continue to be salient, such that women will be more influenced by exemplars who are women than men. In addition, people assume greater similarity between individuals who share distinctive rather than nondistinctive attributes (Nelson & Miller, 1995); women may see greater parallels between themselves and other women in fields dominated by men, again due to their distinct minority status. Thus, for personally known role models in such domains, matching on gender may be more important for women than for men.
In sum, it is important to understand whether women in fields dominated by men, who have access to relatively few famous examples who are women, might benefit from personally known role models. However, no research to date has examined how the impact of personally known and famous role models on women's and men's motivation might differ, depending on whether those role models are gender-matched. We examined this directly.
The Present Research
In two studies, we compared the motivating impact of famous and personally known exemplars, same-and othergender, on women and men. Participants in two domains dominated by men, athletics (Study 1) and STEM careers (Study 2), were randomly assigned to nominate a successful woman or man who was either famous or personally known. We examined three hypotheses. We predicted that, for both women and men, personally known exemplars would be more motivating than famous exemplars (H1). We also predicted that the importance of gender matching and whether an exemplar was famous or personally known would interact with individuals' own gender in predicting how motivating they would find an exemplar to be. Specifically, we expected that both women and men may find famous same-gender exemplars most motivating (H2). For personally known exemplars, on the other hand, we expected that gender matching would be important primarily for women (H3); men may be able to find connections with a personally known exemplar other than gender, and so find them to be motivating. Because gender may be more salient to women than men in fields dominated by men (Abrams et al., 1990), women may continue to benefit most from gendermatched exemplars, even those whom they know personally. After first examining gender matching of personally known and famous exemplars in the domain of athletics in Study 1, we conducted a follow-up Study 2 in the STEM domain, for which we preregistered our plans for analyses before finishing data collection.
On an exploratory basis, we also examined the perceived future attainability of exemplars as a potential mechanism underlying the effects of whether an exemplar is famous or personally known (H4, exploratory) and the interaction of the exemplar type (famous or personally known) with participant gender and exemplar gender (H5, exploratory) on motivation (Study 2). We expected that personally known exemplars' achievements would be perceived as more attainable and more representative of future selves, which in turn would be associated with greater motivation (H4). Past research (Lockwood, 2006;Midgley et al., 2021) found that women were more likely to be motivated by successful exemplars to the extent that they viewed the exemplars as possible selves, and saw their achievements as attainable; however, it is unclear whether this effect holds true for both famous and personally known role models, or whether the attainability of personally known role models (which women may rely on more) is differently influenced by gender matching. We thus examined whether women would perceive same-gender exemplars to be more attainable future selves, particularly when they are personally known, and whether this in turn would be associated with greater motivation (H5).
Study 1
In Study 1, women and men were asked to generate the name of either a woman or man who was successful in the athletic domain, and who was either famous or known to them personally. We then examined the degree to which they perceived the athlete to be a motivating role model. We predicted that both women and men would find personally known athletes most motivating (H1), and that they would also be most motivated by same-gender famous athletes (H2). In addition, we predicted that, for personally known athletes, women would be especially likely, relative to men, to be motivated by same-gender examples (H3).
Method
Ethics approval for the study was obtained from the University of Toronto Social Sciences, Humanities, and Education Research Ethics Board before data collection. Data were collected between September 19, 2018 and October 3, 2018, and data and syntax are available upon request from the corresponding author.
Participants
We recruited 856 Amazon's Mechanical Turk workers who had previously indicated that they participated in sports/ physical activity (e.g., running) at least once per week; the latter was assessed using a scale ranging from 1 (less than 1 h/week) to 6 (5+ hours per week) with a median response of 4 (3-4 h/week). All participants who completed the survey were compensated with $2 USD. Data were cleaned as it was coming in to reach our target sample size of approximately 800 participants (i.e., 200 per condition) who had completed 70% or more of the survey.
In total, we excluded data from 58 participants who completed less than 70% of the survey, 11 participants who indicated, in an attention check, that the individual they listed was not involved in sport or physical activity, six participants who were asked to name an athlete who was a woman but later indicated the person they named was a man, two participants who were asked to name an athlete who was a man but later indicated the person they named was a woman, 16 participants in the personally known condition who indicated that they did not in fact know the athlete or who named someone identified by a research assistant as a famous public figure, and 22 participants in the famous athlete condition who listed an individual whom they later indicated was not famous or whom our research assistants could not find any online reference to as a public figure. Finally, we excluded data from seven participants who did not identify as a woman or man or did not disclose a gender identity. In the current research, we did not have sufficient power to examine gender as a continuous or multinomial variable and thus were unable to include data from these participants. We look forward to future studies that examine how gender and fame of exemplars impact athletes across the gender spectrum.
Procedure
After providing informed consent, participants were randomly assigned to fame condition (famous or personally known) and gender of athlete (woman or man). Specifically, participants were either asked to think of an athlete they knew personally with the prompt: "Think of a female [male] athlete or athletic woman [man] whom you know or have interacted with in person. Ideally, this is someone you know well, but can also be an acquaintance" or were asked to think of a famous athlete who is well-known to the general public with the prompt: "Think of a famous female [male] athlete or athletic woman [man]. Ideally, this would be someone known to the general public." After entering an athlete's name, participants were asked a series of questions about the athlete that served as both attention and manipulation checks. First, participants provided an open-ended description of the athlete they nominated and their primary sport. These responses were later reviewed by a research assistant unaware of the study hypotheses to verify that participants had nominated a specific athlete who fit the criteria of the condition to which the participant was assigned. Additionally, participants indicated the extent to which they perceived the athlete they nominated as famous on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and whether they knew the athlete personally or not.
Next participants responded to five items about the athlete as a role model (Midgley et al., 2021): "[athlete's name, as entered by the participant] is a role model for me," "[athlete's name] is someone I strive to be more like," "[athlete's name] is a representation of what I would like to be in the future," "[athlete's name] sets an example that I would like to follow," and "I would like to be more like [athlete's name]." Participants then completed six items assessing motivation: "[athlete's name] motivates me," [athlete's name] makes me feel determined," "[athlete's name] encourages me," "[athlete's name] makes me want to work harder," "[athlete's name] makes me want to put more effort into achieving my goals," and "[athlete's name] inspires me." Ratings were made on a 7-point Likert scale with endpoints ranging from 1 (strongly disagree) to 7 (strongly agree). Answers to all 11 statements were averaged to create a single index of the degree the participant saw the named athlete as a motivating role model (α = .97). Finally, participants answered demographic questions about themselves, which included items in which they reported their own gender and the primary sport or physical activity in which they participate.
Sports of Athletes and Participants
We first examined the athletes nominated by participants across conditions and their primary sport or physical activity as listed by participants (Table 1), ensuring that participants nominated a single individual and that the individual met the criteria for the condition to which participants were assigned (i.e., was the correct gender and, if participants were in the famous condition, was indeed an athlete known to the public). Participants in the famous condition were most likely to nominate exemplars who were women in the domain of tennis, and exemplars who were men in the domain of basketball or football. For participants in the personally known condition, on the other hand, the most common domain for both men and women was track and field/running. A summary of participants' own primary sport or physical activity is included in Table 2. Women were most likely to report their sport or physical activity as running/jogging/ track and field, whereas men were most likely to report their sport or physical activity as basketball.
Athletic Role Model Motivation
Next, we conducted a three-way ANOVA to examine the effects of participant gender, athlete gender, and athlete fame (i.e., whether the athlete was famous vs. personally known) on the extent to which participants viewed the athlete as a motivating role model. Complete results of this model are shown in Table 3.
Discussion
In sum, for famous athletic exemplars, all participants were motivated more by same-than other-gender athletes. For personally known athletic exemplars, in contrast, men were equally motivated by both men and women, but women again derived more motivational benefits from the samegender athletes. In other words, although men rated same-(vs. other-) gender athletes as more motivating role models, this was true only for famous athletic exemplars. Women, on the other hand, rated same-(vs. other-) gender athletes as more motivating role models, regardless of whether they were famous or personally known. In addition, both men and women were more motivated by personally known than famous exemplars. Although famous examples may illustrate a high level of success, their very stardom may make their success less attainable and consequently less inspirational. Moreover, although personally known samegender athletes may be most motivating for women, it is noteworthy that the personally known man was at least as motivating as a famous woman. In domains dominated by men, there may be relatively few famous or personally known exemplars of success who are women. It may be that women have learned to find personally known exemplars who are men to be relevant through sharing of other attributes.
Past studies have shown that individuals tend to be most motivated by athletic exemplars in their own sports of interest, and that women may therefore be at a disadvantage because they are less likely to nominate same-gender exemplars in their own sports (Midgley et al., 2021). In the present study, many participants nominated athletes from outside their own sports (Tables 1 and 2). For example, although numerous participants chose Serena Williams, a tennis player, as the example of a famous woman athlete, relatively few indicated that they themselves were tennis players. In addition, it is unclear the degree to which these participants, many of whom may have been only casually interested in sports, would have had strong goals to become like these exemplars. This distinction is important because more serious athletes are more likely to select role models in their own sport than are more recreational athletes (Midgley et al., 2021;Study 3). It is noteworthy, therefore, that despite this potential limitation, participants nevertheless reported relatively strong motivation to become more like the successful exemplar; mean motivation scores were 4.84 (SD = 1.51) on a 7-point Likert scale. Moreover, although women may have been less likely than men to identify exemplars in their own sport (Midgley et al., 2021), they were nevertheless at least somewhat motivated by these exemplars, particularly when matched on gender. Thus, this study suggests that gender matching of role models may be relevant even when individuals have more casual goals in a domain, and when the exemplars are only superficially matched to their own domains of interest. Nevertheless, we cannot rule out the possibility that participants were simply remembering being motivated by such role models when they were younger and had more specific aspirations in the sport in which their chosen exemplar excelled.
In the second study, we selected participants who identified themselves as professionals in a domain dominated by men: STEM. Because all our participants were themselves in STEM careers, we were able to compare the relative impact of famous and personally known exemplars for individuals more likely to have well-defined goals, in domains more aligned with these exemplars. Moreover, because these participants were actively engaged in STEM careers, their exemplars would have implications for their current professional motivation in the workplace.
Study 2
As with athletics, STEM fields tend to be dominated by men (Fry et al., 2021;Global Gender Gap Report, 2021). Accordingly, we attempted to replicate the findings of Study 1 in the STEM domain. Specifically, in Study 2, we examined whether the motivating impact of STEM exemplars on men and women participants would be determined by both the gender of the exemplar and whether the exemplar was famous or known personally to participants. We predicted that, as in Study 1, both women and men would find personally known exemplars most motivating (H1), and that they would also be most motivated by same-gender famous exemplars (H2). In addition, we predicted that women would be especially likely, relative to men, to find same-gender, personally known exemplars to be motivating (H3).
We also used Study 2 to examine, on an exploratory basis, whether the degree to which exemplars represent attainable future selves would determine how motivating those exemplars are perceived to be. We have argued that personally known exemplars may be more inspiring than famous ones because they represent future achievements that are more attainable; indeed, studies have found that individuals are more motivated by a successful other when that other's accomplishments are attainable (Lockwood & Kunda, 1997). Given that the achievements of famous figures such as Albert Einstein or Marie Curie may not be seen as attainable by most individuals in STEM careers, they may be less motivating than exemplars with less spectacular but more reachable achievements. Accordingly, we conducted a mediation analysis to assess whether personally known exemplars do indeed represent more attainable future selves, and so are more motivating than famous exemplars (H4, exploratory).
In addition, past research suggests that women are most motivated by same-gender exemplars in the domain of athletics because they perceive their achievements are more attainable, and because they are more representative of a possible future self (Midgley et al., 2021). In Study 2, we examined whether future attainability would play a similar role in determining women's motivation by same-gender STEM exemplars. Because men are less likely to face gender-related barriers to success in STEM, their motivation may be less driven by the perceived future attainability of the samegender exemplar's success; they are unlikely to see a successful man's achievements as any more attainable than a successful woman's achievements. Accordingly, we conducted moderated mediation analyses to assess whether future attainability of same-gender exemplar is a mechanism underlying motivation by personally known (vs. famous) exemplars (H4, exploratory) and women's (but not men's) motivation by exemplars (H5, exploratory).
Method
As in Study 1, ethics approval for the study was obtained from the University of Toronto Social Sciences, Humanities, and Education Research Ethics Board before data collection. Data were collected between December 18, 2020 and March 31, 2021, and data and syntax are available upon request from the corresponding author.
Participants
We recruited 1,020 workers from Prolific who had previously indicated, on a prior pre-screening survey conducted through Prolific, that their primary sector of employment was "Science, Technology, Engineering, and Mathematics." All participants who completed the survey were compensated at Prolific's suggested rate of £7.80 GBP per hour, for an average payment of £0.81 GBP (or $1.12 USD) per participant. Data were cleaned as it was being collected to reach our target sample size of 787 participants, which, based on an a priori calculation with G*Power, would provide 80% power to detect a small effect size (i.e., Cohen's f = 0.10 or partial η 2 = 0.01; Miles & Shevlin, 2001) when testing for a three-way interaction at an alpha level of.05. In total, we excluded data from 59 participants who indicated in an eligibility check that they were not currently studying or working in a STEM field, 43 participants who answered fewer than 70% of the survey questions, and 14 participants who indicated in an attention check that they did not nominate an individual in STEM. Additionally, we excluded 53 participants who did not follow instructions related to the nominated individual's gender (e.g., naming a woman when asked for a man) and 57 participants who did not follow instructions pertaining to naming an individual that was either known to the public or personally known to the participant. Finally, as in Study 1, we excluded data from six participants who did not clearly identify as either a man or woman, a key variable in the current analyses. Four of these participants reported a gender identification different from their response on a prior Prolific prescreening survey, and two participants reported identifying as neither a man nor a woman. As in Study 1, we did not have significant power to examine gender as a continuous or multinomial variable. The
Procedure
After providing informed consent, participants were randomly assigned to condition (famous or personally known) and gender (woman or man) of an individual in a STEM career. That is, participants were either asked to think of someone they know personally in STEM with the prompt: "Think of a woman [man] in a Science, Technology, Engineering, or Math career whom you know or have interacted with in person. Ideally, this is someone you know well, but can also be an acquaintance" or were asked to think of a famous person in STEM who is well-known to the public with the prompt: "Think of a famous woman [man] in a Science, Technology, Engineering, or Math career. Ideally, this would be someone known to the general public." After entering the name of an individual in STEM, participants were asked to describe the named individual in openended form and to indicate that individual's primary occupation. These responses were later reviewed by a research assistant unaware of the hypotheses of the study to verify that participants had nominated a specific individual who works in a STEM field and who fit the criteria of the condition to which the participant was assigned. Next, participants rated the extent to which they found this individual to be a motivating role model by indicating their agreement with two items: "[STEM individual's name] is a role model for me" and "[STEM individual's name] motivates me." Because the 11 items tapping into role model motivation in Study 1 were highly inter-correlated, we selected the two most face-valid items for use in Study 2. Ratings were made on a 7-point Likert scale with endpoints ranging from 1 (strongly disagree) to 7 (strongly agree), and, as in Study 1, answers to these two items were averaged to create a single index of the degree the participant saw the named individual as a motivating role model, r(786) = .76.
Participants then indicated their agreement, on a 7-point Likert scale that ranged from 1 (strongly disagree) to 7 (strongly agree), with 7 possible reasons why participants found the STEM individual they nominated motivating (Midgley et al., 2021). First, participants indicated the extent to which they perceived the person they nominated as demonstrating that similar success is attainable for themselves on three items: "[Nominated individual's name, as entered by the participant] shows me that I can achieve something similar in the future", "[Nominated individual's name] shows that I can achieve success in STEM careers", and "[Nominated individual's name ] shows that I can overcome barriers to success in STEM (i.e., Science, Technology, Engineering, and Math) careers." Next, participants indicated the extent they perceived their nominated individual as a possible future self on four items: "[Nominated individual's name] represents the self that I would like to become", "[Nominated individual's name] is a possible "future self" for me," "[Nominated individual's name] gives me something specific to aim for," and "[Nominated individual's name] personifies a goal I want to achieve." We combined all seven attainability and possible future self-ratings to form a single measure of future attainability (Cronbach's α = .91).
Participants also indicated the extent to which they saw the exemplar they nominated as an example of countering gender stereotypes on three items: "[Nominated individual's name] demonstrates that it is possible to break down genderrelated barriers to success," "[Nominated individual's name] challenges traditional gender stereotypes in STEM careers," and "[Nominated individual's name] defies/challenges traditional beliefs about what women and men are capable of in the STEM (i.e., Science, Technology, Engineering, and Math) domain" (Cronbach's α = .90). Because we did not, however, expect this variable to predict role model motivation above and beyond future attainability (Midgley et al., 2021), this variable is not discussed further here and instead is outlined in our supplementary analyses.
Next, participants completed a set of manipulation and attention checks, similar to those used in Study 1. To verify adherence to condition instructions, participants indicated the gender of the individual they nominated, whether they knew them personally, and the extent to which the individual nominated is famous on a 7-point Likert scale that ranged from 1 (strongly disagree) to 7 (strongly agree). Finally, participants answered demographic questions about themselves, including items on which they reported their own gender and occupation.
STEM Fields of Nominated Individuals and Participants
As in Study 1, we first examined the individuals nominated by participants across conditions, ensuring that participants nominated a single person, and that the individual met the criteria for the condition to which participants were assigned. Next, a research assistant unaware of the study hypotheses reviewed the occupations participants had listed for their nominated individual in STEM and participants' own selfreported occupations and classified them into one of the four primary STEM fields (i.e., Science, Technology, Engineering, or Math; see Tables 4 and 5). Additionally, this assistant classified famous exemplars as either current or historical individuals (Table 6). Both women and men were especially likely to nominate exemplars in science fields. In the case of famous 0exemplars, participants were more likely to nominate historical than current exemplars who were women. Women were most likely to report Note. When the STEM field was not discernable from the listed occupation (e.g., researcher), the STEM field was classified as "unclear." being in science fields, whereas men were more likely to report being in technology or engineering fields.
Role Model Motivation
Next, we conducted a 2 × 2 × 2 ANOVA to examine the effects of participant gender, the STEM individual's gender, and fame (i.e., whether the individual was famous or personally known) on the extent to which participants viewed the nominated STEM individual as a motivating role model. Complete results of this model are shown in Table 7. As in Study 1, there was a significant main effect of fame: Personally known individuals in STEM (M = 5.53, SD = 1.28) were rated as more motivating role models than famous individuals (M = 5.08, SD = 1.43). In addition, the two-way participant gender by exemplar gender interaction was significant; men were more motivated by men (M = 5.35, SD = 1.18) than women (M = 4.98, SE = 1.53), F(1,780) = 7.74, p = .006, partial η 2 = .010, and women were more motivated by women (M = 5.67, SD = 1.30) than men (M = 5.23, SE = 1.39), F(1,780) = 11.51, p < .001, partial η 2 = .015. This interaction, however, was qualified by a significant three-way interaction (H3). For famous individuals in STEM (Figure 2a), women were more motivated by women (M = 5.62, SD = 1.32) than men (M = 4.84, SD = 1.50), F(1,780) = 15.69, p < .001, partial η 2 = .020, and men were more motivated by men (M = 5.19, SD = 1.19) than women (M = 4.69, SD = 1.54) exemplars F(1,780) = 7.17, p = .008, partial η 2 = .009. For personally known individuals in STEM (Figure 2b), on the other hand, men were not more motivated by personally known men (M = 5.53, SD = 1.15) or women (M = 5.28, SD = 1.47), F(1,780) = 1.64, p = .20, partial η 2 = .002; however, contrary to our findings in Study 1, women were not more motivated by women (M = 5.70, SD = 1.30) than men (M = 5.58, SD = 1.19), F(1,780) = 0.49, p = .49, partial η 2 = .001. As in Study 1, because we made no specific predictions about the other simple effects in the three-way interaction, they are not discussed here and instead are outlined in our supplementary analyses. In sum, in Study 2, both men and women rated same-(vs. other-) gender individuals in STEM as more motivating role models but only when this individual was famous (rather than personally known).
Future Attainability of Exemplar's Success as a Mediator
Next, we examined a possible mechanism for participants' ratings of role model motivation. Specifically, we examined whether, for all participants, personally known (vs. famous) exemplars in STEM predicted higher role model motivation ratings at least in part due to higher future attainability (H4, exploratory) and whether same-gender STEM exemplars were especially motivating for women relative to men because they better represented an attainable future self (H5, exploratory). To test these hypotheses, we conducted a three-way moderated mediation model (Figure 3) using the PROCESS macro for SPSS and bootstrapping procedure (Model 12; Hayes, 2018) with 5,000 resamples, generating 95% confidence intervals for the indirect effects. Note. When the STEM field was not discernable from the listed occupation (e.g., researcher), the STEM field was classified as "unclear." Predicting Attainability. There was a main effect of participant gender on future attainability, b = −0.50, 95% CI [−0.86, −0.14], SE = 0.18, p = .007; compared to men, women rated the exemplars they nominated as less attainable future selves. Additionally, whether the exemplar was famous or personally known predicted ratings of future attainability, b = 0.64, 95% CI [0.28, 0.99], SE = 0.18, p < .001; participants reported that personally known (vs. famous) exemplars represented a more attainable future self. Finally, there was a significant two-way interaction between participant gender and exemplar gender in predicting ratings of future attainability, b = 1.14, 95% CI [0.63, 1.66], SE = 0.26, p < .001. Specifically, there was a significant effect of exemplar gender on ratings of future attainability for women (Figures 4a, 4b), b = 1.11, 95% CI [0.85, 1.36], SE = 0.13, p < .001, but not for men (Figures 4a, 4b) After accounting for attainability, whether the exemplar was famous or personally known no longer predicted motivation, b = −0.14, 95% CI [-0.39, 0.12], SE = 0.13, p = .28. Additionally, the remaining (direct) effect of exemplar gender on motivation was not significant for men in the personally known condition, b = 0.02, 95% CI [−0.23, 0.29], SE = 0.13, p = .85, nor for women in the famous condition, b = −0.10, 95% CI [−0.37, 0.18], SE = 0.14, p = .50. There was however, a significant direct effect of exemplar gender for men in the famous condition, b = −0.53, 95% CI [−0.78, −0.27], SE = 0.13, p < .001, and for women in the personally known condition, b = −0.35, 95% CI [−0.60, −0.11], SE = 0.13, p = .005; after taking into account the significant indirect effects through attainability, both these groups of participants rated women (vs. men) as less motivating.
In sum, personally known (vs. famous) exemplars were seen as better demonstrating future attainability, regardless of gender of the participant or exemplar, and this in turn predicted higher role model motivation. In addition, women (but not men) gave higher ratings of future attainability to samegender exemplars, which in turn contributed to the high motivation ratings of these particular individuals.
Discussion
As in Study 1, participants were more motivated by personally known than famous exemplars. In addition, participants were most motivated by famous exemplars of the same gender as themselves. Thus, as with sports, gender matching is important in determining whether celebrity exemplars are motivating in STEM. Also consistent with Study 1, men were similarly motivated by personally known men and women. Contrary to our hypothesis (H3), however, women were no more motivated by personally known women than men. It may be that women experienced a ceiling effect; because they were already very motivated by personally known men (M = 5.58 on a 7-point Likert scale), it may have been difficult to detect an even higher level of motivation for personally known women. Alternatively, because STEM fields are less subject to segregation by sex than sports, women may have been exposed to more personally known men who were relevant to them on dimensions other than gender. In athletics, women and men typically compete separately, on different teams in sex-segregated leagues or competitions. Women may consequently have had very little personal contact with successful men in their sports. In contrast, women working in STEM fields are more likely to have encountered many men in the same field on a regular basis in their studies and later in their workplaces. Accordingly, they may simply have had more opportunities to identify men known to them personally, even if they might have benefitted even more from personally known women. Thus, for women, personally known exemplars may be especially important, even when not of the same gender as themselves.
In addition, exploratory mediation analyses revealed a possible mechanism to explain why some exemplars are especially motivating. Specifically, compared to famous exemplars, those who are personally known are more motivating because they are perceived to be more attainable future selves. Furthermore, gender-matched exemplars are perceived as more attainable future selves by women but not men, which in turn contributes to women's motivation by both famous and personally known same-gender exemplars.
In Study 1, we examined the impact of successful exemplars in sports, a domain in which participants were likely casually rather than professionally engaged; we did not explicitly recruit professional athletes. In Study 2, in contrast, participants were selected if they were actively employed in a STEM field. Thus, Study 2 provides evidence that whether or not a role model is matched on gender, and personally known rather than famous, are important variables determining the impact of role models in individuals' own career areas. Motivation scores for Study 2 were generally higher, with a mean of 5.31 (SD = 1.37) compared to 4.84 in Study 1, which is not surprising given that these individuals were describing motivation in their professional careers. Although the degree to which participants were casually or more formally involved in the domain of interest across the two studies, the results were generally consistent, with the exception that women in Study 2 were not more motivated by a gender-matched than mismatched personally known exemplar.
General Discussion
Past research has highlighted the value of role models for women, particularly in fields dominated by men, but has not differentiated between celebrity role models who inspire from a distance and close-contact role models, who inspire through their personal connection to others. In the present research, we found that gender matching of famous exemplars was crucial: Both women and men were most motivated by famous exemplars of the same gender as themselves. Different effects emerged for personally known exemplars. Specifically, gender matching was not important for men's personally known exemplars in either Studies 1 or 2; men were motivated similarly by men and women who were personally known in both athletic and STEM career domains. In contrast, whereas women were more motivated by gender-matched personally known athletic exemplars, they were similarly motivated by personally known women and men in STEM. Overall, with the exception of women in sports domains, individuals may be able to derive similar inspiration from men and women who are personally known to them.
In addition, the present studies suggest that personally known exemplars may be more inspiring than famous ones because they personify goals and achievements that resemble participants' future selves. Thus, Serena Williams or Rafael Nadal may be less inspiring than a local coach; Donna Strickland or Richard Dawkins may have less influence than a high school science teacher. That is not to say that celebrity role models have no impact; indeed, participants reported finding famous exemplars to be at least somewhat motivating. Rather, although all success stories may have the potential to inspire, those that demonstrate a level of success that is perceived as attainable may have the greatest influence. Indeed, our mediation analyses (Study 2) suggest that personally known (vs. famous) exemplars were more motivating for both women and men because they were seen as more attainable representations of future selves. However, our analyses also suggest that different processes may be driving women's motivation by same-and othergender exemplars when those individuals are known to them personally; whereas women were motivated by samegender exemplars in part due to their greater future attainability, this was not the case for exemplars who were men. In future research, it will be important to further examine the mechanisms through which women in fields dominated by men may benefit from personally known role models of either gender.
The distinction between famous and personally known role models is especially significant for women, moreover, because in many domains, they may have few same-gender celebrity exemplars available to them. Men, who have access to many more examples of famous same-gender success stories in domains like sports and science, have an advantage; it is easier for them to find motivating role models who are household names. Women, in contrast, may more frequently benefit from personally known role models; even when they are unable to find an inspirational example of a celebrity who is a woman, they can nonetheless turn to examples of successful individuals whom they encounter in their daily lives.
Although these studies provide new evidence regarding the role of fame and gender in determining whether individuals find role models to be motivating, we note that we did not measure individuals' actual behavior or performance in STEM or sport. We would expect that when individuals are inspired by role models, they work harder to achieve success. Indeed, past studies suggest successful role models can boost individuals' performance (Hoyt, 2013;Latu et al., 2019;Marx & Ko, 2012). Nevertheless, it will be important to establish that personally known role models are not only more inspiring than famous ones, but also have a more beneficial effect on performance.
The present study provides evidence that gender matching and the degree to which exemplars are personally known will impact the degree to which those exemplars are motivating. In their motivational theory of role modeling, Morgenroth et al. (2015) note that motivation by role models will be determined by both the characteristics of role aspirants (e.g., their goals and beliefs about whether their abilities are fixed or malleable) and the characteristics of the role models (e.g., their degree of success, and their similarity to role aspirants); role aspirants' perceptions of the degree to which a role model represents goal embodiment, attainability, and desirability, will be key to determining whether that model is motivating. In our examination of hypothesis 5, we focused primarily on whether the exemplars represented possible future selves (which should map on to goal embodiment), and attainability. We did not explicitly assess whether the role model represented a desirable outcome. Indeed, given that participants in Study 2 were selected based on a casual rather than professional interest in sport, it is not clear that a successful athlete would necessarily represent a particularly desirable outcome for these individuals. It is noteworthy, therefore, that even these participants were at least somewhat motivated by the successful athletic exemplars. In future research, it will be important to consider how individuals' specific goals, and the degree to which they view an exemplar as representing a desirable outcome, may influence motivation by role models.
In addition, we note that the generalizability of our findings is limited, given that the majority of participants in both studies were White and resided in the United States (Study 1) or Western Europe (Study 2). It may be that matching on race is more important than gender for some underrepresented groups. Moreover, although we pre-screened individuals to identify those with athletic (Study 1) or STEM (Study 2) interests, we note that the specific sport and career domains of women and men participants differed. For example, women in Study 1 were more likely to report athletic interests related to running, track and field, and tennis, whereas men were more likely than women to report athletic interests in domains such as basketball, baseball, and weightlifting. In Study 2, women were more likely to report that they were in science careers, whereas men were more likely to report that they were in technology or engineering careers. To the extent that women in technology or engineering occupations are especially underrepresented, they may be more likely to benefit from same-gender personally known role models than would women in science careers more generally. In future research, it will be important to examine the impact of gender matching in specific career areas. Further, because we did not collect data on sexual orientation or socioeconomic status, it will be important in future research to assess whether these variables may also moderate the impact of famous and personally known role models on individuals.
In the present research, we have discussed the effect of successful exemplars in the context of the literature on role models. We note that, although this use of the term role model is in keeping with past studies (Lockwood, 2006;Lockwood et al., 2002;Lockwood & Kunda, 1997, the exemplars nominated by participants might better be considered to be potential role models, in that, not all participants necessarily aspired to become like these exemplars or were motivated by them. Indeed, our dependent measures were designed to assess the extent to which these exemplars were perceived to be motivating role models. It is also possible, however, that role models may exert at least some of their impacts through implicit rather than explicit processes. Indeed, past studies suggest that social comparisons to other people occur relatively automatically (Gilbert et al., 1995) and can occur outside of conscious awareness (Mussweiler et al., 2004); thus, individuals may experience changes in their goals or motivation after exposure to a successful other even if they do not explicitly identify the other as a role model. In the future, it may be useful to consider whether, according to existing definitions of role models (Lockwood, 2006;Morgenroth et al., 2015), individuals must consciously recognize that they are being motivated by or are modelling their behavior on that of another person, for that person to be considered a role model.
In addition, we note that the present studies focused on role models rather than mentors. Whereas a role model is someone who can inspire others through their own example, a mentor is someone who typically also contributes directly to the success of their protégé by offering support, advice, or encouragement. We have argued that personally known role models are more inspirational because their achievements are more attainable, and the mediation analysis in Study 2 supports this argument. It is possible, however, that personally known role models are, at least in some cases, also mentors; they may not only inspire by their example, but also motivate individuals by offering more direct support. An examination of mentorship was beyond the scope of the present research, but past research has confirmed the importance of gender-matched mentors; for example, women engineering students benefitted more when assigned a mentor who is a woman (vs. a man; Dennehy & Dasgupta, 2017). In future studies, it will be important to consider how personally known role models may also serve as mentors and how gender matching may also be implicated in such processes.
In our studies, participants were adults, who had presumably already identified their sports-related areas of interest (Study 1) and selected a career area (Study 2). For these individuals, the role models they nominated were not so much helping them to identify a possible future direction as to move forward in an already-chosen direction. Role models, however, play an important role in determining children's choices and goals (Bryant & Zimmerman, 2003;Zirkel, 2002) and may also be valuable in highlighting potential future pathways for children and young adults (Keller & Whiston, 2008;Valero et al., 2019). For girls, it may be especially valuable to see a same-gender celebrity athlete or scientist, who may encourage them to consider future pathways that, due to negative gender stereotypes, might otherwise have seemed irrelevant or unattainable (Olsson & Martiny, 2018;Zirkel, 2002). Indeed, to the extent that children are less likely than adults to assume an upper limit on future accomplishments, they may find celebrity role models to be more motivating than personally known ones. If one believes one can be a future Wimbledon winner, one may be more inspired by Serena Williams than by a local tennis star. Thus, the optimal degree of success exemplified by a role model may vary across individuals' lifespans, with younger individuals viewing the accomplishments of even famous exemplars as attainable. As a result, it may be especially important for girls to be exposed to examples of same-gender famous role models in domains dominated by men. Younger girls may benefit from knowing that there are pathways to publicly lauded success, beyond those that they see in their immediate surroundings. Such questions have important practical implications: The dearth of highprofile women in domains dominated by men may create a self-fulfilling prophecy. To the extent that girls cannot "see themselves" in future careers, they may be loath to pursue studies in these areas. Indeed, women are less likely to pursue an education in STEM even when they have higher mathematics grades than their counterparts who are men (Hango, 2013).
Practice Implications
The present studies have important practice implications. First, it is important for coaches, educators, and other mentors to understand the potential value of introducing positive role models that counter gender stereotypes. When starting out in a domain long dominated by men (e.g., athletics, STEM), women may find it difficult to identify high-profile, same-gender role models who have achieved success despite obstacles associated with gender discrimination. Experts in these domains, on the other hand, would have more knowledge of successful individuals of all genders, and thus be able to help women (or members of other minority groups within the field) find a role model with whom they can identify. Crucially, this research indicates that these role models must not necessarily be famous or well-known to the public; indeed, the present research suggests that because their achievements appear more attainable, personally known role models may be especially valuable. For women, these personally known role models provide important evidence that success is possible in fields where more famous same-gender exemplars are few and far between, and helping women identify and connect with such role models could be an important step toward correcting gender inequities in various domains. In future research, it will be important to learn more about the optimal role models for women who are at various stages in their careers, to ensure that women have relevant sources of inspiration in fields once open only to men.
Availability of Data Materials
Preregistration for Study 2 and our supplementary analyses (Studies 1 and 2) are available at: https://osf.io/j24zc. Data and study materials are available from the corresponding author upon request.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2023-03-07T16:05:39.106Z | 2023-02-27T00:00:00.000 | {
"year": 2023,
"sha1": "c34ef3a42493ad94653bfbded8ccaeb6d5eb5369",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/03616843231156165",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "931ad5711ce57ec83ea2a69e28449eb41ebd7182",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
265168119 | pes2o/s2orc | v3-fos-license | Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis via the JNK/MAPK signaling pathway
Resin monomer-induced dental pulp injury presents a pathology related to mitochondrial dysfunction. Melatonin has been regarded as a strong mitochondrial protective bioactive compound from the pineal gland. However, it remains unknown whether melatonin can prevent dental pulp from resin monomer-induced injury. The aim of this study is to investigate the effects of melatonin on apoptosis of mouse preodontoblast cells (mDPC6T) induced by triethylene glycol dimethacrylate (TEGDMA), a major component in dental resin, and to determine whether the JNK/MAPK signaling pathway mediates the protective effect of melatonin. A well-established TEGDMA-induced mDPC6T apoptosis model is adopted to investigate the preventive function of melatonin by detecting cell viability, apoptosis rate, expressions of apoptosis-related proteins, mitochondrial ROS (mtROS) production, mitochondrial membrane potential (MMP) and adenosine triphosphate (ATP) level. Inhibitors of MAPKs are used to explore which pathway is involved in TEGDMA-induced apoptosis. Finally, the role of the JNK/MAPK pathway is verified using JNK agonists and antagonists. Our results show that melatonin attenuates TEGDMA-induced mDPC6T apoptosis by reducing mtROS production and rescuing MMP and ATP levels. Furthermore, mitochondrial dysfunction and apoptosis are alleviated only by the JNK/MAPK inhibitor SP600125 but not by other MAPK inhibitors. Additionally, melatonin downregulates the expression of phosphorylated JNK and counteractes the activating effects of anisomycin on the JNK/MAPK pathway, mimicking the effects of SP600125. Our findings demonstrate that melatonin protects mDPC6T cells against TEGDMA-induced apoptosis partly through JNK/MAPK and the maintenance of mitochondrial function, offering a novel therapeutic strategy for the prevention of resin monomer-induced dental pulp injury.
Introduction
Due to the superior performance, ease of operation and aesthetic properties, resin-containing compounds are being used for a wide variety of dental applications, such as restorations, sealants, bonding agents, and dental pulp capping [1][2][3].The most commonly used monomer compounds in dental resins are bi-sphenol-A-glycidyl methacrylate (Bis-GMA), 2-hydroxyethyl methacrylate (HEMA) and triethylene glycol dimethacrylate (TEGD-MA) [4].Depending on the application, different amounts of the resin compounds mentioned above depolymerize and release residual monomers into oral tissues [5].Depolymerized resin monomers can diffuse across the dentin layer through the dentinal tubules, with concentrations ranging from 0.2 to 8 mM [6,7].This diffusion can trigger a wide variety of cellular responses in the pulp tissue, including apoptosis, inflammation, and impairment of dental mineralization.Apoptosis is the primary manifestation of resin monomer-induced dental pulp injury, commonly observed as a result of cell toxicity [8][9][10][11], which not only aggravates the existing damage but also obstructs the defense responses of the dentin-pulp complex.Therefore, exploring the mechanisms of TEGDMAinduced cell apoptosis and finding potential preventive/therapeutic strategies are essential for improving restorative materials for clinical application.
Mitochondria play vital roles in many cellular processes, including cell proliferation, metabolism, and apoptosis [12][13][14].Accumulating evidence has indicated that an excess of reactive oxygen species (ROS) in mitochondria promotes caspase-dependent apoptosis.As a result, mitochondria are identified as the center of apoptosis through the intrinsic pathway [15].Furthermore, an excess of ROS causes respiratory dysfunction and reduces adenosine triphosphate (ATP) generation, which further promotes mitochondrial ROS (mtROS) production and oxidative damage, thereby creating a vicious cycle.Previous research has shown that the cytotoxicity of TEGDMA to oral tissues leads to an accumulation of mtROS and irreversible mitochondrial damage [16,17].Our latest study also demonstrated that mitochondrial dysfunction is the major factor in preodontoblast apoptosis induced by TEGDMA [18].However, the detailed mechanism, especially the signaling pathway that regulates mitochondrial dysfunction in TEGDMA-induced cell apoptosis, needs further exploration.
The mitogen-activated protein kinase (MAPK) pathway, grouped into distinct families, such as extracellular signal-regulated protein kinase 1/2 (ERK1/2), c-Jun N-terminal kinase (JNK), and p38 MAPK, has been proven to be essential for regulating various cellular processes, including cell proliferation, differentiation, and apoptosis [19,20].The role of the MAPK pathway in mitochondria focuses on the phosphorylation of executive function proteins that are essential for fundamental mitochondrial functions, such as energy production, redox processes, and metabolic pathways [21].Meanwhile, the subfamilies of MAPKs have distinct functions within the mitochondria.JNK has been implicated in the generation of high levels of mtROS when it is transferred to mitochondria [22], while ERK1/2 and p38 act as upstream signals leading to disruptions in mitochondrial membrane potential, cytochrome C release, and caspase3 activation [23].On the other hand, in resin monomer-induced mouse macrophage apoptosis, the activation of ERK/JNK/p38 MAPKs induced by TEGDMA is inhibited by the antioxidant N-acetylcysteine (NAC) [24].Another study noted that p38 and ERK1/2, but not JNK, participated in TEGDMA-induced cell apoptosis [25].Therefore, further clarification is needed regarding which MAPK pathway is involved in TEGDMA-induced apoptosis in preodontoblasts and the identification of a mitochondrion-targeted antioxidant capable of effectively sequestering mtROS and protecting against mitochondrial damage by this mechanism.
Melatonin is synthesized and secreted mainly by the pineal gland during the alternation of light and darkness.Similarly, melatonin is commonly ingested in the form of drugs, serving as an oral mitochondrial protective agent [26].Melatonin has been employed in the fields of oral diseases and material applications.For instance, it has been shown to alleviate the symptoms of dental pulpitis [27] and promote the osteointegration of dental implants [28].Further-more, as an indoleamine, melatonin may offer a new potential application to enhance the properties of biomaterials used in dentistry [29].Serving as a potent mitochondrial protective substance, melatonin enhances mitochondrial electron transfer chain (ETC) complexes I and IV, thereby improving mitochondrial respiration, ATP synthesis, and energy metabolism under stress conditions [30].Evidence suggests that melatonin attenuates apoptosis induced by hydrogen peroxide in human dental pulp cells (hDPCs) [31,32].Physiological concentrations of melatonin can inhibit proliferation and promote odontogenic differentiation of hDPCs [33,34].It has been reported that exogenous melatonin can ameliorate oxidative stress damage mediated by the MAPK pathway [35], but its specific effect on dental pulp cell apoptosis and mitochondrial damage induced by TEGDMA remains unknown.
In this study, we utilized an established in vitro model to investigate the impact of melatonin on TEGDMA-induced mitochondrial dysfunction and apoptosis in preodontoblasts.Our aim was also to elucidate the role of the MAPK signaling pathway in the protective effect of melatonin.This study represents the first exploration of the protective effects of the bioactive compound melatonin on TEGDMA-induced dental pulp damage, shedding light on its role through the MAPK signaling pathway, marking an innovative approach in this field.
Cell culture
mDPC6T cells, a preodontoblast cell line, were generously provided by Prof Chen from Wuhan University (Wuhan, China).Cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and antibiotics (100 IU/mL penicillin G and 100 ng/mL streptomycin) in a humidified incubator at 37°C with 5% CO 2 .
Cell treatments
The test compounds were prepared as stock solutions and diluted to the desired final concentrations immediately before use.Apoptosis was induced in mDPC6T cells by treatment with TEGDMA (2 mM) for 6 h [18,36].To examine the protective effect of melatonin on TEGDMA-induced mDPC6T damage, varying concentrations of melatonin (50, 100, 150, and 200 μM) were added along with TEGDMA to the culture medium.The final concentrations of the other compounds used were as follows: melatonin (100 μM), MitoQ (10 nM), SP600125 (10 μM), PD98059 (10 μM), SB203580 (10 μM), and anisomycin (1 μM).The cells were pretreated with the above mentioned drugs for 2 h and then exposed to TEGDMA.The final concentration of dimethyl sulfoxide (DMSO) in the culture did not exceed 0.5% in the experiments.Cells were subjected to the designated treatments with or without TEGDMA and the specified test compounds for various durations, following the experimental protocol.
Cell viability test
mDPC6T cells were seeded in 96-well plates (1×10 4 cells/well) and cultured under different conditions, as indicated for each experiment.mDPC6T cells were briefly washed twice with phosphate buffered saline (PBS) and incubated in 100 μL/well serum-free medium supplemented with 20 μL MTT solution (5 mg/mL) at 37°C.After 6 h of incubation, the supernatant was removed, and the formazan crystals were dissolved in 100 μL/well of DMSO for 10 min.Then, the plates were agitated for 15 s, and the absorbance was measured at 570 nm using a microplate reader (Molecular Devices, San Jose, USA).
Apoptosis measurement by flow cytometry
Flow cytometry was performed to identify the cell cycle and apoptotic cells.Annexin-V labelled with fluorescein isothiocyanate and propidium iodide (PI, 1 μg/mL) was used to determine cell apoptosis and necrosis.After exposure to various experimental conditions, cells were trypsinized and labelled with fluorochromes at 37°C, and then cytofluorometric analysis was performed with a FACScan flow cytometer (Becton Dickinson, Franklin Lakes, USA).
Mitochondrial functional imaging assay
Cells were seeded in chamber slides at 1×10 4 cells/well and then treated with TEGDMA and other test compounds for 1 h.Then, the cells were incubated in fresh culture medium containing 2 μM MitoSOX for 30 min.To assess the MMP, cells were costained with Mitogreen (100 nM) and TMRM (100 nM) for 30 min, according to our previous work [18], and images were captured under a fluorescence microscope (ZEISS, Jena, Germany).Excitation wavelengths were 543 nm for MitoSOX and TMRM and 488 nm for Mitogreen.Postacquisition processing was performed with ImageJ software (NIH) to measure and quantify fluorescence signals.Mitochondrial fluorescence intensities, density, and length were quantified by an investigator blinded to the experimental groups.More than 30 clearly identifiable mitochondria in 10 to 15 randomly selected cells per experiment were measured according to our previous work [18].
ATP detection
To measure ATP level, whole-cell extracts from the indicated cells were lysed in lysis buffer provided in the ATP Assay kit (Beyotime).After centrifugation at 12,000 g for 5 min at 4°C, the supernatants were transferred to a new 1.5-mL tube for ATP testing.The luminescence from a 100 μL sample was assayed in a luminometer (Molecular Devices, San Jose, USA) together with 100 μL of ATP detection buffer.The standard curve of ATP concentration was prepared from a known amount (1 nM to 1 μM).All measurements were performed in triplicate.
Statistical analysis
Data are presented as the mean±SD.Statistical analysis was performed using StatView software (Version 5.0.1;SAS Institute, Cary, USA).For comparisons between multiple groups, one-way ANOVA was used followed by individual post hoc Fisher tests when applicable.P<0.05 was considered statistically significant.
Melatonin attenuates TEGDMA-induced apoptosis and mitochondrial dysfunction in mDPC6T cells
As we previously discovered, TEGDMA reduced cell viability and induced apoptosis partially due to mitochondrial dysfunction in mDPC6T cells.Based on previous studies, we utilized mDPC6T cells treated with 2 mM TEGDMA for 6 h as our experimental group [18,36].Melatonin, an active circadian regulator secreted by the pituitary gland and distributes throughout the body [26], including oral tissues, has been shown to have strong mitochondria-targeted antioxidant and antiapoptotic effects on various cell types.To explore the effect of melatonin on TEGDMA-induced mDPC6T cell apoptosis, we treated mDPC6T cells with melatonin 2 h prior to TEGDMA exposure.As shown in Figure 1A, melatonin partially rescued the viability of mDPC6T cells.Moreover, the DNA damage and apoptosis induced by TEGDMA were significantly ameliorated by melatonin, as reflected by flow cytometry (Figure 1B,C).Melatonin also downregulated the expression levels of Bax and cleaved caspase3 induced by TEGDMA (Figure 1D-F).These data indicated that melatonin attenuated TEGDMA-induced mDPC6T cell apoptosis.It is clear that TEGDMA-induced mDPC6T mitochondrial reactive oxygen species elevation and membrane potential decrease were significant [18,36].Interestingly, melatonin significantly ameliorated mitochondrial ROS levels, as indicated by reduced MitoSOX staining intensity (Figure 2C,D).Compared with TEGDMA alone, melatonin improved MMP level, as shown by increased TMRM intensity (Figure 2A,B).In addition, we evaluated the effects of melatonin on mitochondrial energy-producing function.Compared with TEGDMA alone, melatonin rescued the cellular ATP level (Figure 2E).In summary, melatonin ameliorated Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis TEGDMA-induced apoptosis and mitochondrial dysfunction in mDPC6T cells.
JNK/MAPK pathway is involved in TEGDMA-induced mDPC6T mitochondrial apoptosis
We then examined whether TEGDMA-induced mDPC6T apoptosis and mitochondrial dysfunction involve MAPK family proteins, including p38, JNK, and ERK1/2.We utilized MitoQ, a classical mitochondrial function protectant, to investigate the role of the MAPK pathways in the model.The results revealed that TEGDMA stimulation of mDPC6T cells could enhance the phosphorylation level of JNK but not that of p38 or ERK1/2 (Figure 3A-D).In addition, pretreatment with MitoQ reduced the phosphorylation level of JNK without affecting the other MAPK pathways (Figure 3A-D).To further confirm the role of the MAPK signaling pathway, we used chemical inhibitors of MAPKs, namely, SP600125, PD98059, and SB203580, which specifically inhibit the actions of JNK, ERK1/2, and p38 respectively, to evaluate their ability to rescue TEGDMA-induced mDPC6T cell apoptosis.MTT assays demonstrated that only SP600125 protected against the cytotoxic effect of TEGDMA on mDPC6T cells (Figure 3E-G).Simultaneously, flow cytometric analysis results revealed the reversal effect of SP600125 on apoptotic cells, whereas the reversal effects of PD98059 and SB203580 were less apparent (Figure 3H,I).Furthermore, western blot analysis results indicated that SP600125 reduced the protein expressions of cleaved caspase3 and Bax (Figure 4A-D), indicating that SP600125 safeguarded mDPC6T cells from TEGD-MA-induced apoptosis.Subsequently, we investigated the role of
396
Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis JNK/MAPK signaling in TEGDMA-induced mitochondrial dysfunction.Our findings suggested that SP600125 ameliorated mDPC6T mitochondrial dysfunction, as evidenced by the reduction in mtROS production (Figure 4E,F), elevation in MMP (Figure 4G,H), and cellular ATP level (Figure 4I).In summary, these results provide direct evidence that the JNK/MAPK pathway significantly participates in TEGDMA-induced mDPC6T mitochondrial apoptosis.
Melatonin prevents TEGDMA-induced apoptosis through the JNK/MAPK pathway
We further investigated whether melatonin induces the JNK/MAPK pathway to protect against TEGDMA-induced apoptosis.The JNK agonist anisomycin increased the level of cell apoptosis by approximately 15% compared to TEGDMA alone, while melatonin effectively inhibited this portion of apoptosis, bringing it back to normal level (Figure 5A,B).Western blot analysis confirmed that melatonin decreased the level of p-JNK and reversed the additional phosphorylation activation of JNK induced by anisomycin (Figure 5C-F).Additionally, the results from western blot analysis of apoptosis-related proteins demonstrated that melatonin reversed the increase in cleaved caspase3 and Bax caused by anisomycin (Figure 5C-F).Furthermore, the reduced MMP (Figure 6A,B), elevated mtROS levels (Figure 6C,D), and decreased ATP production (Figure 6E) returned to the level of damage induced by TEGDMA with anisomycin following pretreatment with melatonin.This suggests that melatonin alleviates mitochondrial dysfunction and cell apoptosis caused by TEGDMA by inhibiting the JNK/MPK pathway.
Discussion
Composite restorative materials consist of resin monomers and inorganic fillers, and their polymerization shrinkage can result in the diffusion of resin monomers such as HEMA and TEGDMA through dentin.This process can lead to various harmful effects, including the disruption of normal pulp tissue morphology and physiology, disturbance of cellular redox balance, interference with cell division and genetic material replication, ultimately culminating in pulp cell apoptosis [7].Melatonin, an endogenously secreted antioxidant and mitochondrial protector, has shown promise in ameliorating mitochondrial dysfunction and inhibiting cell apoptosis.However, the precise protective mechanisms of melatonin against resin monomer damage remain largely unexplored.In our
400
Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis numerous mechanisms, and it is not only related to damage to mitochondrial function but also leads to an apoptotic signaling pathway cascade [14,37,38].Notably, as the products of mitochondrial metabolism, mtROS come from the respiratory chain when mitochondria are functionally disordered [15].It has been found that TEGDMA inhibits complex I in the respiratory chains of mitochondria isolated from guinea pig brain [39].Nevertheless, our previous research showed that TEGDMA impairs the complex III activity of preodontoblasts, resulting in disordered mitochondria characterized by decreased ATP synthesis and increased production of mtROS [18].Furthermore, mtROS generation is believed to depend on the membrane potential of mitochondria.Damaged mitochondria, serving as the essential storage pool for the electron chain, promote a reduction in electron flow when the membrane potential across the inner membrane is lost, leading to the production of ROS.As expected, our results demonstrated a significant decrease in MMP and ATP levels, along with an upregulation of mtROS levels, confirming the role of compromised mitochondrial function in TEGDMA-induced apoptosis of mDPC6T cells.Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis 401 Melatonin, a neurohormone primarily synthesized by the pineal gland, is a multifaceted molecule with diverse physiological functions.Apart from its role in circadian rhythm regulation, melatonin functions as a potent scavenger of mtROS and exhibits significant antioxidant properties.Previous studies have highlighted its established antiapoptotic effects [3,25,40].However, studies on the protective role of melatonin on TEGDMA-induced apoptosis of preodontoblasts, as well as its underlying mechanism of action are still lacking.In our investigation, we discovered that pretreatment with melatonin effectively inhibited TEGDMA-induced apoptosis of mDPC6T cells by ameliorating mitochondrial dysfunction.Melatonin exerts its cellular effects through several key mechanisms.Primarily, owing to its highly lipophilic properties, melatonin can permeate cellular and membrane structures, eventually accumulating in the mitochondria [41].Within the mitochondria, it enhances catalase activity, reduces Ca 2+ influx, eliminates residual mtROS, and preserves mitochondrial function [42].Secondly, melatonin regulates programmed cell death through specific melatonin receptors.The G protein-coupled membrane receptors MT1 and MT2 are recognized as the primary molecules involved in mediating the receptor-dependent pathways of melatonin [43,44].MT1 and MT2 activate multiple signaling pathways, including the cAMP-response element-binding protein (CREB), phosphatidylinositol 3-kinase (PI3K), and MAPK signaling pathways, integrating various linear inputs to regulate cellular functions such as circadian rhythm, cell differentiation, cumulus expansion, and programmed cell death [45][46][47].However, further confirmation is required to determine the specific pathway involved in the protective effect of melatonin on preodontoblastic cell apoptosis.
Activated MAPKs exert various biological effects by promoting the phosphorylation of downstream substrates, which then serve as signals in various cell responses, including apoptosis.In our study, the phosphorylation of MAPKs was detected by western blot analysis, and only p-JNK changed with time and concentration when mDPC6T cells were stimulated by TEGDMA.The results are different from those of previous studies [24,25], as they found that the phosphorylation of ERK1/2 and p38 MAPKs was more pronounced.Previous studies examined the effect of TEGDMA on mouse macrophages and treated cells for 24 to 48 h, but we stimulated mDPC6T cells with TEGDMA for approximately 6 h.The distinct cell types and processing conditions may contribute to these inconsistencies.Therefore, we specifically focused on JNK pathway-regulated apoptosis.As expected, pretreatment with SP600125 notably abolished TEGDMA-induced apoptosis, increased mitochondrial function and suppressed mtROS production, and the opposite effect of anisomycin on these results further proved that the JNK pathway is involved in the apoptotic effect of TEGDMA.JNK/MAPK not only is required for the release of cytochrome C from the inner membrane space of mitochondria and the activation of pro-apoptosis proteins, such as Bax and caspase3 [48] but also leads to the inhibition of mitochondrial respiration and electron transport, and damaged mitochondria lead to the release of mtROS and the improvement of MMP [49].Mechanistically, the direct disruption of the interaction between JNK and mitochondria plays an important role in apoptosis.Krifka et al. [4] conducted a comprehensive review of the impact of monomer-induced oxidative stress on central signal transduction pathways, including JNK/ MAPK, which reaffirmed the significant involvement of the JNK signaling pathway in TEGDMA-induced cell apoptosis.Never-theless, the crucial role of JNK necessitates additional validation through overexpression or silencing using siRNA in our future study.
The release of melatonin in response to cellular stress by activating the JNK/MAPK pathway has been reported in various pathological processes [31,45].In our study, we found that melatonin antagonized mtROS and mDPC6T cell apoptosis caused by TEGDMA or anisomycin alone.Meanwhile, melatonin significantly inhibited cell apoptosis induced by TEGDMA and anisomycin together.Furthermore, melatonin mimicked the effects of the inhibitor SP600125 and abolished the suppressive effects of TEGDMA on p-JNK.Therefore, we propose that melatonin plays a role in mitigating mitochondrial dysfunction-regulated apoptosis partly through the JNK/MAPK pathway.It has been reported that TEGDMA causes mitochondrial oxidative damage via JNK-dependent autophagy to exacerbate mDPC6T cell apoptosis [9,36].Melatonin can modulate autophagy by various pathways.For example, melatonin and its metabolites adjust various sirtuin pathways related to mitochondrial function and autophagy in the case of stroke [50].In addition, melatonin-based therapeutics modulated mitophagy in macrophages to ameliorate atherosclerosis [51].However, whether melatonin eliminates the damaged mitochondria by regulating autophagy in mDPC6T cells has not yet been investigated.The role of the MT1 and MT2 receptors in this context needs further elucidation.
In the present study, we for the first time showcased the remarkable protective effect of melatonin against TEGDMA-induced apoptosis in preodontoblast cells.Nevertheless, certain limitations exist in this study.First, the study exclusively utilized a mouse preodontoblast cell line, warranting the use of primary dental pulp cells to validate the mechanisms underlying melatonin's protective effect on TEGDMA-induced apoptosis.Moreover, additional in vivo investigations are required to verify the preventive effects of TEGDMA on dental pulp injury and confirm the role of the JNK/ MAPK signaling pathway.
In summary, the JNK/MAPK signaling pathway appears to be the pivotal mechanism underlying the protective effect of melatonin against TEGDMA-induced mitochondrial apoptosis in mDPC6T cells.As a result, this research potentially contributes to the exploration of melatonin's application in alleviating TEGDMAinduced dental pulp damage, and provides new perspectives for the development of innovative dental resin materials with enhanced biocompatibility.
Figure 1 .
Figure 1.Melatonin attenuated TEGDMA-induced apoptosis in mDPC6T cells (A) Cell viability determined by MTT in mDPC6T cells in the presence of melatonin (Mel) with or without TEGDMA.Data are presented as the mean±SD (n=3).(B,C) Flow cytometry assay after melatonin treatment.Data are presented as the mean±SD (n=3).(D) Representative western blots for caspase3 and Bax in mDPC6T cells with (+) or without (-) melatonin treatment in the presence of TEGDMA (+) or culture medium (-).(E) Quantification of protein expression of cleaved caspase3 relative to caspase3.(F) Quantification of protein expression of Bax relative to β-actin.Data are presented as the mean±SD (n=3).
Figure 3 .
Figure 3.The role of the MAPK pathway in TEGDMA-induced mDPC6T cell apoptosis (A) Representative western blots for MAPKs in mDPC6T cells with (+) or without (-) MitoQ treatment in the presence of TEGDMA (+) or culture medium (-).(B) Quantification of protein expression of p-JNK relative to JNK. (C) Quantification of protein expression of p-ERK relative to ERK. (D) Quantification of protein expression of p-p38 relative to p38.Data are presented as the mean±SD (n=3).(E-G) Cell viability determined by MTT after TEGDMA with or without treatment with MAPK inhibitors.Data are presented as the mean±SD (n=3).(H,I) Flow cytometric analysis after TEGDMA with or without treatment with MAPK inhibitors.Data are presented as the mean±SD (n=3).
398
Melatonin protects TEGDMA-induced preodontoblast mitochondrial apoptosis study, we investigated the role of the MAPK signaling pathway, particularly the JNK/MAPK pathway, in TEGDMA-induced mitochondrial apoptosis in preodontoblasts.Our findings highlight the potential therapeutic efficacy of melatonin in managing resin monomer-induced dental pulp injury.MtROS is a widely accepted initiator of apoptosis through
Figure 4 .Figure 5 .
Figure 4.The role of the JNK/MAPK pathway in TEGDMA-induced mDPC6T apoptosis and mitochondrial dysfunction (A) Representative western blots in mDPC6T cells with (+) or without (-) SP600125 treatment in the presence of TEGDMA (+) or culture medium (-).(B) Quantification of protein expression of Bax relative to β-actin.(C) Quantification of protein expression of cleaved caspase3 relative to caspase3.(D) Quantification of protein expression of p-JNK relative to JNK.Data are presented as the mean±SD (n=3).(E,F) Representative images showing MitoSOX staining (E) and quantification (F) in the indicated groups (n=3).(G,H) Representative images of TMRM staining (G) and quantification (H) in the indicated groups (n=3).(I) Cellular ATP levels in the indicated groups (n=3). | 2023-11-15T17:07:59.190Z | 2023-11-01T00:00:00.000 | {
"year": 2024,
"sha1": "920ba2fca3fb36ed2efb5e38ab2dc24cb4f20b84",
"oa_license": "CCBY",
"oa_url": "https://www.sciengine.com/doi/pdfView/67FAE43ADEC441DBA06DBF24E5ACA29A",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08ceb30c0a181bd7dfb939e10f30475e06f02b3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225254944 | pes2o/s2orc | v3-fos-license | Trait-specific testing of the equal environment assumption: The case of school grades and upper secondary school attendance
Objective: This paper tests the equal environment assumption for school grades and upper secondary school attendance and describes the conditions under which violations are problematic. Background: A growing number of sociologists use twin-based research designs, particularly the Classical Twin Design (CTD), to differentiate between genetic and social causes of social inequalities. One key assumption of CTD is that environmental influences are shared by monozygotic and dizygotic twins to the same extent; called the equal environment assumption (EEA). This assumption is frequently contested and the target of concern, because violation can result in an overestimation of heritability and an underestimation of the role of the social environment. Method: Using data from the first wave of the German TwinLife study, the paper illustrates two approaches to test EEA for school grades and enrolment in upper secondary school (Gymnasium). The analysis is based on a sample of twins (N = 1,576) aged ten to twelve years. Results: The results show that the approaches are able to detect violations of EEA (though in different ways), depending on the environmental variables that might causally be involved in trait variance. Only in one case was a violation was observed; it had no effect on heritability estimates. Conclusion: While EEA holds for school grades, violations do not automatically invalidate CTD in case of upper secondary school attendance.
Introduction
Twin-based research designs are relatively new to sociology, and a growing number of sociologists use these designs to differentiate between genetic and social causes of social inequalities (e.g. Nielsen 2016; Grätz & Torche 2016;Jaeger & Møllegaard 2017;Schulz et al. 2017;Gil-Hernández 2019;. Demographers have used twin-based research designs for longer to differentiate between genetic and social causes of demographic outcomes, such as fertility (e.g. Rodgers et al. 2001; for an overview, see Mills & Tropf 2015). Though there are newer approaches based on molecular genetic variation in unrelated individuals, such as genome-wide association studies (GWAS) and polygenic risk scores derived from such GWAS (e.g. Rietveld et al. 2013), twin-based research designs have great value in understanding the sources of social inequalities and underlying family processes. For example, the study of twins opens up possibilities to examine causal relations in the comorbidity of traits based on discordant twins. In addition, twin studies help us to better understand underlying biological processes. For example, MZ twin designs allow the study of biological discordance against an equivalent genetic background (for a detailed overview see van Dongen et al. 2012). Compared to traditional sibling analysis, which suffers from unobserved heterogeneity (Solon et al. 1991: 512), twin designs have the advantage that they allow us to differentiate between genetic and environmental confounds (Diewald et al. 2016; for an example see Baier 2019). However, this superior control comes at the cost of strong assumptions that are regularly contested.
The most frequently applied genetically informative design is the Classical Twin Design (CTD). CTD is based on a comparison of monozygotic (MZ) and dizygotic (DZ) twins pairs (Keller et al. 2010: 377). One of its key and in general most debated assumptions is the equal environment assumption 1 (EEA). EEA assumes that environmental influences are shared by MZ and DZ twins to the same extent (Derks et al. 2006: 403-404). At the heart of concerns regarding EEA is the observation that MZ twins often experience much more similar home environments and are often treated more alike than DZ twins (e.g. Robin et al. 1994;Evans & Martin 2000;Felson 2009). They more often share the same room, are more often dressed alike, and more often play together than DZ twin pairs (Loehlin & Nichols 1976: 50-51;Robin et al. 1994;LoParo & Waldman 2014). However, greater similarities in the environment of an MZ twin pair do not automatically invalidate CTD. Even if violations occur, these do not necessarily affect estimations, as long as differential treatment of MZ and DZ twins is unrelated or only weakly related to a trait under study; this is referred to as the "trait-relevant" definition of EEA (LoParo & Waldman 2014: 611).
Sociological research in this area often points to the role of family and demographic processes in explaining social inequalities (Kiernan & Mensah 2011;Mare 2011). In this case, there is clear evidence of family environment and parental treatment being relevant for a child's life chances. Apart from investments in children related, for example, to cultural activities, differences in parent-child interactions especially have been shown to cause differences in child outcomes, including their academic achievement (Fan & Chen 2001;Lareau 2002;Spera 2005;Cheadle 2008;Kiernan & Mensah 2011). Accordingly, even 1 The literature criticizing it includes Joseph (2015); Beckwith & Morris (2008); Moore & Shenk (2017). The literature supporting it includes Derks et al. (2006). small systematic differences in the home environment and in the parental treatment of MZ and DZ twins could -in the long run -cause strong differences in child development (Plomin & Daniels 2011: 576).
As argued by Bouchard and McGue (2003: 9) and Joseph (2015), it is "good scientific practice" to test and demonstrate the validity of the "trait-relevant" definition of EEAespecially for sociological research on the genetic and social causes of social inequalities. Previous research has tested EEA mainly for health outcomes and psychological traits 2 (for an overview, see Felson 2014). To my knowledge, there have been only three studies so far that tested the validity of EEA for status-related outcomes, including income, years of education (Felson 2014), high school grade point average (GPA) (Conley et al. 2013;Felson 2014), and qualification test scores (Loehlin & Nichols 1976: 51-52). These studies presented mixed results. While Conley et al. (2013) found no indication of EEA being violated in the case of GPA, Felson (2014) observed EEA being invalid for income and years of education and describes the overall bias as modest. However, previous research is based on relatively small samples. Conley et al. (2013: 421) based their analysis on 392 twin pairs. Small samples make it more difficult to detect violations of EEA (Derks, Dolan, & Boomsma 2006). Further research is therefore needed to validate previous results.
Moreover, since violations of EEA can lead to an upward bias in heritability estimates, violations of EEA have been regarded as a possible explanation for part of the "missing heritability" problem (Felson 2014). Missing heritability refers to the gap in heritability estimates derived from twin data and genotyped data (Young 2019). While the research has discussed various reasons for this gap, such as the presence of non-additive genetic effects (Zuk et al. 2012;Zhu et al. 2015), or the effects of rare variants (Zuk et al. 2014;Tropf et al. 2017), researchers have additionally argued that twins studies might simply overestimate heritability in the case of violations of the underlying assumptions (Felson 2014;Young 2019). Therefore, testing the "trait-relevant" definition of EEA is important to underpin the validity of results obtained from twin data.
Given the need to pay greater attention to the problem of differential treatment in cases when EEA is likely to be violated (e.g. Richardson & Norgate 2005), this paper studies to what extent EEA holds for three educational outcomes: child's maths grade, German grade, and enrolment in upper secondary school (Gymnasium). Addressing the "trait-relevant" definition of EEA, I illustrate two different approaches to test EEA and compare the results. Both approaches link possible violations of EEA to differences in experiences of parental treatment among MZ and DZ twins. However, the first approach does so only indirectly, based on the physical similarity between twins. The second approach directly studies the parenting the twins receive by looking at the mother's reports on her parenting style. Mothers are normally the person most knowledgeable about the child (Jenkins et al. 2003: 102). Parenting styles are influenced by family structures (Chan & Koo 2010) and can be understood as the parent's capacity to socialize their children by changing the effectiveness of parenting practices expressed in parenting activities (Darling & Steinberg 1993: 493). Parenting styles have not only been shown to influence educational outcomes such as school grades (Conger et al. 1992: 532, 536-537), they have also been identified as important mediators of the effect of family background on school grades (Kaiser, Li, & Pollmann-Schult 2019).
I test the validity of EEA and evaluate the effect of violations of EEA on heritability estimates using data from the first wave of the German TwinLife panel study. TwinLife includes families across the full range of the social strata and is representative of the German population (Lang & Kottwitz 2017). To my knowledge, this is the first kind of study testing the validity of EEA for educational outcomes in Germany, and it is one of the few that considers the validity of EEA for status-related outcomes overall. In addition, the study extends previous research by illustrating and comparing the results of two approaches to test EEA.
The classical twin design and its extensions
Independent of the underlying method, whether it is twin correlations, structural equation models, or advanced regression techniques, CTD compares the similarities in a trait between MZ and DZ twin pairs to calculate the narrow sense heritability of a trait (h²) (Keller et al. 2010: 377). Heritability estimates have been criticized for being misleading, because they "convey[s] a sense of direct genetic influence" on traits (Moore & Shenk 2017: 2). Nevertheless, heritability estimates provide a good indicator of possible genetic confounding (Freese, Li, & Wade 2003). In addition, heritability estimates can vary between sub-groups in a given population, and comparing estimates between groups provides information about the nature of between-group differences (Visscher, Hill, &Wray 2008: 257). For example, heritability estimates can be used as an indicator describing the degree to which sub-groups are differently able to fully develop their genetic potential (Scarr-Salapatek 1971;. CTD calculates the heritability of a trait (h²) as twice the difference between the intraclass correlations of MZ ( ) and DZ twins ( ) (Conley et al. 2013: 416). Since MZ twins share 100% and DZ twins on average 50% of their genetic makeup, intra-class correlations ( , ) can be decomposed into a heritability (h²) and a shared environment (C²) component (Felson 2014: 185-186).
In this context, the equal environment assumption (EEA) is crucial because it helps to solve the equations. 3 EEA assumes that "the covariance between environment and genetics is zero" (Conley et al. 2013: 416) -i.e. that the environment has the same effect on MZ and DZ twins' behaviour. Only when EEA is fulfilled, subtracting the equations (1) and (2) leads to differences in twin-pair correlations ( − ) being equal to 0.5 h²; thus, twice the difference being the heritability of a trait (h²) (Felson 2009: 4).
An extension of CTD often used by sociologists is genetically informed linear mixed models, such as ACDE models and their variants. ACDE models partition the total variance (var(y)) in a trait into four components (Rabe-Hesketh et al. 2008: 281): an additive genetic (A), a shared environment (C), a non-additive genetic (D), and a non-shared environment (E) component.
Another assumption is the absence of genetic assortative mating, or a random selection of mates in a population (Conley et al. 2013: 415).
Heritability
The degree to which an outcome variable (trait) varies by genetic variation in a given population (Freese & Shostak 2009).
Narrow sense heritability (h 2 )
The additive genetic effects which represent the averaged effects of single alleles on the phenotype (Neale & Cardon 2013: 12).
Broad sense heritability (H 2 )
The sum of the additive genetic and non-additive genetic effects (Visscher, Hill, & Wray 2008: 256). Non-additive genetic effects relate mainly to dominance and epistasis.
Dominance relates to interactions between alleles at single loci, whereas epistasis describes the interaction between alleles at different loci (Neale & Cardon 2013: 12).
Missing heritability
The gap in heritability estimates from twin data and genotyped data (Young 2019).
In this context, the additive genetic component represents the main or averaged effects of single alleles on the phenotype (h²). The non-additive genetic component refers to two main types of genetic non-additivity: dominance and epistasis (Neale & Cardon 2013: 12). Dominance relates to interactions between alleles at single loci, 4 whereas epistasis describes the interaction between alleles at different loci (Neale & Cardon 2013: 12). The C component reflects the extent of homogeneous effects of environments shared by the twins on a trait that work in the same direction and make twins more similar. However, even when the twins share an environment, its effect does not necessarily end up in the C component. In cases in which the twins experience the same environment differently, the variance will enter the E component. For example, the same degree of parental control of a child's behaviour can be experienced differently by the twins and finally lead to different outcomes. Therefore, the E component reflects two types of effect that make twins less alike: (1) unshared environments, e.g. different peer groups, and (2) distinct reactions by the twins to the same environment (Turkheimer & Waldron 2000;Freese & Jao 2017).
However, ACDE models and their variants cannot easily be estimated, because there are more unknown parameters than known parameters (Coventry & Keller 2005: 214-215). In this context, EEA again provides a solution, because it reduces the number of estimated parameters. Leaving three parameters and two covariance terms, one for MZ and one for DZ twins, to be estimated, the model is identified by additionally assuming either no additive-genetic effects (ACE model) or no shared-environment effects (ADE model) (Keller et al. 2010;Zyphur et al. 2013: 575-576). In this context, the twin correlations (ICC) can be used as a first indicator for the presence of non-additive genetic effects. When the MZ correlations (rMZ) are twice as large as the DZ correlations (rDZ), the ADE model applies (Bleidorn et al. 2018). Otherwise, the model reduces to an ACE model.
There are different extensions of CTD that also relax EEA and control for geneenvironment interactions, e.g. through the inclusion of environmental indices (Boomsma et al. 2002: 875;Conley et al. 2013: 416) or through the inclusion of additional informants, such as parents, siblings, and even other relatives (Keller et al. 2010).
Gene-environment interplay and EEA validity
Testing the "trait-relevant" definition of EEA requires researchers to understand the environmental variables that might causally be involved in trait variance (Richardson & Norgate 2005: 341). While this makes testing the validity of EEA much more complicated, some researchers argue that even if "trait-relevant" influences are found, leading to differential treatment of MZ and DZ twins, this would not necessarily lead to biased estimates (Joseph 2015 chapter 7;Verhulst & Hatemi 2013). These researchers argue that genes can confound environmental similarities (Derks et al. 2006: 403). For example, parents' treatment of their children seems to be influenced by their children's genetic makeup (evocative gene-environment correlation; Plomin, DeFries, & Loehlin 1977), and MZ twins appear to be treated more alike regarding their mother's expression of warmth than DZ twins (Kendler 1996: 15). These differences between MZ and DZ twins could be explained either by the greater genetic similarity in MZ twins leading to greater behavioural similarity in the twins themselves, which impacts the parenting they receive, or by parents of MZ twins being less able to differentiate the behaviour between them (Grätz & Torche 2016: 10). In both cases, the greater similarity in MZ twins' traits would then relate to geneenvironment interplay.
Technically, EEA allows for the confounding of genotype and environment, called geneenvironment correlation (rGE), as well as environments moderating the effects of genes, or genes affecting the sensitivity to environments, called gene-environment interaction (GxE) (Price & Jaffee 2008: 305-306). In the presence of rGE and GxE heritability estimates based on CTD, these encompass not only direct genetic effects, but also indirect effects (Stenberg 2011). However, Joseph (2012) criticizes this argument for circular reasoning, because CTD's premise is the goal of separating variances into a genetic and an environmental component based on EEA, assuming no rGE nor GxE. In this context, it is a conceptual issue of how the genetic component is understood and defined, and whether greater environmental similarities resulting from gene-environment interplay are understood as reflecting genetic effects. Strictly speaking, effects related to rGE or GxE cannot be clearly allocated to either the environmental or the genetic component.
In addition, as argued by Fosse et al. (2015), for evocative genetic effects to be a valid defence of the twin method, MZ twins themselves must be regarded as the primary causal agents of any increased correlation in a child's "trait-relevant" exposures. However, it often remains an empirical question whether twins' behaviour is more alike, because they are treated more alike, due to their more similar appearance, or due to other underlying factors (Matheny et al. 1976). In addition, even if the presence of rGE or GxE is understood as violating EEA, it is debatable whether violations necessarily result in an overestimation of heritability (Walker et al. 2004;Richardson & Norgate 2005;Conley et al. 2013: 415;Joseph 2015). As demonstrated by Verhulst & Hatemi (2013), it is not in all cases that the presence of GxE and rGE has meaningful effects on the estimated variance components (compare Conley et al. 2013;Felson 2014). Again, this seems to be the case only when the specified environment is substantially correlated with the trait under study. In such cases, extensions of the CTD that deal with GxE and rGE are available and can be utilized (Purcell 2002;Verhulst & Hatemi 2013: 368-369, 371).
Testing EEA validity
Different ways to test EEA have been developed (for an overview see Derks et al. 2006: 403-404;LoParo & Waldman 2014: 606-607). A first method is based on a comparison of the impact of twins' actual and perceived zygosity on trait similarity (Kendler et al. 1993). The twins' zygosity is not automatically obvious to the parents (Bamforth & Machin 2004), and not always determined correctly by professionals during pregnancy or after birth, leading to a substantive proportion of twins being misclassified (Ooki, Yokoyama, & Asaka 2004;Cutler et al. 2015). In cases in which misperceived zygosity leads to differences in trait similarity, EEA is regarded as being violated, because these difference are assumed to relate to treatment effects. Trait similarity is said to be affected by the twins' environments treating them more alike, due to greater perceived similarity and not based on their actual genetic similarity (see Conley et al. 2013 for an example).
An important limitation of the first approach is that in most datasets the number of misperceived twins is too small to actually test EEA. For example, the analysis of Conley et al. (2013: 421) for High School GPA is based on twelve misperceived DZ twins and fifty-six misperceived MZ twins. For the current analysis, the number of misperceived twins in TwinLife can be considered too low to detect violations of EEA (in the cohort studied the sample includes just 222 misperceived twins). A second alternative method is to investigate in how far the physical resemblance between twins, often leading to misperceptions of their zygosity, leads to differences in how MZ and DZ twins are treated (Hettema et al. 1995). Appling this method, researchers study the correlation between the physical similarity of twin pairs and trait similarity after controlling for zygosity (LoParo & Waldman 2014: 606). If greater physical resemblance leads to greater trait similarity, after controlling for zygosity, EEA is again assumed to be violated due to the remaining differences probably relating to treatment effects.
While there is often more data available to apply the second approach, in relation to both approaches it can be argued that the greater trait similarity in DZ twins can relate to greater genetic similarity (e.g. Plomin, Willerman & Loehlin 1976: 50), leading to more physical resemblance and increasing the likelihood that the twins' zygosity is misperceived. In this case, significant correlations would then point to gene-environment interplay (GxE or rGE), which does not violate EEA (see section 2.2). However, most researchers probably want to describe the extent of GxE or rGE separately from heritability estimates, utilizing the extensions of CTD. In addition, it is still possible to test EEA based on the first two approaches by looking at MZ twins only and regarding the extent to which physical similarity and misperceived zygosity affect trait similarity. In this case, greater trait similarity can no longer be related to greater genetic similarity.
A third method to detect violations of EEA is to determine the extent to which increased environmental similarity in MZ twins relates to the behaviour of the twins themselves or is initiated by others (LoParo & Waldman 2014: 607). While in the first case environmental similarity could again be attributed to gene-environment interplay (GxE or rGE), in the second case environmental similarities would again relate to treatment effects. One problem with this method is the additional information required to determine whether any observed behaviour was initiated by important others.
A fourth method, developed by Derks et al. (2006), suggests that EEA can be evaluated based on multivariate data and by using only DZ twins. Using more than one observed trait variable, one can calculate the shared environmental correlation in DZ twins for these phenotypic traits. As long as this correlation does not deviate significantly from 1 for same sex DZ twins, indicating that shared environment affects the traits alike, EEA is supported. The advantage of this method is that it does not require information on environmental similarity between twins. However, as demonstrated by Derks et al (2006: 409), this method requires 1) that "the shared environmental correlation in DZ twins is different from .5", 2) the included trait variables are not perfectly correlated, 3) the factor loadings of the variance components are not collinear. In addition, regarding the constraints needed to reduce the unknown parameters to get the model identified, 4) an identifying constraint is needed "that does not lead to a significant decrease in model fit" (Derks et al. 2006: 409).
A fifth method, following the approach by Loehlin & Nichols (1976), is to evaluate the associations between similarities in twins' environments and trait similarities within zygosity groups (LoParo & Waldman 2014: 607;Derks et al. 2006: 404). If this correlation is significantly greater than zero, EEA is violated. This is because the greater similarity in the traits studied for MZ twins is no longer linked to only greater similarities in genetic endowments. This frequently applied method has the advantage that it can be applied without extra information on either the physical resemblance of the twins, information about twins' perceived zygosity, or information from different informants, i.e. twins and their parents. In addition, it can be applied in contexts where the focus is on one particular trait and there are no additional traits to which the shared environmental correlation in DZ twins can be compared. Accordingly, the sixth method places fewer demands on the data than most of the other methods. However, precise information on the twins' environments is needed, while researchers need to be sure which facets of the twins' environment is relevant for the traits studied. A slightly improved version of this method is applied by Felson (2014), who estimated heritability for thirty-two different outcomes with and without controls for environmental similarity. Comparing the changes in heritability estimates, he was able to test whether environmental similarity significantly reduced heritability and thus whether EEA was violated or not.
A sixth method is to compare the similarity of how parents report the way they treat their twins with the similarities in the twins' traits for MZ and DZ twin pairs (Kendler & Gardner 1998). If greater similarities in the parental reports relate to grater similarity in the twins' traits, this might point to differential treatment effects. This method is useful when data on the physical similarity of twin pairs, or any related information such as the misperception of the twins' zygosity, is not available. Beyond that, research studying the causes of social inequalities frequently relies on mechanisms related to parenting (Kaiser, Li, & Pollmann-Schult 2019). In this context, many sociological studies have focused particularly on parenting practices in terms of investments in children, such as cultural activities that affect the formation of a child's cultural capital (e.g. Lareau & Weininger 2003;Roksa & Potter 2011). However, more recently, parenting styles, which psychologists have traditionally analysed (e.g. Fan & Chen 2001;Chao 1994Chao , 2001García & Gracia 2009), have been integrated into sociological research as an important concept to describe the mechanisms through which parents influence a child's skills development and -most importantly -educational outcomes (e.g. Pong, Hao, & Gardner 2005;Chan & Koo 2010;Kiernan & Mensah 2011;Kaiser, Li, & Pollmann-Schult 2019). As described before, parenting styles moderate the relationship between parental activities and child outcomes by transforming parent-child interactions and influencing a child's personality (Darling & Steinberg 1993: 493). For example, when parents support their children with their homework (parenting activity), they might either strictly control their children's behaviours ("authoritarian parents"), provide a high level of support ("indulgent parents"), or do both ("authoritative parents") (Huver et al. 2010). Parents can then actually influence child outcomes. For example, more nurturant parenting, expressed in terms of greater parental warmth, has been observed to lead to better school performance (Conger et al. 1992: 532, 536-537), while insufficient parental control and over-controlling have been observed to impact negatively on child development through raising levels of child depression and lowering levels of child competence (Schiffrin et al. 2014: 548, 554). In addition, there is evidence that specific dimensions of parenting styles are differently affected by the genetic makeup of children. In their meta-analysis of parent-based designs Kendler & Baker (2007: 619-620) found that in particular parental expression of emotional warmth is more strongly affected by a child's genetic makeup than parental expression of behavioural control. Though the results vary according to whether parental or child reports are taken into account (Kendler & Baker 2007: 619), this makes parenting styles particularly interesting for studying possible violations of the "trait-relevant" definition of EEA related to differences in family processes between MZ and DZ twin families.
Therefore, alongside the second approach -investigating the effects of physical resemblance on trait similarity in MZ and DZ twins -which is used for comparison, this paper tests the "trait-relevant" definition of EEA based on the sixth method.
TwinLife
This paper is based on the first wave of TwinLife, a prospective longitudinal study of twins and their families in Germany (Diewald et al. 2017). The first wave includes four cohorts of about 500 pairs of MZ and about 500 pairs of same-sex DZ twins per cohort (in total N = 8,194 twins, nested in 4,097 families). Sampling was based on a stratified random sampling strategy using administrative data from communal registration offices. TwinLife thus includes families across the full range of the social strata (Lang & Kottwitz 2017). Of the four birth cohorts in the data (C1: born 2009-2010, C2: born 2003-2004, C3: born 1997-1998, C4: born 1991-1992), I focus on the second-youngest cohort, who were aged between ten and twelve years at the time of the first interview (N = 2,086 twins out of 1,041 families 5 ). I focus on mother's reports of parenting styles because the information on fathers is limited. Mothers more often took part in the survey, more often completed the required information on their parenting styles, and can normally be regarded as the person most knowledgeable about the child (Jenkins et al. 2003: 102). Table 1 provides some basic information on the sample demographics of the TwinLife dataset in cohort 2. Children enrolled in primary school (incl. schools with an orientation level for secondary education), schools for special needs, other unspecified school types, or in Waldorf schools were excluded from the analysis (N=302). In addition, in a few cases, the information on school grades or the school track was missing (N=137). Excluding these children and restricting the analysis to full twin pairs reduced the analytical sample to a maximum of 1,576 cases.
School grades and enrolment in upper secondary school
In TwinLife the information on school grades was taken from the most recent report card of the children (Mattheus et al. 2017: 6). For respondents for whom this information was missing, parents were asked to report on their children's academic performance. The performance of school children in the German school system is evaluated based on a sixpoint grading scale. Grades range from 1 (excellent) to 6 (insufficient). In this paper, I look only at the German and maths grades of children. As demonstrated by Table 2, on average children scored between "good" (2) and "satisfactory" (3) in both subjects. However, looking at differences in grades across school types, children from upper secondary school received better grades than children from lower or intermediate secondary school. Since grades can have different meanings across school types, the following analysis of grades is partly split between lower/intermediate secondary and upper secondary schools (Gymnasium), and restricted to twin pairs enrolled in the same school type.
Apart from looking at school grades, I am also interested in enrolment in upper secondary school (Gymnasium) compared with enrolment in other secondary school types. As demonstrated by Table 2, about 52% of the children (N=823) in cohort 2 attended upper secondary school, which is above the population mean (40% in the school year 2014/15; Malecki 2016: 26). However, taking into account all children, also those still enrolled in primary education and any other excluded school types (N=302), 43% are enrolled in upper secondary school, reflecting the general population very well.
The twins' physical resemblance
The twins' physical resemblance was derived based upon a set of questions included in the physical similarity questionnaire (Lenau et al. 2017). These questions referred to the twins' parents' perceptions of (1) "significant differences", (2) "slight differences", or (3) "no differences" in the twins' hair colour, hair texture, eye colour, and earlobes, and parents' assessments of the twins' similarity based on earlier photographs and resemblance in early childhood; these were recoded into (1) "had no resemblance at all", (2) "looked exactly the same", (3) "had a strong resemblance, like siblings". Taking the mean score of these, the resulting score ranges from one to three, with higher values indicating greater physical resemblance between the twins. As expected, the twins' physical resemblance is generally higher for MZ than for DZ twins (Table 1). Moreover, there is variation in physical resemblance in both MZ and DZ twins, which is crucial for one aspect of the method applied.
Parental treatment (parenting styles)
Research testing EEA frequently relied upon indicators describing differences in parental treatment -in particular the parenting twins receive (Felson 2014). In this paper, too, I measure similarities in family environment based on parental treatment, more precisely the mother's report on her parenting styles (e.g. how often she "praised" or "scolded" her children). Parenting styles were reported according to ten items from five different subscales (for an overview see Baum et al. 2020). These five subscales identify mother's parenting styles according to her emotional warmth (Jaursch 2003) (three items), her negative communication (Schwarz et al. 1997) (two items), the degree of inconsistent parenting (Reichle & Franiek 2005) (two items), strict control (Schwarz et al. 1997) (two items), and psychological control (Reitzle et al. 2001) (one item). The ten items measure parenting styles on a scale of one ("never") to five ("very frequent"), according to how often specific parenting behaviours occurred. Aggregating these items by use of mean scores results in five variables that are reliable (Table 3). Previous research suggests that facets of parenting styles are affected differently by the genetic makeup of children (Kendler & Baker 2007: 619-620). Therefore, it seems necessary to analyse the different parenting dimensions separately. In a first step, I look at the five sub-dimensions separately. In a second step, the ten items were then aggregated to reflect overall negative parenting styles expressed by mothers ("negative parenting"). In this context, the items measuring emotional warmth were recoded so that higher values reflected the absence of emotional warmth.
As demonstrated by Table 3, mothers in TwinLife score comparatively high on the items measuring the expression of emotional warmth, and tend to less often report negative communication and inconsistent parenting as parenting styles. Moreover, the resulting new variable "negative parenting" is normally distributed and reliable.
Methods
This paper looks at three different educational traits to test EEA. In this context, the analysis is split into two parts. In the first, I derive the results of different multilevel mixed-effects ACE variance decomposition models (Guo & Wang 2002). The analysis is based on the acelong command developed by Lang (2018). Acelong is a wrapper for generalized structural equation models (GSEM) that estimates different types of multilevel mixedeffects ACE variance decomposition model, such as that proposed by Guo and Wang (2002). Based on the variance decomposition model, I calculate the sizes of the different variance components, the additive genetic (A), the shared environment (C), and the non-shared environment (E) component, for the traits of interest. I also report MZ and DZ correlations (inter-class correlation, ICC). While school grades are assumed to be scaled metrically, a child's enrolment in upper secondary school is binary. Therefore, the analysis of a child's track attendance is based on a linear probability model, where an underlying latent variable describing a child's probability of attending the upper secondary school is assumed: In extreme cases of combinations of independent variables, linear probability models have been observed to estimate coefficients that imply probabilities below 0 or above 1. However, such cases are very unlikely to occur (Hellevik 2009). In addition, linear probability models and logistic regression models produce similar results when the percentage of cases with high values on the dependent variable varies between 0.2 and 0.8 (Hellevik 2009: 62-64, 68). In the current case of the binary dependent variable, the percentage of cases with high values, i.e. those enrolled in upper secondary school, is about 52% (see Table 2). Comparing the results for the linear probability model with those for a respective binary logistic regression additionally shows that the results are indifferent (see Table 5). Therefore, for ease of interpretation, I discuss only the results of the linear probability model.
Moreover, the models are based on maximum likelihood estimation, to improve the model fit, and use clustered robust error terms to resolve problems related to heteroscedasticity (Hellevik 2009). Finally, for the analysis of school grades, results of school-type specific models are reported to account for possible differences in the grading between school types.
In the second part of the analysis, I test the validity of EAA for the outcomes studied based on two different approaches. First, I investigate the effect of twins' physical resemblance on trait similarity at the twin-pair level based on OLS regression models. In this context, I include an interaction term between physical resemblance and the twins' zygosity to study the effects within zygosity groups. Second, I evaluate the associations between similarities in parenting the twins experience and similarities in school grades as well as the twins' chances of being enrolled in an upper secondary school within zygosity groups. If the calculated correlations are significantly greater than zero, EEA is violated (Derks et al. 2006: 403-404). Similarities in parenting experiences were derived by calculating the absolute difference in the parenting a pair of twins received ( = ( 1 − 2 ). The higher the values in the resulting variables, the greater the differences in parenting the twins experience, and the smaller the values, the greater the similarities. Following this approach, the differences in school grades were also calculated for each twin pair. For track attendance a binary variable was derived, describing whether the twins were on the same or different educational tracks (upper secondary school or any other track).
As demonstrated by Table 4, the majority of twin pairs were enrolled in the same track (85%). In about 47% of cases both twins attended a upper secondary school. Table 4 also shows that there is sufficient variance in school grades and parenting styles within twin pairs as indicated by the mean differences. Interestingly, there are greater differences within twin pairs regarding experiences of maternal control than experiences of emotional warmth. Moreover, differences tended to be smaller when regarding overall parenting style ("negative parenting") instead of looking at the different parenting sub-dimensions. Based on the observed differences, in a final step, EEA is investigated using OLS regression models that examine if differences in the parenting experiences explain differences in the twins' grades or their likelihood of enrolling in higher secondary education, controlling for the child's zygosity. These models are again derived at the twinpair level and control for the other parenting styles. Table 5 provides an overview of the results of the multilevel mixed-effects ACE variance decomposition models for all twins for whom there is full information at the twin-pair level. Looking at the reported twin correlations, I find no indication of non-additive genetic effects, suggesting that the ACE model is valid. As demonstrated by Table 5 for all three traits, and independent of whether one looks at the school types separately or combined, and of whether the twins attend the same school type or not, we find substantial heritability estimates. These range from around 37% of the variance for maths grades to about 56% of the variance for German grades being explained by variances in genes. The results additionally suggested that shared and non-shared family environments, too, explain a large proportion of the variance in school grades. For the enrolment in upper secondary school, the shared-environment component turns out to be even more important. The results are consistent with previous research -for example, on the heritability of school grades (Eifler, Star, & Riemann 2019). However, the derived confidence intervals turn out to be relatively large, reflecting the relatively low power of the track-specific models (this probably relates to small sample size). For some of the trackspecific models, particularly for German grades, the lower bound of the confidence interval is actually negative; suggesting that under specific circumstances the model even reduces to an AE model. Notes: 1 only full twin pairs, 2 including comprehensive schools, 3 twins enrolled in the same school type.
Results for the Classical Twin Design (CTD)
Reading note: Where the confidence intervals for C cover negative values, the models may reduce to an AE model under specific circumstances. Table 6 investigates the effect of twins' physical resemblance on trait similarity controlling for the twins' zygosity. The results suggest there is no violation of EEA for school grades. The result is robust for all twins, as well as for twins enrolled in the same school type (not shown), or for both twins enrolled in specific school tracks (Table 6). Similarly, the results suggest no violation of EEA for a child's chance of being enrolled in an upper secondary school. In a second step, the similarities in maternal reports on how they treat their twins with similarities in the twins' traits are studied for each zygosity group using different regression models. Table 7 shows that nearly all parenting dimensions correlate with a child's school grades and track attendance. The results in Table 8 show a significant effect for zygosity, suggesting greater differences in the outcomes studied for DZ twins compared to MZ twins. This effect relates to the role of genetic endowments in explaining greater similarities in the traits studied. Only in the case of track attendance is there a significant direct and interaction effect for differences in maternal psychological control, suggesting a violation of EEA. The results are the same regardless of whether the models control for other parenting styles or take into account only twins enrolled in the same school type (not shown). Correcting for multiple comparisons based on the Benjamini-Hochberg method (Benjamini & Hochberg 1995), assuming that in one case the H0 is erroneously rejected, the result for psychological control remains significant (p < 0.03).
Physical resemblance
Following the approach of Felson (2014), I test the extent to which the observed violation of EEA for psychological control leads to an overestimation of heritability. Comparing the results for nested models with and without controls for the extent of psychological control (not presented here), the derived heritability estimates are not different (without controls: A=40.3%, with controls: A=40.6%). Thus, the results suggest that the observed violation of EEA did not result in a meaningful overestimation of heritability.
Discussion
Twin-based research designs are relatively new to sociology, and a growing number of sociologists use these to differentiate between genetic and social causes of social inequalities. This paper tested one of the key and most debated underlying assumptions behind the most frequently applied genetically informative designs, CTD, i.e. the equal environment assumption (EEA). The paper extends previous research, which has mainly tested the validity of EEA for psychological and health outcomes (Felson 2014), and tested the "trait-relevant" definition of EEA for school grades and enrolment in higher secondary education based on two different approaches. Both approaches link possible violations of EEA either indirectly or directly to the experiences of differential parental treatment by MZ and DZ twins. In sociology, differences in family environment and the parenting children receive have been integrated as important key concepts to describe the mechanisms through which parents influence a child's skills development and educational outcomes (e.g. Lareau 2002;Pong, Hao, & Gardner 2005;Kaiser, Li, & Pollmann-Schult 2019). Systematic differences between MZ and DZ twins could in the long run relate to strong differences in child development (Plomin & Daniels 2011: 576). Therefore, testing for possible violations of EEA based on differences in the parental treatment of twins is particularly relevant to sociological studies based on CTD. In this paper I focused on parenting styles, which have been shown to influence educational outcomes such as school grades (Conger et al. 1992: 532, 536-537), and to mediate the effect of family background on school grades (Kaiser, Li, & Pollmann-Schult 2019). Given that different facets of parenting styles appear to be affected differently by the genetic makeup of children (Kendler & Baker 2007: 619-620), this makes parenting styles particularly interesting for studying possible violations of the "traitrelevant" definition of EEA in the context of status-related outcomes. The results demonstrated that, independent of the approach applied, EEA was not violated in the case of school grades. However, there was an indication of EEA being violated in the case of track attendance when taking into account the extent of maternal psychological control children receive. Interestingly, no violation of EEA was detected when testing the effect of physical similarity on track attendance, and violations did not show up for the aggregated measure of negative parenting styles. Comparing heritability estimates for models with and without extra controls to capture the source of violation, following the approach of Felson (2014), the results turned out to be almost identical. Greater similarities in psychological control for MZ twins did not lead to an overestimation of heritability. This probably relates to the weak correlation between maternal psychological control and child's track attendance. Violations of EEA probably need to be much stronger to meaningfully affect the size of the variance components. Though this result is reassuring, because in many applications of CTD associations between the twins' family environments and traits studied can be expected to be weak or mediocre, it remains good practice to test the validity of the "trait-relevant" definition of EEA when using CTD based on one of the available approaches.
One limitation of this study is that the maternal reports on their parenting styles could have been affected by social desirability. Future research should combine reports from fathers and mothers to address this limitation. In the current study the information from fathers is limited and could not be added. Another limitation is that even though the analytical sample was much larger compared to previous studies (e.g. Conley et al. 2013;Felson 2014), confidence intervals were still relatively large. Regarding the results presented in Tables 6 and 8, one might detect more violations of EEA in the case of an even larger sample size. Thus, future research should corroborate the findings presented using an even larger sample.
Nevertheless, the approaches applied were able to detect a violation of EEA for track attendance and -even more importantly -showed that such violations did not automatically invalidate CTD. The results highlight again the importance of the trait-relevant definition of EEA that can be tested only by taking into account the environmental variables that might causally be involved and contribute to greater similarity in MZ compared to DZ twins (Richardson & Norgate 2005: 341). In many cases these environmental variables relate to the family environments of the twins. For example, the tracking decision for children is particularly influenced by the family resources and the decisions taken by the parents (Ditton & Krüsken 2006;Jähnen & Helbig 2015). Therefore, the source of a possible bias violating EEA can be located primarily inside the family. Taking this into account, using an indicator that is only indirectly linked with the family environment or parental treatment, such as physical similarity, it can be more difficult to detect violations of EEA. Using measures directly related to trait variance, i.e. specific parenting styles, seemingly provides better chances.
To identify these influencing factors, a detailed investigation of the family environments and the underlying processes explaining possible trait variances is necessary. In many cases, however, it is not only the family environment that is decisive; so, too, are environments outside the family, which could contribute to greater similarity in MZ compared to DZ twins. For instance, school grades are influenced by parental resources and parental treatment, but they are also subject to school environments and teacherstudent relationships. Thus, if school environments were the main source of bias, one will probably be unable to detect violations of EEA by looking at family environments only.
Taken together, there seems to be a trade-off between the approaches to detect violations of EEA based on how narrowly the environments included in the analysis are defined. Therefore, before testing, researchers need to be sure and clear about where violations of EEA are to be expected. Otherwise, violations of EEA might be overlooked. This insight is relevant not only to research into the genetic and social causes of social inequalities, it extends to all sociological research that relies on CTD. | 2020-10-30T06:03:37.272Z | 2020-09-18T00:00:00.000 | {
"year": 2021,
"sha1": "e12325a25a80b9b7897d3db2e3730fb332b33a26",
"oa_license": "CCBY",
"oa_url": "https://ubp.uni-bamberg.de/jfr/index.php/jfr/article/download/381/500",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7f6ec2f967478eef90356ecda4c40348e06bb3d0",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
237271287 | pes2o/s2orc | v3-fos-license | Effects of Salt Thickness on the Structural Deformation of Foreland Fold-and-Thrust Belt in the Kuqa Depression, Tarim Basin: Insights From Discrete Element Models
The salt layer is critical for the structural deformation in the salt-bearing fold-and-thrust system, which not only acts as the efficient décollement layer but also flows to form salt tectonics. Kuqa Depression has a well-preserved thin-skinned fold-and-thrust system with the salt layer as the décollement. To investigate the effects of salt thickness on the structural deformation in the Kuqa Depression, three discrete element models with different salt thicknesses were constructed. The experiment without salt was controlled by several basal décollement dominant faults, forming several imbricate sheets. The experiments with salt developed the decoupled deformation with the salt layer as the upper décollement (subsalt, intrasalt, and suprasalt), significantly similar to the Kuqa Depression along the northern margin of Tarim Basin. Basal décollement dominant imbricated thrusts formed at the subsalt units, while the monoclinal structure formed at the suprasalt units. The decoupled deformation was also observed in the tectonic deformation graphics, distortional strain fields, and max shear stress fields. However, the salt layer was thickened in the thick salt model, and the salt thickness of the thin salt model varied slightly because the thin salt weakened the flowability of the salt. The lower max shear stress zone was easily formed in the distribution region of salt under the action of compression stress, which is conducive to the flow convergence of salt and the crumpled deformation of interlayer in salt. The results are well consistent with the natural characteristics of structural deformation in the Kuqa Depression. Our modeling result concerns the structural characteristics and evolution of salt-related structures and the effects of salt thickness on the structural deformation in the compressional stress field, which might be helpful for the investigations of salt-related structures in other salt-bearing fold-and-thrust belts.
INTRODUCTION
The Kuqa Depression is a peripheral foreland basin placed at the southern piedmont of Tianshan Orogen, Northwest China ( Figure 1). It develops from the Late Permian to Quaternary with strong compressional deformation since the late Cenozoic (Nishidai and Berry (1990); (Lu et al., 1994;Yin et al., 1998;Lu et al., 1999;Liu et al., 2000)). Bounded by the Kuqa River, the depression can be divided into the eastern Kuqa Depression and the western Kuqa Depression (Xin et al., 2002). From the north to south, five main structural belts are divided according to the different structural features in the western Kuqa Depression: the North structural belt (NSB), Kelasu structural belt (KLSSB), Baicheng sag (BCS), Qiulitage structural belt (QLTGSB), and the South basement slope belt (SBSB) (Figure 1) (Yin et al., 1998;Xin et al., 2002). As one of the most important hydrocarbonbearing evaporite basins in China (Yu et al., 2014;Feng et al., 2018;Song et al., 2019), numerous salt-related structures were widely developed in the Kuqa Depression, especially in the Kelasu structural belt with obvious topographic relief features and obvious structural deformation features Wu et al., 2014;Yu et al., 2014;Wu et al., 2015a;Zhao and Wang, 2016;Wang et al., 2017;Neng et al., 2018).
In recent years, based on structural analysis of seismic profiles with the application of several structural analysis techniques, such as the interpretation or balanced analysis of seismic profiles (Li and Qi, 2012;Neng et al., 2012;Neng et al., 2013;Yu et al., 2015;Hou et al., 2019), the area-depth-strain method (Xie et al., 2015;Wang et al., 2017), and the Coulomb wedge theory (Suppe, 2007;Lin et al., 2017), and structural simulated techniques, such as the sandbox physical simulation (Wang et al., 2010;Yin et al., 2011;Li and Qi, 2012;Xu et al., 2012;Wu et al., 2014) and numerical simulation (Wang et al., 2010;Xu et al., 2012;Li W. et al., 2017;Duan et al., 2017;Li, 2019;Li et al., 2020), further understanding has been obtained on the research of structures in the Kelasu structural belt. In many compressional salt-bearing basins around the world, the rock salt, as the important décollement layer, has been demonstrated to have important impact on regional structural evolution (Cotton and Koyi, 2000;Wu et al., 2014;Wu et al., 2015b). Meanwhile, much attention has been paid on the differential thickness distribution of Kumugeliemu salt and Jidike salt in the western and eastern Kuqa Depression Tang et al., 2004;Yu et al., 2014;Tang et al., 2015;Zhao and Wang, 2016;Wang et al., 2017). How did differential thickness distribution of these two salt layers influence the structural deformation in the western and eastern depression? It is worthy of further research to explore the influence of the differential thickness of rock salt on the structural evolution, especially from the perspective of experimental simulation.
In this study, two seismic profiles were presented to reveal the differential structural deformation caused by the difference in the thickness of the salt layers. Besides, three two-dimensional discrete element models with different salt thicknesses were constructed to investigate the characteristics of the differential structural deformation in the western Kuqa Depression. The experimental setup, the model construction technique, the material, and the wall properties were prescribed. Based on three simulation experiments, we focused on the internal relationship between the experimental results and structural deformation characteristics in the northern margin of the Kuqa area, e.g., the formation of the "accommodative space" in the salt strata and the crumpled deformation of the dolomite interbed in the Kelasu structural belt.
GEOLOGICAL SETTING
The Kelasu structural belt is a strong structural deformation belt placed at the northern part of the western Kuqa Depression (Figure 1). The stratigraphy of the Kuqa Depression has been Figure 1 for location) of the western Kelasu structure in the western Kuqa Depression; data revised from Wang et al. (2017). (B) The thin salt model. Interpreted seismic section Line 2 (see Figure 1 for location) of the eastern Kelasu structure in the western Kuqa Depression; data revised from Wu et al. (2014). The prekinematic strata (syn, isopachous layer) interval whose initial stratigraphic thickness is constant above a salt structure. It records sedimentation before salt movement. The synkinematic layer accumulated during salt flow and may include internal onlap or truncation. The subsalt strata were sedimentary units immediately underlying salt.
Frontiers in Earth Science | www.frontiersin.org August 2021 | Volume 9 | Article 655173 3 summarized regionally (Yin et al., 1998;Chen et al., 2004;Wang et al., 2011). Isolated by the evaporative rock salt, the stratigraphic column could be subdivided into three parts ( Figure 2): (1) Palaeozoic and Mesozoic subsalt basement; (2) the Palaeocene-Eocene Kumugeliemu (E 1-2 km) evaporative rock salt; and (3) Eocene-Quaternary overburden which is composed of old strata to young strata by Suweiyi formation (E 2-3 s), Jidike formation (N 1 j), Kangcun formation (N 1 k), Kuqa formation (N 2 k), and Xiyu formation (Q 1 x). The strata E 2-3 s to N 1 k were interpreted as prekinematic strata ( Figure 3A) which were deposited before salt flow began. The prekinematic strata (syn, isopachous layer) interval's initial stratigraphic thickness is constant above a salt structure (Jackson and Talbot, 1991). These record sedimentation before salt movement. The strata N 2 k to the Quaternary (Q) were interpreted as synkinematic strata ( Figure 3A) which accumulated during salt flow and may include internal onlap or truncation (Jackson and Talbot, 1991;Wu et al., 2014). The thickness of the synkinematic sedimentation is about 6-8 km in the western Kuqa Depression . According to the interpreted seismic profiles (Figure 3), the differential thickness and the vertical distribution of the salt layer were presented, i.e., the thick salt model with ca. 1,000 m salt in the western Kelasu structural belt (Wu et al., 2014) and the thin salt model with ca. 200 m salt in the eastern Kelasu structural belt . Several typical structural characteristics were summarized as follows based on the comprehensive comparison and analysis of typical seismic profiles.
The Western Kelasu Structural Belt
The salt layer was thickened in the thick salt model ( Figure 3A) because the flowability of the salt could be enhanced. The fault F1 ( Figure 3A) pinched out on the salt layer. Two major décollement levels exist in Kuqa Depression, i.e., an upper décollement with salt-gypsum lithologies (the Paleogene-Miocene Kumgeliem and Jidike strata) and the lower décollement mostly within Jurassic coal and mudstone strata . Imbricate thrust faults and duplex structures linking the two décollements developed with salt that flowed into the cores of the duplex structure (F1, Figure 3A). The differences in the geometries of salt structures in different regions show that the thickness of the salt sequences has an important influence on the development of salt-cored décollement folds and related thrust faults in the Tarim Basin (Wu et al., 2014).
The Eastern Kelasu Structural Belt
The fault F2 ( Figure 3B) in the piedmont thrust fold belt of the Kuqa Depression directly cuts through all the layers and nappes to the shallow strata from the deep of orogenic belts. The thin salt between the salt substratum and the slat superstratum shows no obvious rheological properties, so the salt thickness shows little variation ( Figure 3B).
EXPERIMENTAL METHOD AND MODEL SETUP
The discrete element method (DEM) has been applied to the study of geological and geophysical problems in recent decades (Hardy et al., 2009;Yin et al., 2009;Liu et al., 2015;Morgan, 2015;Botter et al., 2016;Buiter et al., 2016;Morgan and Bangs, 2017;Li, 2019;Li et al., 2020;Xu et al., 2021). A full, detailed description of the theory behind this modeling approach and its application to geological problems is given by Morgan (2015) and Li C. et al. (2017Li C. et al. ( , 2018Li C. et al. ( , 2021, Li (2019). A geological body is simplified into an assemblage of ball elements which obey Newton's equations of motion and can move under the action of the forces which are generated by interaction with pairs by elastic springs. Our implementation of DEM in the discrete element software ZDEM was summarized by Li (2019).
Three experiments presented here were all initialized by randomly generating particles within a 40 km wide × 14 km-tall domain. Particles were allowed to settle under gravity, bound by two vertical walls and a basal row of fixed particles. The resulting particle assembly was 40 km wide and 5 km thick, and values were chosen to allow typical sedimentary covers to be modeled, large absolute values of convergence to be achieved, and model boundaries to be far from the locus of deformation (Figure 4). The particle packing consisted of 12,234 particles, with uniform distribution radii of 60.0 and 80.0 m. To examine the influence of salt thickness on the structural deformation, three experiments were carried out on initially identical homogeneous packings and boundary and initial conditions and dimensions but using different sets of the salt thickness ( Figure 4). As a reference experiment, there is no salt layer in Exp. 1. The salt thickness of Exp. 2 was set to ca. 300 m tall, and the salt thickness of Exp. 3 was set to ca. 1,000 m tall. Frontiers in Earth Science | www.frontiersin.org August 2021 | Volume 9 | Article 655173 4 The particle properties of experiments are presented in Table 1. Upon settling, bonds of assigned strengths ( Table 2) were introduced at all interparticle contacts, except salt layers. Interparticle friction was set to 0.3 throughout the bonded domain (i.e., rock layer). There are two major décollement levels existing in the Kuqa Depression, i.e., an upper décollement in the salt layer and the lower décollement mostly within Jurassic coal and mudstone strata . As the low décollement, the interparticle friction of the foot wall was set to 0.15. As the upper décollement, the interparticle friction of the salt layers was set to 0.0 to ensure its lower strength.
The bulk mechanical properties of the numerical materials used in these experiments were prescribed by particle properties and bond parameters in Tables 1 and 2. These parameters in Tables 1 and 2 were consistent with previous studies (Morgan, 2015;Li, 2019). They were calibrated through a series of repose angles and two-dimensional biaxial tests based on the method presented by Li, (2019; and Morgan (2015). In numerical simulations, a local damping coefficient, which was the one most commonly used (Potyondy and Cundall, 2004;Itasca Consulting Group 2008;Kozicki and Donzé, 2008;O'Sullivan, 2011;Scholtès and Donzé, 2013;Weatherley et al., 2014;Zhao, 2015;Li, 2019;Xu et al., 2021), was added to damp the reflected waves from the boundary of the particle and to avoid buildup of kinetic energy in the closed system (Itasca Consulting Group 2008;Li, 2019). The meaning of the other parameters in Tables 1 and 2 was given by Morgan (2015) and Li (2019). The value of cohesion of rock layer and salt layer is, respectively, ca. 10.5 MPa and ca. 1.8 MPa (Li, 2019). The values of friction angles of rock layer and salt layer are ca. 18.6°and ca. 4.3°, respectively (Li, 2019). The value of cohesion is consistent with the strength of shallow crustal sediments (Camac et al., 2009;Jaeger et al., 2009;Schumann et al., 2014). Note that the values of friction angle are significantly lower than the typical value of friction angle, 30° (Jaeger et al., 2009;Wu et al., 2014), which is a common characteristic of these numerical materials (Morgan, 1999;Aharonov and Sparks, 2004;Morgan, 2004;Vidal and Bonneville, 2004;Dean et al., 2013;Gray et al., 2014;Morgan, 2015;Li, 2019), and consistent with shear experiments on smooth glass rods (Frye and Marone, 2002;Sun et al., 2016;Li, 2019).
The horizontal contraction was initiated by capturing particles along the right sidewall and applying a constant velocity of 2.0 m/ s to the left. The time step per cycle was 0.05 s, producing 0.1 m of wall displacement per cycle. The synkinematic sedimentation played an important role in structural deformation in Kuqa Depression (Yin et al., 2011;Wu et al., 2014). After the first thrust was formed (ca. 2 km of shortening), ca. 0.5 km-thick synkinematic layer was deposited for every 1 km of shortening. The final thickness of the synkinematic layer was ca. 5 km.
Distribution Deformation
Comparative plots of the final particle configurations (12 km wall displacement) of the three experiments with different salt thicknesses are shown in Figure 5. Three experiments have the same initial model but use different sets of salt thickness. As the reference experiment, Exp. 1 is without salt ( Figure 5A). The salt thickness is ca. 300 m in Exp. 2 with thin salt ( Figure 5B), while salt thickness is ca. 1,000 m in Exp. 3 with thick salt ( Figure 5C). Two faults form in Exp. 1 without salt layer, and the growth strata show the features of fault-propagation fold in accordance with the fault activity ( Figure 5A). But, the deformation was apparently divided into two parts in Exp. 2 (ca. 300 m salt, Figure 5B) and Exp. 3 (ca. 1,000 m salt, Figure 5C) containing the salt layer. Deformation above and below the salt layer was decoupled, with the imbricate structure formed in the subsalt units and back-thrust fault formed in suprasalt units.
In Figure 3B, continuous progradation from the southern Tianshan piedmont until the end of the late Miocene-early Pliocene made Kumugeliemu salt flow basinward and F2 developed . Accelerated crustal shortening since the end of the late Pliocene-early Pleistocene amplified the Misikantake anticline and formed the Quele salt nappe, and several new forward subsalt structures developed and they do not extend to the surface . Previous studies have shown that frictional resistance increases with salt pinch-out (Dooley et al., 2007) and buttressing effects of a distal salt pinch-out can control the location and style of distal salt structures (Costa and Vendeville, 2002;Couzens-Schultz et al., 2003;Dooley et al., 2007). Faults can easily cut through rock layers without salt layers ( Figure 5A), but they rarely cut the thick salt layers; instead, they detach along with the salt layers ( Figure 3A, 5C).
Distortional Strain
The distortional strain was used to quantify the results for DEM and was calculated according to the study by Morgan (2015). Distortional strain, i.e., strain-induced distortion, can be quantified as the second invariant of the deviatoric finite strain tensor (Morgan, 2015). Throughout the experiments, particle positions and interparticle forces were output every 10,000 cycles (1 km wall displacement), an interval referred to as an "increment." Subsequent calculations of particle displacements are made at whole increments, and this unit is used for plotting purposes. The results of the three experiments are shown in Figure 6 with plots of cumulative distortional strain after 12 km (the shortening rate 20%) of shortening. The experiments were accommodated by largely distributed shear deformation with occasional local zones of more intense top-to-the-left shearing (Figure 6, blue zones). Dip angles of the forward thrust ( Figure 6, blue zones) were ca. 45°in all three experiments. They were in the same range as the dip angles (ca. 30°-50°, Figures 3A,B) of the forward thrust in the subsalt units. Figure 6B,C, F3-F6 did not cut through the salt layer; instead, they detached along with the salt layer because of the flow of salt that contributes significantly to structural relief in this part of the Kuqa fold-and-thrust belt. On the contrary, F1 and F2 ( Figure 6A) could easily cut through the rock layer. The salt layer was obviously thickened near the right wall when the initial thickness of salt was ca. 1,000 m ( Figure 6C). The thickness of salt had a little change when the initial thickness of salt was ca. 300 m ( Figure 6B) because the flowability of thin salt is weaker. The salt could be one of the main factors leading to differential tectonics. The contour map of distortional strain ( Figures 6A-C) shows local strain concentration around the fault, indicating that faulting played an important role in the fracture development in the damage zone surrounding the fault. This mechanism explained the higher density of shear fractures that developed in the near-fault area and the salt layers.
Max Shear Stress
Stress invariants for all of the systems are calculated and plotted for 1 × 1 km elements (summing over ca. 25 particles), with colors scaled by stress magnitude. The final structure of each series (Figure 7) is superimposed by plotting regions of high distortional strain (i.e., the absolute value is greater than 4.8 in Figure 6) in black. Similar to the simulation results of Morgan (2015), there is a distinct variability within the wedges about the maximum shear stress, τ max (Figures Frontiers in Earth Science | www.frontiersin.org August 2021 | Volume 9 | Article 655173 7A-C). τ max increased with depth due to the combined increase in both vertical and horizontal stresses with burial (Morgan, 2015). Moreover, τ max was relatively high near the moving wall and in the foot walls of major shear zones. The highest values of τ max usually appeared directly in front of the frontal thrusts, outlining regions of unfaulted material that still supported high shear stresses (Morgan, 2015). The frontal regions of high τ max expanded with decreasing salt thickness (Figure 7), demonstrating the flowability of salt layers to dissipate shear stresses. The stress also had obvious stratification characteristics in the experiments with the salt layer.
DISCUSSION
Both the regional geological structure analysis and the structural simulation experiments show that if there is a regional distribution of salt rock as an important décollement layer in the fold orogenic belt and its foreland basin, the strata will undergo obvious structural stratification deformation in the vertical direction ( Figures 5, 6) (Wang et al., 2010;Yin et al., 2011;Xu et al., 2012;Wu et al., 2014;Yang, 2017;Neng et al., 2018;Li, 2019;Li et al., 2020;Sun et al., 2021;Xu et al., 2021). The results of these experiments well reflect the regional structural characteristics of the northern margin of the western Kuqa Depression (Figure 8). In the northern margin of the western section of Kuqa Depression, the basement involved thrust faults into the salt with a large displacement. When the salt is thick, the thrust fault will make the overlying salt significantly thicken and create favorable "accommodative space," which is conducive to the formation of the salt anticline and salt diapir ( Figure 3A, 8A). When the salt is thin, it is easy to cut through by faults, which is not conducive to the formation of favorable "accommodative space" (Figure 3B, 8B).
The results of the three simulations also show that a lower max shear stress zone is easily formed in the distribution region of salt under the action of compression stress, which is conducive to the flow convergence of salt and the crumpled deformation of interlayer in salt (Figures 7, 9). When the thickness of salt decreases, the fluidity of salt decreases obviously which is not conducive to salt convergence to form salt-related structures ( Figure 8B). While the salt is thick enough, its fluidity is obviously enhanced which is conducive to the flow and convergence of salt from the high-stress area to the low-stress area and forms larger salt structures, such as salt anticline or salt diapir structures ( Figure 8A) There are a fewer imbricate faults of the subsalt units in three discrete element simulations ( Figure 5C) than in the interpreted seismic sections of the Kelasu structure in the Kuqa Depression ( Figure 3A, 8A). The displacement of shortening, the slope, and the thickness of substrate décollement may control the distribution range and the number of imbricate faults in the subsalt units. We would make further analysis in the later study.
CONCLUSION
Effects of salt thickness on the structural deformation were discussed using different seismic profiles in the foreland foldand-thrust belt of the Kuqa Depression, which indicated that the thickness of the salt had an important influence on the structural styles.
The experiment without salt was controlled by several basal décollement dominant faults, forming several imbricate sheets. The experiments with salt developed the decoupled deformation with the salt layer as the upper décollement (subsalt, intrasalt, and suprasalt), significantly similar to the Kuqa Depression in the northern margin of Tarim Basin. Basal décollement dominant imbricated thrusts formed at the subsalt units, while the monoclinal structure formed at the suprasalt units. The decoupled deformation was also observed in the tectonic deformation graphics, distortional strain fields, and max shear stress fields. However, the salt layer was thickened in the thick salt model and the salt thickness of the thin salt model varied slightly because the thin salt weakened the flowability of the salt evidently. The lower max shear stress zone was easily formed in the distribution region of salt under the action of compression stress, which is conducive to the flow convergence of salt and the crumpled deformation of interlayer in salt. These phenomena are well consistent with the natural characteristic of structural deformation in the Kuqa Depression, Tarim Basin.
The modeling results in this study concern the structural characteristics and evolution of salt-related structures and the effects of salt thickness on the structural deformation in the compressional stress field, which might be helpful for the investigations of salt-related structures in other salt-bearing fold-and-thrust belts.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
FUNDING
The authors gratefully acknowledge the financial support provided by the National Natural Science Foundation of China (grants 41972219, 41927802, 41572187, and 41602208), National S&T Major Project of China (grants 2016ZX05026-002-007, 2016ZX05003-001, and 2016ZX005008-001-005), the PhD Frontiers in Earth Science | www.frontiersin.org August 2021 | Volume 9 | Article 655173 8 Starting Foundation of East China University of Technology (grants DHBK2019024 and DHBK2019053), and Project of PetroChina Company Limited (grant 2018A-0101). CL was also supported by program B for being an outstanding PhD candidate of Nanjing University. | 2021-08-24T13:14:50.283Z | 2021-08-24T00:00:00.000 | {
"year": 2021,
"sha1": "d7230921760844e39c09a260cdb5705732042e9b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/feart.2021.655173/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "d7230921760844e39c09a260cdb5705732042e9b",
"s2fieldsofstudy": [
"Geology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
221294838 | pes2o/s2orc | v3-fos-license | Galeazzi Fracture Dislocations: An Illustrated Review
Galeazzi fracture dislocations are a fracture of the distal one third of the radius shaft with a concomitant dislocation of the distal radioulnar joint (DRUJ). These injuries usually occur by axial loading on an outstretched arm with pronation or supination of the wrist which determines the angulation of the fracture. Surgical treatment has been historically by the anterior (volar) approach to the forearm with plate fixation with or without pinning of the distal radioulnar joint. Failed or inadequate treatment may lead to complications including chronic pain, malunion or instability of the DRUJ that may warrant salvage procedures.
Introduction And Background
Galeazzi fracture dislocations are a fracture of the distal one third of the ulna's shaft with a concomitant dislocation of the distal radioulnar joint (DRUJ). First it was described by the British surgeon Sir Astley Cooper in 1822 and then it was named after Galeazzi who first reported a series of 18 cases in 1934 in which he described the mechanism, incidence, and management of this injury [1]. They represent approximately 7% of adult and 3% of pediatric forearm fractures, with closed reduction and casting being the gold standard of treatment in pediatrics and open reduction and internal fixation in adults in order to anatomically restore the radial bow to avoid any functional deficit [2]. This fracture has been notoriously known for its instability, and delayed or inadequate treatment may result in dreadful complications that could have a substantial effect on the outcomes of the fracture [3].
Review Pathoanatomy
The radius and ulna are held together by the interosseous membrane (IOM) which is composed of the following: the proximal cords, accessory bands, distal band and finally the central band which is the strongest component of the IOM (Figure 1). The IOM is a relatively weak attachment to the distal one third of the radius which may predispose it to subsequent shortening if an injury occurs through it. The IOM has the following functions that are of biomechanical importance: 1) Load transfer from the radius to the ulna, 2) Load transfer from wrist joint to the elbow, 3) Maintains a stable DRUJ, 4) Maintains forearm stability throughout range of motion [4]. A number of deforming muscular forces are exerted on the distal radius. The abductor pollicis longus and extensor pollicis brevis exert a shortening force upon the distal radius, the pronator quadratus muscle also exerts a rotational force, and finally the brachioradialis pulls the distal radius fragment proximally (Figures 2, 3) [5]. Furthermore, the main stabilizer of the distal radioulnar joint is the triangular fibrocartilage complex (TFCC) which originates between the sigmoid notch and the lunate fossa on the radius, and inserts on the ulnar styloid and the fovea [6]. The dorsal and volar radioulnar ligaments of the TFCC also are paramount to maintain stability of the ulna and they are the main stabilizers of the DRUJ within the TFCC (Figure 4) [6]. Original illustration by Alswaji G.F.
Classification
There have been several classification systems to classify Galeazzi fracture dislocation, the first was described by Walsh et al. in a report of 41 pediatric fractures which included: Type 1 which is characterized by a dorsal displacement of the distal radius (apex volar), and it is caused by axial load applied to the forearm while the forearm is in supination ( Figure 5) [7]. Type 2 which is characterized by volar (posterior) displacement of the distal radius which makes it an apex dorsal ( Figure 6). A second classification system was proposed by Rettig and Raskin in 2001 which classifies the fracture based on the distance from the distal radioulnar joint, >7.5 cm or <7.5 cm [8]. Twenty-two of the 40 fractures had radius fractures <7.5 cm from the articular surface and 12 of these fractures showed intraoperative instability of the DRUJ. On the other hand, the remainder 18 fractures were >7.5 cm from the articular surface and only one of them was shown to intraoperative instability of the DRUJ. Beneyto et al. also classified Galeazzi fractures into three types based on the location of the distal radius fracture; type I was 0-10 cm from the tip of radial styloid, type II was 10-15 cm, and type III was >15 cm away from the radial styloid [9]. The worse results were noted in patients with type I fractures.
Diagnosis
A systemic approach should be utilized with any patient presenting with any orthopedic injury.
In cases of an open fracture or high energy injury the Advanced Trauma Life Support protocol should be initiated to rule out any life-threatening injuries or hemorrhage. In cases of a low energy isolated Galeazzi fracture clinical examination usually reveals gross deformity and/or swelling on inspection. Tenderness to palpitation will be obvious in the site of the distal radius fracture and the distal radioulnar joint. Although painful, gentle passive and active wrist flexion and extension along with forearm rotation can be attempted. Prominent ulnar head either dorsally or volary along with distal radioulnar tenderness is characteristic for DRUJ injury [3].
Shortening of the radius may be evident depending on the extent and severity of the injury. A comprehensive neurovascular examination is mandatory although neurovascular injuries in Galeazzi fractures are rare [10]. Radiographic assessment should include dedicated X-rays of the wrist, forearm and elbow. The finding of the radius fracture and disruption of the DRUJ confirms the diagnosis of Galeazzi fracture dislocation. Contralateral wrist X-rays for comparison may aid in the diagnosis. Findings suggestive of DRUJ injury on plain radiographs include: Radius shortening >5 mm relative to the ulna, fracture of the base of the ulnar styloid, asymmetry compared to the contralateral uninjured limb radiographs, widening of the DRUJ on anteroposterior radiographs, and on lateral radiographs subluxation or dislocation of the radius relative to the ulna [11]. In the setting of negative radiographs but a high index of suspicion for DRUJ injury, axial CT has been recommended [12]. Currently the use of MRI in the diagnosis of Galeazzi fractures has not been clearly established [13]. Figure 7 shows a proposed diagnostic algorithm by the author.
FIGURE 7: Diagnostic algorithm
Proposed by the author.
Management
In adults, Galeazzi fractures are known to be the "Fracture of Necessity", necessitating open reduction and internal fixation to achieve satisfactory outcomes. This is largely due to the inherent instability of Galeazzi fractures and the deforming forces that were mentioned previously [14]. If treated conservatively in adults, the majority of Galeazzi fracture patient will almost always achieve non-satisfactory outcomes [15]. The Volar (Henry [16]) approach is classically utilized to access fractures of the middle and distal thirds of the radial shaft [17]. Although some surgeons may differ or disagree, described here is the author's preferred method which is most commonly utilized. Prior to the incision and tourniquet inflation, it is best to avoid exsanguinating the limb in order to easily identify the radial artery by the two venae comitantes accompanying it. The landmarks are the radial head or lateral to the biceps tendon proximally to the radial styloid distally, the incision should be centered on the fracture site and can either be straight or curved proximally while the forearm is supinated (Figure 8).
After dissection of the subcutaneous fat, the proximal intermuscular interval is between the pronator teres and brachioradialis, while distally it is between the brachioradialis and the flexor carpi radialis (FCR). Careful incision of the fascia between the brachioradialis and the FCR as the radial artery lies directly below the medial edge of the brachioradialis midway in the forearm. The radial artery should be identified, protected, and freed along its length to allow it to be retracted medially. Under the brachioradialis muscle belly lies the superficial radial nerve which should also be identified and protected as injury or damage may cause neuromas ( Figure 9).
Deep dissection varies depending on the location. Proximally the posterior interosseous nerve (PIN) is at risk. In order to expose the proximal radius safely the forearm should be supinated in order to move the PIN laterally away from the surgical field. With this maneuver the insertion of the supinator muscle on the anterior radius is exposed and it is incised laterally and retracted with caution to avoid injury to the PIN ( Figure 10).
Finally in the distal forearm the pronator quadratus and flexor pollicis longus arise from the radius and can be incised laterally with the forearm supinated. Historically, dynamic compression plates have been the preferred method of osteosynthesis in Galeazzi fractures [18]. The efficacy of locked plates has not been extensively studied, but dynamic compression plates have shown to have superior torsional stability than unicortical locked plates [19]. After anatomical reduction of the radius and restoration of the DRUJ, the forearm is examined throughout range of motion and in supination. If the DRUJ is stable and reduced, no further intervention is needed. In cases where the DRUJ is unstable, there are several options; in cases where there is a large ulnar styloid fragment, it can be fixed using lag screw, tension band, or pins. In cases where there is TFCC tear, it can be repaired through a dorsal approach via suture anchors or other techniques [20]. Following repair of the TFCC, or if the surgeon has doubts of the DRUJ stability the DRUJ may be transfixed with K wires transversely with forearm in supination. Finally, the forearm should be mobilized post-operatively in supination to minimize the rotational forces around the DRUJ and to allow for ligamentous healing.
On the other hand, in pediatric Galeazzi fractures the gold standard remains non-surgical treatment due to the stable nature of the fracture in this population due to several factors which include: highest elasticity of the ligaments and superior strength of the DRUJ compared to adults, thicker periosteum, and most importantly the higher capacity for the fracture to remodel especially in the joint plane [21]. Also, several studies have reported successful and satisfactory outcomes following non-surgical management of pediatric Galeazzi fractures [2,7,22]. Treatment of pediatric Galeazzi fractures should be in the form of closed reduction under general anesthesia followed by above elbow immobilization in supination for up to six weeks. Immobilization in supinations allows the healing of the TFCC along with maintaining stability of the DRUJ. Although rare, surgical treatment of pediatric Galeazzi is indicated if closed reduction is unable to yield satisfactory alignment or if there is loss of reduction following the initial anatomic reduction. The type of surgical intervention in pediatric may vary depending on the age of the patient, location of the fracture, and stability of the DRUJ after reduction. The options include: K-wire fixation, flexible intramedullary nailing, plate osteosynthesis, or simply open reduction without internal fixation [8,9]. Figure 12 outlines a structured treatment algorithm by the author.
FIGURE 12: Treatment algorithm
Proposed by the author.
Complications
Along with the usual complications of forearm fractures, the most devastating complications associated with Galeazzi fractures are either radius fracture non-union or malunion, or DRUJ instability which may in turn lead to loss of forearm rotation, reduced grip strength and chronic pain [3,11,12]. For inappropriately treated Galeazzi fractures with late presentation, or with non-reconstructable DRUJ, salvage procedures may be indicated. Salvage procedures include: the Darrach's procedure, the Sauve-Kapandji procedure, hemiresection or implant arthroplasty. The Darrach's procedure includes resection of the distal ulna to relive DRUJ pain. Authors vary in describing the amount resected and some preserve the styloid in order to avoid instability [22]. On the other hand, the Sauve-Kapandji procedure is done by arthrodesis of the DRUJ and creating a pseudarthrosis on the ulna just proximal to the site of arthrodesis. The Sauve-Kapandji procedure is superior to Darrach's procedure as it preserves the ulnocarpal ligaments, and ulnar support of the wrist and it also has a superior aesthetic appearance [23].
Conclusions
Galeazzi fractures are uncommon type of forearm fractures. They are highly unstable fracture dislocations and they should be addressed in a timely fashion to limit their complications. In adult, the gold standard of treatment is open reduction and internal fixation to overcome the deforming muscular forces and to achieve anatomical reduction of the radial bow and DRUJ. If the reduction of the DRUJ is not achieved on closed manners a trial of open reduction with possible reconstruction of the TFCC or the ulna styloid fracture should be attempted. Pediatric patients, however, can be managed by closed reduction and above elbow immobilization due to their superior bone remodeling capacity and stronger ligaments. Open reduction with or without fixation should be attempted in pediatric patients if close reduction fails to achieve successful results.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2020-07-30T02:05:21.391Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "fc1ea6dddd4fb4893a986eda330069bd12b506c4",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/36962-galeazzi-fracture-dislocations-an-illustrated-review.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a0f3074ee32e0dbafa769b2177cb104d90d8c993",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26101504 | pes2o/s2orc | v3-fos-license | A lipomannan variant with strong TLR-2-dependent pro-inflammatory activity in Saccharothrix aerocolonigenes.
Lipomannans (LMs) are powerful pro-inflammatory lipoglycans found in mycobacteria and related genera, however the molecular bases of their activity are not fully understood. We report here the isolation and the structural and functional characterization of a new lipomannan variant present in the Pseudonocardineae, Saccharothrix aerocolonigenes, designated SaeLM. Using a range of chemical degradations, NMR experiments, and mass spectrometry analyses, SaeLM revealed a mannosylphosphatidyl-myo-inositol (MPI) anchor glycosylated by an original carbohydrate structure whereby an (alpha1-->6)-Manp backbone is substituted at >80% of the O-2 position by side chains composed of Manp-(alpha1-->2)-Manp-(alpha1-->. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry analysis indicated a distribution of SaeLM glyco-forms ranging from 19 to 61 Manp units, which centered on species containing 37 or 40 Manp units. SaeLM induced a Toll-like receptor 2 (TLR-2)-dependent production of tumor necrosis factor-alpha (TNF-alpha) by human THP-1 monocyte/macrophage cell lines and interestingly was found to be the strongest inducer of this pro-inflammatory cytokine when compared with other LAM/LM-like molecules. We previously established that a linear (alpha1-->6)-Manp chain, linked to the MPI anchor, is sufficient in providing pro-inflammatory activity. We demonstrate here that by adding side chains and increasing their size, one may potentiate this activity. These findings should enable a better understanding of the structure/function relationships of TLR-2-dependent lipoglycan signaling.
bacterial cell walls (1)(2)(3)(4)(5). Their structures originate from a phosphatidyl-myo-inositol (MPI) anchor, which is mannosylated to generate LM (4,6) and further arabinosylated to give LAM. The non-reducing termini of the arabinosyl side chains can be substituted by capping motifs, yielding to the classification of LAM into three families. LAM from slow growing mycobacteria bearing mannose caps, i.e. mono-or (␣132)-di-or tri-mannoside units, are designated as ManLAM. In contrast, LAM from fast growing mycobacteria capped by phospho-myoinositol units or not capped at all are termed PILAM and AraLAM, respectively (4). LAM and LM exhibit a broad spectrum of immunomodulatory activities, including the ability to modulate the production of macrophage-derived Th1 pro-inflammatory cytokines, most commonly TNF-␣ and IL-12. For example, ManLAM are able to inhibit the LPS-induced production of IL-12 and TNF-␣ (7,8). So, ManLAM contributes, via an immunosuppressive effect, to the persistence of slow-growing mycobacteria in the human reservoir. ManLAM anti-inflammatory activity has also been shown to require the interaction of ManLAM with the mannose receptor and/or dendritic cell-specific intercellular adhesion molecule-3 grabbing nonintegrin via the mannose capping motifs (8 -10). In contrast, PILAM are able to induce the release of a variety of proinflammatory cytokines through the activation of Toll-like receptors 2 (TLR-2) (11)(12)(13). This activity is likely to require PI caps, because AraLAM does not show any activity (14). Early studies demonstrated that LM from Mycobacterium sp. induce expression of pro-inflammatory cytokines (15). A set of recent reports have shown that LM from both pathogenic and nonpathogenic mycobacterial species, independent of their origin, were potent stimulators of TNF-␣, IL-8, and IL-12 (16 -18). Furthermore, LM was shown to activate macrophages in a TLR-2-dependent, and TLR-4-and TLR-6-independent manner (16 -18). The ManLAM/LM balance might thus be a parameter influencing the net immune response against mycobacteria. Indeed, according to their activity, lipoglycans are therefore likely to favor either the persistence or the killing of the corresponding mycobacteria (3). Induction of a protective pro-inflammatory response via TLR signaling should be to the benefit of the host (19 -21), whereas stimulation of anti-inflammatory response via mannose receptor or dendritic cell-specific intercellular adhesion molecule-3 grabbing non-integrin should be to the benefit of the pathogen (3,10).
The molecular bases of LM/LAM pro-versus anti-inflammatory activities are not yet fully understood. Nevertheless, it seems clear that LM or the lipomannan moiety of LAM bear the intrinsic capacity to induce the production of TNF-␣ and IL-12 (16,22). However, the presence of the arabinan moiety on LAM inhibits the pro-inflammatory activity, presumably by masking the mannan core (23,24) and thus limiting its accessibility to the TLR (16,22). Also, the type of capping motifs may then direct LAM activity toward a pro-(PILAM) or anti-(ManLAM) inflammatory activity (3).
Further insights into deciphering these complex molecular interactions could benefit from the structural and functional characterization of LAM variants. Indeed, lipoglycans are not restricted to members of the mycobacteria, and a number of non-mycobacterial lipoglycans have been isolated and characterized in several actinomycete genera, including Rhodococcus (25,26), Gordonia (27), Amycolatopsis (28), Corynebacterium (29), and most recently Turicella (30) and Tsukamurella (22). In the latter study, we demonstrated that LAM from Tsukamurella paurometabola (TpaLAM) possesses a similar structural prototype when compared with mycobacterial LAM, with distinct mannan and arabinan domains, yet had weak biological activity (22); however, upon chemical degradation of the arabinan domain, the resulting lipomannan moiety elicited a powerful pro-inflammatory response as previously demonstrated with ManLAM (16). Interestingly, the TpaLAM lipomannan moiety is composed of a linear (␣136)-Manp chain linked to the MPI anchor demonstrating that this structure alone is sufficient in providing pro-inflammatory activity, and that, importantly, the branched t-Manp units are not necessarily required (22).
In the present study we report the isolation, structural, and functional characterization of a LM molecule from the Pseudonocardineae, Saccharothrix aerocolonigenes (31). The investigation revealed an original structure, and, furthermore, we demonstrate that LM possessed potent pro-inflammatory activity. As such, the structure/function relationship of the lipomannan is discussed, enabling further insights into the molecular basis of lipoglycan-mediated inflammatory responses.
MATERIALS AND METHODS
Bacteria and Growth Conditions-S. aerocolonigenes, type strain DSM 40034 (S. aerocolonigenes subsp. aerocolonigenes, recently renamed as Lechevaliera aerocolonigenes (32)) was purchased from DSMZ, Germany. It was routinely grown at 30°C in GYM streptomyces medium, which contained 4 g of glucose, 4 g of yeast extract, and 10 g of maltose per liter of deionized water supplemented with 0.05% (w/v) Tween 80. Cells were grown to late log phase and harvested by centrifugation, washed, and lyophilized.
Purification of SaeLM-Purification procedures were adapted from protocols established for the extraction and purification of mycobacterial lipoglycans (33,34). Briefly, the cells were delipidated at 60°C by mixing in CHCl 3 /CH 3 OH (1:1, v/v) overnight. The organic extract was removed by filtration, and the delipidated biomass was resuspended in deionized water and disrupted by sonication (MSE Soniprep, 12 micro amplitude, 60 s on then 90 s off for 10 cycles, on ice). The cellular glycans and lipoglycans were further extracted by refluxing the broken cells in 50% ethanol at 65°C overnight. Contaminating proteins and glucans were removed by enzymatic degradation using protease and ␣-amylase treatments followed by dialysis. The resulting extract was resuspended in buffer A, 15% propan-1-ol in 50 mM ammonium acetate, and loaded onto an octyl-Sepharose CL-4B column (50 ϫ 2.5 cm) and eluted with 400 ml of buffer A at 5 ml/h, enabling the removal of non-lipidic moieties. The retained lipoglycans were eluted with 400 ml of buffer B, 50% propan-1-ol in 50 mM ammonium acetate. The resulting lipoglycans were resuspended in buffer C, 0.2 M NaCl, 0.25% sodium deoxycholate (w/v), 1 mM EDTA, and 10 mM Tris, pH 8, to a final concentration of 200 mg/ml and loaded onto a Sephacryl S-200 HR column (50 ϫ 2.5 cm) and eluted with buffer C at a flow rate of 5 ml/h. Fractions (1.25 ml) were collected and analyzed by SDS-PAGE followed by periodic acid-silver nitrate staining. The resulting lipoglycan fractions were pooled, dialyzed extensively against water, lyophilized, and stored at Ϫ20°C.
Preparation of Deacylated SaeLM-Deacylated SaeLM (dSaeLM) was obtained by incubating 100 g of SaeLM with 200 l of 0.1 N NaOH for 2 h at 37°C. The reaction was stopped by extensive dialysis against water.
MALDI/MS-The matrix used was 2,5-dihydroxybenzoic acid at a concentration of 10 g/l, in a mixture of water/ethanol (1:1, v/v). 0.5 l of SaeLM, at a concentration of 10 g/l, was mixed with 0.5 l of the matrix solution. Analyses were performed on a Voyager DE-STR MALDI-TOF instrument (PerSeptive Biosystems, Framingham, MA) using linear mode detection. Mass spectra were recorded in the negative mode using a 300-ns time delay with a grid voltage of 95% of full accelerating voltage (24 kV) and a guide wire voltage of 0.05%. The mass spectra were mass assigned using external calibration.
Fatty Acid Analysis-200 g of SaeLM was deacylated using strong alkaline hydrolysis (200 l of 1 M NaOH at 110°C for 2 h). The reaction mixture was neutralized with HCl, and the liberated fatty acids were extracted three times with chloroform and, after drying under nitrogen, were methylated using three drops of 10% (w/w) BF 3 in methanol (Fluka) at 60°C for 5 min. The reaction was stopped by the addition of water, and the fatty acid methyl esters were extracted three times with chloroform. After drying, the fatty acid methyl esters were solubilized in 10 l of pyridine and trimethyl-silylated using 10 l of hexamethyldisilazane and 5 l of trimethylchlorosilane at room temperature for 15 min. After drying under a stream of nitrogen, the fatty acid derivatives were solubilized in cyclohexane before analysis by gas chromatography (GC) and gas chromatography-mass spectrometry (GC/MS).
Glycosidic Linkage Analysis-Glycosyl linkage composition was performed according to the modified procedure of Ciucanu and Kerek (35). The per-O-methylated SaeLM was hydrolyzed using 500 l of 2 M trifluoroacetic acid at 110°C for 2 h, reduced using 350 l of a 10 mg/ml solution of NaBD 4 (NH 4 OH 1 M/C 2 H 5 OH, 1:1, v/v), and per-O-acetylated using 300 l of acetic anhydride for 1 h at 110°C. The resulting alditol acetates were solubilized in cyclohexane before analysis by GC and GC/MS. NMR Spectroscopy-NMR spectra were recorded on a Bruker DMX-500 spectrometer equipped with a double resonance ( 1 H/X)-BBi z-gradient probe head. SaeLM (20 mg) was exchanged in D 2 O (D, 99.97% from Euriso-top, Saint-Aubin, France), with intermediate lyophilization, then re-dissolved in 0.4 ml of Me 2 SO-d6 (D, 99.8% from Eurisotop), and analyzed in 200-ϫ 5-mm 535-PP NMR tubes at 343 K. Data were processed on a Bruker-X32 workstation using the xwinnmr program. Proton and carbon chemical shifts are expressed in ppm and referenced relative to internal Me 2 SO signals at ␦ H 2.52 and ␦ C 40.98. The one-dimensional (1D) proton ( 1 H) spectra were recorded using a 90°t ipping angle for the pulse and 1 s as recycle delay between each of the 387 acquisitions of 1.64 s. The spectral width of 3,064 Hz was collected in 16,000 complex points that were multiplied by a Gaussian function (LB ϭ Ϫ1, GB ϭ 0.4) prior to processing to 32,000 real points in the frequency domain. After Fourier transformation, the spectra were baseline corrected with a fourth order polynomial function. The 1D 31 P spectrum was measured at 202.46 MHz at 343 K and phosphoric acid (85%) was used as external reference (␦ P 0.0). The spectral width of 20 kHz was collected in 16,000 complex points that were multiplied by an exponential function (LB ϭ 1 Hz) prior to processing to 32,000 real points in the frequency domain. The scan number was 256. Two-dimensional (2D) spectra were recorded without sample spinning, and data were acquired in the phase-sensitive mode using the time-proportional phase increment method. The 2D 1 H-13 C Heteronuclear Multiple Quantum Correlation (HMQC) and 1 H-31 P HMQC-HOHAHA were recorded in the proton-detected mode with a Bruker 5-mm 1 H broad band tunable probe with reverse geometry. The globally optimized alternatingphase rectangular pulses (GARP) sequence (36) at the carbon or phosphorus frequency was used as a composite pulse decoupling during acquisition. The 1 H-13 C HMQC spectrum was obtained according to Bax and Subramanian pulse sequence (37). Spectral widths of 25,154 Hz in 13 C and 2200 Hz in 1 H dimensions were used to collect a 2,048 ϫ 512 (time-proportional phase increment) point data matrix with 56 scans/t1 value expanded to 4,096 ϫ 1,024 by zero filling. The relaxation delay was 1 s. A sine bell window shifted by /2 was applied in both dimensions. A 1 H-13 C HMBC spectrum was obtained using the Bax and Summers pulse sequence (38). Spectral widths of 25,154 Hz in 13 C and 3,064 Hz in 1 H dimensions were used to collect a 2,048 ϫ 480 (timeproportional phase increment) point data matrix with 80 scans/t1 value expanded to 4,096 ϫ 1,024 by zero filling. The relaxation delay was 1 s. A sine bell window shifted by /2 was applied in both dimensions. A 1 H-31 P HMQC-HOHAHA spectrum was obtained using the Lerner and Bax pulse sequence (39). Spectral widths of 1,620 Hz in 31 P and 3,064 Hz in 1 H dimensions were used to collect a 2,048 ϫ 80 (time-proportional phase increment) point data matrix with 16 scans/t1 value expanded to 4,096 ϫ 1,024 by zero filling. The relaxation delay was 1 s. A sine bell window shifted by /2 was applied in both dimensions. The 2D 1 H-1 H HOHAHA spectrum was recorded using a MLEV-17 mixing sequence of 110 ms (40). The spectral width was 3,064 Hz in both F 2 and F 1 dimensions. 450 spectra of 4,096 data points with 24 scans/t1 increment were recorded. The 2D 1 H-1 H ROESY spectrum was acquired at a mixing time of 300 ms (41). The spectral width was 3,064 Hz in both dimensions. 512 spectra of 2,048 data points with 24 scans/t1 increment were recorded.
TNF-␣ Production by Macrophages-A THP-1 monocyte/macrophage human cell line was maintained in continuous culture with RPMI 1640 medium (Invitrogen), 10% fetal calf serum (Invitrogen) in an atmosphere of 5% CO 2 at 37°C, as non-adherent cells. Purified native or modified SaeLM as well as the other stimuli were added in duplicate or triplicate to monocyte/macrophage cells (5 ϫ 10 5 cells/well) in 24-well culture plates and then incubated for 20 h at 37°C. Stimuli were previously incubated for 1 h at 37°C in the presence or absence of 10 g/ml polymyxin B (Sigma) known to inhibit the effect of (contaminating) LPS (12). To investigate the TLR dependence of TNF-␣-inducing SaeLM activity, monoclonal anti-TLR-2 (clone TL2.1, eBioscience) or anti-TLR-4 (clone HTA125, Serotec) antibodies or an IgG2a isotype control (clone eBM2a, eBioscience) at concentrations of 10 and 20 g/ml were added together with SaeLM to THP-1 cells. Supernatants from THP-1 cells were assayed for TNF-␣ by sandwich enzyme-linked immunosorbent assay using commercially available kits and according to the manufacturer's instructions (R&D Systems). LPS was from Escherichia coli 055:B5 (Sigma), ManLAM and LM were from Mycobacterium bovis BCG, and mahTpaLAM was from T. paurometabola (22).
The sequence of these units was first investigated by 1 Spin systems with weaker intensities were also present. Spin system IV characterized by ␦ H-1 4.71 (Figs. 2A and 3A) and ␦ C-1 99.4 (Fig. 2E) was attributed to 6-␣-Manp unit (Table I) (Table I), with both H-1 units correlating with their own C-6 resonances in the HMBC spectrum (Fig. 2C). Taken (Table I). In a similar manner, spin systems IIb and -c, characterized by ␦ H-1 4.96 and 4.92, respectively (Fig. 3A), and ␦ C-1 102.5 and 102.8, respectively (Fig. 2E), were attributed to t-␣-Manp units (Table I). Finally, spin systems IIIb and -c characterized by ␦ H-1 4.86 and 4.87 (Fig. 3A) and ␦ C-1 99.4 (Fig. 2E) were attributed to 2,6-␣-Manp units (Table I). H-1 of the t-␣-Manp unit (IIc 1 ) at ␦ 4.92 showed an intense inter-residue nOe contact with H-2 of 2-␣-Manp unit (Id 2 ) at ␦ 3.90, whereas H-1 of these 2-␣-Manp unit (Id 1 ) at ␦ 5.11 showed an intense interresidue nOe contact with H-2 of 2,6-␣-Manp unit (IIIb 2 ) at ␦ 3.79. As the main spin systems (Ia, IIa, and IIIa), these spin systems characterize the trisaccharide structure: Manp-(␣132)-Manp-(␣132)- [36)]-Manp-(␣13. The linkage of these units was further confirmed by HMBC experiment (Fig. 2A). Indeed, H-1 of the t-␣-Manp unit (IIc 1 ) at ␦ 4.92 showed an inter-residue correlation with C-2 of 2-␣-Manp unit (Id 2 ) at ␦ 77.8, whereas H-1 of these 2-␣-Manp units (Id 1 ) at ␦ 5.11 showed an inter-residue correlation with C-2 of the 2,6-␣-Manp unit (IIIb 2 ) at ␦ 78.6. In a similar manner, H-1 of spin system Ic (Ic 1 ) at ␦ 5.08 showed an intense inter-residue nOe contact with H-2 of spin system IIIc (IIIc 2 ) at ␦ 3.76 characterizing the sequence 32)-Manp-(␣132)- [36)]-Manp-(␣13. No obvious correlations could be observed with the remaining low abundant spin systems either in the ROESY or HMBC experiments. Thus one may conclude that the trisaccharide motifs characterized by these spin systems are probably located at a specific site in the structure, for example at the beginning or the end of the mannan chain. MPI Anchor-The presence and structure of the MPI anchor was first investigated by 1D 31 P and 2D 1 H-31 P NMR of SaeLM dissolved in Me 2 SO at 343 K. The 1D 31 P showed four resonances at ␦ 1.83, 1.90, 3.58, and 3.83, in a ratio 1.0:1.0:1.4:1.8 indicative of the presence of different SaeLM acyl-forms (Fig. 4). As expected, in the 2D 1 H-31 P HMQC-HOHAHA experiment, each phosphorus resonance correlated with a complex set of signals attributed to protons of myo-inositol and glycerol (Fig. 5, A and B). Protons H-1 of myo-inositol and H-3 and H-3Ј of glycerol were attributed using 1 H-31 P HMQC experimentation (data not shown; Table I). The attribution of the remaining myo-inositol and glycerol signals was resolved using literature data to compare chemical shifts and multiplicity, and further confirmed by 1 H-1 H HOHAHA experiments (Fig. 5C and Table I) (43,45). Glycerol units with phosphates resonating at ␦ 1.83 and 1.90 corresponded to diacylglycerol units, as revealed by the deshielding of their H-2 proton at ␦ 5.12 ( Fig. 5A and Table I). In contrast, Glycerol units linked to phosphates resonating at ␦ 3.58 and 3.83 did not show a deshielded H-2 proton, corresponding to 1-acyl-2-lysoglycerol units (Fig. 5B and Table I). These attributions were in agreement with the chemical shift ranges of the corresponding phosphorus atoms, phosphodiacylglycerols (high field) versus phosphomonoacylglycerols (low field) (45,46). The myo-inositol units derived from the four phosphorus atoms showed very similar proton chemical shifts that indicated non-acylated units (Fig. 5, A and B, and FIG. 2. 1D 1 H (A and B), 2D 1 H- 13 The chemical shift of these protons could not be determined precisely and are given within an interval. b ND, indicate that the chemical shift could not be determined. Table I). According to the nomenclature used in our previous studies (43,45), the phosphorus atoms at ␦ 1.83 and 1.85 corresponded to P2 and P3, respectively, defining an MPI anchor characterized by a diacylglycerol and a non-acylated myoinositol. The phosphorus atom at ␦ 3.58 corresponded to P5, whereas that at ␦ 3.83 corresponded to an acyl-form not previously characterized and termed P6. P6 actually seems to be the equivalent of P2, yet in this case the MPI anchor bears a mono-acylated glycerol. Thus, both P5 and P6 MPI anchors are characterized by a 1-acyl-2-lysoglycerol and a non-acylated myo-inositol. The difference between acyl-forms P2 and P3 on one hand and P5 and P6 on the other most probably arises from the presence or lack of an additional fatty acid positioned on the Manp unit linked at O-2 of the myo-inositol. 2 The MPI anchor of mycobacterial lipoglycans is characterized by a myo-inositol unit glycosylated at position O-2 by a single Manp unit and at position O-6 by a Manp further glycosylated to give rise to the mannan core (1,4). Manp units glycosylating the different myo-inositol units were defined thanks to ROESY experiment. Indeed, the myo-inositol protons H-2 at ␦ 4.14 (P3), 4.18 (P5), 4.21 (P2), and 4.23 (P6) showed intense nOe contacts with protons at ␦ 5.13, 5.13, 5.14, and 5.14, respectively, all tentatively attributed to mannosyl anomeric protons (Fig. 5D). This assumption was confirmed by the correlation of these protons on one hand with anomeric carbons at ␦ 101.5 on the 1 H- 13 ing to the Manp units glycosylating the myo-inositol ring at O-6, ␣-Manp-1 (P6). C-1 of Manp-1 (P6) was found to resonate at ␦ 101.3 on the 1 H-13 C HMQC spectrum (low intensity, not shown) and H-2 at ␦ 3.61 on the 1 H-1 H HOHAHA spectrum (Fig. 3C). H-1 of Manp-1 (P6) showed on the 1 H-1 H ROESY spectrum (Fig. 3D) intense contacts with its own H-2 at ␦ 3.61 and with the myo-inositol (P6) protons H-6 at ␦ 3.65, as indicated above, and with H-4 at ␦ 3.42 (Fig. 5D), but also much weaker nOe contacts with the myo-inositol (P6) H-1 at ␦ 3.98 and H-5 at ␦ 3.09 (not shown). No correlations with anomeric protons could be observed for proton H-6 of myo-inositol units corresponding to phosphorus P2, P3, or P5. However, as these protons are slightly more shielded than the H-6 of myo-inositol (P6), we cannot exclude that they might correlate with the H-1 of the same Manp unit, ␣-Manp-1 (P6), at ␦ 5.15, but the cross-peak might superimpose with that of the intra-residue ␣-Manp-1 H-1/H-2. Altogether, these data strongly indicated that SaeLM exhibits a MPI anchor similar to that of mycobacterial lipoglycans with a myo-inositol unit mannosylated at positions 2 and 6.
FIG. 3. 1D 1 H (A and B) and 2D 1 H-1 H HOHAHA m 110 ms (C) and ROESY m 300 ms (D) spectra of SaeLM in
MALDI-TOF/MS Analysis-MALDI/MS analyses of mycobacterial LAM provided broad unresolved signals due to a complex heterogeneity of molecules differing in term of mannosylation, arabinosylation, and acylation (47). MS analyses of LAM are thus poorly informative, and an average molecular weight is the only information that can be deduced from the spectra. However, due to the lack of any arabinan domain, the mass distribution of LM molecules is still complex, although comparatively simpler as compared with that of LAM. Recently, MALDI/MS has proved to be much more efficient in analyzing LM molecules. Optimization of sample preparation and the choice of a suitable matrix enabled the recording of well resolved spectra resulting in ions corresponding to the different LM glyco-and acyl-forms. 2 Fig. 6 shows the linear negative mode MALDI mass spectrum of SaeLM using 2,5-dihydroxybenzoic acid, in a mixture water:ethanol (1/1, v/v) as a matrix.
The spectrum is dominated by one major set of peaks, with a width of ϳ30 mass units, and separated by 486 mass units, predicting that the SaeLM major glyco-forms differ by three Manp residues. This observation is in total agreement with the structure deduced for the SaeLM mannan domain which corresponds to a polymer of trimannoside units. The peaks were assigned to deprotonated molecular ions (M-H) Ϫ typifying different glyco-forms of SaeLM. The distribution was centered on the peak centered at m/z 7075, tentatively attributed to a SaeLM glyco-form containing 37 Manp residues and an MPI anchor bearing three fatty acyl appendages, two hexadecanoic, and one octadecenoic acids, the main fatty acids detected by GC analysis. However, the width of 30 mass units observed for the peak could be explained by the presence of overlapping ions corresponding to less abundant mono-acylated species, containing either one hexadecanoic acid or one octadecenoic acid and 40 Manp units. In a similar way, the lowest molecular weight SaeLM glyco-forms recorded (Fig. 6) were in agreement with a tri-acylated species with 19 Manp residues or a mono-acylated species with 22 Manp residues. The highest molecular weight SaeLM glyco-forms were in agreement with a tri-acylated species with 58 Manp residues or a mono-acylated species with 61 Manp residues. So, this set of peaks characterized a tri-acylated acyl-form with a mannan domain composed of 19 -58 ␣-D-Manp units and mono-acylated acyl-forms with a mannan domain composed of 22-61 ␣-D-Manp units. The distribution of glyco-forms was centered on the species containing 37 or 40 Manp units, which were also the most abundant species. Besides these major set of peaks, peaks with a lower intensity were recorded corresponding to the same acyl-forms that differed by 162 mass units corresponding to a difference of one Manp unit (see inset in Fig. 6). These species are likely to correspond to minor partially polymerized SaeLM glyco-forms. Altogether, these data allowed us to proposed the structural model depicted in Fig. 7.
TNF-␣ Production by Macrophages-The potency of SaeLM to stimulate the production of TNF-␣ was investigated using human THP-1 monocyte cell lines. SaeLM, when tested at concentrations of 10 and 20 g/ml, induced a strong dose-dependent production of TNF-␣ (Fig. 8A), which was not inhibited by polymyxin B (not shown), indicating that the observed cytokine induction was not due to LPS contamination. In contrast, mycobacterial ManLAM, known to be a poor inducer of pro-inflammatory cytokines, induced a very weak amount of TNF-␣.
It has been previously demonstrated that mycobacterial LM and PIM stimulate the production of TNF-␣ in a TLR-2-dependent fashion (16,48). To investigate the TLR dependence of TNF-␣-inducing SaeLM activity, studies were conducted by measuring the inhibitory effect of cytokine production using specific anti-TLR-2 and anti-TLR-4 antibodies. As shown in Fig. 8B, whereas the anti-TLR-4 and an IgG2 isotype control antibodies had no affect on TNF-␣ production induced by SaeLM, the anti-TLR-2 antibody inhibited this production. These data clearly underscore the role of TLR-2 in mediating the stimulation of TNF-␣ production by THP-1 cells in response to SaeLM.
To gain better insights into the structure/function relationships, SaeLM activity was compared with that of M. bovis BCG LM (BCGLM) and TpaLAM lipomannan core (mahTpaLAM), both known to exhibit a pro-inflammatory activity. When tested at the same concentrations (10 and 20 g/ml), SaeLM was found to exhibit a stronger TNF-␣-inducing activity than BCGLM, the latter being more active than mahTpaLAM (Fig. 8A). These data confirm that the presence of an (␣136)-Manp chain is sufficient in providing pro-inflammatory activity for LM-like molecules as observed for mahTpaLAM. However, the presence of side chains, depending on their length and degree of sophistication, increases this activity. Nevertheless, one should not neglect the acylation pattern that might also influence the relative activity of these LM-like molecules, 2 because deacylated SaeLM (dSaeLM), obtained after an alkaline treatment, was unable to induce TNF-␣ (Fig. 8A). DISCUSSION LM-like molecules are powerful pro-inflammatory lipoglycans found in the cell wall of mycobacteria and some of the related actinomycetes genera, however the structure/function relationships underlying their activity are not fully understood. Nevertheless, it is now established that the intrinsic capacity of lipoglycans to induce a TLR-2-dependent production of proinflammatory cytokines derives from the lipomannan core of the molecule and that this activity is reduced when the lipomannan core is sterically masked by a significant arabinan domain. This has been clearly demonstrated with lipoglycans from Mycobacterium kansasii (16) and T. paurometabola (22). Indeed the LAM of these species exhibits a poor activity; however, upon chemical degradation of the arabinan domain by mild acid hydrolysis, the resulting lipomannan moiety elicits a powerful pro-inflammatory response with a magnitude similar to that of "free" LM. However, mycobacterial LM is a heterogeneous mixture, and the precise structural basis of the interaction with the receptor still remains obscure. Chemical synthesis of LM-like molecules can be hardly envisaged. So, further insights into deciphering these complex molecular interactions could benefit from the fine structural and functional characterizations of lipoglycan variants. Indeed, various actinomycetes species (31) offer the opportunity to study lipoglycans with a high molecular diversity that can be utilized to refine structure/function relationships.
In this context, we report here the isolation and the structural and functional characterization of a new LM variant in the Pseudonocardineae, S. aerocolonigenes (31). The investigation revealed an original structure, and furthermore, we demonstrate that the LM possessed potent pro-inflammatory activ-ity. SaeLM contained mannose as the sole carbohydrate. The main fatty acids esterifying SaeLM are 14-methylpentadecanoic, palmitic, and octadecenoic acids in agreement with the fatty acid composition found in the Saccharothrix genus (49). Per-O-methylation and detailed NMR studies revealed that SaeLM was composed from a (␣136)-Manp chain, which is further substituted at more than 80% of the O-2 positions by side chains composed of the Manp-(␣132)-Manp-(␣13 motif (Fig. 7). The structure of this mannan core, containing dimannoside side chains, is in contrast with that of mycobacterial LM where only single mannopyranosyl units substitute the (␣136)-Manp chain (4), except in a clinical isolate of M. kansasii where the LM has been reported to contain very few dimannoside side chains (50). Interestingly, some discrete trimannoside motifs, defined by spin systems Id, IIc, and IIIb, were identified and are probably located at a particular site in the structure, such as the beginning or the end of the chain. As previously stated the mannan domain of LM molecules is composed of a (␣136)-Manp chain substituted at some O-2 or O-3 positions by lateral side chains. On mycobacterial LM, one unanswered question remains: whether the linear and branched portions of the mannan core form distinct domains or whether 2,6-␣-Manp and 6-␣-Manp units intercalate at frequent regular intervals within the chain. In the case of SaeLM we found that proton H-1 of these two units correlated with their own C-6 resonances in the HMBC spectrum, and furthermore were distinguishable (Fig. 2C). Altogether, this suggests that 2,6-␣-Manp on one hand, and 6-␣-Manp units on the other hand, are interconnected and consequently form distinct domains.
1D 31 P NMR analysis of SaeLM showed that the MPI anchor contained four acyl-forms, with a significant predominance of mono-acylglycerol (62%) as compared with di-acylglycerol (38%) acyl-forms. This distribution is in contrast with the data on M. tuberculosis or M. bovis BCG lipoglycans where the acyl-forms bearing diacylglycerols are the most abundant. In addition, position 3 of the myo-inositol was not acylated. Acylation of myo-inositol on lipoglycans actually seems to be re- stricted to the Mycobacterium genus, because it was not observed in any of the lipoglycans investigated so far arising from the genera, Rhodococcus (25,26), Tsukamurella (22), Amycolatopsis (28), or Turicella (30). These findings are surprising because acylation of myo-inositol was, at least, observed on PIM, in Rhodococcus equi (25). This suggests that there is a different fatty acid modeling system in mycobacteria compared with the other actinomycetes genera. However it has been shown that mycobacterial lipoglycan biosynthesis occurs mainly from tri-acylated PIM precursors (6,51,52). Moreover, position O-3 of myo-inositol is the last one to be acylated during biosynthesis (48,53), and as such, tetra-acylated forms of PIMs may constitute a storage pool of PIMs. MPI anchors corresponding to P2 and P3 on one hand and P5 and P6 on the other hand do not show, as revealed by 2D 1 H-31 P HMQC-HOHAHA, any difference in terms of acylation of the myo-inositol and the glycerol units. The difference probably arises from the acylation state of the fourth potential acylation site of the MPI anchor that cannot be investigated by this approach, i.e. position 6 of the Manp unit linked at O-2 of the myo-inositol (1,4). Indeed it has been found, after purification of the individual acyl-forms of M. bovis BCG LM and analysis by MALDI-MS that P2 and P3 MPI differ by the presence of an additional fatty acid, with P3 containing the additional fatty acid. 2 In a similar way, P6 defined in the present study, which seems actually to be the equivalent of P2, is likely to differ from P5 by the absence of a fatty acid on the Manp unit, because P5 has been shown to bear one in this position. 2 SaeLM MPI anchor was found to be based on a diglycosylated myo-inositol unit substituted at positions O-2 and O-6, by a t-Manp unit and the mannan core, respectively. Indeed the Manp units glycosylating the myo-inositol were clearly identified by NMR experiments. H-1 of ␣-Manp-1 (linked at O-6 of myo-inositol) and of ␣-Manp-2 (linked at O-2 of myo-inositol) units were deshielded as compared with the other carbohydrate anomeric resonances and consequently could be detected despite their low abundance. They were clearly identified on the ROESY spectrum due to intense nOe contacts between their H-1 and H-2 or H-6 of myo-inositol, respectively (Fig. 5). The different anomeric protons of ␣-Manp-2 units corresponding to the acyl-forms characterized by the phosphates P2, P3, P5, and P6 could be readily distinguished. However the anomeric protons of the different ␣-Manp-1 units may overlap.
MALDI/MS, after optimization of sample preparation and recording conditions as well as the choice of a suitable matrix, has recently been shown to be a suitable method for analyzing the molecular distribution of M. bovis BCG LM, in terms of glyco-and acyl-forms. 2 MALDI/MS spectra of SaeLM showed a major set of peaks with a width of around 30 mass units (Fig. 6). They were attributed to a superimposition of ions corresponding to species bearing an MPI anchor containing either one or three fatty acyl groups. However, these results failed to reveal the heterogeneity of SaeLM acyl-forms, as shown by 31 P NMR. This is due mainly to the suppressive effects of MS experimentation that preclude one from obtaining spectra that would reflect the real abundance of the different acyl-forms, which has recently been observed with mycobacterial LM. 2 Nevertheless, MS analysis indicated that SaeLM was composed of a mixture of glyco-forms containing 19 -61 Manp units. The distribution of glyco-forms was centered on the species containing 37 or 40 Manp units. SaeLM appears to be larger in size than M. bovis BCG LM, which shows a distribution of glyco-forms of 15-45 Manp units, being centered on the species containing 26 Manp units. 2 However, SaeLM possesses a (␣136)-Manp chain (average of 14 units) with a size similar to that of M. bovis BCG LM. This results from the fact that SaeLM exhibits dimannoside side chains instead of single t-Manp units for M. bovis BCG LM (43).
In summary, SaeLM exhibits an original structure with the same core domains as described for mycobacterial LM, but with some differences in acylation, branching, and size of the side chains present on the mannan core. Altogether SaeLM, with dimannoside side chains, appears to be the most elaborated non-mycobacterial LM molecule identified to date (Fig. 7).
LM (16 -18) and lipomannan-domain-containing molecules (22) possess conserved structural motifs that are able to induce pro-inflammatory activity, however, in order for optimal induction, this structural moiety must be readily accessible to the host receptor(s). Interestingly, TpaLAM lipomannan core (mahTpaLAM), composed of an (␣136)-Manp chain with no side chains, is a powerful inducer of TNF-␣, demonstrating that an (␣136)-Manp chain is sufficient in providing pro-inflammatory activity and that branched t-Manp units are not necessarily required. LM-like molecules activate immune cell responses via TLR-2; however, the molecular dynamics of LM-like molecules binding to TLR-2, in particular, and in general of TLR ligands binding to their receptor(s) remains unclear (54). Accessory molecules such as LBP or CD14 (16) or heterodimerization with another TLR such as TLR-1 (55) participate in the recognition process of LM. Nevertheless, a direct interaction FIG. 8. TNF-␣ production by human THP-1 monocyte/macrophage cell line in response to SaeLM and various stimuli. A, ManLAM, SaeLM, BCGLM, mahTpaLAM, or dSaeLM were tested at 10 (black bars) and 20 (white bars) g/ml. Polymixin B, when previously added, had no effect on the amount of TNF-␣ released by these stimuli (not shown). LPS from E. coli 055:B5, at a concentration of 0.2 g/ml, induced 1020 pg/ml TNF-␣. B, TLR dependence of SaeLM pro-inflammatory activity. SaeLM at 10 g/ml and the various antibodies (anti-TLR-2, anti-TLR-4, and IgG2 isotype control) at a concentration of 10 (black bars) or 20 (white bars) g/ml were added to THP-1 cells. LPS at a concentration of 0.2 g/ml induced 510 pg/ml TNF-␣. ManLAM and LM were from M. bovis BCG; mahTpaLAM was prepared as previously described (22). BCGLM, LM from M. bovis BCG; dSaeLM, deacylated SaeLM; mahTpaLAM, mild acidic hydrolyzed TpaLAM, i.e. TpaLAM lipomannan core. between the pathogen-associated molecular patterns and TLR-2 receptor has been clearly shown to be involved in signaling (56). Here we found that SaeLM exhibited a stronger TNF-␣-inducing activity than M. bovis BCG LM, the latter being more active than mahTpaLAM. Altogether these data establish that a linear (␣136)-Manp chain, linked to the MPI anchor, is sufficient in providing pro-inflammatory activity. However adding side chains and extending their size increases this activity, most probably as a result of a higher affinity for TLR-2. In addition, the acylation pattern might also influence the relative activity of these LM-like molecules, because deacylated SaeLM (dSaeLM), obtained after an alkaline treatment, was unable to induce TNF-␣. Altogether, our findings provide better understanding of the structure/function relationship of TLR-2 lipoglycan-dependent signaling. In addition, and of more general interest, this study participates in the effort of deciphering the molecular basis of the recognition of pathogenassociated molecular patterns by TLRs. | 2018-04-03T03:45:03.789Z | 2005-08-05T00:00:00.000 | {
"year": 2005,
"sha1": "0850e216e699615ac190548f89df1acaafc2e7b9",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/31/28347.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6c8cb26076daf6672692210cc7002359495f4cff",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
36564348 | pes2o/s2orc | v3-fos-license | Substituting Water for Sugar-Sweetened Beverages Reduces Circulating Triglycerides and the Prevalence of Metabolic Syndrome in Obese but Not in Overweight Mexican Women in a Randomized Controlled Trial
Abstract Background: Mexico's sugar-sweetened beverage (SSB) intake is among the highest globally. Although evidence shows that increases in SSB intake are linked with increased energy intake, weight gain, and cardiometabolic risks, few randomized clinical trials have been conducted in adults. Objective: The aim of this study was to determine if replacing SSBs with water affects plasma triglycerides (TGs) (primary outcome), weight, and other cardiometabolic factors. Methods: We selected overweight/obese (BMI ≥25 and <39 kg/m2) women (18–45 y old) reporting an SSB intake of at least 250 kcal/d living in Cuernavaca, Mexico. Women were randomly allocated to the water and education provision (WEP) group (n = 120) or the education provision (EP)–only group (n = 120). The WEP group received biweekly water deliveries, and both groups received equal monthly nutrition counseling. During nutrition counseling, the WEP group sessions included activities to encourage increased water intake, reduced SSB intake, and substitution of water for SSBs. Repeated 24-h dietary recalls, anthropometric measurements, and fasting blood samples were collected at baseline and at 3, 6, and 9 mo. The Markov–Monte Carlo method was used for multiple imputation; separate mixed-effects models tested each outcome. Results: An intent-to-treat (ITT) analysis indicated that the WEP group increased water intake and decreased SSB intake significantly over time, but there were no differences in plasma TG concentrations between groups at the end of the intervention (WEP at baseline: 155 ± 2.10 mg/dL; WEP at 9 mo: 149 ± 2.80 mg/dL; EP at baseline: 150 ± 1.90 mg/dL; EP at 9 mo: 161 ± 2.70 mg/dL; P for mean comparisons at 9 mo = 0.10). Secondary analyses showed significant effects on plasma TGs (change from baseline to 9 mo: WEP, −28.9 ± 7.7 mg/dL; EP, 8.5 ± 10.9 mg/dL; P = 0.03) and metabolic syndrome (MetS) prevalence at 9 mo (WEP: 18.1%; EP: 37.7%; P = 0.02) among obese participants. Conclusions: Providing water and nutritional counseling was effective in increasing water intake and in partially decreasing SSB intake. We found no effect on plasma TGs, weight, and other cardiometabolic risks in the ITT analysis, although the intervention lowered plasma TGs and MetS prevalence among obese participants. Further studies are warranted. This trial was registered at http://www.clinicaltrials.gov as NCT01245010.
Introduction
Overweight and obesity and their related chronic diseases are public health problems in Mexico (1,2). By 2002, the leading causes of death in the country were coronary heart disease and diabetes (1). The prevalence of metabolic syndrome (MetS) 6 among Mexican women aged >20 y was 52.2% in 2006 2 Author disclosures: None of the authors consulted with the Danone Research Center, but some of the authors received grants to conduct clinical studies (S. Barquera), grants for epidemiologic analyses/talks on beverage patterns at the British Nutrition Society (B. Popkin), or partial support for sabbatical research at the University of North Carolina at Chapel Hill (S. Hernández-Cordero). S. Rodríguez-Ramírez, M. A. Villanueva-Borbolla, T. González de Cossio, and J. Rivera Dommarco, no conflicts of interest. 3 Supplemental Tables 1-4 are available from the Online Supporting Material link in the online posting of the article and from the same link in the online table of contents at http://jn.nutrition.org. 6 Abbreviations used: EP, education provision; HbA1c, glycosylated hemoglobin; ITT, intent-to-treat; MET, metabolic equivalent; MetS, metabolic syndrome; RCT, randomized controlled trial; SSB, sugar-sweetened beverage; WEP, water and education provision.
(3) and that for hypertriglyceridemia was 26.9% (3,4). In 2012, the combined prevalence of overweight and obesity among women >20 y was 73.0%, with obesity representing 37.5% (2). In the same way that substantial health benefits can be achieved with modest weight losses of 5-7% of initial weight (5,6), evidence suggests that a decrease in elevated TG concentrations is associated with a decrease in cardiovascular disease risk (7,8).
MexicoÕs intake of sugar-sweetened beverages (SSBs) is among the highest worldwide. In 2006, the per capita energy contribution from SSBs in adults was 411 kcal/d, or 22.3% of total energy intake (9)(10)(11), and was equally high in 2012, representing 19.0% of total energy intake (12). Evidence from a combination of longitudinal cohorts, small clinical trials, and randomized controlled trials (RCTs) in children shows that increases in SSB intake are linked with increased energy intake, weight gain, and an array of cardiometabolic risks, such as hypertriglyceridemia, low HDL cholesterol, type 2 diabetes, and MetS, among others (13)(14)(15)(16)(17)(18)(19)(20)(21). Nevertheless, critics of the results of the limited number of RCTs conducted in adults have argued that more evidence is needed to support conclusions about the negative effects of SSBs on health (22)(23)(24)(25). One recent 3-arm RCT in U.S. adults tested the replacement of caloric beverages with noncaloric beverages (water or diet beverages) as a strategy to promote weight loss. The results showed no differences in weight loss from baseline weight in the water group or in the low-caloric beverage group compared with that in the control group. However, in a secondary analysis (combining water and low-caloric beverage vs. control group), participants in the beverage-replacement combined group were 2 times as likely to achieve a 5% weight loss by the end of the intervention (P = 0.04) (26). Evidence, albeit limited, suggests that substituting water for SSBs may facilitate weight loss, especially in subjects participating in weight-loss programs (27). Reduction in total energy intake with the subsequent meal in adults (28), short-term effect of increased satiety, reduced feeling of hunger (29), and increased energy expenditure as a result of water-induced thermogenesis (30,31) are some of the suggested potential mechanisms.
We conducted a 9-mo clinical trial to determine whether replacement of SSBs with water, through water provision and nutrition counseling, could reduce plasma TG concentrations as the primary outcome and weight and other cardiometabolic risk factors as secondary outcomes in overweight and obese Mexican women. Secondary analyses examined the effect of initial weight status on the primary outcome.
Participants and Methods
Design. Hernández-Cordero et al. (32) described in detail the methods of this RCT elsewhere. Briefly, this RCT, conducted in Cuernavaca, Mexico, consisted of 2 intervention groups: water and education provision (WEP) and education provision (EP) only. The study was conducted according to the guidelines in the Declaration of Helsinki, and all procedures involving human subjects were approved by the institutional review board of the Mexican National Institute of Public Health. Written informed consent was obtained from all subjects. The study was registered at clinicaltrials.gov (NCT01245010).
Participants. Women aged 18-45 y with a BMI $25 to <39 kg/m 2 who reported SSB intakes of at least 250 kcal/d were recruited, randomly allocated to the intervention groups, and followed for 9 mo. An advertising campaign identified potential participants interested in joining the study. Applicants were screened via telephone to determine if they fulfilled the age and BMI criteria. In those who did, three 24-h dietary recalls (nonconsecutive days, including 2 weekdays and 1 weekend day) were administered by trained interviewers to identify their usual intake of SSBs. The procedures for the analysis of the dietary information are explained in detail below. A broader description of exclusion criteria was published elsewhere (32).
Sample size calculation and random assignment. This study was powered to observe a 31 6 58 mg/dL decrease, from baseline to the end of the intervention, in plasma TG concentrations and a weight loss of 1.8 6 3.4 kg. We needed a sample size of 120 cases, which considered 2-sided tests, with 90% power and an a of 0.05, and allowed for >75% attrition. Women fulfilling all selection criteria (n = 240) were randomly assigned to either of the treatment groups through blocked randomization. Assignments to each of the 24 blocks within the groups were made by random numbers generated with Microsoft Office Excel. Each block included 10 participants.
Intervention. The intervention lasted 9 mo. Because of the characteristics of this intervention, it was not possible to make staff and participants unaware of treatment. The 2 groups were treated identically except that we provided water to the WEP group along with nutrition counseling, including individualized and group meetings targeted to the rationale and strategies to increase water intake, reduce SSB intake, and substitute water for SSBs (see Supplemental Table 1 for detailed characteristics). To ensure water availability, the WEP women received bottled water at home and/or picked it up every 2 wk. We provided 2-3 L of water per participant per day with 1 additional L/d to account for possible consumption by other family members. Women of both groups participated in monthly face-to-face meetings with a dietitian and a psychologist (1 set for each group) either individually or in a group (2-10 participants each). At the end of group meetings, each woman identified her healthy diet goal for the next month. Individual meetings consisted of nutrition counseling with regard to the goal. The WEP and EP groups met separately and received equal attention. For ethical reasons, after final measurements, the EP group participated in an extra meeting with regard to water and SSB intake.
Outcomes and measurements. The primary outcome was change in plasma TG concentrations over a 9-mo period. The secondary outcomes were change in weight and other MetS indicators: waist circumference, percentage of body fat, fasting glucose, total cholesterol, HDL cholesterol, LDL cholesterol, glycosylated hemoglobin (HbA1c), and blood pressure. In addition, we evaluated serum and urine osmolality and estimated MetS prevalence. MetS was defined according to the International Diabetes Federation (33) as waist circumference >80 cm plus any 2 of the following criteria: TGs >150 mg/dL, HDL cholesterol <50.0 mg/dL, high blood pressure (systolic >130 mm Hg and/or diastolic >85 mm Hg), and fasting glucose >100 mg/dL. All measurements were collected at baseline and at 3, 6, and 9 mo except for urine samples and air displacement plethysmography, both of which were measured at baseline and 9 mo, and sociodemographic information, which was collected at baseline only. All assessments were conducted on weekdays between 0700 and 1100 h at the Mexican National Institute of Public Health, except for water delivery and dietary information, which were obtained at the participantÕs home or another place of her preference.
Fasting blood samples were collected in non-anticoagulated and EDTA tubes (for HbA1c determination). Samples were immediately frozen at 280°C until analysis at the end of the intervention. Urine samples were collected and stored until determination of urine osmolality. All analytic measurements were performed at the Mexican National Institute of Public Health. Plasma TG concentrations were measured after lipase hydrolysis in an automatic analyzer with a tungsten lamp (Prestige 24i; Tokyo Boeki Medical System). The interassay CV was 4.4%. Total cholesterol was determined by using enzymatic hydrolysis and oxidation; the interassay CV was 3.9%. HDL cholesterol was measured by using an enzymatic colorimetric direct method after eliminating chylomicrons, VLDL cholesterol, and LDL cholesterol by enzymatic digestion. Glucose concentrations were measured by using an automatized glucose oxidase method, with an overall interassay CV of 2.1%. The proportion of HbA1c was determined by an immunocolorimetric method in whole blood. Finally, serum and urine osmolality were measured by using freezing point depression with a micro osmometer (Fiske 210 Micro-Sample Osmometer; Advanced Instruments).
Resting blood pressure was measured with a digital sphygmomanometer (Omron model HEM-781 INT) on the right arm after 5 min of rest with the participant seated and her back supported. Three measurements were taken with at least 2 min between each measurement, and the mean was used.
Weight was assessed in tight-fitting swimsuits or spandex shorts without shoes with a Tanita (model BWB-627-A, 100-g precision) digital scale. Height was measured at baseline only by using a calibrated, wallmounted stadiometer (model 17802, 2-mm precision; Shorr Productions). Waist circumference was measured in a light-weight hospital gown by using a Gulick tape measure. Waist measurements were obtained at 2 points, the midpoint between the sternum and the umbilicus and the iliac crest, following standard procedures (34). Total body fat was evaluated by using air displacement plethysmography (Bod Pod Life Measurement). This technique is reliable and validated for evaluating body composition (35). Subjects were fasting at the time of measurement. The Bod Pod was calibrated before each measurement by using a 49.273-L cylinder. Subjects were tested while wearing a swimsuit and a swimming cap to compress the hair (35,36). Volume of thoracic capacity was used to correct body volume (corrected body volume = total body volume 2 thoracic capacity). Body fat mass (in kg) was calculated by using SiriÕs equation (37).
A 24-h recall assessed dietary intake during a face-to-face interview on 3 nonconsecutive days during the same week (2 weekdays and 1 weekend day). The recall included a complete audit of foods the participant had consumed during the previous 24 h, and specific probes for all beverages and water included measurement cups for a better estimation of liquid intake. We estimated total energy intake from solid foods and beverages as the average of the 3-d intake for each subject according to the Mexican National Institute of Public Health food composition table, with links to and consistency checks with the USDA National Nutrient Database for Standard Reference (38).
Physical activity was measured by an accelerometer (Actigraph GT3X) worn at the waist for at least 8 h for 4 consecutive days. We estimated total metabolic equivalents (METs) per day as the average MET over 24 h. We estimated intervention adherence through the records of participantsÕ attendance at the individual and group meetings, visits, and phone calls.
Sociodemographic information, collected by questionnaire, included age, years of education, and housing condition, such as flooring and roof materials, ownership of home appliances, and number of rooms. We constructed an indicator of socioeconomic status or well-being through a principal components analysis (39). The statistical models included a standardized factor as a continuous variable. This methodology, which has been validated for describing socioeconomic differentiation within a population, allowed us to classify participantsÕ households into socioeconomic groups (39). Supplemental Table 2 summarizes the timing of measurements and contacts.
Adverse events. We closely monitored the development of any adverse event (any symptom or safety concern requiring medical attention reported by a participant during a contact). Participants reporting potential adverse events were referred to the projectÕs physician.
Statistical analysis. All analyses were performed by using Stata version 12.1 (StataCorp). We performed an intent-to-treat (ITT) analysis. For continuous variables, the Markov-Monte Carlo method was used to impute missing data, generating 10 imputations. The results from the imputation were combined by using the MI Stata command in all analyses (40,41). Baseline demographic characteristics, dietary intake, and primary and secondary outcomes were described by treatment group, with means and SDs for continuous variables and percentages for categorical characteristics. The main effects of time, treatment group, and time by treatment group interaction were examined in separate mixed-effects models for each outcome by using the independent structure of the covariance matrix and taking into account the randomization block. Given that we found no differences when considering the randomization block, we present results without it. We tested both the mean outcomes across time between groups and the absolute and relative changes from baseline to the end of the intervention between groups in all outcome variables, both unadjusted and adjusted for baseline characteristics. Because we found no difference with adjusted models, we present unadjusted results only.
Post hoc secondary analyses according to weight status at baseline with the use of the WHO-recommended cutoffs (overweight, BMI $25.0-29.9 kg/m 2 ; obese, BMI $30.0 kg/m 2 ) were performed to examine the hypothesis that the effect of the intervention would differ across BMI categories, as found in several SSB interventions among young children or adolescents (20,42). Mean outcomes across time between groups and changes from baseline to 9 mo were tested by the interaction of the group effect and weight status at baseline by using mixed-effects models and linear regression models, respectively. We tested the effect of the intervention on MetS at 9 mo by using logistic regression and its effect modification by weight status at baseline. P values <0.05 were considered significant in all analyses. We present mean 6 SEs for continuous variables or as specified and percentages for categorical variables.
ITT analysis
Participants. Of the 1756 women screened, 268 fulfilled the selection criteria and were randomly allocated to the WEP or the EP group. From these, 240 agreed to participate in the study, and baseline measurements were taken. The retention rate for participants with baseline measurements was higher in the WEP group (85.0%) than in the EP group (72.5%) (P = 0.03) (Fig. 1). Attendance by the WEP group (mean 6 SD: 7.3 6 2.4 sessions) was greater than that by the EP group (6.4 6 2.4 sessions) (P = 0.01).
Sociodemographic characteristics and dietary intake among dropouts and women completing the study were similar except for parity, with a greater proportion of nulliparous women finishing the study (28.6% completers, 13.7% dropouts; P = 0.04) (Supplemental Table 3). Baseline characteristics were not different between groups (Tables 1-3). Overall, participants were, on average (6SD), 33.3 6 6.7 y old and obese (BMI: 31.2 6 3.7 kg/m 2 ), 26.2% were nulliparous, and 45.0% had completed middle and high school (data not shown).
Reported dietary intake and physical activity. Reported dietary intake is presented in Table 3. Reported water intake increased in both groups, with a greater increase in the WEP group (P-interaction < 0.001). The increase in water intake started early in the intervention (change from baseline to 3 mo: WEP, 976 6 67 mL/d; EP, 142 6 67 mL/d; P < 0.001). By the end of the intervention, on average, women in the WEP group increased water intake by 1210 6 102 mL/d and those in the EP group by 239 6 91 mL/d (P < 0.001) ( Table 4). Even though participants in both groups decreased their SSB intake in all stages, reduction was greater in the WEP group (change from baseline to 9 mo: WEP, 2252 6 19 kcal/d; EP, 2115 6 27 kcal/d; P < 0.001) ( Table 4). Women in the EP group tended to have a greater decrease in reported solid food intake (P-interaction = 0.07). Both groups reported a decrease in total energy intake by the end of the intervention, with no difference between groups (change from baseline to 9 mo: WEP, 2585 6 55 kcal/d; EP, 2567 6 66 kcal/d; P = 0.8) ( Table 4). Physical activity, measured as METs/d, did not differ between the groups throughout the intervention (Table 3).
Outcomes. The effects of the intervention on the study outcomes are shown in Table 2. The primary outcome, change in plasma TG concentrations, did not differ between the groups (P-interaction = 0.10). There was no significant change at any stage between the groups (Table 4).
Women in both groups lost weight, with no difference between groups (P-interaction = 0.40) ( Table 3). By the end of the intervention, mean weight loss was 21.2 6 0.4 kg in the WEP group and 20.8 6 0.4 kg in the EP group (P = 0.40) ( Table 4). Changes in other outcomes (waist circumference, percentage of body fat, total cholesterol, LDL cholesterol, HDL cholesterol, fasting plasma glucose, HbA1c, systolic and diastolic blood pressure) were not significant (Tables 2-4).
Results by weight status at baseline (secondary analysis)
We tested the effect of the intervention considering the weight status at baseline in the primary and secondary outcomes and found significant results for the primary outcome (plasma TGs) and MetS prevalence. There was no difference in TG concentration at baseline by weight status (144 6 7.60 vs. 160 6 6.00 mg/dL for overweight and obese participants, respectively; P = 0.09). A significant treatment 3 time effect appeared when we considered baseline BMI (overweight vs. obese) for TG concentrations throughout the intervention (P-interaction = 0.02) (Supplemental Table 4). The effect of the intervention differed between women who started the intervention while overweight (BMI $25.0-29.9 kg/m 2 ) and those who started while obese (BMI >30 kg/m 2 ). Among the latter participants, TG concentrations decreased from baseline to 9 mo in the WEP group (228.9 6 7.70 mg/dL; P value for change <0.001), with no change in the EP group (8.50 6 10.9 mg/dL; P value for change = 0.4) (Fig. 2A). There was no difference in MetS prevalence at baseline by weight status (24.5% and 33.8% for overweight and obese, respectively; P = 0.1). The effect of the intervention on MetS prevalence differed by baseline weight status after adjusting for change in physical activity from baseline to 9 mo and for age (P-interaction = 0.02). The estimated MetS prevalence at 9 mo was lower in obese women in the WEP group (18.1%) than in those in the EP group (37.7%) (P value for comparison between groups in obese women = 0.05) (Fig. 2B).
Adverse events
Twenty-two participants from the WEP group reported an adverse event during the intervention. The most common adverse events reported were tiredness, nausea, stress, or frequent urge to urinate. The projectÕs physician assessed their severity and relatedness to the intervention. All participants with reported adverse events were treated and monitored until they improved. No subjects were removed from the study because of an adverse event.
Discussion
This clinical trial showed a significant increase in reported water intake in both the WEP and the EP groups, with the greatest increase in the WEP group supported by an improvement in urine osmolality after 9 mo. Both groups reported significant declines in total energy and SSB intake, and both groups demonstrated a reduction in weight and BMI over time. However, no significant improvements in plasma TG concentrations, weight, or other cardiometabolic risk indicators were observed by intervention group in the ITT analysis. A possible explanation for the lack of effect in the overall sample is the incomplete replacement of SSB consumption. Even though the participants in the WEP group increased water intake, they did not completely replace SSB consumption, which was still considerable at the end of the intervention (155 6 4 kcal/d or 418 6 11 mL/d). There is evidence that a reduction in SSB intake of 355 mL/d is associated with weight losses of 0.5 kg at 6 mo (95% CI: 0.1, 0.8 kg; P = 0.006) and 0.7 kg at 18 mo (95% CI: 0.2, 1.1 kg; P = 0.003) (43). Our results, which showed increased water intake but incomplete substitution for SSBs, are consistent with results from 2 other studies: 1 in Cuernavaca, Mexico (44), the city of our study, and the other in The Netherlands (45), in which changes in water consumption did not result in changes in SSB intake. The first, a cross-sectional qualitative study, explored knowledge of the benefits of water intake among adults with low and high SSB intakes. Participants had similar water intake amounts whether their SSB consumption was low or high, suggesting that drinking water does not necessarily replace SSB consumption (44). The second, a secondary analysis of an RCT in adolescents, showed that a reduction in SSB intake was not explained by an increase in the consumption of water or diet drinks (45).
Another explanation relates to the fact that SSB intake decreased in both groups and that a considerable percentage of women (36.8%) in the EP group had a water intake >1.2 L/d at the end of the intervention (S. Rodríguez-Ramírez, T. González-Cossio, M. Mendez, K. Tucker, I. Méndez-Ramírez, S. Hernández-Cordero, B. Popkin, unpublished data, 2013). The reduction in SSB intake and increase in water intake in the EP group was unexpected, because those participants did not receive information on healthy beverage consumption. Despite our requests that WEP participants not discuss the intervention with the EP participants, contamination from the WEP to the EP group is possible, which would make both groups very similar and affect the intervention results. The nutrition counseling that both groups received did not address weight loss or changes in beverage consumption patterns but instead covered general topics, such as sodium intake, fat content in the diet (unsaturated vs. saturated), and including vegetables in the diet. Nevertheless, it is possible that women in the EP group were motivated by joining this weight-loss study and decided to modify some behaviors that are related to a healthier lifestyle (e.g., increasing water intake or reducing SSBs, topics that received extensive media coverage in Mexico during this period). We adhered to strict attention control limits for both groups. Another potential explanation relates to the total energy intake of participants. In addition to the decrease in SSB intake in the EP group, the participants in this group tended to have a greater decrease in energy intake from solid foods than did those in the WEP group. Thus, even with the WEP groupÕs greater reduction in SSB calories, average total energy intake did not differ between groups at the end of the intervention.
Finally, the fact that a large percentage of women (50%) entered the study with TG concentrations in the normal range (<150 mg/dL) might explain the interventionÕs lack of effect. There is evidence suggesting that the beneficial changes in lipid profile depend on initial concentrations, with a greater response among those with higher concentrations at the beginning of any intervention (46,47). Thus, only half of our study population had the potential to reduce TG concentrations.
Few RCTs have addressed a research question similar to ours in adult populations. Tate et al. (26) studied the replacement of caloric beverages with water or diet beverages as a method to lose weight and improve some cardiometabolic indicators over 6 mo in U.S. overweight and obese adults and included attention controls. There was an improvement in hydration status in the water group, as in our study, measured by urine osmolality, but no significant differences in other metabolic indicators in the ITT analysis except for a significant improvement in fasting glucose in the water group compared with the control. In addition, Tate et al. found a significantly greater likelihood of a 5% weight loss in the 2 intervention arms in a secondary analysis. Another RCT in women and men aged 55-75 y tested the hypothesis that premeal water consumption (500 mL/meal Á d 21 ) would lead to greater weight loss in U.S. overweight and obese individuals consuming a hypocaloric diet than in those who consumed the same hypocaloric diet only (i.e., without premeal water consumption) during 12 wk (48). Adults in the premeal water group had a greater weight loss than did those adhering to the hypocaloric diet only. The authors concluded that, when combined with a hypocaloric diet, water intake of 500 mL before each meal leads to a greater weight loss than a hypocaloric diet alone in adults. The water intake may cause a reduction in energy intake from the meal. The difference between these trial results and our study results might be explained by the difference in the ages of the study populations [participants in the Dennis et al. (48) trial were 55-75 y old vs. 18-45 y old in our trial], the length of the follow-up (3 vs. 9 mo, respectively), and the specific instructions provided to participants (intake of 500 mL of water per meal vs. increase in water intake and decrease in SSB intake, respectively). Finally, in a recently published 12-wk weight-loss phase of a 1-y RCT in overweight and obese U.S. women and men, researchers tested the hypothesis that the amount of weight lost (over 12 wk) and maintained (for 9 mo) in a behavioral management program would be equivalent in participants consuming nonnutritive sweetened beverages compared with those consuming water (49). The authors reported that both groups lost weight during the 12-wk weight-loss phase, with a greater weight loss among the participants in the nonnutritive sweetened beverages group (mean 6 SD: 5.95 6 3.94 kg) than among those in the water group (4.09 6 3.74 kg; P < 0.0001). The results of this trial are difficult to interpret and compare with ours, because no dietary data (food or beverage intake) are included and the follow-up period is shorter (no data on the maintenance period were presented). Our study has some limitations. As discussed by Hernández-Cordero et al. (32), ideally a clinical trial should have a blinded design. However, in food intervention studies this is not possible. The unblinded design might result in an overestimation of the effect of the intervention if there is overreporting of an outcome measure or a change in the promoted behavior. We discuss the latter below. As for the outcome variables, those that are subjective are often found to be biased (50), in contrast with physiologic outcomes, which correspond to our primary outcome and the other cardiometabolic risk factors measured in our study. For dietary information, which we used to evaluate change in beverage intake (including water and SSBs) and dietary intake, we treated all participants identically in interviews to reduce the potential bias of collecting this information differentially. Another potential bias due to the unblinded design is performance bias, which results from a systematic difference in the group follow-ups (51). To reduce this bias, we treated all participants according to a strict protocol. In addition, there is a greater chance of attrition bias. In our study, the EP group had a lower retention rate. The potential effect of a low retention rate is selection bias, which we minimized by using an ITT analysis in our main analysis (51). Another potential limitation that might explain the lack of effect in the ITT analysis is the fact that our control group (the EP group) received nutrition counseling. We 1 Values are means 6 SEs. Data were analyzed by using intent-to-treat analysis, 10 imputations. Changes were calculated from baseline to 3, 6, and 9 mo without adjustments for covariates; n = 120 for WEP and n = 120 for EP. A repeated-measures mixed-effects model analysis was used to test the mean through time, all P . 0.05 (data not shown). EP, education provision; HbA1c, glycosylated hemoglobin; mOsm, milliosmole; WEP, water and education provision. 2 Measured at baseline and 9 mo only. decided to include nutrition counseling for the EP group to ensure attention control comparability between the groups, so that the only differences between them were the water provision and the additional information on increasing water intake while decreasing SSB intake. However, even though the topics included in the counseling did not address weight loss, both groups might have been willing to modify dietary behaviors not addressed by the intervention (i.e., the EP group increased water intake). A potential way to overcome this could have been to include a third comparison group, which would have made the study more expensive and logistically more complex.
Another potential limitation of our study is misreporting. In-depth analyses of our dietary data suggest a high proportion of underreporting, defined by the disparity between reported energy intake and predicted energy requirements from doubly labeled water equations adjusted for energy deficits on the basis of weight changes and total energy expenditure (S. Rodríguez-Ramírez, M. Mendez, S. Hernández-Cordero, T. González de Cossio, B. Popkin, unpublished data, 2013). These analyses indicate that underreporting increased from 11% of the sample at baseline to 42% by the end of the intervention. The misreporting made it difficult to interpret the potential impact of dietary changes throughout the study on the lack of effect of the intervention in the ITT analysis. In addition to the misreporting expected in a weight-loss trial, described in other studies (54)(55)(56)(57)(58)(59), in our study the underreporting of SSB consumption might be higher in the WEP group given that the intervention discouraged SSB consumption, a phenomenon that others have reported (58,59).
Although percentage of body fat was not 1 of our main outcomes, the Siri equation that was used to estimate it has not been validated in Hispanics but has been used by other scholars (60,61). Finally, another potential limitation is the restriction of our study population to women. This puts a constraint on the conclusions we can draw from our results, which are applicable only to overweight and obese women.
Secondary analyses suggest that weight status at baseline was an effect modifier of TG change during follow-up and of the MetS at the end of the study. Plasma TG concentrations and the MetS decreased among obese women in the WEP group. This change among obese women is possibly explained by a greater physiologic response to a modification intervention in subjects with greater risk (i.e., heavier initial weight), as others have suggested (62). A pilot RCT found that baseline BMI was an effect modifier in an intervention to examine the effect of decreasing SSB consumption on body weight in adolescents. Among subjects in the upper BMI tertile, BMI change differed significantly between the intervention and control groups (20). Similar results were reported in a child cohort in the United States (42). Another possibility is a stronger desire for behavior change among obese subjects in our trial. Although initial higher weight predicted low compliance in weight-management treatments (63,64), in our study the change in SSB intake was greater among women with a BMI >30 kg/m 2 , even after considering the potential effect of underreporting, as defined above (change from baseline to 9 mo in plausible reporters: overweight WEP, 2236 6 31 kcal/d; overweight EP, 2148 6 41 kcal/d; P = 0.090; obese WEP, 2228 6 41 kcal/d; obese EP, 242 6 40 kcal/d; P = 0.003). Water intake was similar. Obese women in the WEP group had the highest water intake at 9 mo (plausible reporters: overweight WEP, 1532 6 123 mL/d; overweight EP, 1043 6 124 mL/d; P < 0.008; obese WEP, 2071 6 133 mL/d; obese EP, 905 6 148 mL/d; P < 0.001). The results of secondary analyses are worth mentioning considering the high impact in countries such as Mexico, where the obesity prevalence is considerably high-37.5% of Mexican women in 2012 (2). Furthermore, the existing evidence of the association of hypertriglyceridemia with risk of coronary heart disease (65,66) and the potential benefits of decreasing TG concentrations (8) highlights the importance of the findings of this study. However, these results should be considered cautiously, because further investigation is needed.
In conclusion, overall, this study found that providing water and nutritional counseling was effective in increasing water intake but insufficient to achieve a complete substitution of water for SSBs among these overweight and obese Mexican women, which may have contributed to the lack of change in plasma TGs, weight, and other cardiometabolic risks in the ITT analysis. Other potential explanations of the lack of effect are that both groups decreased SSB intake, resulting in a great proportion of the EP group having an SSB intake similar to the WEP group; total energy intake did not differ between groups because of the trend of decreased energy intake from solid foods among women in the EP group; and the baseline mean plasma TG values were near normal in our study population. Secondary analyses suggest that the intervention lowered plasma TGs and MetS among obese women only. The results of both the ITT and secondary analyses indicate the need for more research in efficacy trials focused on the effect of SSB intake reduction on MetS risks and the possible differential effect according to initial weight status. | 2018-04-03T03:49:13.700Z | 2014-09-03T00:00:00.000 | {
"year": 2014,
"sha1": "55d54306df54310d4625bbd75f35b3bba1cf7e15",
"oa_license": null,
"oa_url": "https://academic.oup.com/jn/article-pdf/144/11/1742/28301559/1742.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "55d54306df54310d4625bbd75f35b3bba1cf7e15",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3182253 | pes2o/s2orc | v3-fos-license | Hox Transcription Factors: Modulators of Cell-Cell and Cell-Extracellular Matrix Adhesion
Hox genes encode homeodomain-containing transcription factors that determine cell and tissue identities in the embryo during development. Hox genes are also expressed in various adult tissues and cancer cells. In Drosophila, expression of cell adhesion molecules, cadherins and integrins, is regulated by Hox proteins operating in hierarchical molecular pathways and plays a crucial role in segment-specific organogenesis. A number of studies using mammalian cultured cells have revealed that cell adhesion molecules responsible for cell-cell and cell-extracellular matrix interactions are downstream targets of Hox proteins. However, whether Hox transcription factors regulate expression of cell adhesion molecules during vertebrate development is still not fully understood. In this review, the potential roles Hox proteins play in cell adhesion and migration during vertebrate body patterning are discussed.
Introduction
Homeobox genes (Hox genes) were initially identified in Drosophila through genetic mutations that resulted in transformations of one body segment to another-socalled homeotic transformations [1]. Homeoboxes are 183bp sequences that encode highly conserved 61-amino-acid homeodomains with helix-turn-helix motifs that are responsible for binding specific DNA sites [2]. Homeodomain proteins are transcription factors that modulate expression levels of their target genes [3,4]. In amniotes including mammals and birds, 39 Hox genes are arranged in four clusters on different chromosomes. Numerous genetic analyses of lossand gain-of-function mutations in mice have revealed that Hox genes play pivotal roles in determining the identities of cells and tissues in the developing embryo. In adult animals, Hox expression is required for the proliferation and differentiation of hematopoietic cells [5][6][7] and renewal of the endometrium [8][9][10]. Because HOX genes are frequently deregulated in human cancer cells, HOX proteins can be used as both diagnostic markers and therapeutic targets for malignant tumors [11].
Recent studies have converged on identifying downstream Hox target genes. Genome-wide techniques such as microarrays and chromatin immunoprecipitation have been used to identify Hox-regulated genes in Drosophila, mice, and cultured cells. The Hox target genes identified thus far are very diverse with regard to their roles in cellular identity and function. The proteins encoded by the target genes are involved in transcriptional regulation, signal transduction, cell shape, and cell adhesion and migration, as well as in the cell cycle and cell death [12][13][14][15]. However, the diverse mechanisms of Hox regulation pose a challenge for elucidating the exact mechanisms by which Hox proteins determine cell identities and where in the molecular cascade they exert their effects. The mechanisms by which Hox transcription factors regulate cellular events are not fully understood.
Since the mouse neural cell adhesion molecule (N-CAM), a mediator of cell adhesion in nervous system tissues during embryonic development, was first identified as a Hox target [16], a number of studies have reported that other cell adhesion molecules, such as cadherins and integrins, are downstream targets of Hox proteins. Cadherins constitute a large superfamily of transmembrane glycoproteins that mediate calcium-dependent intercellular adhesion in most tissues and play important roles in a wide variety of cellular events [17,18]. Integrins are heterodimers composed of two transmembrane proteins, namely, and subunits [19]. The and extracellular domains cooperatively bind to extracellular matrix components such as collagen, laminin, 2 BioMed Research International (Table 1). Furthermore, potential roles for Hox proteins in cell adhesion and migration during vertebrate development will be discussed.
Structural and Functional Organization of Hox Genes in Drosophila and Mammals
In Drosophila, eight Hox genes are clustered in two groups: the Antennapedia complex (ANT-C) and Bithorax complex (BX-C) ( Figure 1). The order of genes along the chromosome corresponds to their domains of function along the anteriorposterior axis of the animal. reduced (Scr) and Antennapedia (Antp) are required for the identities of the first and second thoracic segments, respectively. Ultrabithorax (Ubx) is responsible for specifying third thoracic segment identity, and Abdominal A (Abd-A) and Abdominal B (Abd-B) contribute to specifying abdominal segment identities. In homeotic mutants, these specific segmental identities can be changed. For example, a loss-offunction mutation in Ubx gives rise to flies with two sets of wings, due to the transformation of the third thoracic segment into one with second thoracic segment identity. This transformation, referred to as "anteriorization, " is caused by the functional substitution of the more anterior gene Antp for Ubx. In mammals, 39 Hox genes are organized in four different clusters (HoxA, HoxB, HoxC, and HoxD) found at four distinct chromosomal loci ( Figure 1). These clusters are thought to have arisen by two duplication events during the emergence of the vertebrates. Based on the nucleotide sequence similarities between the Hox genes and their Drosophila counterparts, these genes are classified into 13 homology groups, referred to as paralogs [20]. As observed in Drosophila, the order of these paralogs on their respective chromosomes shows collinearity with the spatiotemporal expression pattern of these genes in the embryo [21]. Hox expression can be seen in the neural tube, neural crest, paraxial mesoderm, and surface ectoderm, along the anteriorposterior axis. The 3 Hox genes are expressed more anteriorly and earlier, while the 5 Hox genes are expressed more posteriorly and later [22,23]. Morphological analyses of Hox knockout mice show that the segmental identity of the body along the anterior-posterior axis is primarily determined by the posterior-most Hox gene expressed in the segment [24]. Disruption of all Hox10 paralogs results in the conversion of lumbar vertebrae into thoracic vertebra-like structures with rib projections. Similarly, when all Hox11 paralogs are deleted, sacral vertebrae are transformed into vertebrae with lumbar identity [25]. Thus, homeotic transformations comparable to those in Drosophila occur in mutant mice that are null for all the paralogs belonging to a particular group. To directly investigate how Hox cluster duplications contributed to morphological innovations in vertebrates during evolution, mutant mouse embryos, in which full Hox clusters are deleted, have been generated. Mice lacking all HoxA and HoxD functions in their forelimbs show an early developmental arrest of the limbs and severe truncations of distal elements, suggesting that the evolutionary recruitment of Hox proteins into growing appendages leads to distal extension of tetrapod appendages [26]. Deletion of both HoxA and HoxB clusters results in a heart-looping defect that is recognized as an atavistic phenotype, suggesting that both HoxA and HoxB clusters were necessary for vertebrate heart evolution [27]. In addition, a growing body of recent work highlights the significance of functional organization of Hox gene clusters in vertebrate evolution [28][29][30][31][32].
Cell Adhesion Molecules Identified as Hox Realizators during Segment-Specific Organogenesis in Drosophila
In Drosophila, 17 different proteins that contain cadherin domains have been identified. Of these, E-cadherin and two N-cadherins are considered classical types, while the remaining 14 cadherins are regarded as nonclassical cadherins [35]. In addition, Drosophila has 5 integrin subunits ( PS1-5) and 2 integrin subunits ( PS and ]) [36]. These cell adhesion molecules play versatile roles in the development and adult life of Drosophila and interact with cytoplasmic proteins to form adhesion complexes that link their intracellular domains with the cytoskeleton [37].
Posterior spiracles connect the tracheal respiratory systems of Drosophila larvae to the external environment. The Hox gene Abd-B is required to induce the specification and morphogenetic movements required for posterior spiracle formation, as evidenced by the lack of spiracles in Abd-B mutants and formation of ectopic spiracles when Abd-B is ectopically expressed [38,39]. A study by Lovegrove et al. [40] has provided a framework for understanding how Abd-B controls posterior spiracle formation. Abd-B activates three transcription factors, spalt (sal), empty spiracle (ems), and cut (ct), and a signaling molecule, unpaired (upd, the ligand of the JAK/STAT pathway), the expression of which leads to the activation of realizator molecules controlling cell adhesion. The Abd-B direct target Ct promotes the E-cadherin expression that is responsible for ectodermal cell invagination during the formation of the spiracular chamber, the internal tube connecting the trachea to the exterior of the larva. The expression of four nonclassical cadherins in different spiracle cell domains is controlled by several regulators (Sal, Ems, Ct, and Upd) that partially overlap in expression. E-cadherin and nonclassical cadherins cooperate to control spiracle cell invagination, suggesting that these adhesive molecules, which function in the Abd-B-regulated molecular cascade, play crucial roles in spiracle organogenesis.
The salivary gland is a simple tubular organ composed of two major cell types: secretory and duct cells [58]. The Hox protein Scr, which forms a transcriptional complex with the extradenticle and homothorax homeodomain proteins, is required for salivary gland formation, as evidenced by the complete absence of salivary glands because of Scr loss of function [41]. Although Scr is critical for the specification of salivary gland fates, the protein cannot directly maintain salivary gland cell identity because it disappears early in salivary gland development [58]. Once specified, the salivary gland primordium forms a placode of columnar epithelial cells within the ventral ectoderm [42]. The PS1 gene, which encodes an integrin subunit, is expressed in the salivary gland primordium formed within the ventral ectoderm. At later embryonic stages, PS1 expression is maintained in invaginating and posteriorly migrating secretory cells that keep in contact with the visceral mesoderm substratum. Embryos carrying Scr mutations lack PS1 expression in the salivary primordium, suggesting that PS1 is a downstream target of Scr [42]. In PS1 mutants, the distal tip of invaginating secretory cells reaches the turning point of the visceral mesoderm, but these cells fail to migrate posteriorly [41]. These salivary gland defects, observed when Scr and PS1 expression is lost, suggest that integrin PS1 participating in the Scr-directed molecular cascade is essential for the salivary gland to migrate posteriorly along the visceral mesoderm.
Hox-Regulated Cell Adhesion Molecule Expression in Cultured Normal and Cancer Cells
The neural cell adhesion molecule (N-CAM), a member of the immunoglobulin superfamily, is involved in cell adhesion, intracellular signaling, and cytoskeleton dynamics [59].
The effects of Hox proteins on N-CAM promoter activity have been investigated by cotransfecting NIH 3T3 mouse embryonic fibroblasts with constitutively active Xenopus Hox constructs and a reporter gene construct containing the mouse N-CAM promoter sequence. Hox2.5 (Hoxb9) greatly increases the transcriptional activity of the reporter gene, while transfection of Hox2.4 (Hoxb8) eliminates its activity [16]. Hoxc6 also stimulates the transcriptional activity driven by the N-CAM promoter [49]. Together, these findings suggest that N-CAM is a downstream target for regulation by Hoxb8, Hoxb9, and Hoxc6. HOXD3 overexpression in human erythroleukemia HEL cells results in an increase of cell-extracellular matrix adhesiveness, giving rise to elevated 3 integrin expression levels [45,46]. Human lung carcinoma A549 epithelial cells transfected with HOXD3 exhibit an increase in 3 integrin expression and this modification promotes migratory and invasive behavior [33,60]. HOXD3 expression elicits phenotypic changes in human umbilical vein endothelial cells (HUVECs), switching them from a resting to angiogenic or invasive state by enhancing v 3 integrin expression [47]. HOXD3 directly binds to the 3 integrin promoter in human microvascular endothelial cells [61]. While HOXD3 causes an increase in 3 integrin expression in several cell lines, HOXB3, which is paralogous to HOXD3, is not involved in 3 integrin expression in endothelial cells [62]. Although the HOXA3 paralog is functionally similar to HOXD3 with respect to promotion of cell migration, these transcription factors do not have common downstream target genes [63]. 3 integrin mRNA levels are increased in endometrial adenocarcinoma cells transfected with a HOXA10 expression vector and are decreased in the cells treated with a HOXA10 antisense construct [54]. HOXA10 directly regulates 3 integrin expression in endometrial cells, mediating the effects of steroid hormones, estrogen and progesterone, on 3 integrin expression [54]. The HOXA10 transcription factor interacts with a specific 3 integrin cis element, activating 3 integrin transcription during differentiation of U937 cells into a myeloid lineage [55]. Increased adhesion of differentiating U937 cells to fibronectin is dependent upon a HOXA10induced increase in 3 integrin expression [55]. Thus, expression of 3 integrin can be controlled by at least two HOX proteins that belong to different paralogous groups, possibly reflecting the redundant functions of the different HOX paralogs.
In addition to 3 integrin, expression of several integrins is reportedly regulated by HOX transcription factors. An approximate 20-fold increase in 8 integrin expression levels is caused by ectopic Hoxa11 expression in human embryonic kidney 293 cells [56]. During development, 8 integrin and Hoxa11 are coexpressed in mouse metanephric mesenchyme BioMed Research International 5 cells. Mutations in the 8 integrin gene give rise to a bud branching morphogenesis defect that is very similar to that observed in Hoxa11/Hoxd11 mutant mice. Furthermore, a regional reduction in 8 integrin expression is found in the developing kidneys of Hoxa11/Hoxd11 mutant mice [56]. These findings suggest that 8 integrin is a major realizator of Hoxa11/Hoxd11 function in the developing kidney.
In ovarian cancer epithelial cells, HOXA4 suppresses cell motility and spreading through the medium by increasing cell-cell adhesion and 1 integrin protein levels [48]. Loss of HOXD1 expression in HUVECs results in a decrease in cell motility and cell-extracellular matrix adhesiveness, accompanied by decreasing 1 integrin expression levels, suggesting HOXD1 is a positive regulator of cell motility and cell-extracellular matrix adhesiveness in endothelial cells [44]. Thus, it is possible that cell-extracellular matrix interactions mediated by different types of integrin molecules are dependent on the assortment of HOX genes expressed and the amount of protein they produce in nonmalignant and malignant cells.
A Role for Hox Proteins in Epithelial to Mesenchymal Transition and Its Reverse Process in Normal and Cancer Cells
Epithelial to mesenchymal transition (EMT) is an event in which adherent epithelial cells are converted into migratory mesenchymal cells that can invade the extracellular matrix. The EMT process is essential for gastrulation and neural crest migration during the development of the early vertebrate embryo. EMT also plays a role in cancer metastasis. Mesenchymal to epithelial transition, the converse of EMT, is observed in many aspects of embryonic development and tumor metastasis, suggesting that epithelial and mesenchymal morphologies are reversible [64]. HOX expression is reported to be closely associated with the transition between epithelial and mesenchymal states. HOXA7 transcripts are absent from normal ovarian surface epithelial cells, but HOXA7 protein is produced in ovarian tumors derived from epithelial cells, which often resemble epithelia composing the Müllerian duct. Ectopic HOXA7 expression in immortalized ovarian surface epithelial (IOSE-29) cells induces E-cadherin expression and downregulates expression of the mesenchymal marker vimentin, enhancing the epithelial phenotype [50]. Hoxa10 is required for proper patterning of the uterus during embryonic development and functional endometrial differentiation in adults [65]. Downregulation of HOXA10 expression in endometrial carcinomas correlates with increased tumor grade and promotes tumor growth and invasive properties [53]. Forced expression of HOXA10 in endometrial carcinoma (SPEC2 and KLE) cells induces E-cadherin expression, suppresses vimentin expression, and inhibits their invasive behavior [53]. The findings described above suggest that HOXA7 and HOXA10 expression promotes mesenchymal to epithelial transition.
In contrast, HOXD3 overexpression in lung cancer A549 cells transforms them from epithelial to mesenchymal morphology ( Figure 2) and causes a simultaneous reduction in E-cadherin expression levels and increase in 3 and 3 expression [33]. This was the first study reporting that HOX gene expression enhances the invasive and metastatic properties of human cancer cells. Primary breast carcinomas and distant metastases of various organs exhibit significantly higher HOXB7 expression levels than normal mammary epithelial cells [51]. Overexpression of HOXB7 in MCF10A cells, an immortalized cell line derived from normal human mammary epithelial cells, induces their transformation from cobblestone-like epithelial morphology to spindle-shape mesenchymal morphology, which brings about a dramatic reduction in expression of E-cadherin and tight junction proteins, claudin 1, claudin 4, and claudin 7, as well as an elevation in -smooth muscle actin expression [51]. Similarly, HOXB9 overexpression in MCF10A cells transforms them from an epithelial phenotype into a mesenchymal phenotype by reducing E-cadherin expression levels and increasing vimentin expression [52]. These findings suggest that HOXD3, HOXB7, and HOXB9 transcription factors serve as EMT inducers in immortalized cells and cancer cells.
Whether EMT-inducing HOX proteins have the ability to regulate adhesion molecule gene expression directly or where in the signal transduction pathway HOX proteins exert their effect to induce EMT warrants clarification. HOX proteins have been reported to control expression of some regulatory molecules. HOXA10 inhibits expression of Snail, a zincfinger transcription factor, in endometrial carcinoma cells [53]. Snail, a key regulator of EMT, downregulates E-cadherin expression, leading to the loss of epithelial morphology in cells undergoing migration during embryonic development as well as tumor progression [66][67][68]. These results clearly suggest that downregulation of HOXA10 expression induces EMT by elevating Snail expression levels. HOXB9 induces elevated expression of signaling molecules, TGF-1 and TGF-2, in MCF10A cells, leading to increased cell motility and acquisition of mesenchymal phenotypes [52]. Members of the TGF-family play crucial roles in initiating and maintaining EMT during embryonic development and tumor metastasis [69,70]. These findings indicate that HOXB9 expression induces EMT by activating the TGF-signaling pathway.
During development of the vertebrate embryo, neural crest cells initially reside within the dorsal neural tube, subsequently undergo EMT to migrate to distant locations, and then differentiate into a wide range of derivatives. When neural crest cells delaminate from the neuroepithelium, Ncadherin and cadherin 6B are downregulated and 1 integrin and cadherin 7 are upregulated [71]. The EMT process is controlled by a hierarchical gene regulatory network in which transcription factors and signaling molecules operate [72]. A recent study [43] has demonstrated that anterior Hox genes interact with components of this network to induce neural crest fates in the chick embryo. Expression of Hoxb1 in the trunk neural tube induces expression of the key transcription factors Snail and Msx1/2, leading to downregulation of Ncadherin and cadherin 6B expression and upregulation of cadherin 7. These changes in cell adhesion molecule expression possibly reflect that Hoxb1 causes neural crest EMT. It is interesting to note that expression of Hox genes participates [33] were fixed and stained for nuclei and F-actin by using DAPI and phalloidin-rhodamine, respectively. A549-vec cells have epithelial morphology (a, b), while A549-HOXD3 cells have spindle-shape mesenchymal morphology (c, d). A reduction in E-cadherin expression and an increase in 3 and 3 integrin expression were observed in A549-HOXD3 cells, as compared to A549-vec cells [33].
in EMT events that occur during embryonic morphogenesis as well as tumor progression.
Possible Association between Hox Expression and Cell-Cell and Cell-Extracellular Matrix Interactions in the Vertebrate Embryo during Development
When neural crest cells delaminate from the dorsal neural tube by EMT, these cells lose N-cadherin on their surfaces. [73][74][75]. As mentioned previously, HOXD3 promotes cell motile activity and invasiveness in lung cancer cells [33].
To investigate whether HOXD3 expression regulates cell adhesiveness in dorsal neural tube or roof plate cells in the early mouse embryo, transgenic mouse embryos were generated that overexpress HOXD3 in these cell types under the control of the Wnt1 regulatory element [34]. Dorsal neural tube cells expressing HOXD3 expand ventrally within the neural tube (Figures 3(a), 3(b), 3(e), and 3(f)). This finding raises the possibility that HOXD3-expressing roof plate cells propagate in the dorsal neural tube and then migrate ventrally. Furthermore, in the neural tube ventricular zone, a large number of progenitor cells that do not express N-cadherin protein can be observed in HOXD3expressing transgenic embryos (Figures 3(c), 3(d), 3(e), and 3(f)). Although HOXD3 expression is localized in the dorsal half of the neural tube and in cells immediately adjacent to the floor plate, progenitor cells that do not express N-cadherin are distributed throughout the ventricular zone. This finding indicates that HOXD3 expression has a non-cell-autonomous effect, negatively affecting N-cadherin expression in cells at a distance from those expressing HOXD3. Therefore, signaling molecules or secreted proteins whose expression is induced by HOXD3 likely reduce N-cadherin expression.
Gastrulation is an essential process in the development of most animals. In amniotes, gastrulation begins with the acquisition of asymmetry in the early embryo. The movements of epiblast cells towards the midline of the embryo form the primitive streak. At the streak, epiblast cells undergo EMT, ingress, and migrate inwardly to their proper positions where they differentiate into mesodermal and endodermal tissues. Consequently, the three definitive germ layers, ectoderm, mesoderm, and endoderm, are organized. The crucial role of FGF signaling in regulating cell migration is highlighted by the effect of altering fibroblast growth factor receptor 1 (FGFR1) expression. In Fgfr1-deficient [34]. These embryos were sectioned and analyzed using in situ hybridization. Expression of lacZ (control) is restricted to roof plate cells within the neural tube, while HOXD3 expression is localized not only in the dorsal neural tube, but also within the ventricular zone and in ventral regions of the neural tube. (c, d) N-Cadherin expression in the thoracic neural tubes of 12.5-day lacZ-and HOXD3-expressing transgenic embryos. Transverse sections were stained using anti-human N-cadherin antibodies [34]. N-Cadherin is strongly expressed in the ventricular zone of mouse embryos, epiblast cells fail to undergo EMT, which is required for ingression through the primitive streak [76]. The defect is attributed to a failure in Snail upregulation and E-cadherin downregulation. This finding shows that FGFR1 regulates epiblast cell migration by differentially regulating the intercellular adhesion properties of these cells at the primitive streak. Furthermore, this study suggests that Snail expression downstream of FGFR1 is required for normal downregulation of E-cadherin. In the early chick embryo, PDGF signaling plays a major role in the migration of mesodermal cells during gastrulation [77]. PDGFA expression in the epiblast controls N-cadherin expression and activates PDGFR , which is required for migration of mesodermal cells away from the primitive streak. The timing of ingression is orchestrated by temporal and spatial collinear activation of Hox genes that starts in the epiblast [78]. Expression of posterior Hox genes can delay the time at which cells ingress from the epiblast into the primitive streak and nascent mesoderm. Within a region of epiblast cells expressing a given Hox gene, a subpopulation of epiblast cells that express the neighboring 5 Hox gene exists. These cells acquire slightly different migratory properties, and their ingression is slightly delayed. Ingressing cells expressing Hox genes from successive paralogous groups might sort out from each other along the anterior-posterior axis [24,78]. The target genes of Hox proteins and the mechanism by which they control ingression remain to be elucidated; however, the targets might include genes encoding factors that regulate EMT, such as cell-cell and cell-extracellular matrix adhesion molecules [79].
In the developing mouse embryo, Hox3 paralogs play crucial roles in the formation of neural crest, somatic mesoderm, and endoderm-derived structures in the cervical region, including the pharyngeal arches [80,81]. Hoxa3 is essential for the development of the thymus, thyroid, parathyroid glands, and ultimobranchial bodies [82]. These organs develop concurrently and they are composed of cells that migrate from their original sites in the pharynx and pharyngeal pouches to their final positions in the cervical and upper thoracic regions. The ultimobranchial bodies fuse with the thyroid; the cells disperse within the thyroid lobes and then differentiate into calcitonin-producing C-cells. Mice doubly mutant for Hoxa3 and Hoxb3 or Hoxa3 and Hoxd3 show that the ultimobranchial bodies fail to migrate to their normal positions in the thyroid, suggesting that expression of Hox3 paralogs is required for the organized movement of primordial organs in the pharyngeal tissues [83]. The thymus and parathyroid glands originate from both the neural crestderived mesenchymal cells of the pharyngeal arches and the pharyngeal endoderm. Conditional deletion of Hoxa3 alleles from neural crest cells results in the development of ectopic thymus and parathyroid glands [84], raising the possibility that Hoxa3 controls neural crest cell migration in pharyngeal regions. In the chick embryo, knockdown of Hoxa3 function by using antisense morpholino oligonucleotides disrupts the migration of epibranchial placode-derived cells and neural crest cells, indicating that Hoxa3 is required for the migration of these cell types [85]. Although these findings show that Hoxa3 and its paralogs are regulators of cell migration, the target genes for Hox3 proteins are not known. Genes encoding molecules involved in regulating cell-cell and cellextracellular matrix interactions could be candidate Hox3 paralog targets.
During vertebrate limb development, posterior Hox genes in the HoxA cluster are expressed in a specific spatiotemporal manner along the proximodistal axis. Hoxa13 is expressed in the autopod during normal limb development. In the chick embryo, misexpression of Hoxa13 in the entire limb bud results in a marked size reduction of the zeugopodal cartilage due to homeotic transformation into cartilage of a more distal type [86]. When limb mesenchymal cells are dissociated and cultured in vitro, Hoxa13-expressing cells sort out from Hoxa13-nonexpressing cells. This finding indicates that Hoxa13 expression is involved in modulation of cellcell adhesiveness. Mice homozygous for a Hoxa13 loss-offunction mutation show major defects in the formation of autopod skeletal elements [87]. Autopod-derived mesenchymal cells in homozygous Hoxa13 mutant embryos fail to form chondrogenic condensations in vitro, and mutant cells in the distal region fail to sort out from wild-type cells in the proximal region [57]. This failure in cell sorting reflects the fact that Hoxa13 expression is involved in determining cell surface properties. Eph proteins, which constitute a large family of receptor tyrosine kinases, interact with cell surfacebound ligands, ephrins [88,89]. Eph/ephrin juxtacrine signaling modulates cell morphology, motility, and attachment. A marked reduction in EphrinA7 expression prevents mesenchymal cells in the autopod of homozygous Hoxa13 mutant embryos from forming chondrogenic condensations in vivo and in vitro [57]. EphrinA7 has been shown to be a direct downstream target of Hoxa13 and Hoxd13 during limb development [90]. Furthermore, using a CHIP-onchip approach (chromatin immunoprecipitation with DNA microarray technology), the gene loci of cadherin 12 (also known as Br-cadherin or N-cadherin 2) and protocadherins are identified as direct Hoxd13 binding sites in the developing mouse limb bud [91]. It has been reported that the cadherin 12 protein is exclusively expressed in the developing and adult mouse brain [92,93]. Cadherin 12 does not seem to function in the limb bud. On the other hand, N-cadherin is abundant in the distal limb bud and increases in the distal region as limb development proceeds [94,95]. N-Cadherin-positive mesenchymal cells segregate from N-cadherin-negative cells in vitro, suggesting that N-cadherin plays an important role in cell sorting. However, the relation between N-cadherin and expression of Hox genes during limb development is presently unknown.
Concluding Remarks
In this review, cell adhesion molecules mediating cell-cell and cell-extracellular matrix interactions, whose expression is directly or indirectly controlled by Hox transcription factors, have been the focus. In Drosophila, cadherins, components of the hierarchical Abd-B-regulated molecular pathway, play an important role in the formation of posterior spiracles during development. Integrin molecules participate in the Scr-directed molecular cascade that is required for salivary gland formation and migration. In cultured normal and malignant mammalian cells, expression of several Hox genes enhances cell-extracellular matrix adhesion and cell motility by activating integrin expression. Several Hox proteins play a role in epithelial-mesenchymal transition and its reverse process by reducing and elevating cadherin expression. Hox proteins likely do not regulate cadherin expression directly; Hox proteins might control cadherin expression by using transcription factors and signaling molecules as intermediaries. To elucidate the exact processes governed by Hox proteins, it is worthwhile to investigate whether cell adhesion molecule expression is directly controlled by Hox proteins or where in the Hox-directed molecular cascade cell adhesion molecules function.
In this review, I have discussed the necessity of Hox expression for neural crest migration, gastrulation, migration of organs in the pharyngeal regions, and limb bud formation in the vertebrate embryo during development. These developmental processes require precise regulation of cell adhesion and migration. How Hox proteins are related to expression of cell adhesion molecules during vertebrate body patterning is not fully understood. The highly redundant functions of Hox genes pose a challenge when attempting to clarify the association between Hox transcription factors and expression of a diverse set of cell adhesion molecules. However, as the gaps in the puzzle are filled by future research findings, the precise mechanisms by which Hox proteins govern expression of cell adhesion molecules will be uncovered.
Conflict of Interests
The author declares that there are no conflicts of interest regarding the publication of this paper. | 2017-08-30T06:18:56.323Z | 2014-07-21T00:00:00.000 | {
"year": 2014,
"sha1": "eb5184429602cc88c53f0f6cde7353521222c8a1",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2014/591374.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ed716f0e893b9d94e8e99e42800fad7b62fdb02",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
195695194 | pes2o/s2orc | v3-fos-license | Hormonal Contraceptives, Female Sexual Dysfunction, and Managing Strategies: A Review
In recent decades, hormonal contraceptives (HC) has made a difference in the control of female fertility, taking an unequivocal role in improving contraceptive efficacy. Some side effects of hormonal treatments have been carefully studied. However, the influence of these drugs on female sexual functioning is not so clear, although variations in the plasma levels of sexual hormones could be associated with sexual dysfunction. Permanent hormonal modifications, during menopause or caused by some endocrine pathologies, could be directly related to sexual dysfunction in some cases but not in all of them. HC use seems to be responsible for a decrease of circulating androgen, estradiol, and progesterone levels, as well as for the inhibition of oxytocin functioning. Hormonal contraceptive use could alter women’s pair-bonding behavior, reduce neural response to the expectation of erotic stimuli, and increase sexual jealousy. There are contradictory results from different studies regarding the association between sexual dysfunction and hormonal contraceptives, so it could be firmly said that additional research is needed. When contraceptive-related female sexual dysfunction is suspected, the recommended therapy is the discontinuation of contraceptives with consideration of an alternative method, such as levonorgestrel-releasing intrauterine systems, copper intrauterine contraceptives, etonogestrel implants, the permanent sterilization of either partner (when future fertility is not desired), or a contraceptive ring.
Introduction
In recent decades, hormonal contraception (HC) has made a difference in the control of female fertility, taking an unequivocal role in improving contraceptive efficacy. Moreover, there are numerous studies that state that the use of hormonal contraceptives is very prevalent in the female population of childbearing age [1][2][3][4][5][6][7][8]. In a study carried out by Hall et al. in 2012, it was estimated that 63% of women of reproductive age worldwide who were married or in a relationship were using some type of contraception, with the contraceptive pill as the third most commonly used method (9% of women aged 15-19 years) [3,9]. Combined oral contraception seems to be the most popular form of reversible contraception in Europe and the United States [7,8].
The popularity and widespread use of hormonal contraceptives is partly due to their benefits, such as: (1) Being a highly effective and reversible form of contraception; (2) the woman has control over
Materials and Methods
The aim of this review is developing, assimilating, and synthesizing the existing evidence about the influence of hormonal contraception on female sexual function. In addition, we intended to identify gaps in knowledge in this field in order to design new studies that may fill those gaps in the future. Our review focuses on the use of hormonal contraceptives in women of childbearing age and on the influence of these drugs on female sexual function [1,2]. In addition, the study reviews the differences in the influence of the HCs on female sexual function (FSF) according to the hormonal composition and the mechanism of action of the different HCs in order to determine which one has the lowest profile of secondary effects in the sexual area. On the other hand, to our knowledge, this is the latest effort to offer an overview of the recommended strategies in cases in which the use of HCs is associated with sexual dysfunction.To achieve this purpose, we performed a scoping review following PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) (Figure 1). In this review, we selected key articles based on hormonal contraception and female sexual function. PubMed and Cochrane were chosen as the main databases used due to the extensive contents of biomedical research they offer, their free access, and their ease of use. Our search term combinations were: "Hormonal contraception" AND "female sexual function" OR "female sexual dysfunction." The filters "publication date: From 2000/01/01 to 2019/01/31" and "review" were applied in the search in order to limit the amount of material available. No language restrictions were applied. Similar and related articles that were considered of special interest for our review were also included, and they were compiled though cross-referencing. Similarly, some relevant clinical practice guidelines were included. The 64 papers that were included were chosen because they fit the topic of the review (presenting information about female sexual dysfunction, hormonal contraception, hormonal variations, and their relationship with female sexual function; directly treating the impact of hormonal contraceptives in female sexual function; or providing relevant information about the management strategies of female sexual dysfunction associated with the use of HCs). We reviewed six prospective observational studies, eight clinical trials, 19 cross-sectional studies, 22 reviews, and nine other works that include consensus and clinical practice guidelines. Most of the studies were carried out in European countries, although there were also studies carried out in the US, Asia, Australia, and South America. The population of the studies reviewed varied between 40 and 18,787, although in the case of clinical trials, the largest population analyzed was 600 subjects. on the influence of these drugs on female sexual function [1,2]. In addition, the study reviews the differences in the influence of the HCs on female sexual function (FSF) according to the hormonal composition and the mechanism of action of the different HCs in order to determine which one has the lowest profile of secondary effects in the sexual area. On the other hand, to our knowledge, this is the latest effort to offer an overview of the recommended strategies in cases in which the use of HCs is associated with sexual dysfunction.To achieve this purpose, we performed a scoping review following PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) (Figure 1). In this review, we selected key articles based on hormonal contraception and female sexual function.
PubMed and Cochrane were chosen as the main databases used due to the extensive contents of biomedical research they offer, their free access, and their ease of use. Our search term combinations were: "Hormonal contraception" AND "female sexual function" OR "female sexual dysfunction." The filters "publication date: From 2000/01/01 to 2019/01/31" and "review" were applied in the search in order to limit the amount of material available. No language restrictions were applied. Similar and related articles that were considered of special interest for our review were also included, and they were compiled though cross-referencing. Similarly, some relevant clinical practice guidelines were included. The 64 papers that were included were chosen because they fit the topic of the review (presenting information about female sexual dysfunction, hormonal contraception, hormonal variations, and their relationship with female sexual function; directly treating the impact of hormonal contraceptives in female sexual function; or providing relevant information about the management strategies of female sexual dysfunction associated with the use of HCs). We reviewed six prospective observational studies, eight clinical trials, 19 cross-sectional studies, 22 reviews, and nine other works that include consensus and clinical practice guidelines. Most of the studies were carried out in European countries, although there were also studies carried out in the US, Asia, Australia, and South America. The population of the studies reviewed varied between 40 and 18,787, although in the case of clinical trials, the largest population analyzed was 600 subjects. We summarized the findings and best practice recommendations for addressing a woman's contraception and its potential association with sexual function. We excluded those articles that focused on male sexual dysfunction, menopause, and sexual dysfunction related to medical disease, We summarized the findings and best practice recommendations for addressing a woman's contraception and its potential association with sexual function. We excluded those articles that focused on male sexual dysfunction, menopause, and sexual dysfunction related to medical disease, such as oncological pathology. Every attempt was made to combine as much similar data as possible. Institutional review board approval was not needed for this review.
Hormonal Contraceptives
The combined oral contraceptive (COC) was first approved in 1960. Since then, it has undergone many evolutions in dosage, hormone type, and regimen. It has been used by more than 100 million women worldwide and has the widest geographic distribution of any method of contraception [10]. In this section, we will provide detailed information about hormonal contraceptives in terms of the existing types, their hormonal composition, their mechanism of action, and the alterations in hormonal function that derive from them.
Types
At present, there are twenty different contraceptive methods approved by the FDA [20], ten of which are female hormonal contraceptive methods: Eight are reversible contraceptive methods, and two are emergency contraceptive methods. In Table 1, we can see the different categories of hormonal contraceptives mentioned.
Route of administration
Dosing frequency Levonorgestrel 1.5 mg Oral Swallow the pills as soon as possible within 3 days after having unprotected sex.
Ulipristal Acetate
Oral Swallow the pills within 5 days after having unprotected sex.
Hormones
The hormonal composition of hormonal contraceptives is based on progestins alone or on a combination of progestogens and estrogens [10,[20][21][22][23][24]. Several different progestins are used in combined oral contraceptives (COCs). These progestins may also have estrogenic, antiestrogenic, androgenic, antiandrogenic, or antimineralocorticoid activity [10]. Most progestins are 19-nortestosterone derivatives. Progestins may be classified according to their chemical structure as an estrane (norethindrone, norethindrone acetate, ethynodiol diacetate) or as a gonane (LNG, desogestrel, norgestimate). In general, gonane progestins appear to be more potent than the estrane derivatives (smaller doses can be used), but other differences between the estrane and gonane compounds are difficult to characterize [10]. Table 2 shows the classification of progestogens used in hormonal contraception according to their androgenic potency. Among the contraceptive progestins available in the United States, norgestrel and levonorgestrel are the most androgenic; norethindrone and norethindrone acetate are less androgenic; and desogestrel, etonogestrel, norgestimate, dienogest, and drospirenone are the least androgenic [2]. Newer progestins (norgestimate and desogestrel) have little or no androgenic activity, whereas other progestins (cyproterone acetate, drospirenone, and dienogest) have antiandrogenic activity [10]. The varying progestational "potencies" attributed to different COC preparations are based on pharmacological experimental models. Many variables affect the potency of COCs (including dosage, bioavailability, protein binding, receptor binding affinity, and interindividual variability), making it difficult to extrapolate the results of isolated experiments to provide clinically relevant information in humans. There is no clear clinical or epidemiological evidence that compares the relative potencies of currently available COCs [10]. Systemic progestins may be associated with a loss of sexual desire due to the suppression of ovarian function and endogenous estrogen production [6]. Along the same line of reasoning, in their study about women's self-reported sexual desire across natural cycles, Roney and Simmons observed that levels of salivary progesterone negatively predicted women's sexual desire [25,26]. Furthermore, based on the findings by Grebe et al., effective dosages of progestin should be associated with a stronger positive linkage between women's loyalty/faithfulness to their relationship partners and the frequency with which they engaged in sexual intercourse with their partners [26,27]. However, contraceptive pills with progestogens with antiandrogenic effect do not affect sexual desire, according to some reports [28,29]. In recent studies, drospirenone and dienogest have reported a positive effect on sexual response as well as attraction, desire, satisfaction, and coital frequency [28,30], perhaps due to the ability to reduce the activity of 5-alpha reductase [31]. With regard to estrogens as hormonal components of hormonal contraceptive methods, three types of estrogens are used in COCs (as it can be seen in Figure 2): Ethinylestradiol (EE), estradiol valerate (E2V), and 17 beta-estradiol (E2). E2V is rapidly metabolized to E2 [10]. Due to its biochemical structure, estradiol has less impact on the synthesis of hepatic proteins than ethinyl estradiol, which is likely to result in a better metabolic and vascular profile [3]. The new formulations of launched COCs have lower doses of estrogen, and EE has been replaced by more "physiological" forms of estrogen, such as 17β-estradiol (E2) or E2-Valerate (E2 V) [32]. There is some evidence to suggest that estrogens play an essential role in female sexuality, and prior research has found that declining sexual functioning in women is most closely related to declining estrogen levels [6,33] Similarly, levels of salivary estradiol positively predicted women's sexual desire, conversely to progesterone [25,26]. Regarding loyalty and faithfulness, dosages of estradiol should predict a weaker positive linkage between women's loyalty/faithfulness to their relationship partners and frequency of sexual intercourse (not including masturbation and sexual fantasies; independently of androgenicity of sexual hormones) [26,27].
Mechanism of Action of Hormonal Contraceptives
In Table 1, we can see a summary of the different categories of hormonal contraceptives mentioned with their respective mechanism of action of hormonal contraceptives. The mechanism of action of hormonal contraceptives depends on their hormonal composition and the route of administration.
Combined hormonal contraceptives (CHCs) encompass oral contraceptives (pill), patch, and the vaginal ring. Their mechanism of action is similar.
With regard to combined oral contraceptives (COCs), they have multiple mechanisms of action due to both their estrogenic and progestational components: The suppression of pituitary gonadotropin secretion (inhibiting ovulation), the increase of cervical mucus viscosity (impairing sperm transport), the suppression of the luteinizing hormone (LH), and the impairment of ovulation [10].
The patch is a 20 cm 2 square matrix system that delivers 200 mg of norelgestromin (the primary active metabolite of norgestimate) and 35 mg of ethinylestradiol (EE) daily to the systemic circulation. Following the first application of the patch, serum hormone levels increase gradually over the first 48-72 h, reach a plateau, and then remain constant during the remainder of the 21-day period. Compared with COC, plasma hormone levels remain constant, and the peak levels are lower because first-pass hepatic metabolism and gastrointestinal enzyme degradation are avoided. Curiously, although peak levels are lower, the area under the curve, which represents overall EE exposure, is larger. One patch is applied weekly for three consecutive weeks, followed by a one patch-free week. The patch can be placed on one of four sites: The buttocks, upper outer arm, lower abdomen, or upper torso, excluding the breast [10].
The ring releases 15 mg of EE and 120 mg of the progestin etonogestrel (ENG) (the active metabolite of desogestrel) per day, which is absorbed through the vaginal epithelium. Serum hormone levels increase immediately after ring insertion and then decrease slowly over the cycle [10]. The vaginal route is an ideal method of drug administration, and the advantages of this method are well established. By avoiding gastrointestinal absorption and the hepatic first-pass effect, the vaginal administration of contraceptives enables the use of lower hormonal doses and the achievement of steady drug concentrations [34].
There is another group of hormonal contraceptives only composed of progesterone. This group can include the progestin-only pill, depot medroxyprogesterone acetate (DMPA), and the etonogestrel implant. Progestin-only pills (POPs, the "mini-pill") provide reliable, reversible contraception and have very few contraindications. The main mechanism of action is the alteration of the cervical mucus (more viscid, less copious) and the inhibition of sperm penetration. Negative luteinizing hormone (LH) feedback leads to the suppression of ovulation in up to 50% of users. POPs containing desogestrel may inhibit ovulation more consistently [21].
DMPA is administered intramuscularly at three-month intervals (every 12-13 weeks) and is thus considered a long-acting reversible contraceptive (LARC) by some and a short-acting reversible contraceptive (SARC) by others. DMPA works primarily by inhibiting the secretion of pituitary gonadotropins, thereby suppressing ovulation. Women enter a hypoestrogenic state, and their
Mechanism of Action of Hormonal Contraceptives
In Table 1, we can see a summary of the different categories of hormonal contraceptives mentioned with their respective mechanism of action of hormonal contraceptives. The mechanism of action of hormonal contraceptives depends on their hormonal composition and the route of administration.
Combined hormonal contraceptives (CHCs) encompass oral contraceptives (pill), patch, and the vaginal ring. Their mechanism of action is similar.
With regard to combined oral contraceptives (COCs), they have multiple mechanisms of action due to both their estrogenic and progestational components: The suppression of pituitary gonadotropin secretion (inhibiting ovulation), the increase of cervical mucus viscosity (impairing sperm transport), the suppression of the luteinizing hormone (LH), and the impairment of ovulation [10].
The patch is a 20 cm 2 square matrix system that delivers 200 mg of norelgestromin (the primary active metabolite of norgestimate) and 35 mg of ethinylestradiol (EE) daily to the systemic circulation. Following the first application of the patch, serum hormone levels increase gradually over the first 48-72 h, reach a plateau, and then remain constant during the remainder of the 21-day period. Compared with COC, plasma hormone levels remain constant, and the peak levels are lower because first-pass hepatic metabolism and gastrointestinal enzyme degradation are avoided. Curiously, although peak levels are lower, the area under the curve, which represents overall EE exposure, is larger. One patch is applied weekly for three consecutive weeks, followed by a one patch-free week. The patch can be placed on one of four sites: The buttocks, upper outer arm, lower abdomen, or upper torso, excluding the breast [10].
The ring releases 15 mg of EE and 120 mg of the progestin etonogestrel (ENG) (the active metabolite of desogestrel) per day, which is absorbed through the vaginal epithelium. Serum hormone levels increase immediately after ring insertion and then decrease slowly over the cycle [10]. The vaginal route is an ideal method of drug administration, and the advantages of this method are well established. By avoiding gastrointestinal absorption and the hepatic first-pass effect, the vaginal administration of contraceptives enables the use of lower hormonal doses and the achievement of steady drug concentrations [34].
There is another group of hormonal contraceptives only composed of progesterone. This group can include the progestin-only pill, depot medroxyprogesterone acetate (DMPA), and the etonogestrel implant. Progestin-only pills (POPs, the "mini-pill") provide reliable, reversible contraception and have very few contraindications. The main mechanism of action is the alteration of the cervical mucus (more viscid, less copious) and the inhibition of sperm penetration. Negative luteinizing hormone (LH) feedback leads to the suppression of ovulation in up to 50% of users. POPs containing desogestrel may inhibit ovulation more consistently [21].
DMPA is administered intramuscularly at three-month intervals (every 12-13 weeks) and is thus considered a long-acting reversible contraceptive (LARC) by some and a short-acting reversible contraceptive (SARC) by others. DMPA works primarily by inhibiting the secretion of pituitary gonadotropins, thereby suppressing ovulation. Women enter a hypoestrogenic state, and their progesterone is low due to anovulation. DMPA also increases the viscosity of cervical mucus (minor mechanism of action) and induces endometrial atrophy [21].
The single-rod etonogestrel subdermal implant (Implanon/Implanon NXT/Nexplanon) is a LARC. The single-rod implant contains 68 mg of the progestin etonogestrel (ENG) and provides contraception for three years. The ENG implant works primarily by inhibiting ovulation and consistently does so until the beginning of the third year of use. Ovarian activity, including estradiol synthesis, is still present. The ENG implant causes a thickening of the cervical mucus and changes in the endometrial lining [21].
The last group is formed by intrauterine contraceptives (IUCs). This group includes copper intrauterine devices (Cu-IUDs) and levonorgestrel-releasing intrauterine systems (LNG-IUS). Only LNG-IUS are explained in this section, because Cu-IUDs do not have a hormonal component. The chief mechanism of action of all IUCs is the prevention of fertilization; they may also have post-fertilization effects, including the potential inhibition of implantation. The LNG-IUS produce a weak foreign body reaction and endometrial changes that include endometrial decidualization and glandular atrophy. The primary mechanism of action is via changes in the amount and the viscosity of cervical mucus, which acts as a barrier to sperm penetration. Ovulation is likely inhibited in some women, but it is preserved in most study subjects. Endometrial estrogen and progesterone receptors are suppressed, which results in changes in bleeding patterns and may contribute to its contraceptive effect [22].
Hormonal Alterations of Hormonal Contraceptives and Their Influence on Female Sexual Function
In contrast to animal species in which linear relationships exist between hormonal status and sexual behavior, sexuality in the human population is remarkably complex and is not determined so simply by the level of sexual steroids [29].
Hormonal contraceptives (HCs) are responsible for a decrease of circulating androgen levels [1,2,29,35], as well as a decrease of the baseline serum levels of estradiol [6,29,35] and progesterone [35] and the inhibition of oxytocin functioning [35]. However, the concentrations of the follicle-stimulating and luteinizing hormones are similar in freely cycling women and in women using HCs [35]. Decreased circulating androgen levels with oral combined hormonal contraceptive (CHC) use, and its negative effects on sexual life, occur by two mechanisms, as follows: (1) An oral CHC increases sex hormone-binding globulin (SHBG) and decreases free testosterone, and (2) androgen production from the ovary is suppressed with an oral CHC. This antiandrogenic effect may be magnified with an oral CHC containing an antiandrogenic progestin [2]. Thus, all CHCs are antiandrogenic, although some formulations, depending on the specific progestin, are more so than others. The patch and the vaginal ring are more antiandrogenic than the pill [1]. As expected, the baseline serum levels of estradiol and progesterone are significantly higher in freely cycling women than in women using an HC. Nevertheless, the concentrations of the follicle-stimulating and luteinizing hormones are similar in both groups [35]. In respect of oxytocin, its functioning is likely to be altered by this variation in the peripheral estradiol and progesterone levels that were found to be altered in women using HCs, and, therefore, a potential mechanism could be related to the direct binding of progesterone to oxytocin receptors (OXTRs), thereby inhibiting OXTR functioning.
The association between hormones and sexuality is multidimensional, as several hormones are important in the regulation of sexual behavior [29].
Though some evidence shows that testosterone has a role in sexual function for women, these conclusions are derived primarily from studies involving postmenopausal women reporting sexual dysfunction [2]. It has been established that sexual desire, autoeroticism, and sexual fantasies in women depend on androgen levels [29]. However, the relevance of changes in androgen levels for an individual woman is unclear, and some women may be more sensitive to androgen level alteration than others [2]. The review by Casey et al. mentioned that most of the studies showed alterations in SHBG and testosterone levels; however, an overall lack of association was found between CHCs and sexual desire [2]. In other studies, decreased levels of estrogen and testosterone in older women have been associated with decreased libido, sensitivity, and erotic stimuli [29]. In addition, it has been found that patients using birth control pills may present with decreased libido. On the other hand, there are reports that suggest that progestogens with antiandrogenic effects in contraceptive pills do not affect sexual desire [29]. While there is conflicting evidence concerning a link between progestins and libido, there is some evidence to suggest that estrogens play an essential role in female sexuality. In this respect, prior research has found that declining sexual functioning in women is most closely related to declining estrogen levels [6].
Finally, with regard to oxytocin, Scheele et al. [35] describe in their work the possible functional implications of oxytocin in female sexuality and the alterations that occur in women who take hormonal contraceptives. Multiple lines of evidence suggest that the hypothalamic peptide oxytocin (OXT) is a key factor modulating pair-bonding behaviors, which means a strong affinity that develops in humans and some species between a mating couple.
In humans, peripheral OXT concentrations are significantly higher in new lovers compared with singles. Likewise, OXT reduces jealousy ratings and neural responses in an imagery task of sexual partner infidelity. OXT also increases the arousal induced by infant photos in nulliparous women and promotes responsiveness to infant crying and laughter by reducing activation in anxiety-related neural circuits. Moreover, OXT has been found to increase the intensity of orgasm and contentment after copulation. Nevertheless, OXT seems to not have an effect on vital signs. The results of the research by Scheele et al. [35] indicate that endogenous OXT concentrations at baseline positively predicted striatal responses to the romantic partners' faces in all female participants. This mechanism was disturbed in those women using an HC, indicating that the partner-specific modulatory effects of OXT are antagonized by gonadal steroids. HC use alters women's pair-bonding behavior (evident in decreased attractiveness ratings of masculine faces), reduced neural response to the expectation of erotic stimuli (a preference shift towards olfactory cues of genetic similarity), and increased sexual jealousy. Furthermore, women who use an HC while choosing partners are more likely to initiate an eventual separation, and wives who discontinue HC use tend to be less satisfied with marriage if they perceive their husband's face to be less attractive. On the other hand, women prefer masculine faces and exhibit higher levels of intersexual competition related to attractiveness at peak fertility in the menstrual cycle; however, these cyclical shifts were found to be diminished in women using an HC. In conclusion, OXT interacts with the brain reward system to reinforce partner value representations in both sexes, a mechanism which may significantly contribute to stable pair-bonding in humans and appears to be altered in women using an HC.
Sexual Dysfunction
To talk about the effects on sexual function, it is first convenient to define the concept of sexual dysfunction, as well as the types of female sexual dysfunction that are currently described. In this section, the methods used and validated to quantify the degree of sexual dysfunction are also briefly discussed. In addition, an estimate of the prevalence of sexual dysfunction in the female population of childbearing age is shown.
According to the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition), sexual dysfunctions are a heterogeneous group of disorders that are typically characterized by a clinically significant disturbance in a person's ability to respond sexually or to experience sexual pleasure [18,36]. On the contrary, we would define "sexual health" as a state of physical, emotional, mental, and social well-being related to sexuality; it is not merely the absence of disease, dysfunction, or infirmary. Sexual health requires a positive and respectful approach to sexuality and sexual relationships, as well as the possibility of having pleasurable and safe sexual experiences, free of coercion, discrimination, and violence [37].
Therefore, optimal sexual function transcends the simple absence of dysfunction [18]. In this regard too, multiple studies have shown a strong positive association between sexual function and the health-related quality of life [18]. Having said that, it can be gathered that the female sexual function is complex and multifactorial, and it is influenced by many biological, psychological, and environmental factors [2,5,18,29]. Therefore, a complete understanding of women's sexual function requires the individual assessment of these factors. The biopsychosocial approach recognizes that biological, psychological, interpersonal, and sociocultural factors can all affect female sexual function, and these factors interact with each other in a dynamic system over time. Biological factors may include hormonal changes that affect the libido or medical/anatomical problems that affect genital sexual response. Psychological factors include mood symptoms, like depression or anxiety, or negative behaviors such as critical self-monitoring during sexual activity. Some examples of interpersonal factors include general satisfaction in the woman's relationship with her partner, which is closely tied to overall sexual satisfaction, as well as quality of communication in the relationship. Finally, some sociocultural factors to consider include the woman's attitudes about menopause and aging, as well as religious, cultural, and other social values regarding sex [18].
When assessing alterations of sexual function possibly related to hormonal contraceptives, other factors that may also affect it should be taken into account. For example, sex hormones (mainly low levels of estradiol), physical and mental well-being, availability of a partner, feeling for her partner, illness and its treatments, changes in social circumstances, and low socioeconomic status could have an impact on women's desire and sexual responsiveness [5,18]. Therefore, there are several factors that can affect female sexual function which should be explored by health providers for an adequate diagnostic and therapeutic approach to sexual dysfunction. However, there are studies that show in their results that sexual health is not a widely explored area for health providers in general. Mercer et al. showed that only 21% of women with persistent sexual problems discuss it with their healthcare provider [18,38]. Furthermore, a recent survey in the USA reported that the majority of gynecologists routinely ask patients about their sexual activities, but most other areas of patients' sexuality, such as sexual problems, including pleasure and satisfaction, are not routinely discussed [34,39].
Theoretical models of women's sexual response can provide a framework for a better understanding female sexual dysfunction. Three of these models are briefly explained here. First, according to the Masters-Johnson model, sexual response progresses predictably and linearly from excitement to plateau, orgasm, and resolution. The main focus of this model is on the physical response of the genitals. Secondly, Helen Singer Kaplan noted that many individuals had problems with sexual desire, denoting the importance of desire to sexual response. In the 1970s, she modified the Masters-Johnson model to a three-phase model of desire, excitement, and orgasm. Thirdly, in 2000, Rosemary Basson and colleagues proposed an alternative circular model of female sexual response. This model has several distinguishing features. On the one hand, spontaneous desire (or "sexual drive") on the part of the woman is not always the starting point for sexual activity. On the other hand, this model emphasizes that sexual stimuli often precede physical arousal and desire, and sexual arousal and desire often co-occur. Finally, the Basson model acknowledges that both physical and emotional satisfaction are important outcomes of engaging in sexual activity. This physical and emotional satisfaction can lead to higher emotional intimacy, which, in turn, can lead to greater receptivity and seeking out of sexual stimuli-hence, the circular model [18].
There has been debate regarding which model best reflects the experiences of women. In a study of 133 women, most of whom were in their 40s and 50s, women who had Female Sexual Function Index (FSFI) scores falling into the "dysfunctional" range and postmenopausal women were more likely to endorse the Basson model [18,40].
With the concept of sexual dysfunction now developed, we may now discuss the types of sexual dysfunction that are described. Four types of female sexual dysfunction are currently recognized: (1) Female orgasmic disorder, (2) female sexual interest/arousal disorder, (3) genito-pelvic pain/penetration disorder, and (4) substance/medication-induced sexual dysfunction. In order to quantify sexual dysfunction in a fairly objective way, there are two commonly used instruments in sexual function studies: The Female Sexual Function Index (FSFI) and the Female Sexual Distress Scale-Revised (FSDS-R) [18]. The Female Sexual Function Index (FSFI) is a 19 item scale with six domains: Desire, arousal, lubrication, orgasm, pain, and satisfaction. In this scale, questions are graded on a Likert scale, and domains are weighted and summed to give a total score ranging from 2-36, with a cutoff of less than 26.55 suggesting sexual dysfunction. The FSFI has been validated in multiple languages, across age groups, and for multiple sexual disorders [18,41].
Why is it important to read up on sexual dysfunction? Sexual problems are common, estimated to affect 22-43% of women worldwide [18]. Overall, 27% of all reproductive-age US women (aged 18-44 years) report sexual dysfunction, with low sexual desire being the most common, and 10.8% of these women also experience related distress [2]. The prevalence of sexual dysfunction peaks at midlife, with 14% of women aged 45-64 reporting at least one sexual problem associated with significant distress [18]. The proportion with a notable or severe problem in desire, arousal, activity, or satisfaction ranges from 19-25% [5].
The Effects of Hormonal Contraceptives on Sexuality
This section presents different results found in the literature about the effects of hormonal contraceptives (HCs) on female sexuality (including results that advocate for positive or negative effects or the absence of sexual effects). It also discusses the peculiarities of the different types of HCs on sexuality.
Hormonal Contraceptives Do Not Have Sexual Effects
Some studies have found no change in sexual function with some hormonal contraceptives (HC) [2,3,6,10,[42][43][44][45][46]. A recent systematic review of 36 studies involving more than 13,000 women reported no significant changes in sexual desire with the use of oral combined hormonal contraception (CHC) [43]. Another study [47] also reported high satisfaction rates with both LNG-IUS and copper IUC but no difference in sexual function overall or within psychological domains. In another recent study, no association was found between any LARC method and sexual satisfaction scores [48].
On the other hand, Reed et al. explored the relationship between oral contraceptive (OC) use and the risk of developing vulvodynia [49]. Further analysis showed no association between vulvodynia and previous OC use (HR 1.08, 95% CI 0.81-1.43, p = 0.60). In a study by Iliadou et al. [50], patients reporting mixed urinary incontinence (MUI) were divided into three groups according to contraceptive use. Of 196 women with MUI, 16 were currently using OC, and 178 reported no current use. Among the 8493 controls, 6321 were not using OC, and 2056 were (p < 0.0001). A systematic review of the literature found that sex drive is unaffected in most women taking OC, 3.5% of women taking OC reported a decrease in sexual desire, 12.0% reported an increase, and most of them (84.6%) reported no change [43]. However, the effects of other forms of hormonal contraception on sex drive have not been studied as comprehensively as OC [1].
Positive Effects
According to the studies reviewed, hormonal contraceptives have a series of non-contraceptive effects which can influence and improve different areas of female sexual function. Some of these non-contraceptive effects are: Relief of gynecologic pain [1]; improved appearance, self-confidence, and self-esteem [2]; decrease of anxiety and discomfort [2]; loss of fear of having an unwanted pregnancy [6]; more stable levels of hormones throughout the cycle [51]; and less bleeding with the consequent lower risk of anemia [51]. All these effects contribute to the well-being of women and, consequently, to a possible improvement in the female sexual function. Similarly, hormonal contraceptives have described positive effects on some areas of female sexuality. The most frequently affected areas are: Sexual desire, orgasm number and intensity, satisfaction, and arousal. As mentioned, HCs may help to eliminate the fear of pregnancy, presumably providing a more relaxed and enjoyable sexual experience [1]. Similarly, it is reasonable to consider that an improved appearance would promote self-confidence and increase self-esteem, thereby having a positive effect on sexual function [2]. In a comparison between the vaginal ring, an oral CHC containing a third-generation progestin, subdermal contraception, and no hormonal contraception (control group), the three groups using an HC had increased positive indicators of sexual function (sexual interest and fantasies, orgasm number and intensity, and satisfaction) and decreased negative indicators (anxiety and discomfort). The same results were obtained in a comparison between etonogestrel implant and no contraception [2,52]. LNG-IUS have also been positively associated with sexual desire, arousal, orgasm, and overall sexual function compared with no contraception [2,53].
Furthermore, it may be advantageous for women to have more stable levels of hormones throughout the cycle. Because of the monthly fluctuations in estrogens, progesterone and androgens are associated with a range of symptoms, both genital (i.e., vaginal bleeding, heavy menstrual bleeding (HMB), dysmenorrhea, and pelvic pain) and systemic (i.e., depression, fatigue, headache, irritable bowel symptoms (IBS), asthma, and allergy), triggered by a local and systemic rise in inflammatory molecules released by mast cells when estrogen levels drop [51].
Negative Effects
To begin with, diminished sexual pleasure experienced by some women who use hormonal contraceptive methods may also be a barrier for their use [54], and this could imply an increase in the woman's vulnerability to unintended pregnancy [54]. Consequently, it is important to keep in mind that hormonal contraceptives could have associated side effects that have an influence on female sexual function. Some of these effects could be: Vaginal dryness [2,10,51], a decrease of lubrication [2,51], and pelvic floor symptoms such as dyspareunia [3,51], urinary incontinence, vestibulodynia, and interstitial cystitis [3]. COCs have been also associated with long-and short-term anatomical changes, such as atrophic vulvovaginitis and a decrease of thickness of the labia minora and vaginal introitus area [1]. Negative effects on some areas of female sexuality have been described with HCs, such as: Decreased sexual desire [2,6,10,54], frequency of intercourse [2,54], arousal [2,54], pleasure [2,54], orgasm [2,54], sexual thoughts [54], interest, and enjoyment [6,54].
In contrast to the above section, Elaut et al. [46] and Li et al. [55] defend in their studies that desire and coital frequency naturally increase around ovulation and premenstrually, and COC-associated ovulation inhibition and cycle regulation may blunt this effect, with the corresponding negative impact on libido [10]. Furthermore, longer durations of oral CHC use and younger ages at initiation have been associated with a higher relative risk of vestibulodynia [2], with the resulting negative impact on female sexual function.
Effects on Sexual Function According to the Type of Hormonal Contraceptive
Combined oral contraceptives are widely studied. Nevertheless, other hormonal contraception methods have fewer studies about their influence on sexual function. In this section, the results obtained from the studies reviewed for each type of hormonal contraceptive will be presented. Table 1 shows a summary of this information.
Contraceptive Patch
Concerning patch-related sexual effects, this could be considered the most innocuous CHC. Gracia et al. [56] found that among recent COC users, slight increases in sexual function scores were noted with patch use. However, they concluded that for both products, these changes are not likely to be clinically significant [1,34]. Therefore, it would be advisable to expand the research in this regard.
Contraceptive Ring
With regard to ring-related sexual effects, there are mixed results. On the one hand, two studies showed a decrease in sexual function with vaginal ring compared with COCs [56,57], and one study showed similar results but compared with the patch [58]. However, an improvement in sexual function including sexual desire, fantasies, and satisfaction, accompanied by a reduction of sexual distress, has been described with the vaginal ring [1, 2,10,34]. In another study [34], compared with nonusers of hormonal contraception, both vaginal ring and COC users reported significant improvements for anxiousness, sexual pleasure, frequency and intensity of orgasm, satisfaction (all p < 0.001), sexual interest, and complicity (p < 0.01). However, only women in the vaginal ring group reported a significant increase in sexual fantasies (p < 0.001 versus nonusers), while ratings for sexual interest and complicity were significantly higher in ring users versus COC users [34]. As suggested by the researchers, these data indicate that both oral and vaginal contraception seem to improve to some extent the sexual life of women and their partners, whereas the vaginal ring seems to exert a further beneficial effect on the psychological aspects of sexual functioning [59].
Vaginal contraception offers many benefits, including high efficacy, good tolerability, ease of use, once-a-month dosing, and a favorable pharmacokinetic profile, with the added benefits of positive effects on the vaginal microbiome and on sexual parameters [34]. In addition, good cycle control and less fluctuating serum hormonal levels could contribute to the high degree of users' acceptability and satisfaction. Most importantly, a discussion about the vaginal delivery of contraceptive hormones offers the opportunity to stimulate an open dialogue about vaginal functions, thus ultimately contributing to enhancing women's sexual well-being and reproductive health [34]. Consequently, it could be a good hormonal contraceptive option.
Depot Medroxyprogesterone Acetate (DMPA) DMPA is a highly effective method of contraception. It has been used as a contraceptive agent since 1967 by millions of women worldwide, particularly in less developed regions [21]. In respect of DMPA-related sexual effects, there are mixed results. Despite decreased libido being a common complaint among DMPA users and the fact that progestins have been observed to decrease interest in sex [6], positive sexual effects are also described with this method [6,60]-some reviews even reveal that DMPA is unlikely to be associated with sexual function in women [1,2,6]. However, further research would be needed to support these claims.
Etonogestrel Implant
Etonogestrel implant-related sexual effects are described as negative effects. It has been associated with a lack of interest in sex, a decreased libido, and a reduced sex drive. In addition, a decreased libido has been observed as a significant cause for implant discontinuation [1,6].
Levonorgestrel-Releasing Intrauterine Systems (LNG-IUS)
Intrauterine contraceptives (IUCs) are long-acting reversible contraceptive (LARC) methods that are used by over 150 million women worldwide. IUCs are highly effective methods of contraception that can be used by women of all ages. Rates of IUC use vary throughout the world, from a maximum of 41% in China to a minimum of 0.8% in sub-Saharan Africa [22]. They have generally been associated with positive sexual effects. They have been reported to improve desire, sexual function, and arousal [1,2,60]. Moreover, they seem to improve the health-related quality of life through the improvement of dysmenorrhea and symptoms in patients with endometriosis and adenomyosis, among other things [22].
Other Non-Hormonal Methods of Contraception and Their Effect on Sexual Function
Copper Intrauterine Devices (Cu-IUDs) There has been no evidence to suggest that the copper IUD is associated with an altered libido [6].
Vasectomy/Tubal Ligation
As a non-hormonal contraceptive method, the effect of sterilization on sexual function extends beyond a simple hormonal effect into the psychological aspects of permanent pregnancy prevention, whether positive (i.e., relief and comfort in the knowledge that sexual activity will not result in pregnancy) or negative (i.e., regret that pregnancy is no longer possible) [2].
Nonuse of Contraception
Female sexual function is complex and multifactorial and is influenced by many biological, psychological, and environmental factors [2,5,18,29]. Therefore, a complete understanding of women's sexual function requires the individual assessment of these factors. Consequently, sexual dysfunction does not have to be associated with hormonal contraception. The use of no contraception was associated with a higher rate of the FSD than the use of either CHCs or nonhormonal methods. Furthermore, lower rates of sexual dysfunction were noted among women using either copper IUC (21%) or a levonorgestrel intrauterine systems (LNG-IUS) (10%) than among women using no contraception (35%). Among other reasons, diminished sexual function perceived to be related to contraception may lead to the nonuse of effective contraception, and, conversely, the nonuse of contraception may in itself be a factor in sexual dysfunction, perhaps owing to concerns about unintended pregnancy [2].
The Sexual Side Effects of Hormonal Contraceptives are not Well Studied
Existing evidence for an association between sexual dysfunction and contraception is inconsistent, and additional research is needed [2]. Findings from studies comparing women using non hormonal contraception with those using hormonal methods have shown mixed results [2]. The sexual side effects of hormonal contraceptives are not well studied, particularly with regard to their impact on libido [1]. Similarly, there is no clear information about the effect of HCs on pelvic symptoms and sexual function, nor on how they affect a woman's quality of life in relation to bowel and bladder symptoms, regardless of period control and menstrual bleeding. Moreover, the association between COC use and the presence of any type of urinary incontinence (UI) is unclear, and results suggest that the effect of current COC use on dyspareunia per se is inconsistent [3].
Healthcare care providers must be aware that hormonal contraceptives can have negative effects on female sexuality so they can counsel and care for their patients appropriately [1]. In order to better evaluate any possible effect on mood or libido, practitioners should assess patients prior to initiation of hormonal contraception to establish their baseline [60]. The lack of consistency in findings highlights the complex and multifactorial nature of female sexual function and focuses on the need for a comprehensive approach to management [2].
Management Strategies for Sexual Dysfunction Secondary to Hormonal Contraceptives
This section approaches the therapeutic possibilities for female sexual dysfunction described in the literature. In addition, some keys are given for the management of sexual dysfunction secondary to hormonal contraceptives ( Figure 3).
First, when addressing a new sexual complaint, a thorough history using a biopsychosocial approach should be undertaken (Table 3) [18], including an assessment of any current or past psychiatric disorders; medication use and health problems; a history of emotional, physical, or sexual abuse; beliefs and attitudes regarding sex, menopause, and aging; and body image concerns. Particular attention should be paid to symptoms of depression, anxiety, and sleep problems, all of which are common during the menopause transition. Providers should inquire about alcohol or drug use, as substance use disorders are also associated with sexual dysfunction. Any health or sexual problems affecting the woman's sexual partner(s) should also be explored. Providers should inquire about relationship discord or communication issues, and if present, recommend therapy with a certified and specialized therapist [18]. A multidisciplinary approach to the management of female sexual dysfunction (FSD) is suggested, particularly when multiple contributing or complicating factors are identified, and this may consist of consultations with other professionals, such as a sex therapist, a pelvic floor physical therapist, and a sexual health specialist [2]. First, when addressing a new sexual complaint, a thorough history using a biopsychosocial approach should be undertaken (Table 3) [18], including an assessment of any current or past psychiatric disorders; medication use and health problems; a history of emotional, physical, or sexual abuse; beliefs and attitudes regarding sex, menopause, and aging; and body image concerns. Particular attention should be paid to symptoms of depression, anxiety, and sleep problems, all of which are common during the menopause transition. Providers should inquire about alcohol or drug use, as substance use disorders are also associated with sexual dysfunction. Any health or sexual problems affecting the woman's sexual partner(s) should also be explored. Providers should inquire about relationship discord or communication issues, and if present, recommend therapy with a certified and specialized therapist [18]. A multidisciplinary approach to the management of female sexual dysfunction (FSD) is suggested, particularly when multiple contributing or complicating factors are identified, and this may consist of consultations with other professionals, such as a sex therapist, a pelvic floor physical therapist, and a sexual health specialist [2]. Table 3. Main data to be collected in the clinical history in case of symptoms of sexual dysfunction.
Information that should be collected in the medical record by health providers in response to a complaint of sexual dysfunction: 1. Current or past psychiatric disorders. 2. Medication use and health problems. 3. History of emotional, physical, or sexual abuse. 4. Beliefs and attitudes regarding sex, menopause, and aging. 5. Body image concerns. 6. Symptoms of depression, anxiety, and sleep problems. 7. Alcohol or drug use and substance use disorders. 8. Health or sexual problems affecting the woman's sexual partner(s). 9. Relationship discord or communication issues.
Second, lifestyle counselling should be given by the health providers. General lifestyle counselling that may be useful for all types of female sexual dysfunction include recommending setting aside time for connecting with one's partner, increasing the woman's exposure to sexual Table 3. Main data to be collected in the clinical history in case of symptoms of sexual dysfunction.
Information that should be collected in the medical record by health providers in response to a complaint of sexual dysfunction:
Second, lifestyle counselling should be given by the health providers. General lifestyle counselling that may be useful for all types of female sexual dysfunction include recommending setting aside time for connecting with one's partner, increasing the woman's exposure to sexual stimuli such as erotic literature or films, encouraging the maintenance of a healthy weight, ensuring adequate physical activity and sleep, enhancing skills for coping with stress, and recommending books women can use for self-education (Table 4) [18]. Table 4. General lifestyle counselling.
1.
Setting aside time to connect with one's partner 2.
Increasing the woman's exposure to sexual stimuli: erotic literature or films 3.
Encouraging maintenance of a healthy weight 4.
Ensuring adequate physical activity and sleep 5.
Enhancing skills to cope with stress 6.
Recommending books women can use for self-education.
When choosing a new hormonal contraception method, health care providers (HCPs) should give information about all available methods in order to make a shared decision [34]. In the Contraceptive CHOICE Project, a prospective cohort study of 10,000 women 14-45 years who want to avoid pregnancy for at least one year and are initiating a new form of reversible contraception, 47% of women who had an interest in a CHC method selected a different method than the one they originally intended to use after receiving counselling about several CHC methods, including the pill, patch, and ring. Awareness of the decision-making factors that affect women's choices regarding methods of contraception may enable HCPs to make more informed recommendations that are targeted to the needs of each of their female patients [4]. The prescription of a contraceptive method is a great opportunity to clarify the multidimensional components of sexual health, including elements of anatomy and physiology of the sexual response [34].
Few clinical remedies or recommendations exist for women experiencing HC-related sexual side effects [54]. Unfortunately, no guidelines exist for the management of sexual dysfunction potentially associated with CHCs in reproductive-age women [2]. As such, when CHC-related female sexual dysfunction is suspected, the recommended therapy is discontinuation of a combined hormonal contraceptive, with consideration of an alternative method of contraception, such as LNG-IUS, a copper IUC, a etonogestrel implant, the permanent sterilization of either partner when future fertility is not desired, or a contraceptive ring (for women who prefer a CHC for cycle control and no contraceptive benefits) [2]. The ring appears to be a reasonable alternative to an oral CHC for women with sexual function concerns. Likewise, LARC methods also appear to be a reasonable alternative [2]. Nevertheless, switching to another combined oral contraceptive may provide some benefit, but there is no clear difference between androgenic or non-androgenic progestins [10]. In addition, the combination of dehydroepiandrosterone (DHEA) and an OC was not associated with improvements in sexual function, and it further negated the benefit of OCs on acne [2]. When COC-related female sexual dysfunction is suspected, another possible option could be to consider formulations with a shorter hormonal free interval (HFI). Formulations with a shorter HFI (24/4 and 26/2) have recently been developed with the aim of offering a reduction in hormone withdrawal-associated symptoms together with a more powerful ovarian suppression. Estradiol valerate/dienogest (E2V/DNG) is administered on a 26/2 regimen and has been shown to offer a high contraceptive efficacy, an improvement in hormone withdrawal-associated symptoms (including but not limited to headache and pelvic pain), and an improvement in sexual function [51,61]. In conclusion, the best contraceptive is one that fulfills women's needs with acceptable side effects and at an affordable price in different settings [32].
Other options to improve HC-related sexual dysfunction could be vaginal lubricants and moisturizers. They are the first-line treatment for vaginal dryness and consequent dyspareunia [2], side effects that are frequently associated with hormonal contraceptives, mainly with combined oral contraceptives. The majority of women participating in a daily study reported positive perceptions of lubricant use, including increased pleasure and comfort [62]. Sharing information on the high frequency of use and positive results experienced across age-groups may be helpful in counseling reproductive-age women about using lubricants [62].
It appears that supraphysiological serum testosterone levels may be necessary to yield any benefit on sexual desire and arousal [18]. The use of compounded testosterone products for transdermal use is on the rise, but these products are not FDA-approved [18], and they can be associated with several side effects. Meanwhile, testosterone therapy in postmenopausal women has been associated with improvements in multiple dimensions of sexual function, including sexual desire, subjective arousal, vaginal blood flow, and frequency of orgasm [2]. Testosterone released from patches has also been described to produce positive effects on mood and sexual behavior and to increase bone mass significantly [63].
With regard to hormonal therapy with exogenous estrogens, results are controversial. On the one hand, exogenous estrogens have been shown to be an effective treatment for low libido and hypoactive sexual desire disorder [6], and, on the other hand, hormone therapy (estrogen with or without progesterone) does not appear to have a significant impact on sexual function, with the exception of vaginal estrogen in women with the genitourinary syndrome of menopause [18]; that is to say, hormonal therapy with estrogen is efficient with regard to genital atrophy, but it is not efficient in regard to sexual desire [29].
Furthermore, although dehydroepiandrosterone (DHEA) supplementation could have positive effects on the female libido [29] by restoring androgen levels in COC users, there is minimal evidence that this correlates with improved sexual functioning [10]. There is also evidence that bupropion and, to a lesser extent, sildenafil, are effective for treating antidepressant-induced sexual dysfunction in women, although some conflicting evidence exists [18].
To conclude, even today, most of the contraceptives available on the market and those currently undergoing research and development interfere with ovulation or follicular development and also affect women's steroid production [32]. This mechanism of action is associated with several side effects, negative sexual effects included, that could be avoided by new contraceptives strategies. For that purpose, research conducted over the past few decades has provided more information on gamete physiology and interaction, offering new opportunities for the development of novel contraceptives that could act by interfering with the process of gamete interaction or with the chemo-attraction or chemo-repulsion of spermatozoa to the fertilization site without affecting the hormonal system [32].
Discussion
As discussed in the review above, hormonal contraception (HC) has made a difference in the control of female fertility since its approval by the FDA almost 60 years ago, and it is also widely used in the female population of child bearing age. Side effects, such as sexual dysfunction, may be sufficient reasons for the discontinuation of this contraceptive method. This represents an increase of the risk of unwanted pregnancy, with the possible worsening of women's wellbeing. However, female sexual function is complex and multifactorial and, despite an association between hormonal contraception and sexual dysfunction having been described in the past, the evidence on that topic is inconsistent.
Sexual problems are common, estimated to affect 22-43% of women worldwide [18], and influencing some types of female sexual dysfunction such as orgasm, sexual interest/arousal, and genito-pelvic pain. As a consequence of the multiple medications on sexual functioning, a specific category has been included in the new American DSM-5 classification system: Substance/ medication-induced sexual dysfunction) [18]. As said above, female sexual function is complex and multifactorial, and a biopsychosocial approach to sexual problems is recommended. It could be said that an HC can influence female sexual function in two different ways. On the one hand, an HC could have a negative influence on sexual function as a biologic factor, because HC use has been associated to hormonal changes. On the other hand, an HC could have a positive influence on sexual function, in psychological terms, since HC use has been associated with an improvement in mood symptoms and self-perception. Different options for hormonal contraception exist. There are three main groups: Combined hormonal contraception (pill, patch and vaginal ring); progestin-only contraceptives (POPs, DMPA, and implant); and intrauterine devices (LNG-IUDs). The hormonal composition of hormonal contraceptives is based on progestins alone or on a combination of progestogens and estrogens. Apparently, norgestimate and desogestrel, among progestogens, and 17B-estradiol (E2) and E2-valerate (E2V), among estrogens, have a profile less associated with side effects than the others in their respective groups.
The association between hormones and sexuality is multidimensional, as several hormones are important in the regulation of sexual behavior [29]. Hormonal contraceptives (HCs) seem to be responsible for a decrease of circulating androgen levels [1,2,29,35], baseline serum levels of estradiol [6,29,35], and baseline serum levels of progesterone [35], as well as the inhibition of oxytocin functioning [35]. However, the concentrations of the FSH and LH were similar in freely cycling women and in women using an HC [35]. These hormonal alterations can be translated into negative effects on the female sexual function, with reports of a decrease of the libido, increased sexual jealousy, and alterations on women pair-bonding behavior. It has been established that sexual desire, autoeroticism, and sexual fantasies of women depend on androgen levels [29]. However, the relevance of changes in androgen levels for an individual woman is unclear, and some women may be more sensitive to androgen level alteration than others [2]. Furthermore, while there is conflicting information concerning a link between progestins and libido, there is some evidence to suggest that estrogens play an essential role in female sexuality [6]. On the other hand, multiple lines of evidence suggest that the hypothalamic peptide oxytocin (OXT) is a key factor modulating pair-bonding behaviors, and it has been found to increase the intensity of orgasm and satisfaction after copulation. This mechanism was disturbed in those women using an HC, indicating that the partner-specific modulatory effects of OXT are antagonized by gonadal steroids. So, it could be said that HC use alters women's pair-bonding behavior, reduces neural response to the expectation of erotic stimuli, and increases sexual jealousy.
According to the studies reviewed, hormonal contraceptives have a series of non-contraceptive effects, which can be related to an improvement on different areas of female sexual function such as sexual desire, orgasm number and intensity, satisfaction, and arousal. All these effects contribute to the well-being of women and, consequently, to a possible improvement in the female sexual function.
Combined oral contraceptives are widely studied, and most studies are based on COCs or used them as a comparative method of contraception. Nevertheless, other hormonal contraception methods have fewer studies about their influence on sexual function. There are mixed results with ring-and DMPA-related sexual side effects. The patch could be considered the most innocuous CHC regarding sexual side effects. The implant has been associated with negative sexual effects, such as a lack of interest in sex, a decreased libido, and a reduced sex drive. LNG-IUS have generally been associated with positive sexual effects, so it could be considered the most innocuous HC regarding sexual side effects. However, more studies are needed because of the inconsistency of current available data.
Finally, with regard to treatment options for sexual dysfunction, few clinical remedies or recommendations exist for women experiencing these sexual side effects [54]. Moreover, no clear guidelines exist for the management of sexual dysfunction potentially associated with CHCs in reproductive-age women [2]. First, when addressing a new sexual complaint, a thorough history using a biopsychosocial approach should be undertaken [18]. A multidisciplinary approach to the management of female sexual dysfunction (FSD) is suggested, particularly when multiple contributing or complicating factors are identified, and this may consist of consultations with other professionals, such as a sex therapist, a pelvic floor physical therapist, and a sexual health specialist [2]. Second, lifestyle counselling should be given by the health providers ( Figure 3) [18]. When choosing a new hormonal contraception method, health care providers (HCPs) should give information about all available methods in order to make a shared decision [34]. When CHC-related female sexual dysfunction is suspected, the recommended therapy is the discontinuation of combined hormonal contraceptives with consideration of an alternative method of contraception, such as LNG-IUS, a copper IUC, an etonogestrel implant, the permanent sterilization of either partner when future fertility is not desired, or a contraceptive ring (for women who prefer CHCs for cycle control and non-contraceptive benefits) [2]. The ring appears to be a reasonable alternative to oral CHCs for women with sexual function concerns. Likewise, LARC methods appear to be a reasonable alternative too [2]. Other alternatives could be switching to another combined oral contraceptive [10] or formulations with a shorter hormonal free interval (HFI) [51,61]. Furthermore, with regard to other possible strategies against sexual dysfunction, some studies show positive results on female sexual function with exogenous testosterone [2,18,29], exogenous estrogens [2,6], dehydroepiandrosterone (DHEA) [10,29], tibolone [29], bupropion, and sildenafil [18]. Some alternative options to improve HC-related sexual dysfunction could be vaginal lubricants and moisturizers.
Conclusions
The results of the studies reviewed seem to indicate that hormonal contraception could influence different aspects of female sexual function. However, there are contradictory results between the different studies regarding the association between sexual dysfunction and hormonal contraceptives, so it could be firmly said that additional research is needed.
Meanwhile, it could be said that hormonal contraception has been associated with different alterations in sexual functioning. So, when addressing a new sexual complaint that is time-related with the beginning of hormonal contraception, health care providers should give information about other methods and try to switch them to a method less associated with sexual dysfunction. Vaginal rings and patches are possible options in case of women preferring combined hormonal contraception who report side effects with the pill.
To conclude, a multidisciplinary approach to the management of female sexual dysfunction is mandatory, and health care providers should give lifestyle counselling apart from proposing different treatment options. An adequate relationship with the patient, as well as the routine monitoring of possible sexual dysfunction, are essential in addressing these difficulties. Undoubtedly, the best contraceptive is one that fulfills the women's needs with acceptable side effects and agreed with the prescriber. | 2019-06-28T13:22:02.363Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "53844f8fb6d4d1ee3dcc731e88371df3326bf9f7",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/jcm/jcm-08-00908/article_deploy/jcm-08-00908.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53844f8fb6d4d1ee3dcc731e88371df3326bf9f7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85958343 | pes2o/s2orc | v3-fos-license | Study on the Occurrence Regularity of Invasive Whitefly Bemisia Tabaci Population
The B biotype whitefly Bemisia tabaci (Gennadius) (Hemiptera: Aleyrodidae) is an invasive species in China, witch severely damage the production of numerous crops through direct feeding and transmission of plant viruses. In order to clarify the major biological characteristics of a whitefly as an alien invasive species, reveal the seasonal growth and decline of its population quantity, the law of fluctuation from year to year and it’s influencing factors and improves the monitoring, prevention and control level. Our study investigates the main biological characteristics and the population quantity’s fluctuation of the whitefly Bemisia tabaci in Linhai, Zhejiang province, China. Adult whiteflies were investigated in the greenhouse and field from 2006 to 2011 with using yellow sticky boards. The results show that the whiteflies can produce 11 generations each year with an evident generation overlapping. The number of whiteflies in the greenhouse started to increase from June, with a significant increase after July and then reached its peak during August and September. With the temperature drop, the whitefly population started to decrease after mid-October. The observation of the insects indicated that whiteflies are capable of surviving within the whole year under the greenhouse condition. On the other hand, the overwintering frequency for the whitefly in the open field was approximately 20%. Moreover, the main factors that affect the population dynamics of whiteflies in the field include the initial population number, climate condition, farming system and flood inundation, among which, the temperature condition is the most important.
INTRODUCTION
The whitefly Bemisia tabaci (Gennadius) (Hemiptera: Aleyrodidae) is a famous species complex pest with a worldwide distribution.Some of the whitefly complex members severely damage the production of numerous crops, such as kidney bean, cotton, tomato and tobacco, through direct feeding and transmission of plant viruses (Brown et al., 1995;Chu et al., 2004;Lin et al., 2004).In China, numerous important economic crops in Guangxi, Yunnan, Guangdong, Hainan, Zhejiang and Shanghai have suffered significant damages since the invasion of the B biotype whitefly from the mid-to the late-1990s, resulting in huge production loss (Luo and Zhang, 2002;Wan et al., 2009).This situation has worsened in many regions in China with the continuous spread and virus transmission of the invasive whitefly (Zhang et al., 2005;Alemandri et al., 2012).The whitefly has invaded Taizhou by transporting nursery-grown plants, which, in turn, spread and cause damages to other plants, such as melons, solanaceous vegetable, beans, cruciferae and many other vegetables, resulting in a serious loss in field production.Many efforts have been conducted on the taxonomy, biology, ecology and control of the whitefly.Others successively described the basic biological characteristics of the whitefly.It investigated the invasion mechanism of the whitefly, focusing on the competition between the invasive and indigenous whiteflies (Boukin et al., 2007;Dinsdale et al., 2010).However, many issues have not been deeply and systematically studied, particularly the population ecology and movement rules of the whitefly.Many efforts have been conducted on the taxonomy, biology, ecology and control of the whitefly.Luo and Zhang (2002) successively described the basic biological characteristics of the whitefly.Wan et al. (2009) investigated the invasion mechanism of the whitefly, focusing on the competition between the invasive and indigenous whiteflies.However, many issues have not been deeply and systematically studied, particularly the population ecology and movement rules of the whitefly.In the current study, greenhouse rearing and yellow-board trapping systems were applied to investigate the population infestation regularity and the influencing factors of the invasive whitefly Bemisia tabaci from 2006 to 2011.This study aims to clarify the main biological characteristics and reveal the rule of the seasonal growth and decline of population quantity and influencing factors of the invasive whitefly.
MATERIALS AND METHODS
The sexual attraction on dynamics of the adult whitefly in an open field was observed during winter and spring to analyze the relationship between the whitefly surviving rate and climate changes.The biological characteristics of the whitefly in a greenhouse were observed from 2006 to 2007.Poinsettia and kale cultivated in pots with a nylon cover were used as host plants.The whiteflies used in the experiment were collected from a local region (Taizhou, China).
The adult whiteflies were attracted with a yellow sticky board for the investigation.The monitored information included the following: • Location: A suburban vegetable production base in Linhai Later of Dec.
Early of Jan.
Mid of Jan.
Later of Jan.
Early of Feb.
Mid of
Feb. after mid-to late October.Therefore, the whitefly populations of the fourth, fifth, sixth, seventh and eighth generations were much higher and can cause more losses to crop productions compared with the other generations.
Biological features of the whitefly:
The developmental stages of the whitefly include the egg, nymph, pupa and adult stages.The adult whiteflies prefer to feed and lay eggs on young leaf of host plants.
The whitefly eggs are elliptical in shape and gather on young leaves on the upper part of the plants.The eggs are typically laid on the back of the leaf blade with only a few on the surface.The eggs are laid in random and usually in heaps.The roots of the eggs have an egg pedicle that inserts into the plant tissue.Freshly produced eggs are light white covered with wax, making them hard to identify with a naked eye.With the help of the magnascope, the egg pedicle that is fixed vertically onto the blade tissue can be seen clearly.The egg pedicle gradually becomes brown-black during the development.Approximately 80 to 120 eggs were observed for each of the female whitefly.The whitefly nymph is divided into four stages.Newly emerged nymphs move within 1 cm to 2 cm on the leaf around the egg shell and then choose the proper position to settle down, usually on the part around the leaf vein.
Nymphs will not move from its position on the leaf thereafter during ecdysis.In the fourth stage, the nymph secretes amount of wax and the body wall becomes thick and hard with a smooth surface.During this period, the nymph becomes subnymph and turns yellow.
The greenhouse observation and open field investigation indicated that the adult whiteflies gradually move to the top leaves of the plants with the growth of the host plant, forming the vertical distribution pattern of the different whitefly stages from the top to the bottom of the host plants.The adult whitefly and newly produced eggs are found on the top leaves, the brown-black eggs and nymphs in the first to third stages are mostly found in the middle part of plants, while the subnymph and puparium mainly gather on the lowest part of the leaves.Two red eyes can easily be identified in the late pupa stage before eclosion.During eclosion, the back of the puparium breaks into an inverted "T"-shaped crack where adults exuviate gradually, starting from the chest to the head.The newly emerged adults usually stay beside the larva puparium with two wings quilled on the back of the body.The two wings spread after approximately 15 min.Whiteflies are typically arranged in pairs of female and male and a large number of whiteflies sometimes gather together.Adult whiteflies tend to hide between overlapped leaves.The adult whiteflies become active and capable of short-distance flying at temperatures ranging from 25°C to 30°C.And adult whiteflies tend to fly around immediately with the disturbances.The adults will become slow and stay quietly on a leaf when the temperature declines to under 15°C.Furthermore, the adult whiteflies start to die gradually when the temperature decreases further to 7°C.However, some adult whiteflies retain their ability to fly and newly emerged adult whiteflies were observed even when the temperature decreases to as low as 12°C.with the greenhouse system, respectively, with an average of 41 days.These differences can be attributed to the fluctuation caused by the high and low temperatures of winter and spring in the different years.The population peak of the adult whitefly in the open field was similar to that in the greenhouse system.From 2007 to 2011, the summer peak of the whitefly population emerged from late April to late July, from mid-June to mid-September, from late May to mid-September, from mid-May to late July and from mid-May to early September.Accordingly, in the open field system, the number of whiteflies during the summer peak amounted to 2556, 2873, 459, 245 and 147, accounting for 54.5, 86.1, 52.9, 65.9 and 68.4% of the total number of whiteflies for that year, respectively.In the greenhouse system, the number of whiteflies during the summer peak amounted to 4048, 2610, 435, 258 and 265, accounting for 64.4,79.3, 58.5, 57.8 and 75.1% of the total number of whiteflies for that year, respectively.Furthermore, a relatively low number of the whitefly population was recorded after the summer peak period and generally lasted for 20 days to 45 days.The whitefly population during the autumn peak appeared from late October to mid-November, from mid-to late November, from early October to late November, from mid-September to early October and from mid-to late October.In the open field system, the number of adult whiteflies during the autumn peak amounted to 689, 25, 204, 14 and 20, accounting for only 14.7, 0.7, 23.5, 3.8 and 9.3% of the total whiteflies for that year, respectively.In the greenhouse system, the numbers of whiteflies during the autumn peak amounted to 719, 35, 190, 51 and 26, accounting for only 11.4, 1.1, 25.5, 11.4 and 7.4% of the total whiteflies for that year, respectively.The whitefly population decreased drastically during mid-to late December, which is when the insect populations in both systems were relatively low and whiteflies transfer to warm overwintering areas.The movement locus for the greenhouse system can be expressed as follows:
Annual movement regularity:
The results of our study indicate that the movement cycle of the whitefly population takes approximately around 9 to 10 years and the population tends to decrease with a low ebb movement recent years, but is expected to return to an ascending movement eventually.
Base number of the whitefly population:
Base number is the pacing factor for the production and development of a population.According to the whitefly monitoring results of the greenhouse and open field systems from 2007 to 2011, together with statistical analysis, the whitefly number of April is the most important base number for the existence and development of the whitefly population for the whole year.The whitefly population density in April is positively correlated with the trapping quantity for the whole year as follows: Y = 8.3879M 4 +1286.9(n = 10, r = 0.7082 * ) Fig. 3: Population dynamics of whitefly and temperature relationship (2009)(2010)(2011) Meteorological condition: Analyzing the relationships between the meteorological elements and the adult whitefly population, temperature was proven to be the most significantly related with the whitefly population dynamics.Figure 3 shows the statistical analysis of the conducted experiment from 2008 to 2010 (including 36 periods of 10 days each).This analysis indicates that the population density (M) varies with temperature.The linear model of M with respect to temperature is given by M = 0.607T-4.0645,(n = 36; r = 0.6826**) and the curvilinear model is given by M = 0.0071T2+0.3513T-2.2188,(n = 36; r = 0.6846**).The results show that the whitefly experiences difficulty in surviving when the average temperature in the period of 10 days below 5°C to 8°C and that a temperature range of 8°C to 10°C is the state for the whitefly to survive criticality.When the average temperature during the 10 days is in the range of 10°C to 20°C, the whitefly population is in the state of low density and when the temperature is above 20°C, the population increases rapidly and continues to increase with increasing temperature.When the average temperature is above 30°C, the whitefly population is significantly affected by the rising temperature, leading to the valley period of the population dynamics.Then, with the suitable meteorological condition during summer and autumn, the population peak of autumn reappears, leading to autumn damage.
Tillage conditions:
The whitefly population base number and temperature variation are the two main factors that affect the whitefly population dynamics movement.However, for the habitat and reproduction environment of the whitefly, the tillage conditions become the key factor instead. Results of the monitoring on the two systems (greenhouse and open field) show that the whitefly can basically live and reproduce during the whole year, except in the open field from January to March during winter (non-warm winter).Therefore, the average density of the whitefly population in the greenhouse was higher than that in the open field during winter and spring (2.74 times from January to March and 1.67 times from April to May).No whitefly population difference was observed between the two systems during June and the synchronous changes in population occurred from July to December.However, during warm winter, the trapped number of whiteflies in the greenhouse was significantly higher (one to three times) than that in the open field (Table 2).These results may be attributed to the whitefly overwintering in the greenhouse.
Flood inundation caused by typhoon:
This study shows that typhoon and flood inundation have great effects on the whitefly population.As can be seen in Table 3, the suburban vegetable base in Linhai was intruded by typhoon on September 9, 2008, causing the inundation damage of over 30 ha of vegetables.Accordingly, the whitefly population decreased sharply, with an 84.16% decrease in the open field and 81.11% in the greenhouse.In addition, the vegetable base was damaged again by another typhoon on October 11 of the same year.Considering that the intensity of this typhoon was relatively weaker than the previous one, the vegetables in the greenhouse was only submerged for a short period of time.Consequently, the whitefly population decreased by 90% in the open field system, whereas only a 14.08% decrease was observed in the greenhouse and the low population density lasted until the end of the year.Furthermore, the Cucumis melon in the sci-tech demonstration district greenhouse was not affected by the flood; and the whitefly population remained over 0.5 cm -2 (equal to 300 to 500 whiteflies per sticky board) on average from September to November.Therefore, a large scale inundation has a continuous and remarkable effect on the control of the whitefly population.
CONCLUSION
In this study, 11 whitefly generations for a whole year were obtained in Linhai, Zhejiang province according to laboratorial rearing and observation.These results provide important information for the monitoring and precaution of whitefly outbreaks.
Through the continuous monitoring of the whitefly in the two systems-greenhouse and open field of the suburban vegetable base, the whitefly population was found to develop annually in the greenhouse of Linhai, Zhejiang province.The whitefly population can also overwinter in the open field with only 20% of the overwintering frequency.The key factor for the whitefly survival in winter is the average temperature from mid-December to mid-February of the following year.The critical temperature for the survival of whiteflies ranged from 8°C to 8.5°C, indicating that whiteflies cannot overwinter with an average temperature below 8°C, but they can overwinter above 8.5°C.The whitefly population dynamics presents bimodal seasonal change patterns and the early presence of adult whiteflies varies per year.Adult whiteflies emerge in January in the open field system for some early years and in late April for some later years.The emergence of the whitefly has an average of 41 days (0 to 79 days) and is delayed in the greenhouse compared with the open field.The peak periods of the whitefly population in the open field and greenhouse systems are generally similar, in which the summer peak occurs from late April to late July for some early years and from mid-June to mid-September for some later years.The summer peak generally occurs from mid-May to early September.The adult whitefly population in the summer peak of the open field system accounted for 65.56% (52.9 to 86.1%) of the total population in a year, whereas in the greenhouse system, the summer peak accounted for 67.02% (57.8 to 79.3%) of the total population in a year.The autumn peak period generally appears from mid-October to late November, with an occurrence frequency of 60 to 80%.The adult whitefly population in the autumn peak of the open field system accounts for 15.83% (9.3 to 23.5%) of the total population in a year and in the greenhouse system, accounts for 13.93% (7.4 to 25.5%) of the total in a year.The whitefly population drops significantly in mid-to late December and the populations in both the open field and the greenhouse systems were maintained with a low density because of the transferring of the whitefly to the warm overwintering sites in the open field and greenhouse.The results of this study are consistent with the data obtained from the laboratory, which provide valuable information for timely control of these serous pests in practice.
The experimental and statistical results indicate that several factors affect the whitefly population dynamics.These factors include the base number of the population, meteorological condition, tillage conditions and typhoons.Among the analyzed factors, temperature was found to be the most important factor.Studies conducted on the various temperature effects on the Bbiotype whitefly development indicated that the temperature in the range of 20°C to 32°C is the appropriate temperature range for the population growth and reproduction of the whitefly (Liu et al., 2012;Parrella et al., 2012).In addition, the most suitable temperature is 26°C and the highest intrinsic rate of increase is at 29°C.Statistical analysis of the constant field monitoring and meteorological data from 2008 to 2010 indicated that the whitefly population density (M) changes with the average temperature (T, °C) within a specific period of 10 days in a month.The linear model of this relationship is given by M = 0.607T-4.0645,n = 36; r = 0.6826** and the curvilinear model is given by M = 0.0071T2+0.3513T-2.2188,n = 36; r = 0.6846**.The results of this study show that the whitefly experiences difficulty in surviving when the average temperature for the 10 days is below 5°C to 8°C and that the population is in the state of surviving criticality at temperatures ranging from 8°C to 10°C.The whitefly population was in the low density state at an average temperature ranging from 10°C to 20°C.And the whitefly population increased rapidly with increasing temperature above 20°C.Finally, the whitefly population decreased at an average temperature above 30°C.Thereafter, the whitefly population returned to its peak, causing a serious damage to the vegetables during autumn.
Based on a constant monitoring of the whitefly population for five years in the fields from 2007 to 2011, this study revealed that the whitefly population gradually decreases in the suburban vegetable base in Linhai.The whitefly population movement locus of whitefly for the open field system is given by Mopenfield = 312.21N2-6187.1N+30787(n = 5, r = 0.9834**) and that for the greenhouse system is given by Mgreenhouse = 564.7N2-10506N+49013(n = 5, r = 0.9967**).Further investigation is necessary to test if the variation rules revealed in this study are caused by the intrinsic factor of the long-term movement of the whitefly population or by external environmental forces, such as farming, climate, natural enemy, or pest management and control.
Feb. 2006 (mid of Dec.)-2007 (mid of Feb.) of Dec.)-2008 (mid of Feb.) of Dec.)-2009 (mid of Feb.) of Dec.)-2010 (mid of Feb.) of Dec.)-2011 (mid of Feb.) Fig. 1: Population dynamics of whitefly in Linhai(2007)(2008) Based on the whitefly population dynamics from 2007 to 2011, the years 2007 and 2008 are clearly the outbreak years, whereas 2009 to 2011 are the years with a relatively less whitefly population.Using M as the number of trapped whiteflies for the whole year (number/sticky) and N as the quantized year (2002 was treated as the first invasion year; thus, the values of N = 6, 7, 8, 9, 10 during 2007 to 2011), the whitefly movement locus equation was obtained after the statistical analysis.The movement locus for the open field system can be expressed as follows: M open field = 312.21N 2 -6187.1N+30787(n = 5, r = 0.9834 ** )
Table 1 :
Relationship between numbers of trapped whitefly adults and temperature of winter in the open field
Table 2 :
Sticky trapped numbers of whiteflies monthly from 2007 to 2011 in greenhouse and open field (numbers/sticky)
Table 3 :
Investigation of the effect to whitefly population caused by typhoon and inundation in vegetable base(2008, Linhai) | 2017-10-19T11:17:27.771Z | 2013-11-05T00:00:00.000 | {
"year": 2013,
"sha1": "53a1e8193e3f29c39161c35bc7d951734e2d9b89",
"oa_license": "CCBY",
"oa_url": "https://www.maxwellsci.com/announce/AJFST/5-1514-1520.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "53a1e8193e3f29c39161c35bc7d951734e2d9b89",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
239881408 | pes2o/s2orc | v3-fos-license | Mathematics education in a time of crisis—a viral pandemic
“Crisis” is a word that has been used (and possibly abused) a lot in recent years. It may indicate a difficult moment for an individual, a strong feeling of being upset, or a disturbance in a person’s existence. Social crises are different. Across the world, the crisis from the COVID-19 pandemic is inescapable. It strongly upsets us, it disturbs our existence, and unites or divides us in different ways. In the language of economics, especially in classical economics, “crisis” specifically designates a period of economic depression, namely, the phase of a business cycle that is the consequence of generalised overproduction, the basic characteristics of which are a rapid transition from prosperity to depression, a fall in production, widespread unemployment, falling prices, low wages, and falling profits. A well-known example is the general depression of economic activity that began with the Wall Street crash in 1929, which spread to other countries and lasted until the Second World War. Even in the case of the current pandemic, in all countries the emergence of the pandemic has been accompanied by the emergence of an economic crisis. There is a question that has sparked many political discussions and controversies with inextricable ethical implications: Which is more critical in this pandemic era, the health crisis or the economic crisis? In this time of global crisis due to the COVID-19 pandemic we, our loved ones, and all people suffered tremendous disruptions, pains, and fears. The physical suffering, the isolation, and the compelling demands to care for others in new ways have been real and deep. While we take time to care for each other, ourselves, and others, as scholars it is also our responsibility in a time of crisis to interpret the changing world and develop appropriate research agendas. We know that crises are not new. Even pandemics are not new. Nevertheless, we are living in a new era. Crisis theorists explain why our crises are becoming more frequent and larger in scale (Topper & Lagadec, 2013). The world is becoming increasingly interconnected due to advanced technologies for the movement of information, people, and goods. This interconnectivity changes the nature of any potential crisis from a chain
Crisis and change
"Crisis" is a word that has been used (and possibly abused) a lot in recent years. It may indicate a difficult moment for an individual, a strong feeling of being upset, or a disturbance in a person's existence. Social crises are different. Across the world, the crisis from the COVID-19 pandemic is inescapable. It strongly upsets us, it disturbs our existence, and unites or divides us in different ways.
In the language of economics, especially in classical economics, "crisis" specifically designates a period of economic depression, namely, the phase of a business cycle that is the consequence of generalised overproduction, the basic characteristics of which are a rapid transition from prosperity to depression, a fall in production, widespread unemployment, falling prices, low wages, and falling profits. A well-known example is the general depression of economic activity that began with the Wall Street crash in 1929, which spread to other countries and lasted until the Second World War. Even in the case of the current pandemic, in all countries the emergence of the pandemic has been accompanied by the emergence of an economic crisis. There is a question that has sparked many political discussions and controversies with inextricable ethical implications: Which is more critical in this pandemic era, the health crisis or the economic crisis?
In this time of global crisis due to the COVID-19 pandemic we, our loved ones, and all people suffered tremendous disruptions, pains, and fears. The physical suffering, the isolation, and the compelling demands to care for others in new ways have been real and deep. While we take time to care for each other, ourselves, and others, as scholars it is also our responsibility in a time of crisis to interpret the changing world and develop appropriate research agendas.
We know that crises are not new. Even pandemics are not new. Nevertheless, we are living in a new era. Crisis theorists explain why our crises are becoming more frequent and larger in scale (Topper & Lagadec, 2013). The world is becoming increasingly interconnected due to advanced technologies for the movement of information, people, and goods. This interconnectivity changes the nature of any potential crisis from a chain of events in a relatively contained ecosystem or society to having global reach. This interconnectivity also accelerates the chains of events. These days, crises can ripple through the globe virtually instantaneously. Stock markets respond immediately as they are globally interdependent. Ideas, concepts, and fears spread as quickly through social media. These ripple effects can impact individual behaviour and community action, from local to national communities.
Thus, crisis theorists have been warning us (humanity) for decades about increasingly large and frequent crises. Such a warning had reached the mathematics education community before: the 2017 Mathematics Education and Society conference's theme was "Mathematics Education and Life at Times of Crisis". Participants in the conference envisioned social crises and the climate crisis. Now we know that these crises connect with the pandemic-see, for example, Ezeibe et al. (2020) on social crises and Banerjee (2020) on the climate crisis in relation to the pandemic. But such warnings extend much further back. We look back to 1973, when Rittel and Webber (1973) theorised problems that are addressed by natural scientists and social planners and coined the concept of a "wicked problem" to describe the problems they were addressing as inescapable highstakes human problems that cannot be clearly defined due to their complex interconnectedness, and that have no clear solution nor method for testing solutions. Steffensen (2017) has pointed to the way climate change presents such problems and identified the importance of such problems to mathematics education. In 1992, Beck developed the concept of "risk society" to theorise the shift in risks humans face from natural disasters to disasters caused by humans. We know that natural risks are increasingly amplified and even caused by human activity (IPCC Working Group 1, 2021).
Topper and Lagadec (2013) drew on Mandelbrot's application of fractal geometry to volatile financial markets to underpin their proposal for how to respond to an increasingly unpredictable world. Using this approach in the current crisis, we see that we have to look through the massive changes and upheaval in the pandemic to examine what lies beneath-the structures that have not changed. When people experience crises, we understandably focus on the significant upheavals that dominate our attention, but Mandelbrot's approach directs us to zoom out and see what we experience as change as representative of something greater and relatively invariant. This approach can help us understand the roots of a crisis. This approach can help us manage the big questions about crises that we face as a field of mathematics education researchers.
Given that we can expect crises more frequently and larger in scale, our research needs to extend beyond this particular crisis to prepare for future crises. That is the goal of this special issue on mathematics education in a time of crisis in this context of a viral pandemic.
To clarify terminology, the virus is called SARS-CoV-2. The disease caused by the virus is called COVID-19. The pandemic is not the virus or the disease alone; it is the social manifestation of the spread of the virus through our interconnected global social systems. The pandemic is both a result of social and environmental realities and a driver of change in these realities. This special issue is focused on the impact of the pandemic on mathematics education in three contexts: • the teaching and learning of mathematics (in elementary, secondary, and tertiary contexts), • mathematics education as a research field, and • mathematics in society.
We distributed the call for papers for this special issue on April 7, 2020. In the call, we said that authors "may wish to position this particular crisis in the context of other interrelated crises that grip our world, such as, climate change, human migration, the rise of xenophobic nationalism, and growing inequalities". We invited both essays and empirical studies. We received 161 full manuscripts with authors from 36 countries 1 including sixteen countries in Europe, eight in continental Asia, four in South America, three in North America, three in the Pacific Islands (including Australia and New Zealand), and two in Africa.
From the wealth of strong papers we received, it was difficult to choose which papers to move forward in our process. We expected a single special issue but now have a double issue with 20 papers. Many of the papers that we did not move forward for the special issue have been submitted and published elsewhere. We aimed for a set of papers that represented diverse approaches to the crisis and geographic distribution. In relation to geographic distribution, we were interested in both the location of the authors and the contexts of the research. We knew that the crisis was impacting different regions differently, so we wanted to ensure that the special issue would represent a range of perspectives across contexts.
In this introduction to the special issue, we take an opportunity to reflect on crisis in general and the pandemic in particular from the perspective of mathematics education researchers, informed by the extensive reading and interaction we have engaged in as special issue editors. In our "narrative construction of reality" (Bruner, 1991), we organise our reflections diachronically, adopting a human dimension of time as "time whose significance is given by the meaning assigned to events within its compass" (p. 6). We therefore first reflect on the moment of crisis, which is our present. Then we will see this crisis as an opportunity to look back and find connections from the past, which may nurture our reflection on the present. Finally, we look forward from this crisis to think about future research and action.
The moment of crisis
At the start of the pandemic at the beginning of 2020, those who followed world news would see similar messages repeat and replicate in different parts of the world. Many countries would first hear about the virus in other countries. Life seemed "normal" until we started to hear about cases emerging in neighbouring regions, countries, and then in our own area. We thought none of this would affect us or our lives until suddenly the government announced that we are in lockdown and people started to make panic purchases that left supermarket shelves empty of daily essentials. Face masks and hand sanitisers became essential items in many countries. We had to navigate and learn the new restriction rules that government officials announced and implemented overnight. Universities and schools made announcements about switching to online teaching or remote learning, and suddenly everyone had to learn to use new online platforms. There was a lot of resistance, frustration, and uncertainty among teachers, students, and parents (Matthews et al., 2021, in this special issue). Online education resources that had been neglected by many for years became the essential teaching and learning guide (Borba et al., 2010;Salmon, 2011). Educators had to quickly learn how to teach and engage students in an online environment (Albano et al., 2021, in this special issue). Activities that were designed to be carried out face-to-face had to be re-designed using breakout rooms, digital whiteboards, or screenshare.
Many researchers had to pause their research projects as schools shut down and the classroom environment drastically changed due to the pandemic. At the same time, some would see this as a unique opportunity to understand how human society was responding to challenges in a crisis, but our research activities were also being challenged because of the crisis. Should our responsibility be to document, describe, or explain this crisis (see, e.g., Little, 1991)? Or should we try to theorise or predict how this crisis is going to play out and end? How do we balance our ethical responsibility to not put a burden on our participants when they are experiencing high levels of stress and uncertainty but still fulfil our research roles? Many researchers had to navigate and negotiate these questions with ourselves, our colleagues, and our participants when deciding on whether to pursue research during this time.
As time went on, people started to adjust to the "new normal". Lockdowns became a repeat occurrence in different places. While initially the international community were being physically separated due to travel restrictions, our life appeared to be more closely connected by this pandemic as we realised that we shared many similar experiences in our local context. Online meetings and conferences became commonplace. People started to play around with online backgrounds and filters to decorate our monotonous work-fromhome life. In between lockdowns, some started to say that they enjoyed working from home, not having to commute to workplaces.
Questions about this special issue
As we, the special issue editors, started to read queries from scholars interested in contributing to this special issue, we could see that the crisis had made it difficult in different parts of the world to carry out empirical research in schools and with families. Some researchers resorted to investigating their own teaching practice and their own response to the crisis (e.g., Krause et al., 2021;Maciejewski, 2021; both in this special issue). Others turned to media and textual analysis and showed fascinating and critical demands on statistical literacy and visual representation during this pandemic (e.g., Rubel et al., 2021;Sousa Silva et al., 2021; both in this special issue). The mathematics curriculum was under scrutiny in terms of preparing citizens to respond to the pandemic (e.g., Zavaleta, 2021, in this special issue). Government response to the crisis also became a telling lens in revealing how citizens trust or distrust authority in a time of crisis (e.g., Allen & Trinick, 2021; in this special issue).
With the vast number of manuscripts submitted to this special issue, the review process was also being challenged. Many potential reviewers found it difficult to commit to reviewing papers. For those who responded, a few raised the question of whether it could be too early to start reflecting on the crisis situation, queried the contribution of papers that focused on documenting the crisis, or raised concerns about prospective non-empirical papers.
Some colleagues raised the concern that those who managed to submit a paper to the special issue were privileged individuals who had time to write at a time of crisis. They noted historic gender inequities in who would take on the sudden demands in home care of children (Flaherty, 2020;Vomvoridi-Ivanovic & Ward, 2021) and noted the advantages that wealth, and access to research and secure health services confer to the few. We sympathised with these questions and weighed the benefits of research responding to one of the most significant events in education against the worries about the first voices in this research being dominated by voices of privileged demographics. We looked at the authors of the submitted papers by gender and found women barely outnumbered men based on our experiences with gendered names. 2 We considered the regions from which we received papers and found more diverse representation than historical representation as identified by Mesa and Wagner (2019). In our selection of which papers to invite to move forward in the process, we aimed for representation from diverse regions, considering both the region of the author and the region focused on in the research. Because we received so many strong papers, more than we could accommodate in the special issue, we turned away some papers on the basis of regional representation, and encouraged the authors to submit their work as regular papers in Educational Studies in Mathematics or in other venues. We similarly turned away papers that were not sufficiently focused on the pandemic or did not deal with aspects of it specific to mathematics education.
A related question that emerged in discussing this special issue is this: When is the appropriate time to do research on an emergent phenomenon? We see that there is value in documenting what is going on with the pandemic situation in various contexts. The documentation will certainly allow for more informed reflection on the era in future scholarship. Additionally, the scholarship already prompts reflection. Indeed, all the papers in this special issue have an element of reflection in them. The prompt to share reflections and practices when we are still facing a terrible problem is not only a challenge, but it is also a way to act as a community, to keep us socially close, in a time when we are prevented from meeting physically. Possibly, this also explains the reason why the special issue call received so many papers, even in such a short time. Krause et al. (2021, in this special issue) witnessed a tension between a certain "situatedness" of the current reality and the "generality" (what goes beyond the pandemic situation). The relatively short deadlines we asked for in the call for papers enabled authors to seize fully the situatedness, in a sense contributing to it. On the other hand, we asked for contributions that were able to go beyond the current situation, towards a more general view of crisis. Skovsmose's (2021, in this special issue) paper took on that challenge directly.
Responsibilities of mathematics educators in crisis
What are our responsibilities as mathematics educators in such a time of crisis? We recognise that this is a difficult question because of the many competing demands we face. A mathematics educator in a time of big or small crisis would face local, immediate challenges. At the same time, there is also the possibility to step back to ask big questions. Many would find it important to devote time to love the people in their families and communities at a time of crisis. Whether we are mathematics teachers or teacher educators, we would find it important in our teaching roles to help our students achieve their immediate needs, even if those needs relate to problematic systems. For example, students may have a "need" to learn how to prove trigonometric identities in order to pass a course and qualify for a biology programme that sets them up for their career goal. Even if we doubt the value of trigonometric identities in school curriculum (and perhaps especially in a pandemic) or question the focus on these identities for biologists, we may feel obliged to think that it is still important to support the student's success in the systems that are currently in force. In our research, we may find it important to study the local, immediate needs but also look at the big questions and examine the structure beneath the crisis. What is invariant? And where that structure is unjust, how can it be changed? And it could be equally important to reflect on how all these different levels of action speak to each other: family/community, students, practice-level research, system-level research.
In the papers we received for this special issue, we saw scholars looking for accessible data that would help the field understand the pandemic. This was not easy. The pandemic makes it hard to start new studies involving participants. And we know that studies of social structures really need researchers to listen to the people most impacted by the structures. Over time, we expect that we will see more research that uses data that are harder to access, with deep engagement with the people most impacted by the crises.
We know some mathematics teachers and mathematics teacher educators who have seen the pandemic as a prompt to re-examine their teaching (e.g., Brunetto et al., 2021, and, in this special issue Albano et al., 2021;Gosztonyi, 2021;Maciejewski, 2021). One may wonder how students would accept a focus on the usual skills and knowledge when they are bombarded daily with media coverage and government releases about the pandemic situation. One should expect that this supposedly powerful mathematics would be used in class to address the most obvious disruption of our era. We would expect a call from students and from society, echoing the decades of injunction from Ubiritan D'Ambrosio (1994,2007,2015) and others (e.g., Mendick, 2017), to examine the complicity of mathematics in the structures that allowed the virus to thrive, in addition to the possibilities for using mathematics for justice in these times. However, speaking from our own experiences, we see students, teachers, families, and politicians focused on the compelling, immediate, local needs. Many are distracted from asking the deep questions, distracted by our social systems and the immediate needs of disrupted networks.
One thing that is immediately clear in pandemic teaching is the inequities, including: • unequal access to internet • unequal access to computers and tablets • unequal availability of space at home for uninterrupted time • unequal competing demands for time.
Even while teachers and school systems work very hard at combatting them, these inequities persist. Again, this is another example of something that is invariant in this time of massive change. For example, we have research that shows inequities persisting through the pandemic: rural families in Turkey have greater challenges than others (Yılmaz et al., 2021, in this special issue), the needs of Indigenous students are ignored (Allen & Trinick, 2021, in this special issue), students of colour are marginalised (Matthews et al., 2021, in this special issue), and students who have recently migrated are ignored. The effects of poverty are magnified.
Crisis as an opportunity to look back
For many of us, the pandemic situation appeared as a totally new phenomenon, in front of which we felt completely unprepared. But looking at human history, we may indeed recognise that this phenomenon is not new, rather a sort of periodic feature. We have to recognise also that the memory of previous pandemics is not felt in the same way in the world: it is indeed stronger, and still felt as a trauma, for many Indigenous marginalised groups around the world-for example, that of Māori in New Zealand as documented by Allen and Trinick (2021,
in this special issue).
On the other hand, referring to past epidemics may be a way to gain tools for reflection on the present, without feeling the psychological pressure that such a critical situation may place on us. This is the choice made by Gosztonyi (2021, in this special issue) in her experience with a group of secondary teachers deepening the scientific debate arisen in the XVIII century between Bernoulli and d'Alembert about the smallpox epidemic and the risks and advantages of inoculation (a primitive antecedent of vaccination). In her perspective, historical texts are proposed as transitional objects in the interaction with teachers, indirectly stimulating discussions about the problems with which they are concerned.
The past may emerge in our reflection also in sharp contrast with the present. In one of the outcomes of the pandemic crisis, schools closed and teachers had to face a sudden, unexpected change from face-to-face to distance teaching: Albano et al. (2021, in this special issue) report the subjective point of view of Italian teachers by means of logbooks and show that two temporal periods may be identified, namely, the period of bewilderment and the period for reflection and elaboration. Such a reflection/elaboration is prompted by the current situation through a contrast with the past situation. This contrast reveals key elements of the teaching-learning system in which teachers were embedded before the disturbance. Imagining and advocating totally or partially different educational/school settingspossible worlds in Bruner's words (1986), as Albano et al. pointed out-realises an implied critique of the existing/past world. But the pandemic experience has taught us that the past world rapidly evolved into the existing world, which is, in turn, rapidly evolving into a past world. This evolution leaves us with a minimal sense of what is indeed the actual world and with unstable visions for the future.
A historicized approach is also helpful to understand the present. Ziols and Kirchgasler (2021, in this special issue) explored how distinctions of health and pathology have been dynamically interwoven with mathematics education for two centuries. In this way, they open a dialogue about implications of these historical traces for issues of injustice today. This kind of vision aligns with Mandelbrot's approach to addressing crisis: to look through or past the shocking disruptions to identify what is invariant (Topper & Lagadec, 2013).
Adopting a Bourdieusian approach, Allen and Trinick (2021, in this special issue) framed the difference between Māori and English-language schools' capacity to maintain continuity of mathematics instruction while schools were closed due to the COVID-19 pandemic, as linked to the limited bank of digital mathematics resources in the Māori language. They interpreted this disparity as the outcome of socially determined differences in cultural capital, which are heritages of the pre-pandemic past. Borba (2021) saw the pandemic as a prompt to reflect on mathematics education as a research field, particularly in the growing awareness of the way humans and their media depend on each other for mathematics learning and teaching. Others in the special issue suggested a new research agenda in their discussions. We identify an important opportunity and need to reflect on the field in the ways identified above, to understand the past and present manifestations of research in the field.
We note that the tremendous response to this special issue demonstrates the resiliency of the field and showcased many well-developed research methodologies and collaboration networks. We find the results of the survey done by Bakker et al. (2021) to be of interest. Just before the pandemic struck, they surveyed mathematics educators around the world to ask what themes research in mathematics education should focus on in the coming decade. They asked respondents a year later (in November 2020) if the pandemic changed their views on the themes. Nine of their respondents identified no changes in their views, eight identified clearly different views, and 45 saw the importance of their initial themes reinforced. One way to see these results is as evidence that the field already understood the important issues that the pandemic revealed. However, we should be careful about this conclusion because we know for ourselves that we use the tools we already know to interpret new phenomena. While we see authors in this special issue using theories and methodologies that fit their previous research approaches, we also see changes. Many researchers are becoming increasingly interested in and aware of the work in our field on online teaching media: for this special issue we received many manuscripts from authors studying the move to online teaching who, as far as we know, have not addressed this teaching medium before. Nevertheless, the fact that the field has people specialised in online teaching research (e.g., Borba et al., 2010), for example, for the past thirty years with relatively few people needing to refer to until now, shows that something is working in, shall we say, the ecology of the educational research field that allows us to be responsive to a diverse range of situations.
Crisis as an opportunity to look forward
Crises also prompt us to look forward to projected and potential futures. The Secretary General of the United Nations, in his July 2020 lecture (the Nelson Mandela Lecture), noted that "The pandemic has demonstrated the fragility of our world. It has laid bare risks we have ignored for decades: inadequate health systems; gaps in social protection; structural inequalities; environmental degradation; the climate crisis" (Guterres, 2020). He added that "The virus poses the greatest risk to the most vulnerable: those living in poverty, older people, and people with disabilities and pre-existing conditions." He concluded that "COVID-19 is a human tragedy. But it has also created a generational opportunity. An opportunity to build back a more equal and sustainable world." Others have made similar observations. For example, novelist Arundahti Roy (2020) has documented the pandemic in India and concluded: Historically, pandemics have forced humans to break with the past and imagine their world anew. This one is no different. It is a portal, a gateway between one world and the next. We can choose to walk through it, dragging the carcasses of our prejudice and hatred, our avarice, our data banks and dead ideas, our dead rivers and smoky skies behind us. Or we can walk through lightly, with little luggage, ready to imagine another world. And ready to fight for it.
People who experienced the world before the pandemic as treacherous and broken may wish for a transformed world. As researchers, many of us may have had relatively satisfying experiences before the pandemic, and thus may not wish for world transformation. We should take a moment of crisis as a time to carefully imagine the future.
We should ask, what warrants change and what should be maintained? These questions should be applied both to mathematics teaching practices and to research approaches. And we should inform our considerations with careful attention to the perspectives of a wide range of stakeholders in mathematics education-students, teachers, and others, all from a wide cross section of contexts. The question boils down to a moral question: whose needs should be foregrounded? This question and other related questions were addressed by Adler and Lerman (2003). We suggest that the pandemic compels new consideration of the ethics of research in mathematics education.
New visions for mathematics teaching
When we look forward as researchers, we should question both curriculum and pedagogies. The question of curriculum is widely discussed in this special issue (Kollosche & Meyerhöfer, 2021;Rotem & Ayalon, 2021;Sánchez Aguilar and Castañeda, 2021;Sousa Silva, et al., 2021;Zavaleta, 2021, all in this special issue). It is indeed helpful to look at the mathematics that has appeared publicly in the pandemic to inform the mathematics that should be taught because citizens should be equipped to understand the mathematics they will experience in the world. Kwon et al., (2021, in this special issue) investigated the use of graphs in Korea's news media during the COVID-19 outbreak, providing implications for future teaching and learning of graph literacy in school mathematics courses. Heyd-Metzuyanim et al. (2021, in this special issue), after examining the Israeli public's understanding of mathematical notions that are required for understanding the pandemic and predicting its spread, demonstrate that mathematical identity may significantly hinder adults' engagement with such information. Kollosche and Meyerhöfer (2021, in this special issue) took a more critical stance and referred to different discussions in German mass media on the pandemic policy in the SARS-CoV-2 crisis in 2020 to argue that the critical evaluation of experts' use of mathematics by laypersons is not possible in all relevant cases, and discuss possible implications of this result.
We note that when people develop media to inform the public, they make their decisions about what mathematics to use and how to represent it based on the mathematics they expect the public to understand. Thus, we see a circularity: curriculum would be designed on the basis of the mathematics that people are applying in their lives, and such mathematics is influenced by the mathematics learned at school, hence influenced or even determined by the curriculum itself. A time of crisis may help disturb such circularity, identifying the mathematics that are needed in curriculum.
With this new vision, mathematics educators and mathematicians will need to identify mathematics that would be needed for interpreting crises so that this mathematics could inform the public. In addition to the widely circulated mathematics, there is important mathematics being done to address significant needs during the pandemic. Some of this mathematics may be part of the answer to the above question: what mathematics should be taught? Maciejewski (2021, in this special issue) contributed an account of his struggle with this question and reported on his approaches to developing prospective mathematics teachers' understanding of exponential growth, and connectivity. Also, this question has been discussed less formally, for example, in the closing plenary panel of the International Congress on Mathematical Education in 2021, panellists answered this question. The question of how to address this mathematics requires much more thought. And more research, we suggest. Even so, the question of what mathematics should be taught surely needs attention given the new realities exposed by the pandemic. The basic question underlying any evaluation of mathematics curriculum is this: What should every citizen know? Surely the answer is different than it was thirty years ago, considering the massive changes in interconnectivity in our world and the related growth of planet-wide crises. To answer this question, we need to identify the human and social problems of our time. Here we identify some questions that ask about the root factors in this pandemic; we know there are other questions like these: • What mathematics is necessary to understand interconnectivity? • What mathematics is needed to understand climate? • What mathematics is needed to understand biodiversity? • What mathematics is needed to understand wealth distribution?
It is not enough to identify pressing problems. They need to be prioritised. The pandemic pushes us to change priorities because we see how fragile societies are. Nevertheless, priority-setting remains an important function. Underneath the question of priority-setting, we will find assumptions about whose interests are most important. There are others who can help us answer these moral questions, but as researchers we have to make these determinations ourselves as we make decisions about where to devote our own resources in research. We can do our own evaluations of priorities, or we can make decisions about whose guidance we follow in such priority-setting.
Again, as it becomes clear in Maciejewski's (2021, in this special issue) account of teaching mathematics relevant to the pandemic, we see the importance of questions about how it should be taught. Questions of how to teach mathematics are also now impacted with the field's new understanding of different media for teaching mathematics. Some of the studies in this special issue address the sudden disruptions in teaching media (Albano et al., 2021;Borba, 2021; both in this special issue), but the studies also identify mathematics teachers' learning about their teaching and how to use new media. Drijvers et al., (2021, in this special issue) found that teachers in Flanders, Germany, and the Netherlands reported a remarkable increase in their confidence in using digital technologies during the lockdown. We expect that this learning will be applied to emergent practices post-pandemic. Some early results concerning the online education tool of the "micro-classes" in China are given by Xie et al. (2021, in this special issue).
As we reflect on the responsibilities of mathematics educators within the pandemic, we see that the same questions apply to future research. We are reminded that crisis is not new, and thus, we can look to pre-pandemic scholarship for some guidance on future research agenda in relation to crisis. In particular we point to Vithal and Valero's (2003) consideration of mathematics education research in social and political crisis and to a symposium convened by Parra et al. (2017) which prompted conversation about whose perspectives should be foregrounded in research in crisis contexts.
Reflection on the role of mathematics
Finally, we see that the role of mathematics is itself part of the crisis. Mathematics has underpinned technologies that have pushed species into new patterns of behaviour and that have made the world more connected. Thus, mathematics is underneath the emergence of the coronavirus that drove this pandemic and underneath the social systems that paved the way for its rapid spread. Ubiritan D'Ambrosio implored mathematics educators for decades to examine the role of mathematics in shaping the world (e.g., D'Ambrosio, 1994) and to advocate for such examination in school mathematics (e.g., D'Ambrosio, 2007D'Ambrosio, , 2015. Meanwhile, the way mathematics has been taught has influenced how people have understood the crisis and thus affected their actions within the crisis, again contributing to the particular rates of spread. For example, mathematics education practices have impacted the ability and willingness of citizens to read and trust statistics and modelling, which impacts both their decisions in the pandemic and the rise of certain political voices. As our field grapples with the new world, we are compelled to consider our complicity in the problems we see before us. Nevertheless, this critical reflection should not undermine our confidence that mathematics and good mathematics teaching can make important contributions to society. Rather, we need to be sure to include self-examination in our visions for the future.
A final word
Etymologically, the term crisis comes from the Greek krisis, which refers to choice, decisions, and decisive phases of an illness. It relates to the word krino, which means to distinguish. If we look at the etymology, it is always a time of crisis, because we are always being called upon to choose among different alternatives; even doing nothing to change a situation is an alternative and a choice. Taking different perspectives and addressing the most dynamic sense of the crisis, the authors of this special issue proactively seized the chance to write in the momentum of the pandemic crisis to offer alternatives to the mathematics education field. In such a perspective, it is our wish that the papers in this special issue will constitute a little light for future generations. | 2021-10-25T15:07:38.319Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "5cb647c09cf3af6e0ae7d693486402e047088063",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10649-021-10113-5.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "a49eef1a0fc9b5d6741967dfae108a5e0f03b5fe",
"s2fieldsofstudy": [
"Mathematics",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5889877 | pes2o/s2orc | v3-fos-license | Ten-year survival and prognostic markers in one thousand patients with advanced heart failure . A single-centre analysis
Aim. Patients with advanced heart failure (HF) represent a pool of candidates for heart transplantation and long-term mechanical circulatory support devices. The aim of our study was to determine simple and reliable markers of one-year mortality for selection of the most suitable patients for heart replacement therapy. Methods and Results. One thousand consecutive patients with HF (mean age 49 ± 10.9 years; 86.8% males) referred to a single tertiary centre from January 1998 to January 2010 in order to assess the indication for heart transplantation were enrolled. Kaplan-Meier survival analysis was performed. Independent mortality predictors were established using logistic regression analysis. The mean follow-up was 4.3 ± 2.7 years (range 1–12 years). Cumulative survival was as follows: 1-year survival 83%, 3-year 63%, 5-year 50%, 7-year 39%, and 10-year 23%. Independent predictors of 1-year mortality included coronary artery disease, left ventricular diastolic diameter >79 mm, plasma sodium <135 mmol/L, the need for intravenous treatment at hospital admission (diuretics and/or inotropes), and furosemide dose at discharge >240 mg/day. Conclusions. Short-term prognosis of HF patient can be estimated based on simple parameters. Patients with signs of poor prognosis should be referred to tertiary centres to be considered for heart replacement therapy.
INTRODUCTION
Despite the undisputed progress in treatment, the long-term prognosis of patients with heart failure (HF) is unfavourable 1,2 .Patients with advanced HF represent a pool of candidates for heart replacement therapy.The number of heart transplantations (HTx) has long been stagnating due to the lack of donor organs 3 .In contrast, we are experiencing a significant increase in the number of implantations of mechanical circulatory support systems (MCS) (ref. 4).It can be assumed that the use of MCS will expand further, in particular in destination therapy.From this perspective, it is of key importance to determine the prognosis of patients with advanced HF. Heart transplantation and MCS implantation are associated with non-negligible periprocedural risk, which has to be balanced against significant life extension and improvement in life quality.Currently no reliable prognostic markers are available for the selection of the most suitable heart transplant candidates.
We analysed the characteristics of patients with advanced HF, hospitalised at a single tertiary centre.Their long-term survival was evaluated.In addition, we identified independent 1-year mortality predictors.Since most patients with advanced HF are managed on an out-patient basis, we established independent 1-year mortality indicators that can be detected during an out-patient examination.
PATIENTS AND METHODS
Consecutive patients with HF referred by cardiologists to a single tertiary centre from January 1998 to January 2010 in order to assess the indication for heart transplantation were enrolled.Patients hospitalised for other reasons were not included.
During this period, 1,000 HF patients were hospitalised and monitored.There were more men (86.8%) than women (13.2%).Their mean age was 49.0 ± 10.9 years.The characteristics of patients are summarized in Table 1.
All analysed parameters were measured during the first hospitalisation at the centre.Oral medication on discharge was evaluated.Mortality data was received from the National Health Information Centre.The day of an elective HTx was taken as the end of follow-up.HTx in urgent cases or MCS implantation was regarded as equivalent to death.MCS were implanted in our group as a bridge to transplant or to candidacy and the mean INTERMACS class at implantation was 2.1 ± 0.6.
Statistical analysis
Statistical analysis was performed using the SPSS statistical software, version 16.0.Nominal variables were characterised by arithmetic means and standard deviations.The normality of the distribution of nominal variables was tested with the Kolmogorov-Smirnov or Shapiro-Wilk tests, as appropriate.For variables with normal distribution, the parametric Student t test was used, and for variables with non-normal distribution we used the non-parametric Mann-Whitney test.
Patient survival was evaluated using the method of survival analysis, and expressed by means of Kaplan-Meier curves.In order to determine the mortality predictors, we used stepwise logistic regression analysis (backward method).Variables examined in fewer than 80% of patients and considered to have the potential to predict mortality were tested using independent mortality predictors from the previous model.The significance level for all the tests used was P < 0.05.
Survival Rates
The mean follow-up was 4.3 ± 2.7 years (range 1-12 years).Four hundred and sixty-nine (469) patients died during the follow-up period.One hundred and sixty-seven (167) patients underwent heart transplantation.The oneyear survival rate was 83%, the 3-year survival rate 63%, the 5-year survival rate 50%, the 7-year survival rate 39%, and the 10-year survival rate 23%.Survival rates are illustrated using the Kaplan-Meier curve as seen in Fig. 1.
One-Year Mortality Predictors
In order to determine independent predictors of 1-year mortality, we included 55 parameters in the multivariate logistic regression analysis.Independent predictors of one year mortality were coronary artery disease, left ventricular end-diastolic diameter (LVEDD) >79 mm, natraemia <135 mmol/L, need for intravenous therapy at hospital admission (diuretics and/or inotropes), and a daily dose of furosemide at discharge >240 mg (Table 2).
The remaining parameters examined were not significant in the multivariate analysis, although systolic blood pressure <100 mm Hg almost reached statistical significance (P = 0.08).
Some parameters of potential prognostic significance were examined in fewer patients; these were subsequently tested using the five independent predictors from the introductory model.In this manner, we tested haemodynamic parameters examined in 26.1% of patients (right atrial pressure, pulmonary artery pressures, pulmonary capillary wedge pressure, cardiac index), NT-proBNP (examined in 20.4% of patients), pVO 2 (obtained during bicycle spiroergometry in 27.3% of patients), and the 6-min walking test (performed in 75.2% of patients).In this extended analysis, the independent predictors of one-year mortality were also NTproBNP >2,297 pg/mL, 6-min walking distance <375 metres, systolic pulmonary artery pressure >60 mm Hg, and pulmonary capillary wedge pressure >27 mm Hg (Table 3).
One-year mortality predictors available at out-patient examination
Subsequently we focused on establishing prognostic indicators available at out-patient cardiology examination.Fifteen parameters that are generally available during a routine out-patient examination were included in the multivariate logistic regression analysis (Table 4).It was found that, of the parameters defined in this way, independent indicators of 1-year mortality include NYHA functional class III/IV, systolic BP <100 mm Hg, coronary artery disease, LVEDD >79 mm, a daily furosemide dose >240 mg, and an absence of beta-blocker (Table 5).
Patient population
Patients with advanced HF constitute a select group of patients who are considered for heart transplantation.Heart transplantation is age-limited, and this is the main reason that the mean age of these patients is significantly lower than the average age of patients with HF in the general population 1,5 .At the same time, patients referred for HTx evaluation are free of significant comorbidities.
The most frequent cause of HF was DCM (57%).The occurrence of DCM was more than double the occurrence of coronary artery disease (27%).Other authors recorded higher incidences of coronary artery disease in a comparable population of patients 6,7 .
The incidence of comorbidities was relatively low.This may be because the patients were relatively young and patients with multiple comorbidities are not considered for HTx.
Diabetes mellitus with organ complications, in particular with significant nephropathy and vascular complications, is a contraindication for HTx.For this reason, the attending cardiologists often do not refer patients with significant diabetic complications to the centre for con-sideration for HTx.It is assumed that the thorough evaluation of renal function in HTx candidates also contributed to the relatively low occurrence of renal insufficiency in our patients.Impairment of renal functions during an episode of cardiac decompensation or during aggressive treatment with diuretics, in particular if renal function was subsequently restored, was not considered renal insufficiency.Therefore, the prevalence of renal insufficiency in our group may be regarded as occurrence of direct renal damage, e.g. in the case of diabetic nephropathy or nephrosclerosis.This is demonstrated by the difference between the occurrence of renal insufficiency (7.1%) and hypercreatinaemia (30.6%) in our patients.Data from US transplant centres indicate that 21% of HTx candidates have creatinaemia >132 umol/L (ref. 7).][10] ).
There is no doubt that the mortality of HF patients in the era of modern treatment has decreased, but still remains high.In the Framingham study (1948-1988), one-year and five-year mortality in cases of HF was 43% and 75% in men, and 36% and 62% in women 11 .Stewart et al. state that one-year and four-year mortality in HF patients in the period before modern treatment was 40% and 65%, resp., which was worse than for most malignancies 12 .In comparison to the non-selected population of patients with advanced HF, the HTx candidates are younger and extracardiac comorbidities are rarer and less severe.Lietz and Miller analysed mortality on HTx waiting lists in individual periods, and concluded that mortality had decreased.In 2000-2005, compared to 1990-1994, the one-year mortality in outpatient HTx candidates on waiting lists dropped from 18.2% to 10.6%.The one-year mortality of urgent HTx candidates also decreased significantly, from 50.5% to 31% (ref. 7).Due to the characteristics of our group, the mortality in our group of patients corresponds to that of patients with advanced HF with severe left ventricular dysfunction, only minimally modified by comorbidities.In this type of patient population, only 7% of patients die from non-cardiac causes 13 .
The 1-year mortality of HF patients hospitalised at our centre from 1998 to 2010 was 17%.One-year mortality following heart transplantation was 8%, and 28% following left ventricular mechanic support implantation (unpublished data).Therefore, the interventions are intended for patients in whom the risk level is balanced by appropriate benefit.
One-Year Mortality Predictors
Independent predictors of one-year mortality were coronary artery disease, left ventricular end-diastolic diameter >79 mm, natraemia <135 mmol/L , a daily furosemide dose at discharge >240 mg, and the need for intravenous diuretic and/or inotrope therapy following admission.According to the statistical results, it seems that the most powerful indicators of 1-year mortality in our group are hyponatraemia and intravenous therapy.Hyponatraemia is generally considered a predictor of non-favourable prognosis in HF patients [14][15][16][17][18] .Intravenous therapy comprised the administration of diuretic agents and/or inotropy.It was administered to patients with decompensated HF, or to those with signs of hypoperfusion.Significant left ventricular dilatation indicated an unfavourable development.Some authors mention the predictive value of end-diastolic 19 or end-systolic left ventricular dilatation 20,21 .In patients with advanced HF, it is usually necessary to administer furosemide to decrease the risk of repeated cardiac decompensation.Other predictors of one-year mortality, independent of those mentioned above, were NT-proBNP >2,297 pg/mL, 6-minute walking distance <375 metres, systolic pressure in pulmonary artery >60 mm Hg, and pulmonary artery wedge pressure >27 mm Hg.
In the course of the follow-up period, we proceeded from BNP measurements to NT-proBNP measurements.In our group, the median NT-proBNP was 2,297 pg/mL.Some authors mention the NTproBNP value of 1,000 pg/ mL as a limit which predicts an increased risk of adverse episodes, independent of other subjective or objective parameters 22 .In the population of HTx candidates, other authors mention NT-proBNP >4,302 pg/mL as a negative predictor 6 .Published data on the prognostic value of the 6-minute walking test in HF patients is inconsistent 23,24 .The walking test was not performed in patients in poor functional condition with an obvious intolerance of physical activity.The right-sided heart catheterization procedure is not a routine examination in patients with advanced HF, but is necessary prior to inclusion on the HTx waiting list.In general, systemic hypotension is considered a negative prognostic predictor in patients with HF 14,15 .In our group, systolic blood pressure <100 mm Hg showed a trend to increased one-year mortality, however this just failed to reach statistical significance (P = 0.08).Low blood pressure in this population of patients may be a sign of poor circulation.On the other hand, in stable patients, a well tolerated vasodilator treatment may contribute to a relatively lower blood pressure.
In order to increase the efficiency of the consultation process for external cardiologists with candidates for heart transplantation, we assessed 15 parameters that are easily available during a routine outpatient cardiology examination.Independent predictors of 1-year mortality that were close to significant included NYHA functional class III/IV, systolic BP <100 mm Hg, coronary artery disease, LVEDD >79 mm, a daily furosemide dose >240 mg, and the absence of a beta-blocker.Just as in the case of "the large" analysis, the 1-year mortality indicators were coronary artery disease, a daily furosemide dose >240 mg and LVEDD >79 mm, and also systolic BP <100 mm Hg, and absence of beta-blockers.
Limitations
Our study is a retrospective analysis.For this reason, several evaluated parameters were not available in all patients.We have been examining NT-proBNP as part of routine practice since 2007; before that we measured BNP.Since 2001, we have initiated a routine performance of the 6-min walking test in accordance with a standard protocol.In addition, some of the indication criteria changed during the follow-up period, e.g.those for the use of resynchronisation therapy.Internal manuals and procedures in our centre have also been modified and developed.Since 2007, we have experienced roughly a twofold increase in the number of heart transplantations, and implantations of mechanical support systems have started -which also influenced the HTx indication criteria.
CONCLUSION
The study describes the long-term follow-up of a selected group of patients with advanced heart failure at a single centre; their mortality is only minimally limited by paracardiac morbidity.The medium and long-term outlook for survival in this population of patients is poor.Due to the enormous increase in the number of MCS implantations and the stagnation in the number of heart transplants, the analysed patients are potential candidates for MCS implantation.Knowledge of easily available prognostic predictors may improve the identification of patients who may benefit from heart replacement therapy.
Table 3 .
Independent mortality indicators out of parameters examined in <80% of patients.
Table 4 .
Parameters available at outpatient examination and included in multivariate logistic regression as potential indicators of one-year mortality.
Table 5 .
Independent 1-year mortality indicators from parameters examined at outpatient examination. | 2018-04-03T03:29:13.149Z | 2016-06-24T00:00:00.000 | {
"year": 2016,
"sha1": "0e69705d076302ba1b3676c306f2aabeee5d8113",
"oa_license": "CCBY",
"oa_url": "http://biomed.papers.upol.cz/doi/10.5507/bp.2015.049.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0e69705d076302ba1b3676c306f2aabeee5d8113",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
84833364 | pes2o/s2orc | v3-fos-license | W I -F I M ICROCONTROLLER B ASED SMART M ENU
Nowadays the technology is embedded in every utilized application to increase the reliability and minimize the human errors caused by the conventional methods. The traditional methods usually been used in restaurant is by taking the customer’s orders and write it down on a piece of paper. Many ordering systems have been proposed in order to solve this issue. In this paper a newly proposed model called Smart Menu is designed basedon the Wi-Fi technology as the communication medium and Peripheral Interface Controller ( ARM Cortex – m7 processor ) as the hardware which implements faster ordering system. The aim for the smart menu model is to build and design both hardware and software for the ordering and delivering system at restaurants by using TFT LCD connected to the kitchen through WI-Fi technology. Result shows that the hardware and software are successfully functional and able to be used as a smart ordering system. The proposed model is able to handle the lack number of the workers, reduce the lateness and the error on ordering foods by the customers.
INTRODUCTION
A restaurant is a place which is used for providing food and drinks.But why do we prefer a restaurant over another one that serves the same food?.Is it because the waiter always forgets to remove the tomatoes from your sandwich?Or is it because the order takes forever to be ready?Having your detailed order submitted directly to the chef was a hard obstacle in the past.Today, you can have exactly what you want just in few clicks.The restaurant menu has developed from its modest beginnings on carte writing slates and picture less print to today's point by point, bright shows.So, the digital image for a new meal will attract more people than just its name placed within the list.People rather buying a product they already familiar with than a new one, but with having the item on display, it will attract them and let them enjoy trying the new experience.Both the owner and the customer will find it more convenient and hence values will be added from the good impression and the efficient administration and management of the entrepreneurs.
The proposed Smart Menu system uses a TFT LCD display module which is placed on each customer's table for them to make orders.Order will be made by selecting the items displayed on LCD.The order will be sent from the customer section using WI-FI communication, and automatically will be displayed on a screen at the kitchen.The bill will be displayed with table number at the manager/billing office.The proposed model will reduce the time spent on making the orders and paying the bills, whereby the cost and man power also can be reduced.As an extension for this model in the future, many features will be included as of booking table remotely by mobile phone.
LITERATURE REVIEW 2.1 E-TABLE: THE UNIQUE RESTAURANT INTERACTIVE ORDERING SYSTEM [1]
An ARM cortex-M3 is used for driving the Graphically Liquid Crystal Display (GLCD)screen which is used for displaying the menu.The order will be sent using ZIGBEE module along with the table number.A buzzer is then set to high in the kitchen's area.This system provides speech recognition method as an input as shown in fig. 1.
WIRELESS TWO-WAY RESTAURANT ORDERING SYSTEM VIA TOUCH SCREEN (WTROSTS) [2]
This system prioritizes the customers by "first come, first serve" and only one customer can connect to the server at a time.The cooking room has a push button for sending back an acknowledgment to the customer's table as an indication for placing the order as shown in fig 2 and fig
DESIGN AND DEVELOPMENT OF AN E-RESTAURANT USING RTOS PROGRAMMING TO ENHANCE THE QUALITY OF SERVICE [3]
A touch screen graphical liquid crystal display (GLCD) acts as a menu recommender, the ARM controller module is placed at the kitchen section and a PC with real time operating system (RTOS) at the billing counter.The ZIGBEE technology is used for communication as shown in fig 4.
DESIGN OF THE RESTAURANT SELF-ORDERING SYSTEM BASED ON ZIGBEE TECHNOLOGY. (USING ARM CORTEX MICROCONTROLLER AND COLOR GLCD) [4]
This system includes two ARM microcontrollers one for the customer (transmitter) for making orders and the other for the kitchen (receiver) to receive the order showing the information of the order and the number of table that requested the order using ZigBee as the way of communication between the transmitter and receiver as shown in fig 5 and 6
TOUCH SCREEN BASED ADVANCED MENU DISPLAY AND ORDERING SYSTEM FOR RESTAURANTS [5]
The aim from this system from the cost wise aspect is to save spending a lot of money for similar systems especially for restaurants that work on a small scale.A Graphically Liquid Crystal Display (GLCD) provided with a touch screen that will display the menu items of the restaurant and it connects with the receiver that contains Liquid Crystal Display (LCD) screen using RF module as shown in fig
SMART ORDERING SYSTEM VIA BLUETOOTH [6]
In this system the customer will make the order using keypad that will be placed on the table.There will be a code with menu where the customer has to type the code to order the item he / she wants and this code will be decrypted by the microcontroller.The order will be transmitted to the computer in the kitchen through Bluetooth for preparing the order and to the counter computer for the billing procedure as shown in fig 9
COMPARISON BETWEEN THE PROPOSED AND THE EXISTING SYSTEMS
Table 1 shows the features of the proposed system (Smart Menu) compared to the existing systems.
THE NEWLY PROPOSED SYSTEM (SMART MENU)
In this section the features ofthe proposed system (Smart Menu), the methodology andthe flow chart will be discussed .
MAIN FEATURES
The microprocessor is using a 32 bit ARM Cortex-M7, Adaptive Real-Time Accelerator (ART) , a 4KB data cache and a 4KB instruction cache, with 0-wait state execution from the external memories and the embedded flash memory.The frequency is above 216 MHz, with Digital Signal Progressing (DSP) instructions.
The Micro Controller (MC) programming tool with 5 V power supplied by a USB connector operates between 0 to 50 °C.It is used to interface the TFT LCD which has a resolution of 480xRGBx272, and as high as 16.7M colors.
METHODOLOGY
First the total idea is broken down into several technology points such as touch controller, power management, LCD, etc. Then each point will be searched separately till we find out the best suited components for each of the technology points.Designing the usage of the pin out of the microcontroller is then done using CubeMX software which is specialized in STM32F ARM Microcontrollers.Therefore a microcontroller (Arm Cortex-M7) is selected to drive the TFT LCD using Keil micro vision software.The TFT LCD is provided with a touch screen to display the menu of the restaurant explaining each item with its description.The customer can select the items he/she wants from the menu then the order is submitted.The data of order will be transmitted to the kitchen using Wi-Fi connection to save time and provide a good service for the customer.In addition, the system can be provided with external Secure Digital (SD) memory card for future updates.
SYSTEM DESCRIPTION
Fig. 10 shows the main block diagram of the newly proposed system.The functions of the Hardware/Software components that are used and the reason for choosing them will be discussed in this section.
THE MICROCONTROLLER
Everyone knows what is a computer and how does it look like.Simply, it is a screen, a keyboard, a mouse, a printer and the most important part, the central processing unit (CPU).But there are also computers calculating, running programs, without interacting with humans.Those devices are known as "microcontrollers".The word "Micro" is due to their small size, and the "controller" part is because they are used to control gadgets, machines and else.They are designed for controlling machine applications, rather than a human interaction [7].
Microcontrollers are divided into categories according to their architecture, memory, number of bits and the instruction sets used.Many microcontrollers have completely different features.
ARM Cortex-M7 microcontroller will be used in The proposed system which has many features as shown below: • A 6-stage pipeline that can execute up to two instructions per a clock cycle.
• Instruction cache from 4KB to 64KB The reason for choosing this type of the microprocessor because of its specifications and objectives mentioned before.Besides that, this item was available to be ordered online with a great price according to its features.The ARM Cortex-M3 was another competitive for having similar features but the ARM Cortex-M7 is chosen to be used in the proposed Smart Menu Model
THE LCD
As innovation included, a touch screen LCD will probably pull in more clients.The touch screens are classified into two types' single touch and multi touch.The single touch is not prone to be utilized these days for its absence of capacities while the multi touch screens have two types, capacitive touch screens and resistive touch screens [8] The capacitive touch panels are made of n insulator which consists of glass coated with a transparent conductor which is made of indium tin oxide (ITO) and since the body of a human acts a good conductor of electricity therefore when contact is occurred between the human body and the Capacitive Touch Panel that will cause a distortion to electrostatic field of the touch panel.The display will respond according to the reading of touch panel's distortion.
The resistive touch panel is made up of many different layers.Pressing down onto the touch panel with a finger or a stylus, the layer on the top flexes and then pushes the layer behind it.As a result, a complete circuit will be created and then it notifies the controller which part of the touch panel has been press on.
In the proposed system a TFT LCD will be used which is a resistive touch screen as seen in Fig 11 .The most common TFT design is the inverse staggered structure.This structure presents high electron mobility and many advantages of a simple fabrication process.The first step in the TFT array fabrication consists of gate and storage-capacitor electrodes construction with 2000-3000 A of a metal such as, chromium, aluminum, tungsten or tantalum layer deposition then a triple layer of amorphous silicon and silicon nitride.It is then deposited using plasma-enhanced chemical vapor deposition (PECVD).
SOFTWARE DESCRIPTION:
In the proposed system there are three different types of the used software, Proteus, Cube MX and Kiel Micro-Vision.Proteus is a software suite containing schematic, simulation as well as PCB designing.Cube MX is a graphical software configuration tool that allows generating C initialization code using graphical wizards.Kiel Micro-Vision is an integrated development environment which allows the program to be written either in assembly or C language and simulated on a computer before being loaded into the microcontroller.
OPERATION SEQUENCE
A simple flow chart that illustrates the operation sequence of the system is shown in fig 12.
Figure 12.The flow chart of the Smart Menu
TESTING METHOD
First, the schematic diagram is implemented using Proteus software for testing the design before purchasing the items.The driver's code is generated by the CubeMX software to run on the KeilMicrovision IDE software.For testing the hardware part, multi-meter and oscilloscope devices will be used.Finally, the ST LINK V2 debugger will be used to test the firmware using KeilMicrovision software.
SYSTEM DESIGN
This sections shows the system design and the implementation of the newly proposed system (Smart Menu)
CIRCUIT DIAGRAM
In the proposed design, a TFT-LCD provided with a touch screen is used that is derived by an ARM Cortex M7 microcontroller with high performance for displaying the items of the menu in a good quality.In addition there is a Wi-Fi module as a way of communication between the transmitter and the receiver.Also there is a SD Card for external storage and USB for connecting external devices.
POWER MANAGEMENT
For power management, a Low Drop Out (LDO) is used to have an output voltage of 3.3 V which is the input voltage for the microcontroller.As for the LCD, it needs an input voltage of 26V.So, aDC/DC booster will be used to change the 5V input to a 26V output.Figure 17 shows the block diagram power management The value of R2 should be 100.98kΩ, to get a value close to that it should be changed to 102 kΩ which will also supply about 26V.
SYSTEM COMMISSIONING AND TESTING
First, the schematic diagram is implemented using Proteus software for testing the design before purchasing the items.The driver's code is generated by the CubeMX software to run on the Keilmicrovision IDE software.For testing the hardware part, multi-meter and oscilloscope devices are used .Finally, the ST LINK V2 will be used to download the code written using Keil micro vision to the Micro controller .The Microcontroller will control all actions taken on the LCD.
CONCLUSION AND FUTURE WORK
In this paper, some previous systems were discussed but the contribution proposed in those systems are not enough for providing a good quality of performance to satisfy the customers.The newly proposed system (Smart Menu) is introduced in this paper.The design and the implementation of the smart menu have been discussed in details.The implementation of the system is based on the Microcontroller (ARM Cortex M7), TFT LCD, SD card and WiFi module.Having a Smart Menu in a restaurant will facilitate the process of ordering and charging food.As the Smart Menu has a lot of outstanding features, but still there are a lot of features that can be added too to the system.For example, adding a credit card payment to the system will make the process easier for many customers.Also, adding using the mobile phone will help the customers to reserve their table before arriving to the restaurant.
added too to the system.For example, adding a credit card payment to the system will make the the possibility of booking the table remotely help the customers to reserve their table before arriving to the
Figure 2 .
Figure 2. The WTROSTS block diagram for the table
Figure 9 .
Figure 9. Block diagram of the Smart Ordering System
Figure 10 .
Figure 10.The proposed system's block diagram
Figure 13 .
Figure 13.Microcontroller circuit diagram Figure 13 shows the following : • L4 (Ferrite bead coil): it's a filter for allowing DC only to pass and neglecting AC to avoid noise.• Crystal X1: to make the output 1 MHz • All the values of the capacitors based on the datasheet of the Microcontroller.
Figure 14 and
Figure 14 and Figure 15 show the LCD circuit diagram and WiFI, USB and SD diagram respectively
Figure 18 .
Figure 18.Snapshot for the smart menu with all options5.1.2CONNECT YOUR WI-FI TO THE ACCESS POINT OF THE WI-FI MODULE OF THE DEVICE
Figure 22 .
Figure 22.Snapshot for selecting the quantity5.1.6FOR SELECTING THE LAST ITEM SELECT ORDER TO FINALIZE THE ORDER THEN A NEW PAGE WILL APPEAR WITH THE FINAL PRICE AND FOR CONFIRMATION AND SENDING THE ORDER SELECT THE GREEN RIGHT SIGN.
Figure 23 .
Figure 23.Snapshot for calculating the price.
Figure 24 .
Figure 24.Snapshot for the final order.
Unique
Restaurant Interactive Ordering System", The way Restaurant Ordering System via e, Vol.3, No.7, pp.01-05, 2014.Restaurant using RTOS programming to Enhance the Quality of Service", International Journal of Inventions in Ordering System Based on ZigBee Technology.(Using ARM cortex microcontroller and color GLCD)" International Journal Venmathi.V, Eswari.M, Jasmine Jenita.R, Jayasri.S, Kavitha.R "Touch Screen Based Advanced Menu Display and Ordering System for Restaurants" International Journal of Engineering Science N. M. Z.Hashim, N. A. Ali, A.S. Jaafar, N.R.Mohamad, L.Salahuddin, N. A. Ishak, "Smart Ordering System via Bluetooth", International Journal of Computer Trends and Technology,pp.2253-M7Core: Providing Adaptability for the Internet of Tomorrow.Available at: http://www.nxp.com/assets/documents/data/en/whitequalitystandard and custom OLEDs, LCDs and VFDs.Available at: http://www.newhavendisplay.com/app_notes/TPcompare.pdf
Table 1 .
Performance comparison • Data cache from 4KB to 64KB • Optional ECC (Error Correction Code) support for the cache memories • A 64-bit AXI system bus interface • A 64-bit instruction tightly coupled memory (ITCM) • An optional dual 32-bit Data Tightly-Coupled Memory (TCM) with Error-Correcting Code(ECC) support for customer and implementing the TCM memory arrays.• Low latency Advanced High-Performance (AHB) peripheral bus interface that allows fast access and deterministic to peripherals in real-time applications.
Table 2
shows the cost analysis for the newly proposed smart menu prototype model having all used items
Table 2 .
Cost analysis of the proposed design | 2019-03-04T19:46:49.930Z | 2019-02-28T00:00:00.000 | {
"year": 2019,
"sha1": "fa95b7880803d4fd13ca582be96734875ef806db",
"oa_license": null,
"oa_url": "https://doi.org/10.5121/ijcsit.2019.11105",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3effe52cf4e5b998fdb6cb60fd30cd0605178745",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
212741390 | pes2o/s2orc | v3-fos-license | Cytosolic Ca2+ Modulates Golgi Structure Through PKCα-Mediated GRASP55 Phosphorylation
Summary It has been well documented that the ER responds to cellular stresses through the unfolded protein response (UPR), but it is unknown how the Golgi responds to similar stresses. In this study, we treated HeLa cells with ER stress inducers, thapsigargin (TG), tunicamycin (Tm), and dithiothreitol (DTT), and found that only TG treatment resulted in Golgi fragmentation. TG induced Golgi fragmentation at a low dose and short time when UPR was undetectable, indicating that Golgi fragmentation occurs independently of ER stress. Further experiments demonstrated that TG induces Golgi fragmentation through elevating intracellular Ca2+ and protein kinase Cα (PKCα) activity, which phosphorylates the Golgi stacking protein GRASP55. Significantly, activation of PKCα with other activating or inflammatory agents, including phorbol 12-myristate 13-acetate and histamine, modulates Golgi structure in a similar fashion. Hence, our study revealed a novel mechanism through which increased cytosolic Ca2+ modulates Golgi structure and function.
HIGHLIGHTS Thapsigargin (TG) treatment leads to Golgi fragmentation independent of ER stress TG induces Golgi fragmentation through elevated cytosolic Ca 2+ TG-induced cytosolic Ca 2+ spikes activate PKCa that phosphorylates GRASP55 Histamine modulates the Golgi structure and function by a similar mechanism
INTRODUCTION
In mammalian cells, the Golgi apparatus is characterized by a multilayer stacked structure of $5-7 flattened cisternal membranes, and stacks are often laterally linked to form a ribbon located in the perinuclear region of the cell (Tang and Wang, 2013;Wang and Seemann, 2011). The exact mechanism of Golgi stack formation is not fully understood, but it has been shown that the Golgi re-assembly stacking protein of 55 kDa (GRASP55, also called GORASP2) and its homolog GRASP65 (GORASP1) play essential roles in Golgi stacking (Wang et al., 2003;Zhang and Wang, 2015). Both GRASPs are peripheral membrane proteins that share similar domain structures and overlapping functions (Wang and Seemann, 2011). GRASP65 is predominantly concentrated in the cis Golgi, whereas GRASP55 is localized on medial-trans cisternae. Both GRASPs form trans-oligomers through their N-terminal GRASP domains that ''glue'' adjacent Golgi cisternae together into stacks (Wang et al., 2003; and ribbons (Feinstein and Linstedt, 2008;Puthenveedu et al., 2006). GRASP oligomerization is regulated by phosphorylation; mitotic phosphorylation of GRASP55 and GRASP65 at the C-terminal serine/proline-rich (SPR) domain inhibits oligomerization and results in Golgi cisternal unstacking and disassembly Wang et al., 2005;.
The Golgi exhibits different morphology in different cell types and tissues as well as under different conditions. For example, in many secretory cells such as Brunner's gland of platypus, the Golgi forms large, well-formed stacks (Krause, 2000), whereas electron micrographs show reorganization of Golgi membranes in prolactin cells of female rats upon cessation of a sucking stimulus (Rambourg et al., 1993). In neurons, increased neuronal activity causes dispersal of the Golgi at the resolution of light microscopy (Thayer et al., 2013). In Alzheimer disease, the Golgi membranes are dispersed and fragmented in neurons from human brain and mouse models (Joshi et al., 2015). Golgi fragmentation is also observed in other neurodegenerative diseases, including Parkinson (Mizuno et al., 2001) and Huntington (Hilditch-Maguire et al., 2000) diseases and amyotrophic lateral sclerosis (ALS) (Fujita and Okamoto, 2005;Gonatas et al., 1998;Mourelatos et al., 1996). In addition, the Golgi has also been shown to be fragmented in lung, prostate, and breast cancers (Petrosyan et al., 2014;Sewell et al., 2006;Tan et al., 2016). A plausible hypothesis is that the Golgi adjusts its structure and function in response to different physiological and pathological conditions; however, the molecular mechanisms that control Golgi structure and function under disease conditions are so far not well understood.
The Golgi structure can be modulated experimentally such as by molecular manipulations of GRASP55 and GRASP65. Microinjection of antibodies against GRASP55 or GRASP65 into cells inhibits post-mitotic stacking of newly formed Golgi cisternae (Wang et al., 2003(Wang et al., , 2008. Knockdown (KD, by siRNA) or knockout (KO, by CRISPR/Cas9) of either GRASP reduces the number of cisternae per stack (Sutterlin et al., 2005;, whereas simultaneous depletion of both GRASPs causes fragmentation of the entire Golgi stack . Expression of non-phosphorylatable GRASP65 mutants enhances Golgi stacking in interphase and inhibits Golgi disassembly in mitosis . Because GRASPs play critical roles in Golgi structure formation, it is reasonable to speculate that physiological and pathological cues may trigger Golgi fragmentation through GRASP55/65 modification, such as phosphorylation Li et al., 2019a). Using GRASPs as tools to manipulate Golgi stack formation, it has been demonstrated that Golgi cisternal unstacking accelerates protein trafficking but impairs accurate glycosylation and sorting Xiang et al., 2013). In addition, GRASP depletion also affects other cellular activities such as cell attachment, migration, growth, and autophagy (Ahat et al., 2019b;Zhang et al., , 2019. Protein kinase C (PKC) is a large family of multifunctional serine/threonine kinases that are activated by signals such as increases in the concentration of diacylglycerol (DAG) and/or intracellular calcium ions (Ca 2+ ). In cells, PKCs are mainly cytosolic, but transiently localize to membranes such as endosomes and Golgi upon activation (Chen et al., 2004;El Homasany et al., 2005). Membrane association of PKC is via a C1 domain that interacts with DAG in the membrane. Conventional PKCs (cPKCs) also contain a C2 domain that binds Ca 2+ ions, which further enhances their membrane association and activity (Nishizuka, 1995). Knockdown of atypical PKCs (aPKCs) using siRNA causes a reduction in peripheral ERGIC-53 clusters without affecting the Golgi morphology (Farhan et al., 2010). In addition, increased PKC activity has been implicated in cancer (Cooke et al., 2017;Kim et al., 2013), but the mechanism by which PKC may contribute to invasion, inflammation, tumorigenesis, and metastasis is not fully understood (Griner and Kazanietz, 2007).
In this study, we performed high-resolution microscopy and biochemistry experiments to determine how the Golgi responds to cellular stresses such as ER stress. Although not all ER stress inducers caused Golgi fragmentation, treatment of cells with the Ca 2+ -ATPase inhibitor thapsigargin (TG) resulted in Golgi fragmentation with a low dose and short time in which ER stress was undetectable, indicating that Golgi fragmentation occurs independently of ER stress. Further experiments demonstrated that TGinduced cytosolic Ca 2+ spikes activate PKC that phosphorylates GRASP55. Interestingly, inflammatory factors such as histamine modulate the Golgi structure through a similar mechanism. Thus, we have uncovered a novel pathway through which cytosolic Ca 2+ modulates the Golgi structure and function.
TG Induces Golgi Fragmentation and UPR
It has been hypothesized that ER stress and the unfolded protein response (UPR) cause Golgi fragmentation and dysfunction through overloading misfolded proteins into the Golgi (Oku et al., 2011). To test this hypothesis, we performed a time course treatment of HeLa cells with a well-known UPR inducer, TG, which specifically blocks the sarcoendoplasmic reticulum Ca 2+ transport ATPase (SERCA) (Xu et al., 2004) and causes Ca 2+ dysregulation (Ito et al., 2015). We assessed the Golgi morphology by co-staining the cells for GM130, a cis-Golgi marker, and TGN46, a protein in the trans-Golgi network. As shown in Figures 1A and 1B, the Golgi became fragmented after TG treatment, and the response was linear over time ( Figures 1A, 1B, and S1A). Although Golgi fragmentation was more obvious after a longer treatment, it became detectable in shorter treatments such as 10 min. More careful examination of the Golgi morphology by super-resolution fluorescence microscopy demonstrated that the Golgi ribbon was broken down, as the Golgi appeared as disconnected puncta. The stacks were also defective, as indicated by the separation of GM130 and TGN46 signals ( Figures 1C and 1D).
To correlate Golgi fragmentation with UPR, we performed Western blot of TG-treated cells to assess the levels of several UPR markers, including phosphorylated eukaryotic translation initiation factor 2A eIF2a (p-eIF2a), the ER chaperone binding of immunoglobulin protein (Bip), and the CCAAT-enhancer-binding protein homologous protein (CHOP). As shown in Figures 1E-1G, longer term TG treatment, for example, 2 h or longer, caused UPR, as indicated by the increase of all three markers. When the treatment was reduced to 30 min, only the p-eIF2a level increased, whereas Bip and CHOP did not change. This indicates that the minimal time for UPR to occur is $30 min under our experimental conditions. Consistently, no significant increase in the level of any of these UPR markers was detected when the treatment was reduced to below 30 min. Interestingly, the Golgi in a significant proportion of cells was fragmented at this time. Golgi fragmentation was obvious with 10-min TG treatment when UPR was undetectable and became more Figure 1. TG Induces Golgi Fragmentation and UPR (A) Short-term TG treatment causes Golgi fragmentation. HeLa cells were treated with 250 nM TG, fixed at the indicated time points, and stained for GM130 (cis-Golgi) and TGN46 (trans-Golgi). Scale bar, 20 mm. (B) Quantitation of (A) for cells with fragmented Golgi using GM130 as the Golgi marker. (C and D) Super-resolution images of DMSO-(C) and TG-treated (D) HeLa cells. Cells were treated with 2 mM TG for 1 h, stained as in (A), and imaged with a Leica SP8 STED microscope. Indicated areas are enlarged and shown on the right as merged GM130 (green) and TGN46 (red). To quantify Golgi unstacking in these images, relative fluorescence intensity was plotted along a random line through the Golgi region. Note the agreement in peaks in control (C) and relative disagreement in peaks in the TG-treated cell (D). Scale bar in main images, 5 mm; in inserts, 1 mm. (E) Longer term TG treatment results in ER stress. Cells treated as in (A) were analyzed by Western blot of indicated proteins. Note that TG treatment increases the levels of p-eIF2a, Bip, and CHOP. (F-G) Quantitation of the ratio of p-eIF2a/eIF2a and the Bip levels from (E), with the no-treatment control normalized to 1. All quantitation results are shown as mean G SEM from at least three independent experiments; statistical analyses were performed using two-tailed Student's t-tests (*p % 0.05; **p % 0.01; ***p % 0.001). prevalent at 30-min treatment ( Figures 1A and 1B). The fact that Golgi fragmentation occurs earlier than UPR indicates that Golgi fragmentation is unlikely a downstream effect of ER stress, but rather occurs independently of UPR.
It is worth mentioning that TG treatment did not affect the level of key Golgi structural proteins, including the Golgi stacking proteins GRASP55 and GRASP65, the Golgi tethering protein GM130, and the Golgi SNARE Gos28 ( Figure 1E), indicating that TG induces Golgi fragmentation likely through modification rather than degradation of Golgi structural proteins. In addition, TG-induced Golgi fragmentation is reversible; when TG was washed out, the Golgi structure gradually returned to its normal shape (Figures S1B-S1D). Consistently, TG-treatment did not induce apoptosis as shown by Annexin V staining. In contrast, staurosporine treatment, which is known to induce apoptosis, increased Annexin V cell surface staining ( Figures S1D and S1E). In addition, TG treatment did not seem to affect the organization of the actin and microtubule cytoskeleton (Figures S1F and S1G).
Tunicamycin or Dithiothreitol Treatment Induces UPR but Not Golgi Fragmentation
To test whether the hypothesis that Golgi fragmentation occurs independently of UPR applies only to TG treatment or also to other ER stress inducers, we repeated the same set of experiments by treating cells with tunicamycin (Tm), an antibiotic that induces ER stress by inhibiting N-glycosylation and the accumulation of misfolded proteins in the ER lumen. Tm treatment did not affect the Golgi morphology after 360 min, as indicated by the GM130 and TGN46 signals ( Figures S2A-S2C). Further analysis of Tm-treated cells by electron microscopy (EM) also did not reveal any significant changes in the Golgi structure (Figure S2D). The treatment indeed induced UPR, as indicated by the robust increase in the p-eIF2a, Bip, and CHOP levels, in particular after 120 min ( Figures S2E-S2G). Six-hour Tm treatment increased the width of the ER cisternae and caused ER fragmentation ( Figure S2H). As Tm, dithiothreitol (DTT) treatment also did not cause Golgi fragmentation, although Bip and CHOP levels increased significantly after 120 min of treatment ( Figures S3A-S3D). We also performed super-resolution microscopy to examine the Golgi structure in parallel after Tm, DTT, or TG treatment. Similar to that observed in control cells, the Golgi structure is intact in Tm-and DTT-treated cells, with extensive overlap between cisand trans-Golgi markers, whereas TG treatment caused not only fragmentation of the Golgi structure but also separation of cisand trans-Golgi markers ( Figures S3E and S3F). Taken together, these results indicate that ER stress is unlikely a direct cause of Golgi fragmentation.
TG Induces Golgi Fragmentation Prior to UPR Through Elevated Cytosolic Ca 2+
We next sought to decouple the Golgi stress response from UPR after TG treatment. As a complementary approach to the timecourse experiment shown in Figure 1, we titrated TG (1-250 nM) in the treatment. Here we treated cells for 20 min, a time point prior to UPR becoming detectable when cells were treated with 250 nM TG ( Figure 1). The results showed that Golgi fragmentation increased linearly in response to the increasing TG concentration, and importantly, TG at low doses (1-250 nM) effectively caused Golgi fragmentation (Figures 2A and 2B). For comparison, we also assessed UPR in the same cells. As shown in Figures 2C-2E, treatment of cells with up to 250 nM TG for 20 min did not cause UPR as indicated by the p-eIF2a and Bip levels. These results indicate that TG triggers Golgi fragmentation independent of ER stress. Furthermore, we carried out similar experiments in normal rat kidney (NRK) cells and RAW 264.7 murine macrophages and obtained similar results ( Figure S4), indicating that the effect of TG treatment on the Golgi structure is not cell-type specific.
We next asked how TG treatment induces Golgi fragmentation. Knowing that TG increases cytosolic Ca 2+ (Jones and Sharpe, 1994), we employed the membrane permeable Ca 2+ chelator BAPTA-AM to test whether TG induces Golgi fragmentation through cytosolic Ca 2+ . We pre-treated cells with BAPTA-AM alone (60 mM) for 30 min and then with or without TG (100 nM) for 0, 15, 30, and 60 min (Figures 2F and 2G). The result showed that BAPTA-AM significantly prevented TG-induced Golgi fragmentation, whereas BAPTA-AM alone did not affect the Golgi morphology. Subsequent EM analysis confirmed TG-induced Golgi fragmentation and its rescue by BAPTA-AM ( Figures 2H and 2I). TG treatment reduced the number of cisternae per stack and the length of cisternae but increased the number of vesicles surrounding each stack. These effects were largely abolished by the addition of BAPTA-AM. These results demonstrated that cytosolic Ca 2+ is required for TG-induced Golgi fragmentation. Consistent with this notion, treatment of cells with a Ca 2+ ionophore, ionomycin (Io), also caused Golgi fragmentation ( Figures S5A and S5B). Therefore, the driving force behind TG-induced Golgi fragmentation is the elevated cytosolic Ca 2+ .
TG-Induced Golgi Fragmentation Increases Protein Trafficking in the Golgi
As GRASP-depletion-mediated Golgi destruction affects Golgi functions such as protein trafficking (Ahat et al., 2019b;Xiang et al., 2013), we examined the effect of TG treatment on the trafficking of the vesicular stomatitis virus glycoprotein (VSV-G) using the well-established RUSH system . Cells were transfected with a plasmid that encodes both the invariant chain of the major histocompatibility complex (Ii, an ER protein) fused to core streptavidin and VSV-G fused to streptavidin-binding peptide (SBP). Under growth conditions without biotin, the interaction between streptavidin and SBP retains VSV-G in the ER. Upon the addition of biotin, this interaction is disrupted, resulting in synchronous release of the VSV-G reporter from the ER to the Golgi. Because VSV-G is a glycoprotein, we used endoglycosidase H (EndoH) to distinguish its core (ER and cis Golgi) and complex (trans Golgi and post-Golgi) glycosylation forms as an indicator of trafficking. As shown in Figure 3A, TG treatment first slightly decreased VSV-G trafficking at 15-min release but then increased VSV-G trafficking at 60 and 90 min compared with DMSO control. Our previous studies showed that VSV-G reaches the cis Golgi at 15-20 min and trans Golgi at $90 min Li et al., 2019b). These results suggest that TG treatment may delay VSV-G release possibly by slowing down its folding; but once it reaches the cis Golgi, VSV-G trafficking across the Golgi stack is significantly accelerated. Monensin (Mo) is known to disrupt the Golgi structure and blocks TGN exit (Fliesler and Basinger, 1987) and thus was used as a control. As expected, monensin treatment resulted in VSV-G accumulation in the Golgi ( Figures 3A-3C).
To confirm these results using an alternative approach, we treated cells with Brefeldin A (BFA) to accumulate ManII-GFP in the ER. We then washed out BFA and analyzed ManII-GFP in ER-to-Golgi trafficking. The results showed that ManII-GFP started to accumulate in the Golgi at 60 min of BFA washout in the presence of TG, whereas the same observation occurred at 90 min in the control ( Figure 3D).
TG Induces Golgi Fragmentation Through PKCa Activation
Given that phosphorylation of Golgi structural proteins has been shown to cause Golgi fragmentation in physiological conditions such as in mitosis Wang et al., 2003;, as well as in pathological conditions such as in Alzheimer disease (Joshi et al., 2014), we explored the possibility that phosphorylation of Golgi structural proteins may play a role in TG-induced Golgi fragmentation. We treated cells with staurosporine, a non-selective kinase inhibitor, and a number of specific inhibitors of calcium-related kinases such as protein kinase Cs (PKCs) and Ca 2+ /calmodulin-dependent protein kinases (CAMKs). As shown in Figures S5C and S5D, staurosporine significantly reduced Golgi fragmentation in TG-treated cells. In addition, Bisindolylmaleimide I (BIM1), a selective PKC inhibitor, and KN-93, an inhibitor of CAMKII, also partially reduced Golgi fragmentation in TG-treated cells, whereas the myosin light-chain kinase inhibitor ML-7 and the protein kinase A (PKA) inhibitors H-89 and PKI had no such effects ( Figures 4A, 4B, S5C, and S5D). These results suggest that either PKC and/or CAMKII is involved in TGinduced Golgi fragmentation. Because both BIM1 and KN-93 inhibitors have pleiotropic effects, we selected two alternative drugs, Gö 6976 and KN-62, to inhibit PKC and CAMKII, respectively. Although Gö 6976 inhibited TG-induced Golgi fragmentation effectively, KN-62 had no effect ( Figures 4A and 4B), suggesting a major role of PKC in TG-induced Golgi fragmentation.
To further confirm that PKC activation causes Golgi fragmentation, we treated cells with phorbol 12-myristate 13-acetate (PMA), a widely used PKC activator, and its inactive enantiomer, 4-alpha-phorbol myristate acetate (4-alpha). The results showed that 4-alpha had no effect on the Golgi structure, whereas PMA treatment caused Golgi fragmentation ( Figures 4C and 4D), although 4-alpha and PMA had no effect on the level of PKC expression ( Figure 4E). In addition, expression of CAMKIIb had no effect on the Golgi morphology ( Figures S5E and S5F). There have been reports that activation of MAPK/ERK or PKD signaling causes Golgi fragmentation (Jamora et al., 1999;Jesch et al., 2001), we therefore inhibited these two kinases with U0126 or H-89, respectively. Pre-treatment of cells with these kinase inhibitors did not prevent Golgi fragmentation upon the addition of TG (Figures S5G and S5H). Taken together, these results indicate that TG induces Golgi fragmentation through PKC activation.
PKC has multiple isoforms including a, bI, bII, g, d, e, h, z, and i (Kajimoto et al., 2001). To identify the PKC isoform responsible for TG-induced Golgi fragmentation, we expressed GFP-tagged PKC isoforms, including all four known classical PKC (cPKC) isoforms (a, bI, bII, g) that respond to Ca 2+ stimuli, one from the non-calcium responsive novel PKC (nPKC, d) and one from the atypical PKC (nPKC, z) subfamily ( Figures S6A and S6B). To enhance the activity of expressed PKC, we also treated cells with PMA, using 4-alpha as a control. The results showed that expression of PKCa and treatment of cells with PMA increased Golgi fragmentation (Figures 4F-4H, S6A, and S6B). Interestingly, in addition to the localization to the (K) Cells in (I) were blotted for endogenous PKCa to evaluate the siRNA knockdown efficiency. All quantitation results are shown as Mean G SEM from three independent experiments. Statistical analyses were performed using two-tailed Student's t-tests (*p % 0.05; **p % 0.01; ***p % 0.001; NS, non-significant). plasma membrane as previously reported (Becker and Hannun, 2003), wild-type PKCa-GFP was also concentrated on the Golgi upon PMA treatment, as indicated by the colocalization with GM130 ( Figures 4F and S6B), whereas other PKC isoforms, or the inactive PKCa K368R mutant (Baier-Bitterlich et al., 1996), did not show the same phenotype ( Figures 4F and S6B-S6D). To further specify that PKCa mediates TG-induced Golgi fragmentation, we knocked down PKCa in cells with siRNA. The results showed that PKCa depletion reduces Golgi fragmentation after TG treatment (Figures 4I-4K). Taken together, these results demonstrate that PKCa activation causes Golgi fragmentation.
PKCa Induces Golgi Fragmentation Through GRASP55 Phosphorylation
Because activated PKCa localizes to the Golgi, we thought it might phosphorylate Golgi structural proteins. To identify potential PKCa targets on the Golgi, we performed gel mobility shift assays on a number of Golgi structural proteins, tethering factors, and SNARE proteins after TG treatment ( Figure S6E). To ensure that the band shift was caused by phosphorylation, we also applied staurosporine (2 mM for 10 min prior to TG treatment) to TG-treated cells to broadly inhibit phosphorylation. Among the proteins tested, GRASP55 and GRASP65 showed a smear above the main bands ( Figure S6E), indicating a partial phosphorylation of the proteins. To increase the resolution of phosphorylated proteins we utilized phos-tag gels, which showed GRASP55, but not GRASP65, to be significantly shifted up after TG treatment (250 nM, 1 h) ( Figure S6F). TGinduced mobility shift of GRASP55 was not seen upon Tm treatment ( Figure 5A, lanes 2 vs. 3) and was less dramatic than that by nocodazole (Noc) treatment that blocks cells in mitosis when GRASP55 is fully phosphorylated ( Figure 5A, lanes 3 vs. 4) , indicating that TG induced partial phosphorylation of GRASP55. The mobility shift of GRASP55 triggered by TG treatment was abolished by the addition of staurosporine ( Figure 5C, lanes 4 vs. 3; Figure S6F, lanes 3 vs. 2), validating the mobility shift by phosphorylation. In addition, incubation of purified recombinant GRASP55 with purified PKCa caused GRASP55 phosphorylation, confirming that PKCa can directly phosphorylate GRASP55 ( Figure 5D). In this experiment, PKCa was also autophosphorylated ( Figure 5D). Taken together, these results demonstrate that TG treatment activates PKCa, which subsequently phosphorylates GRASP55.
GRASP55 contains an N-terminal GRASP domain that forms dimers and oligomers and a C-terminal SPR domain with multiple phosphorylation sites Zhang and Wang, 2015). To map the PKCa phosphorylation site on GRASP55, we expressed GFP-tagged GRASP55 truncation mutants , treated the cells with TG, and determined their phosphorylation. A visible mobility shift of the GRASP55 variants was observed on the mutants possessing amino acids (aa)251-300 but not the truncated forms shorter than aa250 (Figures 5E and 5F). To further determine the functional consequence of GRASP55 phosphorylation, we expressed these constructs and treated cells with or without TG. The exogenously expressed GRASP55 truncation mutants were targeted to the Golgi as indicated by giantin as a Golgi marker but had no impact on the Golgi structure ( Figure S6G). However, when cells were treated with TG, expression of the N-terminal aa250 or shorter reduced TG-induced Golgi fragmentation, whereas expression of N-terminal aa300 or longer had no significant effect (Figures 5G and 5H). These results demonstrated that phosphorylation of GRASP55 within aa251-300 is important for TG-induced Golgi fragmentation.
Histamine Modulates the Golgi Structure Through the Same Pathway as TG Treatment
It is known that histamine activates Ca 2+ -dependent PKC isoforms and upregulates cytokine secretion via the release of calcium from the ER into the cytosol (Matsubara et al., 2005). It has also been shown that histamine triggers protein secretion and Golgi fragmentation (Saini et al., 2010), but the underlying mechanism has not been revealed. Therefore, we treated HeLa cells with histamine and determined the effect on Golgi morphology. As shown in Figures 6A and 6B, histamine treatment induced Golgi fragmentation in a dose-and time-dependent manner. More than 40% of cells possessed fragmented Golgi after 100 mM histamine treatment for 1 h, a concentration and time often used in previous studies (Sahoo et al., 2017;Xie et al., 2018). Subsequent EM analysis confirmed that histamine treatment induced alterations in the Golgi structure, including fewer cisternae per stack, shorter cisternae, and an increased number of Golgi-associated vesicles (Figures 6C, 6D, and S7A).
It has been previously shown that histamine activates Gbg, which causes TGN fragmentation (Saini et al., 2010). Because HeLa cells do not express Gbg, and histamine treatment triggers fragmentation of the entire Golgi stack ( Figures 6C and 6D), the Golgi fragmentation observed in our study may occur through a different mechanism. Indeed, as TG, histamine-induced Golgi fragmentation also depended on PKC, as the addition of the PKC inhibitor Gö 6976 reduced histamine-induced Golgi fragmentation (Figures 6E and 6F). To investigate the functional consequence of histamine treatment on Golgi function, VSV-G trafficking experiments were performed as above. As shown in Figures S7B and S7C, histamine treatment slightly decreased VSV-G trafficking within 45-min release but then began to increase the trafficking speed in the Golgi at later time points. These results demonstrate that histamine treatment affects Golgi structure and function through a similar mechanism as TG. Although it has been well documented that TG or histamine treatment elevates Ca 2+ level in the cytosol, whether this is also true for the Ca 2+ level in the Golgi region has not been reported. To test the effect of TG or histamine treatment on the Ca 2+ level in the Golgi region in real time, we fused the Ca 2+ probe GCaMP (Muto et al., 2013) to GRASP55, expressed the GRASP55-GCaMP construct in cells ( Figure 6G), and performed live cell imaging. (H) Histamine treatment increases the GRASP55-GCaMP7 signal. HeLa cells were co-transfected with mCherry-GM130 (red) and GRASP55-GCAMP7 (green). Shown are still frames before (left panel) or after (right) histamine was added. Scale bar, 5 mm. (I) TG and ionomycin treatments increase the GRASP55-GCaMP7 signal. HeLa cells expressing GRASP55-GCaMP7 were treated with 100 mM histamine (Hist), 250 nM TG, or 1 mM ionomycin (Io) for 1 h. Shown are the quantitation of the fluorescence intensity before and after the drug was added. All quantitation results are shown as Mean G SEM. Statistical analyses were performed using two-tailed Student's t-tests (*p % 0.05; **p % 0.01; ***p % 0.001).
iScience 23, 100952, March 27, 2020 Treatment of cells with 100 mM histamine caused a robust calcium spike ( Figure 6H; Video S1). Subsequent experiments using this novel Golgi-localized Ca 2+ probe demonstrated that treatment of cells with TG and Io also significantly elevated the Ca 2+ level in the Golgi ( Figure 6I). Taken together, our results demonstrated that histamine or TG treatment elevates the Ca 2+ level in the Golgi region, which subsequently activates PKCa, leading to GRASP55 phosphorylation and Golgi fragmentation. Thus, this study revealed a novel mechanism of how histamine, and perhaps other drugs, modulates Golgi structure and function.
DISCUSSION
In this study, by comparing Golgi fragmentation with ER stress in response to TG, Tm, and DTT treatments, we uncovered a novel signaling pathway through which increased cytosolic Ca 2+ triggers Golgi fragmentation through PKCa activation and GRASP55 phosphorylation. Significantly, we also demonstrated that histamine modulates the Golgi structure and function via a similar mechanism, which opens a new window through which we can better understand the effect of histamine on cell physiology.
One possible model of Golgi stress is that the expanding capacity of the ER during cellular stress leads to the failure of the Golgi as it is over-burdened with misfolded or improperly folded proteins, affecting its functions such as glycosylation (Oku et al., 2011). However, our results do not support this hypothesis for two reasons. First, although three ER stress inducers, TG, Tm, and DTT, all induce ER stress, only TG treatment causes Golgi fragmentation. Second, TG induces Golgi fragmentation at a low dose and time when UPR is undetectable. These results demonstrate that Golgi fragmentation occurs independently of ER stress, perhaps via the modification of pre-existing cellular materials. Therefore, the Golgi may possess its own mechanism to sense and respond to stress alongside or completely separate from the ER. Furthermore, our study revealed a novel mechanism that coordinates Golgi structure and perhaps function: TG treatment increases cytoplasmic Ca 2+ , which activates PKCa, which subsequently phosphorylates GRASP55, impairing its function in Golgi structure formation. GRASP55 therefore provides the conceptual link between an extracellular cue and Golgi morphological change during stress.
GRASP55 is comprised of an N-terminal GRASP domain (aa1-212) that forms dimers and oligomers and functions as a membrane tether to maintain an intact Golgi structure and an SPR domain (aa212-454) that undergoes post-translational modifications and functions as the regulatory domain of the protein Zhang and Wang, 2015). Originally, GRASP55 was found to be phosphorylated by ERK2 at T225 and T222 (Jesch et al., 2001). Subsequently, additional sites, such as S245 and T249, were identified to be phosphorylated in mitosis, which is required for mitotic Golgi disassembly . Recently, GRASP55 was discovered to be de-O-GlcNAcylated upon energy or nutrient deprivation and regulates autophagosome maturation (Zhang et al., , 2019. These results indicate that GRASP55 is an excellent candidate to function as both a sensor and effector of cellular stresses. Thus, GRASP55 is likely a master regulator of Golgi structure formation, function, and stress responses. Our in vitro kinase assay demonstrated that PKCa can directly phosphorylate GRASP55 likely on more than one site. The mobility shift of GRASP55 observed in cells consisted of only one obvious band, whereas the in vitro experiment showed two clear bands, suggesting that phosphorylation of GRASP55 in cells might occur less frequently. Using GRASP55 truncation mutants we mapped the site(s) of PKC-mediated phosphorylation to the aa251-300 region. Expression of truncation mutants of GRASP55 that lack this region significantly reduced TG-induced Golgi fragmentation. Previously, it has been shown by mass spectrometry that GRASP55 is phosphorylated on S441 after TG treatment, but the kinase mediating this phosphorylation is unknown (Gee et al., 2011). Although our results are consistent with this previous study, the exact phosphorylation site(s) need further investigation.
Histamine is a neuroendocrine hormone involved in the regulation of stomach acid secretion, brain function, and immune response; many of these functions involve secretion (Karpati et al., 2018;Sahoo et al., 2017;Xie et al., 2018). The role of histamine in immune response is often through the activation of the downstream kinase PKCa. For example, histamine enhances the secretion of granulocyte-macrophage colony stimulation factor (GM-CSF) and nerve growth factor (NGF) in different cell types, both through a PKCadependent mechanism (Sohen et al., 2001). Interestingly, histamine promotes HeLa cell proliferation and growth and has been shown to be elevated in cancers where Golgi is fragmented and secretion is enhanced. In our experiments, histamine induced a clear Golgi fragmentation phenotype, confirming a link between histamine and Golgi fragmentation. Additionally, expression of PKCa, but not other PKC isoforms, along with a stimulation with PMA, exhibited an additive Golgi fragmentation effect. Consistent with prior work showing that disassembly of Golgi stacks accelerates protein trafficking (Xiang et al., 2013), our findings therefore offer a mechanism for how histamine increases secretion of inflammatory factors.
How Ca 2+ controls membrane trafficking at the plasma membrane has been well documented in regulated secretion in specific cell types such as neurons, neuroendocrine cells, and mast cells, whereas its role in other cell types is less well known. Ca 2+ dynamic at the Golgi as well as its role in membrane trafficking at the Golgi is still an understudied area. There are EF-hand Ca 2+ binding proteins associated in the Golgi. For example, Cab45 is located in the Golgi lumen, whereas Calnuc is found in both cell cytosol and membrane fractions (Lin et al., 1998). At the cis-Golgi, Calnuc binds Gai and Gas, which is thought to be important for vesicular trafficking (Lin et al., 2000). There are also P-Type ATPases (SPCAs) such as SPCA1 located in the Golgi that regulate Ca 2+ homeostasis in the Golgi and control neural polarity (Sepulveda et al., 2009;Vanoevelen et al., 2007). Our study provided a novel link between thapsigargin and histamine treatment, elevation of Ca 2+ concentration in the Golgi region, activation of PKC and phosphorylation of Golgi GRASP55, and modification of Golgi structure and function.
Our study revealed that TG induces Golgi fragmentation through increasing cytosolic Ca 2+ and GRASP55 phosphorylation. A similar case has been described previously in Alzheimer disease, where cytosolic calcium increases by Ab treatment triggers activation of a cytosolic protease, calpain, which cleaves p35 to generate p25 and activate Cdk5, a cytoplasmic kinase that is highly expressed in neurons (Lew et al., 1994). Subsequently, activated Cdk5 phosphorylates GRASP65 and perhaps other Golgi structural proteins, leading to Golgi fragmentation (Joshi et al., 2014(Joshi et al., , 2015. Although PKC and GRASP55 were not the focus in this study, expression of a phosphorylation-deficient mutant of GRASP55 significantly reduced Golgi fragmentation as well as Ab production. Taken together, our studies indicate that the Golgi is sensitive to cellular stimuli and stresses as in disease conditions,and responds to signaling cues to adjust its structure and function through increasing cytosolic Ca 2+ and GRASP55 modification. Future studies defining the detailed mechanisms may help understand disease pathologies with Golgi and trafficking defects.
Limitations of the Study
Phosphorylation occurs more frequently in vitro than in cells. The exact phosphorylation site(s) on GRASP55 needs further investigation.
METHODS
All methods can be found in the accompanying Transparent Methods supplemental file.
DATA AND CODEAVAILABILITY
All data from this study is available upon request. For super-resolution microscopy, Alexa Fluor 647, and Alexa Fluor 488-labeled secondary antibodies (ThermoFisher) were used. After washing, coverslips were mounted to slides using ProLong Diamond antifade super-resolution imaging mountant (ThermoFisher). Super-resolution images were imaged using Leica (Wetzlar, Germany) TCS SP8 STED super-resolution microscope. Images were quantified using the NIH ImageJ software and assembled into figures with Photoshop Elements (Adobe, San Jose, CA). To clearly show the Golgi structure, brightness or contrast was adjusted linearly across all samples within each experiment.
SUPPLEMENTAL INFORMATION
For calcium imaging, GRASP55-GCaMP7 transfected cells were plated onto glass bottomed dishes and imaged by a Nikon C2-plus Laser Scanning Confocal Microscope System configured with a Ti2-E inverted microscope. Images were captured at 488 nm and 561 nm in sequential scanning mode. Zstacks of 5 slices at 1 µm interval were acquired every 30 seconds for a total period of 10 min. The NIS-Elements C software was used for acquisition, analysis and visualization. The "+Histamine" symbol in Movie S1 was added in Adobe Premiere Pro 2020. For quantification, fluorescence intensity was measured every minute for 60 min. 20 cells were measured for each drug treatment.
To quantify Golgi fragmentation, cells were evaluated by eye under a microscope according to predefined fragmentation criteria, at least 300 cells were counted in each reaction. The following criteria are used to define whether a Golgi is intact or fragmented: 1) If the Golgi exists as a single piece of connected membrane, it is intact. 2) If a Golgi exhibits several items that are connected by visible membrane bridges, even though these bridges might be faint, the Golgi is considered intact. 3) If a Golgi exhibits ≥ 3 disconnected pieces (no visible bridges connecting them), then the Golgi is fragmented. 4) Mitotic cells, defined by the DNA pattern, and overlapping cells in which the Golgi pattern is difficult to define, are not counted. Hoechst was used to identify individual, mitotic and overlapping cells. In experiments where transfected proteins were employed, only transfected cells were counted, and 100 cells were counted per replicate. In experiments where an inhibitor screen was performed, an unbiased image thresholding method was used to extract fragmentation data from ≥40 cells per replicate.
VSV-G Trafficking using RUSH system
VSV-G trafficking was performed as previously described . Briefly, HeLa cells were transfected with the Str-li_VSVG wt-SBP-EGFP plasmid and cultured at 37°C for 16 h. Cells were then incubated with 250 nM TG or 10 µM monensin in fresh medium for 0.5 h at 37°C before 40 µM D-biotin (VWR Life Science, Radnor, PA) was added. Cells were then lysed at indicated time points (chase), treated with or without EndoH, and analyzed by Western blotting for VSV-G-GFP using a GFP antibody. The percentage of EndoH resistant VSV-G was quantified using the ImageJ software.
In vitro Kinase Assay
Twenty μg/ml recombinant GRASP55 protein was incubated with 10 μg/ml recombinant PKCα (SignalChem, British Columbia, Canada) in the presence or absence of 2 mM ATP.
Reactions were performed in kinase buffer (20 mM HEPES-NaOH, pH 7.4, 1 mM CaCl2, 1 mM DTT, 10 mM MgCl2, 200 μg/ml phosphatidylserine, 20 μg/ml diacylglycerol) at 30°C for 3 h. Reactions were terminated by adding SDS sample buffer and boiling. GRASP55 proteins were separated by Phos-tag SDS-PAGE and visualized by immunoblotting. In brief, 50 μM Phos-tag acrylamide and 100 μM MnCl2 were included in the gel recipe according to the manufacturer's instructions. Phos-tag gels were washed three times in transfer buffer supplemented with 10 mM EDTA and twice in transfer buffer without EDTA before transferring to membranes. Proteins were visualized by Western blotting.
Quantitation and Statistics
All data represent the mean ± SEM (standard error of the mean) of at least three independent experiments unless noted. A statistical analysis was conducted with two-tailed Student's t-test in the Excel program (Microsoft, Redmond, WA). Differences in means were considered statistically significant at p ≤ 0.05. Significance levels are: *, p<0.05; **, p<0.01; ***, p<0.001. Figures were assembled with Photoshop (Adobe, San Jose, CA). Pearson's colocalization coefficient values were computed using the "Coloc 2" function in ImageJ software. | 2020-03-05T10:42:45.054Z | 2020-02-28T00:00:00.000 | {
"year": 2020,
"sha1": "8d12b4aa36e4b7398ab59dbca231c04ff5e7a55d",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S258900422030136X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81839d0574fd73f5e1e9d2fe958ce086b56552b6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
3955703 | pes2o/s2orc | v3-fos-license | Nonstationary time series prediction combined with slow feature analysis
using the Slow Feature Analysis (SFA) approach, then introducing the driving force into a predictive model to predict non-stationary time series. In essence, the main idea of the technique is to consider the driving forces as state variables and incorporate them into the prediction model. To test the method, experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted. The results 10
SFA is a method that extracts slowly varying driving forces from a quickly varying (2) 74 where H(t) is an N k × matrix and k = m + m (m + 1)/2.
75
To simplify (2) as ( (4) 81 Here, k y is first-order derivative, calculated by ) 3) Normalize the expanded signal H(t), by an affine transformation to generate H'(t) 83 with zero mean and uni t covariance matrix: 4) By means of the Schmidt algorithm, the function space (5) is orthogonalized as 98 Where c is a given constant and )} ( { 1 t y is the output signal of the slowest driving 99 force obtained by equation (7).
,
In this study, the SFA was tested on a logistic map To test the ability to construct the driving force from this modified logistic map, we Here, m 1 and m 2 are the given embedding dimensions for respectively, and N = n − (max (m 1 , m 2 ) − 1)τ is the number of phase points on the 126 trajectory.
127
Based on this trajectory, a predictive model to predict the future state of the system 128 can be established as: Where p is the prediction time step (considered as 1 in the present study), ) (t ε is the 131 fitting error, and fˆis assumed to be a quadratic polynomial in this study. The Takens 132 embedding theorem is only appropriate for an autonomous dynamical system, 133 therefore we followed the method of Stark (1999) to embed the driving forces in the 134 same state space for a nonstationary system. The next task is to find the cost function (12 ) when it reaches its minimum value. For more 136 details, refer to the studies of Farmer and Sidorowich (1987) and Casdagli (1989).
137
3 Experime nts 138 We applied the prediction technique described above to perform some prediction 139 expe riments using several given non-stationary time series. The experiment presented 140 in Section 3.1 was performed with data from the modified logistic model given abo ve.
142
The prediction experiments were based on 5000 data points from the above verified 143 logistic map (8) with the assumed driving force (9). The first 4800 data points were driving force } { 1 y when the embedd ing dimens ion was chosen as 3, 5,7,9,11,174 respectively (shown in Figure 3). Note that the result did not change significantly with 175 different embedding dimension values. 176 We established a pr ediction mod el for winter ozone data by incorporating the The experimental results for this case are listed in Table 1, also shown in Figure 4 186 and Figure 5. From Table 1, it can be seen that all RMSE values given by the forcing 187 mod el were much lower than those by the stationary mod el. Figure In this study, we first constructed the driving forces of a time series based on the SFA 207 approach, and then the driving forces were introduced into a predictive mode l. By The true and constructed driving force.
Figure 2
The comparison of prediction skills between models combined with or 313 without driving force.
Figure 3
The slowest driving force with different embedding dimension for total 315 ozone data.
316 Figure 4 The comparison of prediction skills between models combined with or 317 without driving force. Errors (Dobson Units) at prediction steps with or without forcing input. 349 | 2018-03-18T14:58:16.631Z | 2015-07-10T00:00:00.000 | {
"year": 2015,
"sha1": "39290c4a3c0a417437ed54f04248d189cf267d54",
"oa_license": "CCBY",
"oa_url": "https://www.nonlin-processes-geophys.net/22/377/2015/npg-22-377-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6e91808bd05f96eaf17f7dd8ad2c0eb07c6dd225",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
269632383 | pes2o/s2orc | v3-fos-license | Research on the evolutionary control of unsafe behavior of construction personnel based on multi-field coupled-homogeneous analysis model
Unsafe behavior among construction personnel poses significant risks in petroleum engineering construction projects. This study addresses this issue through the application of a multi-field coupled homogeneous analysis model. By conducting case analyses of petroleum engineering construction accidents and utilizing the WSR methodology, the influencing factors of unsafe behaviors among construction personnel are systematically categorized into organizational system factors, equipment management factors, and construction personnel factors. Subsequently, employing Risk coupling theory, the study delves into the analysis of these influencing factors, discussing their coupling mechanisms and classifications, and utilizing the N-K model to elucidate the coupling effect among them. Furthermore, a novel approach integrating coupling analysis and multi-agent modeling is employed to establish an evolutionary model of construction personnel’s unsafe behavior. The findings reveal that a two-factor control method, concurrently reinforcing equipment and construction personnel management, significantly mitigates unsafe behavior. This study provides valuable insights into the evolution of unsafe behavior among construction personnel and offers a robust theoretical framework for targeted interventions. Significantly, it bears practical implications for guiding safety management practices within petroleum engineering construction enterprises. By effectively controlling unsafe behaviors and implementing targeted safety interventions, it contributes to fostering sustainable development within the petroleum engineering construction industry.
Introduction
Petroleum engineering construction is complex and high-risk, with unsafe behavior among construction workers being a leading cause of accidents.Various factors interact, resulting in frequent unsafe behavior.Analyzing these factors is crucial for risk management and prevention of accidents [1][2][3].
The study of risk management in petroleum engineering construction has primarily focused on risk identification, assessment, and control.Methods used for risk identification include qualitative and semi-quantitative methods such as expert assessment, hierarchical analysis, risk ranking matrices, and LCA methodology [4][5][6][7].Various intelligent risk assessment methods have been proposed, including self-organizing mapping neural network theory, dynamic risk-based inspection methodology, and assessment models based on neural networks and Monte Carlo simulation [8][9][10].Bayesian network models, hybrid fuzzy DEMATE-L-ANP approaches and the PSO-SVR algorithm have been used for the quantitative assessment of petroleum engineering hazards [11][12][13].For risk control, qualitative methods such as surveys, interviews, and case studies [14][15][16], as well as quantitative methods that combine risk control with mathematical models and artificial intelligence algorithms [17,18], have been employed.However, research in this area has been limited in its examination of the coupling effects of multiple risk factors that can contribute to petroleum engineering construction accidents.
Risk coupling theory refers to a theoretical framework in the field of risk management that considers the interactions and influences between different risk factors.It emphasizes the potential interconnections and mutual impacts among various risk factors, which may lead to the occurrence of one risk event or exacerbate the occurrence of others, thereby increasing the overall risk level [19][20][21].Several researchers have explored the coupling effects of multiple risk factors in various industries by constructing N-K models, including marine ship accidents and tunnelling construction accidents [22,23].These studies aim to achieve dynamic control of coupled risks.Combining the N-K model with different research methods has also been used to enhance the risk coupling model, such as the AHP-based N-K model for constructing a risk coupling model of transportation accidents in complex marine environments [24].The BBN-NK model has been utilized for risk analysis of tunnel fire accidents, quantifying various risk factors and simulating the frequency of tunnel fire accidents [25].Furthermore, the N-K model has been applied to construct a social risk coupling evaluation model for major projects based on complex network theory [26], as well as a risk coupling method for offshore oil well accidents using the Dynamic Bayesian Network (DBN) and N-K model to analyze the interaction among risk factors and the accident risk evolution process [27].While the N-K model is extensively used in engineering risk management, there is a lack of research on analyzing the coupling effects of factors impacting the unsafe behavior of personnel involved in petroleum engineering construction using the N-K model.Fundamentally, the overarching objective of risk coupling theory is to facilitate a comprehensive comprehension and adept management of the intricate relationships inherent within risk systems [28].By doing so, it endeavors to empower stakeholders with the tools necessary for more efficacious prevention and mitigation strategies across a spectrum of risks.Therefore, this research employs risk coupling theory to elucidate the evolutionary process of unsafe behaviors among construction personnel within the complex system of petroleum engineering construction projects.This aids in gaining a deeper understanding of the interactions among various factors influencing unsafe behaviors of construction personnel and provides a theoretical framework for constructing subsequent risk evolution model.
Unsafe behavior of construction personnel is a fundamental component of the petroleum engineering construction system that significantly impacts engineering construction risks.Multi-Agent modelling has emerged as an effective approach to linking the behavior of microlevel subjects in complex systems with macro-level problems, and to assess and reveal the evolution of management issues based on the behavior of micro-level subjects [29,30].Building an evolutionary model of unsafe behavior of engineering construction personnel using multiagent technology can reproduce the evolutionary process of unsafe behavior and quantify the construction risk status, and can also combine with risk control strategies to control construction risks by intervening in unsafe behavior.This approach forms a comprehensive construction risk control method for petroleum engineering.Prior research has employed multi-agent technology to investigate engineering risks, including the identification of the main risk factors of shield tunnelling projects using a safety computational experiment system and the simulation of various risk control strategies for construction projects using a risk evolution model based on multi-agent modelling and stochastic methods [31,32].Multi-agent-based collaborative emergency decision-making algorithms have also been proposed for emergency response to traffic accidents, and simulation modeling of human-machine-environment-related risk factors in coal mines has been conducted using multi-agent technology [33,34].
In summary, in the realm of existing research concerning the analysis of unsafe behaviors among construction personnel in petroleum engineering construction, there remains a scarcity of studies that amalgamate macro-level influencing factors with the micro-level evolution of these behaviors.Gaps persist in the research pertaining to the control of unsafe behaviors among construction personnel based on the risk coupling theory.Hence, this study explores the coupling mechanism and effect between the influencing factors of unsafe behaviors by combining the WSR methodology and the N-K model.Additionally, it proposes a novel multifield coupled homogeneous analysis model by incorporating the multi-agent modeling method to elucidate the evolution mechanism of unsafe behaviors among construction personnel.By dynamically scrutinizing the evolution of construction personnel's unsafe behaviors, this study formulates targeted safety interventions aimed at addressing the issue of unsafe behaviors in petroleum engineering construction projects.These innovative methodologies contribute to a more profound comprehension of the evolution of unsafe behavior among construction personnel and offer valuable insights for enhancing safety in petroleum engineering construction endeavors.
Coupling analysis of factors affecting unsafe behavior of construction personnel
Through an analysis of construction accident statistics in petroleum engineering companies, coupled with the application of the WSR methodology, this study delves into the factors influencing unsafe behaviors across three distinct levels: physical, principle, and human factors.It elucidates the interaction and coupling mechanisms among these factors.Furthermore, leveraging the risk coupling theory, the study categorizes influencing factors into single-factor coupling, two-factor coupling, and multi-factor coupling.Subsequently, employing the N-K model, a probability analysis of various coupling forms is conducted.These efforts aim to furnish a theoretical framework for the construction of a multi-agent model of unsafe behavior evolution.
Identification of influencing factors
This paper counts 96 cases of construction accidents in petroleum engineering companies.The sources of case statistics are: government websites, published accident investigation reports and "Cases of Petroleum Engineering Safety Accidents".By analyzing the causes of unsafe behavior of construction workers in different construction accident cases, the influencing factors of unsafe behavior of construction workers can be derived.
In conjunction with a case study of construction accidents and the WSR methodology, the factors that influence the unsafe behavior of construction workers are summarized.Professor Gu, a renowned Chinese systems science expert, suggested the "Wuli-Shili-Renli" (WSR) methodology as a comprehensive system methodology [35], which is based on the principle of analyzing problems from multiple perspectives in order to resolve problems in complex systems more effectively.The WSR methodology splits complex systems into three levels: physical level, principle level, and human level.The physical level refers to the system's matter and energy, the principle level refers to the system's logic and information, and the human level refers to the system's human behavior and decision-making.Applying WSR methodology to petroleum engineering construction risk cases, analyzing the causes of unsafe behaviors of construction workers individually, and discovering that three factors influence construction personnel's unsafe behavior: equipment management, organizational system, and construction personnel, which correspond to the physical level, the principle level, and the human level, respectively.
The causes of unsafe behavior of construction personnel in each case were studied based on the three levels of unsafe behavior influencing factors, and the statistical distribution of factors influencing unsafe behavior of petroleum engineering construction personnel is determined (Table 1).The physical factor-principle factor effect implies that the unsafe behavior of construction workers is the result of the interaction between equipment management factor and organization system factor, and the rest are the same.
Coupling mechanism of unsafe behavior influencing factors
Coupling refers to the degree or manner in which two or more systems, components, or subsystems interact or influence each other, whereas risk coupling refers to the interaction or interdependence between two or more risk events in a risk system, where the occurrence of one risk event may cause the occurrence or exacerbation of other risk events, thereby increasing the overall risk level of the system or project.Hence, the coupling of factors influencing unsafe behaviors of construction workers in petroleum engineering refers to the mutual influence and interdependence among the factors that affect construction workers' unsafe behaviors.As seen in Fig 1, the coupling mechanism of elements influencing risky behavior in construction workers is investigated from the three-dimensional perspective of physical factor, principle factor, and human factor.
Fig 1 reveals that the influence of physical level factor may result in the failure of construction equipment; the influence of principle level factor may result in the failure of the organization's management system and the decline of the safety atmosphere; and the influence of human level factor may result in the lack of safety awareness among construction personnel.The coupling of these three factors leads to the unsafe conduct of construction workers.In the process of petroleum engineering construction, if the effect of one of the aforementioned factors does not reach the threshold of unsafe behavior of construction personnel, it will not directly lead to the occurrence of unsafe construction behavior.However, the interaction of the aforementioned factors will produce a coupling effect, which will increase the likelihood that construction personnel will engage in unsafe behavior.
Coupling types of unsafe behavior influences
The risk coupling can be categorized into the three following categories based on the number of factors that influence the risky behavior of construction professionals involved in the coupling.
1. Single-factor coupling, refers to the internal interaction of certain factors that affect construction workers to implement unsafe behaviors.There are three types of single factor coupling, namely equipment management factor coupling, organizational system factor coupling, and construction personnel factor coupling.
2. Two-factor coupling, refers to the interaction between two types of factors that affect construction personnel to implement unsafe behaviors.There are three types of two factors coupling, namely equipment management-organizational system factors coupling, equipment management-construction personnel factors coupling, organization system-construction personnel factors coupling.
3. Multi-factor coupling, refers to the interaction among three or more factors that affect construction personnel to implement unsafe behaviors.There is one type of multi-factor risk coupling, which is the coupling of equipment management-organization system-construction personnel factors.
Coupling analysis of factors influencing unsafe behavior based on N-K model
In recent years, risk coupling models have been increasingly investigated by academics.The N-K model, which is a general model used to represent interactions in complex systems, is suited for examining the coupling between factors that influence the unsafe behavior of construction workers.
Professor Kauffman proposed the N-K model for the analysis of gene combinations in the 1990s based on random Boolean networks [36].The N-K model describes the system as a network of N elements, each having two states (0 or 1), and K connections between the elements that represent their interactions and dependencies, the minimum value of K is 0, and the maximum value is N-1.
The evolutionary system of risky construction worker behavior consists of three distinct types of influencing factors: equipment management factor, organizational system factor, and construction personnel factor.According to the statistics of 96 petroleum engineering construction accident incidents in Table 1, "0" and "1" are used to indicate the three categories of influencing elements."0" indicates that the risk factor is not involved in coupling but construction workers still engage in unsafe behavior and "1" indicates that the risk factor is involved in coupling and causes construction workers to engage in unsafe behavior, then there are eight distinct forms of coupling between the three influencing factors.
For example, the coupling form "110" represents the mutual coupling of equipment management and organizational system factors that lead to unsafe behaviors among construction workers, whereas "P 110 " represents the probability that the mutual coupling of equipment management and organizational system factors leads to unsafe behaviors among construction workers.The number and probability of different coupling forms leading to unsafe behavior of construction personnel are shown in Fig 2 and Table 2, respectively.
In Table 2, P 110 = 0.167 indicates that the probability of construction personnel exhibiting unsafe behaviors due to the mutual coupling of equipment management and organizational system factors is 0.167, which is the ratio of the number of times equipment management and organizational system factors cause construction workers to exhibit unsafe behaviors to the total number of unsafe behaviors exhibited by construction workers.Through similar calculations, it is possible to determine the probability of construction workers engaging in unsafe behavior under various coupling forms.The probabilistic analysis of the diverse forms of coupling derived from the N-K model can aid in parameterizing the state transition process for construction workers in subsequent multi-agent model.Risk coupling precipitates the manifestation of unsafe behaviors, constituting a transformation in the state of construction workers.Thus, the probability of unsafe behavior resulting from various forms of risk coupling serves as the probability of the state transition process occurrence in the multi-agent model.
Evolution model of unsafe behavior of construction personnel
Beginning with the control of unsafe construction worker behaviors, the relationship between the influencing factors of unsafe behaviors and the status of construction workers is investigated.In conjunction with the multi-agent modeling method and the coupling analysis of unsafe behavior affecting elements, a Multi-Agent model of unsafe behavior evolution is devised based on the coupling of influencing factors.
Related settings of unsafe behavior evolution model
The establishment of the evolution model of construction workers' unsafe behavior based on multi-agent necessitates the selection of a suitable modeling platform and the establishment of an appropriate engineering context and system environment.
(1) Modeling platform.AnyLogic not only provides a visual interface and graphical modeling tools, but also enables visual analysis of systems to aid users in comprehending the behavior and attributes of multi-intelligent systems, which are widely utilized in management science study disciplines.In this research, all multi-agent modeling and simulation experiments are implemented on the Anylogic8 platform.
(2) Engineering background.Using petroleum engineering as an illustration, the primary source of its construction risk is the unsafe behavior of construction personnel.
(3) System environment setting.The model represents the state of petroleum engineering construction risk via changes in the behavior of construction personnel, and the fundamental unit of the model is the construction personnel.The equipment management, organizational system, and construction personnel factors are taken as the influencing factors of the construction personnel's behavior, and the control parameters are set to represent the strength of the control measures of the influencing factors.The system environment setting contains the following: ① Number of construction personnel and simulation cycle; ② Construction personnel in various states: include the safe, transitional, and unsafe states of construction personnel; ③ Ways and degrees of influence of unsafe behavior influencing factors on the state of construction personnel; ④ Ways and degrees of influence of control measures on the state of construction personnel; ⑤ Behavioral decision making and evolutionary mechanisms: describe the behavioral transformations and transitional pathways of construction personnel in response to varying conditions.
(4) Setting of construction personnel behavior status transformation.By analyzing the evolution process of unsafe behavior of construction workers in construction accident cases, it is set that in the initial stage of system evolution, construction workers are performing safe construction, called normal status.
When the construction personnel are affected by organizational system factors, their state will undergo two transitions: transition1: from a normal state to a state that produces unsafe behavior, called an unsafe status; transition2: from a normal state to a state that is affected by the organization system but does not produce unsafe behavior, called transition status1.
Afterwards, when the construction personnel are affected by equipment management factors, their state will undergo three transitions: transition3: only affected by equipment management factors, from transition status1 to an unsafe status; transition4: affected by the coupling effect of organizational system factors and equipment management factors, from transition status1 to unsafe status, transition5: transition from transition status1 to a state affected by equipment management factors but without unsafe behavior, called transition status2.
Finally, when construction personnel are affected by construction personnel factors, their state will undergo four transitions: transition6: only affected by construction personnel factors, from transition status2 to an unsafe status, transition7: affected by the coupling of equipment management factors and construction personnel factors Influenced from transition status2 to unsafe status, transition8: affected by the coupling effect of organizational system factors and construction personnel factors, from transition status2 to unsafe status, transition9: affected by three factors of organizational system, equipment management and construction personnel Coupling effects, from transition status2 to unsafe status.
As depicted in Fig 3, create an evolution model of unsafe construction personnel behavior using the AnyLogic platform.
(5) Model variables setting.Expert evaluation method and entropy method are used to determine the comprehensive weight of influencing factors in order to quantify the importance of three types of influencing factors, namely equipment management, organization system, and construction personnel.Following are the specific steps of the entropy method for calculating comprehensive weights: ① Collecting data via expert evaluations, constructing evaluation matrices, and normalizing; ② Construct standardized matrix: ③ Calculate the entropy value of each indicator: ④ Calculate the coefficient of difference: ⑤ Calculate the weight: Using entropy method to calculate the comprehensive weight of the three types of influencing factors, as shown in Table 3.
The model's variables and parameters are set as shown in Table 4 based on the previous coupling analysis of the state evolution process of construction workers.
Initial setting of model parameters and evolutionary results
Referring to the current state of petroleum engineering construction enterprises and the general effectiveness of their control measures for the three influential factors of organizational structure, equipment management, and construction personnel.In the initial setting, the total number of construction workers is set to 1,500, the values of control parameter1, control parameter2, and control parameter3 are all set to 1, and the number of simulation days is set to 100.The number of construction workers in the normal, transition 1, and transition 2 states is collectively referred to as the number of safe states, while the number of construction workers in an unsafe status is referred to as the number of unsafe states.As depicted in Fig 4, under the initial setting, the number of construction personnel transforming into an unsafe state increased due to the influence of unsafe behavior influencing factors.When the number reaches about 380, the rising trend of the number of construction personnel in an unsafe state slows and reaches a stable state as a result of construction enterprises' control measures.When the system reaches a stable state, the number of construction personnel in an unsafe state fluctuates considerably and will remain close to 400, representing nearly one-third of the total number of workers, indicating that the construction risk of The status of construction personnel affected by organizational system factors but not producing unsafe behaviors
Transition Status2
The status of construction personnel affected by equipment management factors but not producing unsafe behaviors 4
Unsafe Status
The status of construction personnel implementing unsafe behaviors 5
Transition1
The probability that construction personnel is transformed into unsafe status by organizational system factors, based on the previous coupling analysis, the value is set to P 010 = 0.052 The probability that construction personnel is transformed into transition status1 by organizational system factors, the value is 1-P 010 = 0.948 The probability that construction personnel is transformed into unsafe status by equipment management factors, based on the previous coupling analysis, the value is set to P 100 = 0.021 The probability that construction personnel is transformed into unsafe status by the coupling effect of organizational system and equipment management factors, based on the previous coupling analysis, the value is set to P 110 = 0.167 9
Transition5
The probability that construction personnel is transformed into transition status2 by equipment management factors, the value is 1-P 100 -P 110 = 0.812 10
Transition6
The probability that construction personnel is transformed into unsafe status by construction personnel factors, based on the previous coupling analysis, the value is set to P 001 = 0.083 11
Transition7
The probability that construction personnel is transformed into unsafe status by the coupling effect of equipment management and construction personnel factors, based on the previous coupling analysis, the value is set to P 101 = 0.25 12
Transition8
The probability that construction personnel is transformed into unsafe status by the coupling effect of organizational system and construction personnel factors, based on the previous coupling analysis, the value is set to P 011 = 0.365 13
Transition9
The probability that construction personnel is transformed into unsafe status by the coupling effect of three factors, based on the previous coupling analysis, the value is set to P 111 = 0.062 Parameter controlling the construction personnel factor, with the initial value set to 1 https://doi.org/10.1371/journal.pone.0302263.t004petroleum engineering is relatively high at this time.Consequently, after adjusting the control parameters, the results of the evolution of unsafe behavior of construction personnel under the conditions of single-factor, two-factor, and multi-factor control are studied, analyzed, and summarized to identify key measures to control unsafe behaviors of construction personnel.
Result analysis by controlling different influencing factors
Under the assumption that the sum of the control strengths of the influencing factors is the same, the evolution process of the unsafe behavior of construction personnel is simulated when controlling a single factor, a double factor, and a multi-factor, and the number of people in an unsafe status when the system is stable is analyzed to compare the effects of various control methods.
Single-factor control
(1) Strengthen the control of organizational system factor.Assuming that the control strength of a single factor is increased by 1 unit to simulate the strengthening of the management of organizational system factor, the value of control parameter1 is increased to 2, while control parameters2 and control parameters3 maintain their initial values.Analyzing the influence on the behavioral state of construction personnel when strengthening the control of organizational system factor yields the construction personnel status change diagram in Fig 5(A) below.
As depicted in Fig 5(A), when only reinforcing the control of organizational system factors, the number of construction personnel in an unsafe state slows after increasing to about 300 as a result of the strengthening of control measures and reaches a stable state.When the system reaches a stable state, the number of construction workers in an unsafe condition will remain at approximately 350.At this point, not only will the number of unsafe workers fluctuate more, but the decline relative to the initial setting will be minimal, indicating that the control effect does not increase significantly.(2) Strengthen the control of equipment management factor.Assuming that the control strength of a single factor is increased by 1 unit to simulate the strengthening of the management of equipment management factor, the value of control parameter2 is increased to 2, while the values of control parameter1 and control parameter3 maintain their initial values.Analyzing the influence on the behavioral state of construction personnel when demonstrates that when only the equipment management control factor is reinforced, the number of construction workers in an unsafe state slows after reaching about 250 and then stabilizes as a result of the control measure's strengthening.When the system reaches a stable state, the number of unsafe employees will remain around 300, representing one-fifth of the total number, and the fluctuation will decrease, indicating that the control effect has increased compared with the previous control method.demonstrates that when only the construction personnel factor control is reinforced, the number of construction personnel in an unsafe state decelerates after reaching about 200 and reaches a stable state as a result of the strengthened control measure.When the system reaches its steady state, the number of unsafe employees will remain between 200 and 250, or one-sixth of the total.The number of unsafe workers fluctuates minimally at this time and decreases more than with the previous two control methods.Consequently, the control effect of this method is most apparent when only a single factor is controlled.
Two-factor control
(1) Strengthen the control of organizational system and equipment management factors.
Assuming that the control of the two types of factors increases by 0.5 units each, simulate the concurrent strengthening of the control of equipment management and organizational system factors.
Taking into account the coupling effect between the influencing factors and the ratio of the equipment management factor to the organizational system factor's weight is 0.171:0.303.Consequently, the value of control parameter2 is increased to 1.5, the value of control parame-ter1 is increased to 1.885, and the initial value of control parameter3 is maintained.As shown in Fig 6(A) below, the status change diagram of construction personnel is obtained.Fig 6(A) demonstrates that when the organizational system and equipment management factors control are simultaneously reinforced, the number of construction personnel in an unsafe state decelerates after reaching about 250 and then stabilizes as a result of the strengthened control measures.When the system is in a stable state, the number of construction employees in an unsafe state will fluctuate widely between 250 and 350, representing one-fifth of the total number.Therefore, compared to single-factor control, not only is the control effect not substantially improved, but it is also inferior to the effect of merely strengthening the control of construction personnel factors.
(2) Strengthen the control of organizational system and construction personnel factors.
Assuming that the control of the two types of factors increases by 0.5 units each, simulate the concurrent strengthening of the control of organizational system and construction personnel factors.
Taking into account the coupling effect between the influential factors and the weight ratio of the organizational system factor to the construction personnel factor is 0.303:0.526.Consequently, the value of control parameter1 is increased to 1.5, the value of control parameter3 is increased to 1.868, and the initial value of control parameter2 is maintained.As shown in demonstrates that, when the organizational system and construction personnel factor control are simultaneously reinforced, the number of construction personnel in an unsafe state decelerates after reaching about 180 and then stabilizes due to the strengthening of control measures.When the system is in a stable state, the number of unsafe workers will be maintained at approximately 200, representing about one-seventh of the total number, with minor fluctuations.Consequently, its control effect is enhanced compared to the previous two-factor control method and is also superior to single-factor control.
(3) Strengthen the control of equipment management and construction personnel factors.
Assuming that the control of the two types of factors increases by 0.5 units each, simulate the concurrent strengthening of the control of equipment management and construction personnel factors.Taking into account the coupling effect between the influential factors and the weight ratio of the equipment management factor to the construction personnel factor is 0.171:0.526.Consequently, the value of control parameter2 is increased to 1.5, the value of control parameter3 is increased to 2.538, and the initial value of control parameter1 is maintained.The construction personnel status change diagram is obtained, as shown in demonstrates that, when equipment management and construction personnel factors control are simultaneously strengthened, the number of construction personnel in an unsafe condition decreases gradually after reaching about 180 and then stabilizes as a result of the strengthened control measures.When the system is in a stable state, the fluctuation magnitude will be modest and the number of construction workers in unsafe conditions will be maintained at approximately 150.Among the three methods of implementing two-factor control, this method has the lowest number of construction workers in an unsafe state and the most obvious control effect.
Multi-factor control
Assuming that the control of each type of factor increases by 0.33 units, simulate the situation where the management of organizational system, equipment management, and construction personnel factors are simultaneously strengthened.
Taking into account the coupling effect between the influential factors and the weight ratio of the three influential factor types is 0.171:0.303:0.526.Consequently, the values of control parameter1 are increased to 1.585, control parameter2 is increased to 1.As shown in Fig 7, the number of construction personnel in an unsafe state decelerates after reaching approximately 150 and reaches a steady state as a result of the strengthening of control measures by simultaneously enhancing the control of three influencing factors.When the system is in a stable state, there will be between 180 and 200 construction workers in unsafe conditions.Therefore, when compared to single-factor control, the control effect is significantly enhanced; however, when compared to two-factor control, the improvement in control effect is less than when equipment management and construction personnel factor controls are strengthened simultaneously, and the fluctuation magnitude is also larger.This indicates that under certain conditions, three-factor control is less effective than two-factor control.
Discussion
In this chapter, the impact of different control strategies on the development of unsafe behaviors among construction workers was investigated through simulations employing single-factor, two-factor, and multi-factor control methods.While the single-factor approach demonstrates some efficacy in mitigating unsafe behaviors, its failure to account for the intricate interactions and coupling effects among different factors restricts its overall effectiveness, rendering it less potent compared to multi-factor interventions.Multi-factor control strategies offer a more holistic perspective by considering the interplay between diverse factors.However, given the typical constraints on resources for safety interventions in practical settings, the experiments compare the effectiveness of different multi-factor control methods under the constraint of constant total control intensity across influencing factors.The experimental results reveal that the two-factor control method, which concurrently enhances management of both equipment and construction personnel factors, emerges as the most efficient, significantly reducing the incidence of unsafe behaviors.
These findings offer crucial insights into risk management and safety interventions in the context of petroleum engineering construction projects.Particularly, regarding the formulation of precise and practical interventions, a comparative analysis of various multifactorial control methods is conducted to determine the optimal control method for addressing the unsafe behaviors exhibited by construction personnel, especially under resource-constrained conditions.Meanwhile, by establishing a multi-field coupled-homogeneous analysis model, this study delves into the underlying dynamics of unsafe behavior progression during petroleum engineering projects.It furnishes a theoretical framework to inform the safety management practices concerning project personnel, thereby contributing to the broader discourse on safety enhancement within construction domains.
Conclusion
The risk coupling theory is added to the evolution analysis of unsafe construction worker behaviors in petroleum engineering in this study.First, the classification and coupling mechanism of factors influencing unsafe behaviors of construction workers are discussed.Next, the coupling model of the factors influencing unsafe behavior is constructed using N-K model to reveal the coupling effect between various influencing factors.Lastly, the evolution model of unsafe behavior of construction personnel based on influencing factors coupling is constructed using multi-agent modeling, and the following conclusions are drawn: 1. Based on the case study and WSR methodology, the influencing factors of unsafe behavior of petroleum engineering construction personnel can be categorized as equipment management, organizational system, and construction personnel.There will be interaction and interdependence between the three influencing factors, thereby producing a coupling effect.
2. The coupling effect of multiple factors is the root cause of construction workers' unsafe behaviors.There are potential factors influencing unsafe behavior in the process of petroleum engineering construction, and the likelihood of unsafe behavior of construction personnel due to the role of a single factor is low.However, when coupled with other influencing factors, there will be a coupling effect, and the likelihood of unsafe behavior of construction personnel will be greatly increased, resulting in construction accidents.
3. Using the coupling analysis of the influencing factors of construction personnel's unsafe behavior in conjunction with multi-agent modeling, an evolutionary model of unsafe behavior of construction personnel based on risk coupling is developed in order to analyze the state change law of construction personnel and investigate the evolution path of their unsafe behavior.
4. Considering the coupling effect of influencing factors, the evolution of unsafe construction worker behavior when different control methods are chosen is simulated by modifying the control parameters.On the basis of an analysis of the simulation results of the behavioral state of construction personnel, it can be concluded that: Of all the control methods, the two-factor control method that simultaneously strengthens the management of equipment and construction personnel is the most effective in controlling the evolution of unsafe behavior among construction personnel, and control of multiple factors is superior to control of a single factor.This suggests that when controlling the evolution of unsafe behavior of construction personnel under the influence of multiple factors, it is preferable to control the coupling of multiple factors rather than a single factor, and that it is preferable to control the two-factor coupling of equipment management and construction personnel rather than all factors simultaneously.This conclusion can reduce the unnecessary consumption of human and material resources in the process of construction risk management and improve the efficacy of construction risk management in petroleum engineering construction enterprises.
The study of the coupling analysis of the factors influencing the unsafe behavior of petroleum engineering construction personnel and the evolution of their unsafe behavior control can fully investigate how to control the unsafe behaviors of construction personnel and reduce the occurrence of construction accidents by constructing a comprehensive system of safety interventions, which has theoretical significance for the risk control of petroleum engineering construction.Simultaneously, by analyzing the evolution model of unsafe behavior among construction personnel, elucidating the key factors influencing the efficacy of controlling their unsafe behavior can effectively inform the implementation of extant management protocols.This facilitates the formulation of targeted interventions for unsafe behavior among construction personnel, thereby guiding safety management practices.This approach addresses the intricacies of construction safety control processes and effectively prevents major safety incidents in petroleum engineering construction.Consequently, it holds practical significance in steering safety construction within the petroleum engineering industry.When implementing construction risk management, construction enterprises should take into account the coupling effect of risk factors, the evolution of unsafe behaviors, and the control effect of different influencing factors to select a more scientific and reasonable construction risk management method, so as to obtain better control effects and further reduce the incidence of construction accidents, which is conducive to the sustainable development of petroleum engineering construction.
Future perspective
The utilization of a multi-agent model in this study for modeling the actual construction process introduces certain limitations, primarily stemming from model assumptions and rational simplifications of real-world scenarios.Such simplifications may create disparities between the model and actual situations, thereby compromising the accuracy of the model and the reliability of research conclusions.
In the future, research endeavors will delve deeper into the influencing factors of construction safety, encompassing external environmental factors, organizational culture, among others.Furthermore, there will be a concerted effort to explore the intricate interrelationships among different influencing factors, enhance the quality and quantity of data, and consequently improve the accuracy and predictive capabilities of the model.This will enhance the applicability of the study, facilitating the application of research findings to practical construction management.By surmounting these limitations and broadening the scope of research, the advancement of petroleum engineering construction safety studies and practices will be propelled forward, contributing to the reduction of construction accidents and the enhancement of construction safety plans.
Fig 3 .
Fig 3. Evolution model of unsafe behavior of construction personnel.https://doi.org/10.1371/journal.pone.0302263.g003 Fig 4 depicts the construction personnel state change diagram generated by executing the unsafe behavior evolution model of construction personnel.
organizational system factor, with the initial value set to 1 15 Control Parameter2 Parameter controlling the equipment management factor, with the initial value set to 1 16 Control Parameter3
( 3 )
Fig 5(C) demonstrates that when only the construction personnel factor control is reinforced, the number of construction personnel in an unsafe state decelerates after reaching about 200 and reaches a stable state as a result of the strengthened control measure.When the system reaches its steady state, the number of unsafe employees will remain between 200 and 250, or one-sixth of the total.The number of unsafe workers fluctuates minimally at this time and decreases more than with the previous two control methods.Consequently, the control effect of this method is most apparent when only a single factor is controlled.
Fig 6 (B) below, the status change diagram of construction personnel is obtained.Fig 6(B)
Fig 6 .
Fig 6.Diagram of construction personnel status change.(a) when controlling organizational system and equipment management factors; (b) when controlling organizational system and construction personnel factors; (c) when controlling equipment management and construction personnel factors.https://doi.org/10.1371/journal.pone.0302263.g006 Fig 6(C) below.Fig 6(C) 33, and control param-eter3 is increased to 2.015.As shown in Fig 7 below, the status change diagram of construction personnel is obtained.
Table 1 . Statistical table of influencing factors causing unsafe behavior of construction personnel. Type of Effect Causes Number of Unsafe Behavior
https://doi.org/10.1371/journal.pone.0302263.t001 | 2024-05-10T05:07:34.722Z | 2024-05-08T00:00:00.000 | {
"year": 2024,
"sha1": "0aa511523731cebf1fbf64b50a9227f2cfac04d6",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0302263&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0aa511523731cebf1fbf64b50a9227f2cfac04d6",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
114615385 | pes2o/s2orc | v3-fos-license | Determination of the optimal time and cost of manufacturing flow of an assembly using the Taguchi method
The optimization of the parts and assembly manufacturing operation was carried out in order to minimize both the time and cost of production as appropriate. The optimization was made by using the Taguchi method. The Taguchi method is based on the plans of experiences that vary the input and outputs factors. The application of the Taguchi method in order to optimize the flow of the analyzed assembly production is made in the following: to find the optimal combination between the manufacturing operations; to choose the variant involving the use of equipment performance; to delivery operations based on automation. The final aim of the Taguchi method application is that the entire assembly to be achieved at minimum cost and in a short time. Philosophy Taguchi method of optimizing product quality is synthesized from three basic concepts: quality must be designed into the product and not he product inspected after it has been manufactured; the higher quality is obtained when the deviation from the proposed target is low or when uncontrollable factors action has no influence on it, which translates robustness; costs entailed quality are expressed as a function of deviation from the nominal value [1]. When determining the number of experiments involving the study of a phenomenon by this method, follow more restrictive conditions [2].
Introduction
The optimization of the manufacturing flow of parts and of the assembling operation was realized with the purpose of decreasing the manufacturing time and related costs. The Taguchi Method is a method based on the experience plans in which certain input factors are varied and the outputs are determined. The Taguchi Method comprises nine important steps, namely: -Step 1 Formulating the issue, the experiment's success depending on understanding the nature of the issue; -Step 2 Identifying the output performance characteristics of the most relevant issues; -Step 3 Identifying control, noise and signal factors. The control factors are those which can be controlled under normal production conditions. Noise factors are difficult to control under these conditions and signal factors affect the average performance of the process; -Step 4 Choosing the variation levels, the possible interactions between factors and the effects of those interactions; -Step 5 Constructing the corresponding orthogonal matrix; -Step 6 Preparation of the experiments (or simulations). -Step 7 Conducting the experiments/simulations corresponding to the orthogonal matrix; -Step 8 Statistic analysis and interpretation of results; -Step 9 Verifying and confirming the results obtained in the experiment or simulation [3]. In this case we realized models for the optimization of the manufacturing flow for two parts and one model of optimization of their installation. The controllable input factors were varied after a wellestablished plan and for each combination the execution time, as well as the related cost were determined.
Experimental setup
The analysis of the optimization of manufacturing flows was realized within PSAPET PROD COM S.R.L enterprise, in Bacau, that has as main field of activity the manufacturing of hydrostatic assemblies and subassemblies necessary in the aerospace industry. The assembly analyzed for the identification of optimum solutions in the optimization process of the manufacturing flow is the one used in the previous method and is composed of: flange 1 pc bushing 1 pc helicoidal inserts 9 pcs, of which 6 pcs M2.5x5 and 3 pcs M3x4,5, supplied by a collaborating company which has the execution of such elements in its activity portfolio.
In order to optimize the manufacturing flow of the two main components, respectively the bushing and flange, five of the basic operations were chosen which were assigned two execution versions as presented in table 1. In order to build optimal plans for the execution of components necessary for the presented assembly, we started from the two execution versions presented in table 2 and 3.
The first version
Version 1 for the execution of parts 1 and 2 regarding the manufacturing time (classic equipment, respectively: 3-axis lathe, semi-automatic, manually operated). Version 1 for the execution of parts 1 and 2 regarding the manufacturing cost (classic equipment, respectively: 3-axis lathe, semi-automatic, manually operated). Version 2 for the execution of parts 1 and 2 regarding the manufacturing cost (performant equipment, respectively: 5-axis lathe, ultrasound, 3D measuring machine, automatic). The generation of the experiment plan was realized with the Design Expert program and contains a number of 32 versions. For each of these versions the execution times of the respective flow and the execution costs were determined. Both the generated plan, as well as the obtained values, corresponding to the time [s] and costs (u.m.) are presented in figure 1. In order to optimize the manufacturing flow of part 2, the same procedure is used since the selected influence factors are the same. The work plan for part 2 is shown in figure 4. The optimization of the installation flow can be realized using the same methodology, considering the operations corresponding to assembling. In this case we selected three operations from the total operations required for assembling which were assigned two levels of variation (table 7). As in the case of part no. 1 and 2, in order to find the optimum version for the realization of the assembling operation two execution versions were created, which are presented in tables 8, 9 , 10 and 11. The work plan, times and costs corresponding to each combination of the three factors are presented in figure 7.
Conclusions
After applying this method in order to optimize the manufacturing flow of the entire presented assembly, the following aspects were found: For part no. 1, the flange, the optimum execution versions for which minimum costs are obtained are the following: cutting with automated saw, turning 1 st and 2 nd turn with 3-axes CNC 3 lathe, ultrasound adjustment, automated wash, final inspection with the 3D measuring machine. For this execution version, the cost related to the execution of the part is of 407.85 u.m. and the execution time is 209'35''. For part no. 2 two optimum solutions were found for which its execution costs are minimum, respectively: automatic saw, 3--axis CNC lathe, ultrasound adjustment, automated wash, manual final inspection, the cost related to the execution of the part is of 196.2 u.m. and the execution time is 99'65''; automatic saw, 3-axis CNC lathe, ultrasound adjustment, automated wash, final inspection with 3D measuring machine, the cost related to the execution of the part is of 196.7 u.m. and the execution time is 99'65'' It can be observed that minimum manufacturing costs were obtained both for part no. 1, as well as for part no. 2. In the case of the assembling process a single optimum version was obtained, this being composed of the following operations: ultrasound adjustment, automated wash, final inspection with 3D measuring machine, the cost related to the performance of the assembling process is of 102.25 u.m. and the execution time is 66'55'', which shows us that both an optimization of the assembling time, as well as an optimization of the costs was succeeded. | 2019-04-15T13:06:09.662Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "2083028bf3835e85f5a02cbc5e5dac1152a1b898",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/145/6/062009/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3874b5d5bc0e5ec6e6804287315865bd3602a652",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
17102941 | pes2o/s2orc | v3-fos-license | A meta-analysis of declines in local species richness from human disturbances
There is high uncertainty surrounding the magnitude of current and future biodiversity loss that is occurring due to human disturbances. Here, we present a global meta-analysis of experimental and observational studies that report 327 measures of change in species richness between disturbed and undisturbed habitats across both terrestrial and aquatic biomes. On average, human-mediated disturbances lead to an 18.3% decline in species richness. Declines in species richness were highest for endotherms (33.2%), followed by producers (25.1%), and ectotherms (10.5%). Land-use change and species invasions had the largest impact on species richness resulting in a 24.8% and 23.7% decline, respectively, followed by habitat loss (14%), nutrient addition (8.2%), and increases in temperature (3.6%). Across all disturbances, declines in species richness were greater for terrestrial biomes (22.4%) than aquatic biomes (5.9%). In the tropics, habitat loss and land-use change had the largest impact on species richness, whereas in the boreal forest and Northern temperate forests, species invasions had the largest impact on species richness. Along with revealing trends in changes in species richness for different disturbances, biomes, and taxa, our results also identify critical knowledge gaps for predicting the effects of human disturbance on Earth's biomes.
Introduction
Developing the ability to predict the consequences of environmental change is one of the most significant challenges in ecology today (Chapin et al. 1997;Pereira et al. 2010;Dawson et al. 2011). Evidence is increasingly demonstrating the negative effects of biodiversity loss on Earth's ecosystem processes (Loreau et al. 2001;Balvanera et al. 2006;Wardle et al. 2011;Hooper et al. 2012). Given the increasing human domination of Earth's biomes, establishing accurate estimates of the magnitude of biodiversity loss resulting from common human disturbances, such as landuse change and habitat loss, species invasions, climate change, and nutrient additions, is of particular importance.
With the sustainability of human life on Earth relying on the services that healthy ecosystems provide (Millenium Ecosystem Assessment 2005), a better understanding of why and how species are being lost from ecosystems is needed. There is considerable uncertainly however over the magnitude of current and future biodiversity loss (Barnosky et al. 2012). Previous attempts to estimate changes in biodiversity have relied heavily on expert opinion (Sala et al. 2000) or have focused on estimating extinction risks for particular taxa . Potential time lags between environmental change and extinctions (Krauss et al. 2010), differences in extinction rate estimates based on species-area-curves (He and Hubbell 2011), and other confounding effects have made predicting the magnitude of species loss resulting from various human-caused disturbances problematic (Bellard et al. 2012). Differences between modeling approaches and uncertainties within model projections have also resulted in widely varying predictions of future biodiversity change (Pereira et al. 2010). For example, two modeling approaches used to project the future global extinction risks for birds revealed very different estimates with Jetz et al. (2007) projecting 253-455 species at risk for extinction by the year 2100 while Sekercioglu et al. (2008) projects 2150 species at risk for extinction in the same time period.
One potential solution for the uncertainties in estimating biodiversity loss is making use of studies that report the difference in species richness between disturbed and undisturbed habitats. Species richness is not synonymous with biodiversity, with the later serving as a more complex description of both the variation in the number of species and their relative abundances, along with genetic and ecosystem variation. However, declines in species richness can be an indicator of biodiversity loss and with studies that examine changes in species richness following disturbances among the most common in the ecological literature, compiling these studies and analyzing changes in species richness can provide information on the potential biodiversity loss occurring from human-caused disturbances. In this study, we have compiled studies that document the effects of human-caused disturbances on changes in species richness into a dataset that includes 327 empirical values of change in species richness taken from 245 previously published experimental and observational disturbance studies. Using a combination of categorical and continuous meta-analyses, we determined whether there are differences in the fraction of change in species richness resulting from five anthropogenic disturbances: species invasions, nutrient addition, temperature increase, habitat loss or fragmentation, and land-use change. We also determined whether the fraction of change in species richness caused by the disturbances differed based on: (1) the type of biome (Northern temperate forest, boreal forest, tropical, or aquatic); (2) the type of species (producer, ectotherm, or endotherm); (3) the type of study (experimental or observational); (4) the initial species richness; (5) the latitude of the study site; and (6) the length of the experiment.
Selection criteria
Our dataset was compiled by searching the biological literature for studies that reported the effects of anthropogenic disturbances on species richness. We focused on five anthropogenic disturbances that have been identified as major drivers of current biodiversity decline: species invasions, nutrient addition, temperature increase, habitat loss or fragmentation, and land-use change (Vitousek et al. 1997;Jackson et al. 2001). We performed a literature search using the ISI Web of Science database of the following research areas: "environmental sciences ecology", "biodiversity conservation", and "marine freshwater biology". We used the following search expressions: "biodiversity loss" OR "species loss" OR "species richness" OR "community change" AND ("invasi* species" OR "habitat loss" OR "land use change" OR "climate change" OR "experiment* warm*" OR increas* temperature" OR "eutrophication" OR "nutrient add*"). A final search of the literature was completed on 10 February 2013. We searched for studies that experimentally manipulated disturbances (n = 113) or observational studies that compared a disturbed habitat with a control (undisturbed) habitat (n = 214). The literature search yielded 114,597 citations, of which 245 studies that included 327 values of change in species richness were included in the final dataset ( Fig. 1). All papers reported a mean measure of species richness and a corresponding error measure in both a disturbance and a reference condition. Values were given in 147 of the responses. For studies that did not explicitly state results but instead showed results in a figure, as was the case for 180 responses, the average species richness and corresponding error measures were estimated using GetData Graph Digitizer software (S. Fedorov, Russia). If a study presented multiple responses, these were only included when the responses were for different disturbance categories, different geographical regions, or different trophic categories. Multiple responses that did not differ from each other based on these criteria were averaged, and the average response was used in the dataset. We also averaged responses for studies that manipulated disturbance over a range of disturbance intensities. Because we had no way of separating the effects of multiple disturbances, we only included responses that gave the effects of single disturbances. If a combined disturbance effect was given, the response was not included in the dataset. We took data from the final sampling date for studies that measured species richness over a period of time.
We followed strict guidelines in choosing the types of disturbance studies to be included in the analysis. For the temperature increase category, we only included studies that increased temperature per se (e.g., Chapin et al. 1995). Studies that combined other climate change effects, such as altered light and precipitation, with increases in temperature were not included (e.g., Zhou et al. 2006) nor were observational studies which compared natural communities growing in areas that differ in ambient temperature (e.g., Kennedy 1996). For nutrient addition, we included studies that enriched the experimental community with nitrogen (e.g., Bonanomi et al. 2009), phosphorus (e.g., Cherwin et al. 2008), or a fertilizer solution containing one or both of these nutrients (e.g., Lindberg and Persson 2004). Habitat loss and land-use change comprised two separate categories, each with their own subcategories. We classified a disturbance as a form of habitat loss if the habitat had been fragmented or reduced in size. If the habitat size remained the same but was transformed from a natural habitat to either an urban or agricultural habitat the disturbance was classified as landuse change. For habitat loss, we included studies that fragmented experimental plots (e.g., Gonzalez and Chaneton 2002), those where habitat size had been reduced (Bonin et al. 2011), or those that compared communities present in control sites to those that had been clear cut or logged (e.g., Biswas and Malik 2010). We did not include studies that combined corridor effects with fragmentation (e.g., Rantalainen et al. 2004). We grouped three habitat loss categories (fragmentation, reduction in habitat size, and logging) into a single habitat loss category. While the fraction of change in species richness did differ between the three categories (fragmentation = 13% decline, n = 21; reduction in habitat size = 25% decline, n = 22; logging = 30% decline, n = 15), the difference was not statistically significant, likely due to the high variability within categories caused by low sample sizes (Q b = 1.96, P = 0.38). We decided to group together these three habitat categories to increase the overall sample size for the habitat loss category. All studies in the land-use change category were studies that observed species richness in a site that had been transformed from a natural area to one dominated by human development (e.g., urban or suburban areas) or agricultural activity, compared with a reference natural area. The fraction of change in species richness differed between the two landuse change categories (human development = 19% decline, n = 21; agricultural activity = 48% decline, n = 39); however, the difference was not statistically significant (Q b = 3.05, P = 0.081); thus, we grouped the two types into a single land-use change category to increase the sample size for this category. Finally, for species invasions, we included studies in which a non-native species, or group of non-native species, was added (intentionally or unintentionally) to an established community. We did not include studies that examined the effects of removing non-native species from previously invaded communities (e.g., Ostertag et al. 2009). We also included observational studies that examined an uninvaded site with an invaded site. We only included native species richness for the invasion studies.
We grouped studies into one of three species categories. Producers included both terrestrial and aquatic primary producers, ectotherms included animals that rely on external sources to control body temperature, and endotherms included animals that produce heat internally. We chose these three species categories as we wanted to be more specific than simply grouping species as consumers or producers yet separating the studies into anything more specific than these three categories would have resulted in very small sample sizes for each category. Categorizing the consumer species as ectotherms and endotherms takes into account differences in metabolic activity and body size, as endotherms are generally larger bodied animals compared with ectotherms.
The 245 studies spanned most of the Earth's biomes. Ten terrestrial biomes were classified into condensed ecoregions (Bailey 1998): arctic, alpine, northern temperate forest, southern temperate forest, boreal forest, savanna, mediterranean, desert, grassland, and tropical. Freshwater, marine, estuary, and wetland ecosystems were combined into an aquatic biome category. In the categorical analysis of the biomes, effects were only calculated for disturbance-biome combinations that included five or more responses. Thus, effect sizes were not calculated for 36 of the 55 disturbance-biome combinations, as they did not fit this minimum sample size. In order to make relevant comparisons across biomes, we only analyzed biomes that contained effect sizes for at least three of the five disturbances. This left four biomes in the analysis: northern temperate forest, boreal forest, tropical, and aquatic. The study site latitude was also recorded for each response to examine any potential latitudinal gradients in species loss.
Data analysis
We performed weighted random effects meta-analyses using MetaWin 2.0 software (Rosenberg and Adams 2000). We considered a random effects analysis, which assumes that effect sizes will exhibit random variation among studies, to be more appropriate than a fixed effects analysis as the studies included in our dataset vary widely in both methodology and biological factors. We used the standard equation for the response ratio (RR) as the effect size for the analyses to compare species richness (SR) between experimental (e) and control (c) conditions. The response ratio is calculated as: The response ratio is a common effect size measure in ecological meta-analyses (Hedges et al. 1999). Response ratios that are significantly greater or less than zero indicate a larger change in species richness between the control and disturbance treatments, with the direction of change indicating whether the disturbance increased or decreased species richness relative to the reference condition. The percentage of change in the responses that we refer to in the text was calculated as: The independent responses in the analyses were weighted according to their sample variances to account for the difference in statistical precision between individual experiments (Hedges et al. 1999). Greater weight is given to experiments whose estimates have a smaller standard error, thus a greater precision. Variance for each response was calculated as: We used a combination of categorical and continuous meta-analyses to test for the effect of seven different factors on the magnitude of change in species richness between the control and disturbance treatments. The factors were as follows: (1) Disturbance type (categorical). This factor included five disturbance type categories: habitat loss, land-use change, species invasion, nutrient addition, and temperature increase.
(2) Study type (categorical). This factor included two study type categories: experimental and observational, but only compared between habitat loss and species invasions, as these were the only two disturbances to contain response from both study types.
(3) Species category (categorical): This factor included three species categories: producers, ectotherms, and endotherms. (4) Biome type (categorical): This factor included four biome categories: northern temperate forest, boreal forest, tropical, and aquatic. (5) Initial species richness (continuous): Initial species richness was given as the species richness in the control treatment for each response. (6) Latitude (continuous): Latitude of the study site was given for each response.
(7) Experimental length (continuous): Length (in days) was given for each of the experimental responses.
Observational studies were not included in this analysis. We used 95% confidence intervals to determine significant differences in an effect size from zero, indicating an increase or decrease in species richness in the disturbed treatment compared with the control. If the confidence interval overlaps with zero then the species richness did not significantly increase or decrease in that response. We also used 95% confidence intervals to compare between the different categories within a factor. If the intervals of two categories overlapped then they are said to not significantly differ in their magnitude of species richness change. In categorical meta-analysis, one can test whether the effect sizes of the categories within a factor are homogeneous, meaning that the observed differences are due to sampling error and not due to the effect of the category by examining the heterogeneity statistic (Q). The total heterogeneity for a group of comparisons (Q t ) is partitioned into within-group heterogeneity (Q w ) and between-group heterogeneity (Q b ). A significant between-group heterogeneity statistic indicates that the effect sizes between the different categories in a factor are significantly heterogeneous, and thus, the differences are not due to sampling error alone. In the continuous meta-analysis models, we used the model heterogeneity (Q m ) to determine whether the relationship between the magnitude of species loss and the continuous variable was significant. A significant Q m indicates that the model explains a significant amount of variability within effect sizes.
Publication bias
Publication bias occurs when there is a tendency toward publishing only significant results, leading to a disparity in the strength or direction of the results of published studies compared with those of unpublished studies (Moller and Jennions 2001). We used two methods to test for publication bias in our dataset. The first was visual inspection of a "funnel plot" of sample size against effect size. If the effect sizes were derived from a random sample of studies, suggesting that publication bias is low, the plot should reveal a funnel shape, with small sample sizes showing a larger variance in individual effects and a decrease in variance with increasing sample size (Moller and Jennions 2001). The second method we used to test for publication bias was the calculation of a fail-safe number (Rosenthal 1991). The fail-safe number provides an estimate of the number of future studies needed to change a significant effect to a non-significant one (Moller and Jennions 2001). Therefore, a larger fail-safe number relates to a lower chance of publication bias. Rosenthal (1991) has suggested that a fail-safe number that is equal to or greater than 5n + 10 (where n is the number of studies) provides evidence of a robust effect size that is not skewed by publication bias.
Results
Our results show that, on average, human disturbances lead to an 18.3% reduction (n = 327) in species richness ( Fig. 2A). Significant decreases in species richness were observed for land-use change (24.8% decline, n = 61), species invasions (23.7% decline, n = 131), and habitat loss and fragmentation (14% decline, n = 60). Significant changes in species richness were not observed for nutrient addition (8.2% decline, n = 46) or temperature increase (3.6% decline, n = 28). Between-class heterogeneity was marginally insignificant (Q b = 9.12, P = 0.058), suggesting that the magnitude of species loss slightly differs between the different disturbance type categories. When grouped according to experimental or observational study type, which only applied for species invasions and habitat loss, experimental studies had a slightly lower, yet not significantly different, fraction of decline in species richness than observational studies (Fig. 2B). This difference was more pronounced for species invasions, where experimental invasion studies had a lower decline in species richness losing an average of 11.2% less species (n = 16) than observational invasion studies, which lost an average of 24.2% of species (n = 116). In contrast, the fraction of decline in species richness between experimental and observational habitat loss studies was more similar, with experimental studies losing an average of 10.2% of species (n = 23) and observational studies losing an average of 17.1% of species (n = 37). The between-class heterogeneity was marginally insignificant (Q b = 6.83, P = 0.078), suggesting that the fraction of decline in species richness slightly differs between the two study type categories.
In general, the type of species affected by the disturbance influenced the fraction of change in species richness observed across all disturbances (Q b = 10.59, P = 0.005), and when separated into the different disturbance categories (Q b = 25.91, P = 0.011). Across all disturbances, endotherms showed a greater decline in species richness than ectotherms or producers (Fig. 3). Endotherms lost an average of 33.2% of species across all disturbances while ectotherms lost 10.5%, and producers lost 25.1%. While there was a significant decline in the species richness of endotherms across all disturbances, when the disturbances were separated, none showed a significant decline. The greatest decline in endotherm species was caused by species invasions (44.9%), followed by land-use change (30.5%) and habitat loss (36.7%).
Producer species richness only significantly declined from species invasions (30.3%) and nutrient addition (19.5%). Land-use change (22.2%), habitat loss (13%), and temperature increase (8.9%) all led to insignificant declines in producer species richness. In contrast, land-use change was the only disturbance to lead to significant decline in species richness in ectotherm species (24%). Habitat loss led to a slightly insignificant decline in ectotherm species (12.8%), while species invasions led to insignificant ectotherm species loss (5.2%), and nutrient addition and increases in temperature led to a small, yet insignificant, increase in ectotherm species richness (15.5% and 5.3%, respectively). Overall, species invasions was the only disturbance type to cause significantly different fractions of change in species richness between species categories resulting in a significantly greater decline in producer species richness (30.3%) compared with ectotherm species richness (5.2%).
Higher initial species richness was associated with greater species loss across all disturbances (Q m = 4.61, P = 0.032; Fig. 4). At low initial richness values, disturbances were also generally associated with increases in species richness in disturbed habitats. When separated by disturbance type, there was no relation between change in species richness and initial species richness for any of the disturbances (Fig. S1).
There was no relationship between latitude and the fraction of change in species richness across all disturbances or for each disturbance category (Fig. S2). Experimental length also had no significant effect on the fraction of change in species richness (Q m = 0.5, P = 0.48; Fig. S3). Heterogeneity statistics and corresponding P-values for all factors included in the metaanalysis are displayed in Table S1.
Disturbances across biomes
Our results also show that the vulnerability of an ecosystem's biodiversity differs across the Earth's biomes. Across all disturbances, significant decline in species richness was observed in all three of the terrestrial biomes we compared, and no significant change in species richness was observed in the aquatic biome (Fig. 5). This decline was greatest in the boreal forests with a 25.8% decline in species richness (n = 31), followed by the tropics (25.6% decline, n = 60), and northern temperate forests (22.5% decline, n = 52). Between-class heterogeneity was marginally significant (Q b = 6.99, P = 0.072), suggesting that the fraction of decline in species richness slightly differs among the four biome categories across all disturbances.
None of the five disturbances led to significant change in species richness in the aquatic biome, and the effect of all disturbances did not differ from each other. Comparisons among the disturbance categories in the three terrestrial biomes revealed that the disturbances led to different fractions of change in species richness among the different biomes. Habitat loss led to significant declines in species richness in the tropics (25.6% decline, n = 27), yet did not lead to significant declines in either the boreal (17.2% decline, n = 17.22) or northern temperate forest (26.7%, n = 9) biomes. Land-use change was the disturbance that led to the greatest fraction of decline in species richness in the tropics (32.4% decline, n = 24), yet did not lead to significant decline in the northern temperate forest biome. Species invasions led to the greatest fraction of decline in Figure 3. Change in species richness in species categories following anthropogenic disturbances. Average response ratios and 95% confidence intervals of species richness changes in producers, ectotherms, and endotherms across all disturbances and for each disturbance type. Values that significantly differ from zero, according to the 95% confidence intervals, are indicated with an asterisk. The values in parentheses represent the number of responses included in the analysis. species richness in both the boreal (33.5% decline, n = 19) and northern temperate forest (30% decline, n = 25) biomes, yet did not lead to significant decline in the tropics (23.7% decline, n = 9). Nutrient addition led to insignificant declines in species richness in both the boreal and northern temperate forest biomes.
Publication bias
The funnel plot of sample size and effect size displays a clear funnel shape with a much greater spread of studies with small sample sizes and a decrease in this spread as sample size increases (Fig. 6). This funnel shape is what is expected if the studies are compiled from a random sampling with similar research methods (Moller and Jennions 2001), as it is expected that studies with smaller sample size will be less precise than those with large sample size. The clear funnel shape we see in this plot suggests that our dataset is unlikely to suffer from publication bias. The fail-safe number calculated for our dataset (5548.3) also indicates low publication bias. This number is over three times larger than Rosenthal's (1991) suggested number (5*327 + 10 = 1645) thus indicating that the negative effect of disturbance on species richness is very robust to publication bias.
Fraction of change in species richness across disturbances
While habitat loss is widely cited as the leading cause of biodiversity decline (Vitousek et al. 1997;Pimm and Raven 2000;Millenium Ecosystem Assessment 2005) our results show that, at local scales, species invasions result in a fraction of change in species richness comparable to land-use change and greater than that caused by habitat loss/fragmentation. One potential explanation for this result lies in the difference in the fraction of change in species richness between observational and experimental studies of species invasions. Observational studies differ from experimental studies in many ways, one being the dispersal ability of species. Dispersal is likely limited in experimental plots while there is more environmental heterogeneity and dispersal potential in observational studies. A greater potential for replacement of lost individuals or species in observational studies implies that the fraction of decline in species richness might be lower. Our results reveal the opposite pattern with observational studies of species invasions resulting in a decline in species richness that was over two times greater than the decline observed in experimental invasion studies. This large disparity between study types did not occur for habitat loss. Observational disturbance studies are unable to completely control for multiple disturbances, and it is likely that the disturbed treatment will differ in other ways from the reference treatment. Therefore, our results suggest that observational studies of species invasions may be partially confounded by multiple disturbances. Invasive species often establish more frequently in disturbed rather than pristine habitats (Didham et al. 2005) and are often associated with other disturbances, such as nutrient addition (Kercher and Zedler 2004) or habitat disturbances (Mac-Dougall and Turkington 2005). Thus, the high fraction of decline in species richness resulting from species invasions may in part be due to synergistic interactions with other disturbances (Brook et al. 2008). The large, negative effect that we found of land-use change on species richness is not surprising, as previous predictive studies have stressed the impact of land-use change, suggesting that it will be more significant than climate change, nitrogen deposition, and species invasions Sala et al. 2000).
Change in species richness across taxa
Our analysis of the fraction of change in species richness between species categories shows that land-use change results in significant declines in species richness in ectotherms, marginally insignificant declines in species richness in endotherms, and insignificant declines in species richness in producers (Fig. 2). This result supports the hypothesis that disturbances that transform habitats, including land-use changes, habitat destruction, and habitat fragmentation are correlated with the extinctions of species in high trophic positions and with large body sizes (Holyoak 2000;Gonzalez et al. 2011).
Species invasions was the only disturbance that led to significant declines in species richness in producers, and this decline was greater than the decline of ectotherm species following species invasions (Fig. 3). Endotherm species loss following species invasions was greater than for both producers and ectotherms, yet the sample size was small (n = 5), compared with that of the producers (n = 86) and ectotherms (n = 40), and thus, the effect was not significant. These results suggest that species invasions are more likely to lead to extinctions of producer species than consumer species. A potential explanation of this strong effect of invaders on producer species relates to the nature of the invader species. The studies in our analysis that examined the effect of invasions on ectotherms and endotherms included those where an ectotherm or endotherm species was the invader as well as those where a producer species was the invader. Although not statistically significant, decline in ectotherm species richness was greater in studies where the non-native invader was a producer (5.8% decline, n = 27), compared with when an ectotherm species was the invader (0.5% increase, n = 13). This pattern was also seen in endotherms, with endotherm species experiencing a 47.8% decline in species richness (n = 3) following invasion by a producer species, and a 35% decline in species richness (n = 2) following invasion by an endotherm species. These results suggest that non-native species that impact the base of a food web have a stronger effect than higher trophic level invaders. Because all of the studies in our analysis that measured the effect of an invader on producers were those where the non-native invader was also a producer species, the strong effect of producer invaders was likely amplified due to the direct competition the non-native invader had with the native species for resources.
An important caveat to consider when examining changes in species richness between different studies is the difference in how finely resolved the taxonomic groups are. There is typically much better characterization among larger animals, such as mammals, compared with small animals, such as invertebrates. Because smaller species may not be as finely resolved, the magnitude of change in species richness in these species may be potentially underestimated. Across all disturbances, our results show that the decline in endotherm species richness is greater (33.2%) than the decline in ectotherm species richness (10.5%). While this could be due to the hypothesis that extinctions are more highly correlated with large bodied and high trophic level species (Holyoak 2000;Gonzalez et al. 2011), it could also be a result of a difference in how the studies included in our dataset characterized the species.
It is well established that diverse communities are generally more stable in terms of their biomass than communities with lower species richness (Tilman 1999;McCann 2000;Campbell et al. 2011). Our finding that higher initial species richness was associated with greater species loss suggests that the stabilizing role of high diversity on productivity (McCann 2000;Tilman et al. 2006) may not extend to biodiversity maintenance in the face of perturbations. That biodiversity is more difficult to maintain in diverse communities may be related to skewness of species-abundance distributions toward rare species in more diverse communities (Sankaran and McNaughton 1999). There is substantial evidence that rare species are more susceptible to extinction following a disturbance than common species (Davies et al. 2004;Lavergne et al. 2005;Gonzalez et al. 2011). Therefore, the high fraction of decline in species richness we found following habitat loss and species invasions may be, in part, due to the high richness of rare, extinction-prone species in these studies compared with the other disturbances.
Change in species richness across biomes
We observed a similar fraction of decline in species richness across all disturbances in the three terrestrial biomes that we compared. However, while all terrestrial biomes experienced an overall significant decline in species richness, the aquatic biome experienced a much lower, and insignificant, decline across all disturbances. This suggests that the effect of anthropogenic disturbances on species richness is stronger in terrestrial ecosystems. The difference in food web structure and ecosystem properties between aquatic and terrestrial habitats suggests that these systems can differ in their response to disturbances. The very low effect of species invasions in the aquatic biome (2.4% decline) was surprising given the strong overall effect of invasions across all biomes (23.7% decline) and within each of the terrestrial biomes (boreal = 33.5%, northern temperate forest = 30%, and tropical = 23.7%). A potential explanation for this small effect of species invasions in the aquatic biome is that there may be facilitative interactions occurring between the invaders and native species. There is evidence that non-native species can facilitate native species and potentially lead to increases in native species richness (Simberloff and Von Holle 1999;Rodriguez 2006). The most common mechanism of non-native facilitation of native species is habitat modification, where the invader modifies the natural habitat to create new physical structures, which can benefit native species (Rodriguez 2006). One of the most familiar examples of habitat modification by an invader is the dense, complex colonies formed by invasive bivalves in aquatic ecosystems. These colonies have been shown to cause a shift from planktonic to benthic food webs (Simberloff and Von Holle 1999) and lead to increases in invertebrate diversity (Stewart and Haynes 1994). Of the 33 aquatic species invasion responses in our dataset, we found that the non-native invaders had a positive interaction with the native species in almost half of the responses (19 negative effects vs. 14 positive effects). While facilitative interactions between invaders and native species has been shown to occur almost equally in terrestrial and aquatic habitats (Rodriguez 2006), we did not find the same strong dichotomy in the direction of the effect of species invasions in the terrestrial responses from our dataset (82 negative effects vs. 15 positive effects). Therefore, our analysis suggests that positive interactions between invaders and native species may be more common in aquatic ecosystems.
While the fraction of decline in species richness across all disturbances was similar among the three terrestrial biomes, we found variation among the biomes in terms of the disturbances that had the largest impact on species richness (Fig. 5), suggesting that the effects of humancaused disturbances are not uniform across the Earth's biomes. The decline in species richness caused by both land-use change and habitat loss was only significant in the tropical biome. This may be due to the extremely high level of taxonomic diversity in tropical biomes (Myers et al. 2000), which is particularly affected by a reduction in available living space. On the other hand, species invasions were the only disturbance to lead to significant decline in species richness in the northern temperate forest and boreal forest biomes. This suggests that species in these biomes are more robust to reduced habitat area, but may be vulnerable to competition for resources imposed by invaders.
Previous attempts to estimate and predict the magnitude of species loss resulting from different human-caused disturbances have relied heavily on expert opinion (e.g., Sala et al. 2000). In contrast, the estimates of declines in species richness presented here are based on empirical studies. In Sala et al. (2000), the authors predict future biodiversity change for five drivers of biodiversity decline (land use, atmospheric CO 2, nitrogen deposition, climate, and biotic exchange) in 11 terrestrial biomes along with lakes and streams. To make these predictions, they combine the expected changes in the five drivers with the expected impact of each driver on biodiversity loss. Sala et al. (2000) uses knowledge from experts to estimate the biodiversity impact of each driver in each biome, ranking the estimates from a low impact on biodiversity (1) to a high impact on biodiversity (5). While studies such as Sala et al. (2000) and the present meta-analysis differ in many respects including spatial scale and as such are not directly comparable, a number of the similarities and differences in the results of the two studies are interesting. While land-use change is estimated in Sala et al. (2000) to lead to more species loss across all biomes than any other disturbance, we only find significant declines in species richness resulting from land-use change in the tropics. Species invasions show a much stronger effect on species richness in northern temperate forests and boreal forest biomes based on the meta-analysis presented here than land-use change or habitat loss. Additionally, Sala et al. (2000) predict a relatively low impact of species invasions in these biomes. Our results based on empirical values of change in species richness show that the effect of species invasions on species richness will be much greater than is currently estimated by expert knowledge and that the effects of species invasions may be comparable to those of land-use change and habitat loss. While it is evident from our analysis that human-caused disturbances do not all contribute to the same fraction of decline in species richness in each biome, the large effect of species invasions stresses the significant impact that non-native species have on ecosystems.
An important caveat to consider when comparing our empirical estimates of change in species richness to estimates of global biodiversity change, such as those made by Sala et al. (2000) is how differences in spatial scale can impact the patterns of biodiversity loss. A variety of species richness patterns have been shown to be dependent upon spatial scale. These include differences in the strength or shape of the relationship between diversity and productivity (Chase and Leibold 2002), diversity and latitude (Hillebrand 2004), and diversity and altitude (Rahbek 2005) between local and regional scales. With spatial scale playing a large role in the strength of several species richness relationships, the effects of anthropogenic disturbances on the magnitude of species loss may also be scale-dependent, and thus the strength of the effects we found may differ at the global scale. It is possible that a disturbance might decrease local species richness, but increase regional species richness, as could be the case for the effects of nutrient addition if the scale-dependent diversity-productivity relationship holds true (Chase and Leibold 2002). A further understanding of the scale dependence of anthropogenic disturbances on the magnitude of species loss will be essential in order to make future biodiversity loss predictions at the global scale.
The latitudinal gradient in species richness from the polar to equatorial regions has been demonstrated for a wide variety of species and is one of the most fundamental patterns of biodiversity (Rosenzweig 1995;Willig et al. 2003). It has been suggested that biodiversity is potentially more difficult to maintain in diverse communities, due to these communities containing many rare species that are more susceptible to extinction following a disturbance (Sankaran and McNaughton 1999;Davies et al. 2004). Therefore, we would expect to find a latitudinal gradient in the fraction of change in species richness following disturbances, with low latitude regions that contain greater biodiversity experiencing a greater decline in species richness. However, we did not observe latitudinal gradients in the fraction of change in species richness for any of the five disturbances (Fig. S2). This suggests that while low latitude regions may be more susceptible to species loss due to their high biodiversity, the relative fraction of species richness decline does not differ from higher latitude regions with lower diversity. The issue of spatial scale may also be playing a role in the absence of a latitudinal gradient in our results. As discussed above, the latitudinal diversity gradient is known to differ between spatial scales, with a stronger and steeper relationship at the regional scale compared with the local scale (Hillebrand 2004). Because our metaanalysis examined change in species richness at the local scale, it is possible that a similar relationship exists, with a weaker relationship between latitude and the magnitude of species loss following anthropogenic disturbances at the local scale compared with what we might observe at a larger, regional scale.
Knowledge gaps
In compiling the dataset of disturbance studies for this meta-analysis, we found major data gaps, making it impossible to make comparisons of the effects of disturbance types across all of Earth's biomes. These gaps are the result of research intensity skewed toward different disturbances for different biomes, rather than research aimed at gaining a broad understanding of global effects of disturbance. While disturbance-mediated biodiversity loss has been well studied in some biomes, for example boreal and northern temperate forests, information is largely lacking for disturbances in others. For example, climate change is extensively studied in the arctic and alpine biomes yet few studies have addressed the effects of increases in temperature on biodiversity in northern temperate forest or tropical biomes. Likewise, while species invasions have been well studied in many of Earth's biomes, data are lacking for arctic and alpine biomes. These shortcomings limit our ability to compare the major drivers of biodiversity loss across the Earth's biomes and need to be addressed in order to accurately assess how anthropogenic disturbances affect biodiversity at the global scale.
These knowledge gaps seriously hinder our ability to make accurate predictions of future biodiversity change. These shortcomings should be considered when using empirical values of species loss to make predictions of biodiversity change. It will be necessary for future studies to focus on exploring biodiversity changes in the areas where knowledge gaps exist to further improve these projections of future biodiversity change.
Future directions
In this study, we used species richness to measure the magnitude of biodiversity change. Species richness is the most common biodiversity measure used in disturbance studies, and while it provides a measure of the magnitude of species loss, it is unable to account for the complex changes in composition and community structure that can take place following disturbances (Mendenhall et al. 2012). For example, following deforestation in Costa Rica for agricultural activity bird species richness did not significantly differ between forested and agricultural habitats, suggesting that the deforestation did not have the large negative impact on the community that would be anticipated (Daily et al. 2001). However, community composition differed greatly between habitats, with the natural forest and agricultural area showing two distinct communities (Mendenhall et al. 2011). Changes in the abundance distributions of species in disturbed ecosystems are also important indicators of change. Another overlooked problem when using only average values of change in species richness as a metric of biodiversity is that disturbances can also affect the consistency, or predictability, of a response (Fraterrigo and Rusak 2008;Murphy and Romanuk 2012). Response predictability is a relatively unexplored consequence of disturbances but an understanding of response predictability changes can help to better interpret the ecological effects of disturbances (Murphy and Romanuk 2012). Future disturbance studies should concentrate on including alternative measures of biodiversity, such as community composition, along with species richness to obtain a clearer understanding of how different types of human-caused disturbances affect biodiversity. Figure S1. Relationship between initial species richness and the change in species richness for habitat loss (A), land-use change (B), species invasion (C), nutrient addition (D), and temperature increase (E). Figure S2. Relationship between latitude and the change in species richness following anthropogenic disturbances. Figure S3. Relationship between experimental length (days) and change in species richness for experimental studies. Table S1. Heterogeneity statistics and corresponding P-values for each of the categorical and continuous factors included the meta-analysis. Data S1. Dryad data. | 2018-04-03T01:17:54.468Z | 2013-12-12T00:00:00.000 | {
"year": 2013,
"sha1": "432dbcb43ea6fba1adc305ad7f3aaa50645dd4e2",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/ece3.909",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "432dbcb43ea6fba1adc305ad7f3aaa50645dd4e2",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
73502963 | pes2o/s2orc | v3-fos-license | Non‐invasive analysis of tumor mutation profiles and druggable mutations by sequencing of cell free DNA of Chinese metastatic breast cancer patients
Background Metastatic breast cancer (MBC) remains an incurable disease worldwide. Tumor gene mutations have evolved and led to drug resistance in the treatment course of MBC. However, data on the mutation profiles and druggable genomic alterations of MBC remain limited, particularly among Chinese patients. Our study aimed to depict the mutation profiles and identify druggable mutations in circulating tumor DNA (ctDNA) in Chinese MBC patients. Methods Targeted deep sequencing of a 1021‐gene panel was performed on 17 blood samples and 5 available tissue samples from 17 Chinese MBC patients. Results We identified 60 somatic mutations in 17 blood samples (sensitivity 100%). Somatic mutations were identified in the blood samples of all patients, and 41.18% (7/17) of patients harbored at least one druggable mutation. A high ctDNA level in plasma is associated with shorter progression‐free survival. Conclusion Targeted deep sequencing of cell free DNA is a highly sensitive, noninvasive method to depict tumor mutation profiles, identify druggable mutations in MBC, and predict patient outcome. Our study shed light on the utility of ctDNA as noninvasive “liquid biopsy” in the management of MBC.
Introduction
Breast cancer is the most common type of cancer in women across the world, and the incidence is rapidly increasing in China. 1 Metastatic breast cancer (MBC) remains an incurable disease; however, an increasing number of targeted therapies have resulted in ever-improving clinical outcomes. Many studies have shown that clonal evolution of MBC can arise in disease progression or following multiple lines of therapy, leading to treatment failure. 2 Thus, understanding the genomic profile of the tumor is critical for managing MBC, especially with respect to the selection of targeted therapies and when switching regimens. Tumor tissue biopsy in MBC is invasive and often inaccessible (e.g. in bone metastasis). However, as next generation sequencing (NGS) technology has advanced, noninvasive molecular profiling of MBC has become available.
Blood-derived circulating tumor DNA (ctDNA) is reported to be detectable in the plasma of patients with advanced malignancy including BC, 3 acting as a potential noninvasive source to characterize the genomic features of tumors. 4 Many studies on ctDNA have used digital PCR techniques to detect mutations in blood 5,6 these techniques are highly sensitive, but molecular profiling information of tumor tissue is still needed. With the recent development of high-throughput DNA sequencing of targeted regions, we are now able to detect and track tumor-specific somatic mutations in cell free DNA (cfDNA) independently.
To explore whether plasma can be used as "liquid biopsy" in Chinese MBC patients, we conducted a pilot study using a commercially available 1021-gene panel tested on plasma samples, paired peripheral blood mononuclear cell (PBMC) samples, and accessible tumor tissue. Herein, we report the genomic profiles and druggable genomic alterations (GAs) in ctDNA from 17 patients with advanced BC during the course of their standard clinical care.
Patient cohort and sample collection
The study cohort consisted of 17 Chinese MBC patients treated at Sun Yat-Sen Memorial Hospital. This was an observational, non-interventional, retrospective study and was conducted in accordance with recognized ethical guidelines. Written informed consent was obtained from all participants. Patients were treated according to physicians' decisions.
All patients were diagnosed with pathologically confirmed BC. Staging investigations were performed in all patients with breast ultrasound, computed tomography (CT) and/or magnetic resonance (MR) scan and evaluated according to National Comprehensive Cancer Network guidelines. We used tumor markers, breast ultrasound, CT and/or MR to monitor disease every six months and/or when disease progressed.
The clinical characteristics of the study cohort are summarized in Table 1. ER, PR, and HER2 status, as well as Ki67 index were assessed in a single laboratory of the Sun Yat-Sen Memorial Hospital Pathology Department using standard criteria.
Next generation sequencing
A total of 17 blood samples were collected. Blood was processed within one hour of sample collection in ethylenediamine-tetraacetic acid tubes and centrifuged at 3000 rpm for 10 minutes. Plasma was then transferred to new EP tubes and centrifuged at 10 000 rpm to further remove cell debris, and stored at −80 C until DNA extraction. Genomic DNA was extracted from peripheral blood mononuclear cells to generate a reference genome to distinguish germline mutations and single nucleotide polymorphisms (SNPs) for each patient. Archival tumor tissues were also tested if accessible.
Target region capture and enrichment was conducted based on a 1021-gene panel and a customized library provided by Geneplus-Beijing (Beijing, China). All experimental processes were performed following the manufacturer's protocol under strict quality control and assessment. All of the captured DNA fragments were amplified and pooled to obtain multiplex libraries.
All of the samples were sequenced with Illumina 2 × 75 bp paired-end reads on an Illumina HiSeq 3000 instrument according to the manufacturer's recommendations using the TruSeq PE Cluster Generation Kit v3 and the TruSeq SBS Kit v3 (Illumina, San Diego, CA, USA).
Sequence data analysis
After filtering the adaptor and low-quality sequences from the raw reads, the clean data were mapped to the reference human genome (version hs37d5.fa) aligned with Burrows-Wheeler Aligner (BWA). 7 Somatic small insertions and deletions (indels) and single nucleotide variants (SNVs) were identified using The Genome Analysis Toolkit (https://www.broadinstitute.org/gatk/) and MuTeck, 8 and copy number variations (CNVs) were identified using Contra. PyClone 9 was employed to assess the clonal population structure of ctDNA in each patient. The clonal variant allele frequency (VAF) at each time point was analyzed based on the mean allele fraction (MAF) of mutations contained in the cluster with highest cancer cell fraction (CCF). 10
Statistical analysis
A log-rank test was used to assess the association between detection of ctDNA and PFS. Correlations between ctDNA level and clinicopathological markers were assessed using Pearson's chi square test. All statistical analyses and visualizations were performed with Graph-Pad Prism version 6.0 (La Jolla, CA, USA) or R version 3.4.1 with R package pheatmap, ggplot2 (R Foundation for Statistical Computing, Vienna, Austria). All P values are two-sided.
Results
Clinical characteristics of the study cohort Seventeen female patients were enrolled in our study. The average diagnostic age was 46 years. All patients were stage IV. Two patients had primary stage IV BC and were treatment-naive when their blood samples were collected; all other patients had received at least one line of therapy.
Of the 17 patients, 10 were ER+/HER2−, 2 were HER2+, and 5 were triple negative BC. The clinical characteristics of the study cohort are summarized in Table 1.
Somatic mutation profile of circulating tumor DNA (ctDNA) using targeted deep sequencing Targeted deep sequencing of cfDNA was successfully performed with blood samples collected from the 17 patients. Tumor-specific mutations were identified in cfDNA from the blood samples of all patients (100%), with a median of four somatic mutations per sample (range: 1-9 mutations per sample). A total of 60 somatic mutations and 1 CNV were detected in the 17 blood samples, with a median MAF of 1.40% (range: 0.06-51.00%). TP53 (35.29%, 6 patients), and PIK3CA (29.41%, 5 patients), were the most frequent mutated genes (Fig 1), which is consistent with the mutation spectrum of primary tumors. 11 ESR1 (17.65%) and PTEN (17.65%) were the third most frequently mutated genes in our study, with mutation frequencies much higher than those reported based on tumor tissue sequencing in the COSMIC database 12 and other studies (ESR 7%, PTEN 4%). 13 ctDNA profile differs among breast cancer of different hormone receptor status We also compared the mutation profiles of ER positive and negative patients. PIK3CA mutations were frequent across different hormone receptor status (30% in ERpositive and 28.57% in ER-negative patients). However, TP53 occurred in five out of seven (71.43%) ER-negative patients and only 1 out of 10 (10%) ER-positive patient. All of the ESR1 mutations were detected in ER-positive patients (3 mutations in 3 patients), which is consistent with the tumor tissue sequencing results of other studies (Fig 2). 11 In addition, we detected ERBB2 amplification in one patient (P001), whose immunohistochemistry and fluorescence in situ hybridization results were also HER2 positive.
Concordance of somatic mutations between synchronous and asynchronous tissue and plasma samples
The reliability of ctDNA sequencing has not been fully established and tumor tissue sequencing remains the golden standard. However, invasive procedures are required to procure biopsy samples of MBC and are often difficult to obtain. In our sample, archival tissue samples of five patients were accessible and sequenced (Fig 3). In 80% (4/5) of patients, concordant mutations were found in both tissue and plasma samples. Patient P006 had primary stage IV disease, and paired tumor tissue and blood samples were collected at the same time when the primary tumor was surgically removed. In this case sequencing results of ctDNA and tumor tissue were completely concordant (Fig 3a). Somatic mutations TP53 W91* and STK11 L290P were detected in both the patient's tumor tissue and ctDNA. While in the asynchronous samples from P003, P008 and P017, the mutations identified in tissue and plasma were only partly concordant (Fig 3b,e) or completely different (Fig 3c). Only in one patient, P105, who had distant metastases shortly after surgery, the asynchronous tissue and plasma mutations were concordant (Fig 3d).
Identifications of actionable genomic alterations from ctDNA
We next sought to characterize druggable mutations in the study cohort. We compared somatic mutations detected in the ctDNA of our study cohort with the identified druggable mutations documented in the National Center for Biotechnology Information ClinVar database and previous reports of tissue sequencing results. 14,15 Analysis of ctDNA of the 17 patients revealed that 7 patients harbored a total of 8 druggable somatic mutations and 1 patient had ERBB2 amplification ( Table 2). The most frequent druggable mutations occurred at two hotspots of the PIK3CA gene. One was H1047R (4 mutations in 4 patients) at exon 20 encoding the kinase domain, and the other was E542K (detected in 2 two samples of P013) at exon 9 encoding the helical domain. These two hotspot mutations were reported to activate the phosphatidylinositol-3 kinase/protein kinase B/ mammalian target of rapamycin (PI3K/AKT/mTOR) pathway, which diminishes the effects of hormone therapy 16 as well as trastuzumab and lapatinib treatment. 17 However, according to results from BOLERO-2, the beneficial effects of the mTOR inhibitor everolimus are maintained regardless of the PIK3CA genotype. 18 The remaining three druggable mutations occurred at the PTEN gene, namely, p.K144*, p.Q219*, and p.M134del, which led to the loss of PTEN activity. PTEN can inhibit activity of the PI3K/AKT/m-TOR pathway, and PTEN gene loss leads to activation of this pathway. 17 Biomarker analyses from BOLERO-1 Figure 2 The distribution of somatic mutations in (a) ER-positive and (b) ER-negative metastatic breast cancer patients. and BOLERO-3 trials showed that HER2-positive advanced BC patients with PTEN loss in tumors could derive a progression-free survival (PFS) benefit from everolimus. 19,20 Other preclinical studies have suggested that targeting mTOR may restore sensitivity to endocrine therapy in hormone receptor-positive advanced BC patients. 21 Overall, druggable mutations were detected in 50% of our patient cohort. All of the druggable mutations detected in our study cohort were related to the PI3K/AKT/mTOR pathway.
ctDNA level and clinical outcome in metastatic breast cancer patients
We evaluated the effect of ctDNA mutations on clinical tumor burden and PFS. The VAF of mutations from the major mutated clones (the clone with greatest CCF) was used to access ctDNA levels. 10 Among 17 patients, the ctDNA level varied from 0.06% to 51% (median 2.01%). We found no significant difference between ctDNA level based on the number of metastatic sites or whether there was visceral metastasis (Fig 4a,b), which may be a result of the limited sample size of our study cohort. However, when we further analyzed ctDNA and PFS (16 of 17 patients had available follow-up data), we found patients with higher than median ctDNA levels (> 2.01%) had significantly shorter PFS, and serum tumor marker CA15-3 cannot predict PFS of MBC patients (Fig 4c,d). The median PFS of the high ctDNA level group was less than half the PFS of the low ctDNA level group (138 vs. 386 days; log-rank P = 0.02).
Discussion
Our study characterized the mutation profile in a cohort of Chinese MBC patients. We have shown the feasibility of analyzing ctDNA to characterize genomic alterations in MBC. The high frequency (50%) of druggable mutations among the patients suggests that ctDNA is potentially of great clinical utility in the management of MBC. We also showed that a high ctDNA level is associated with poor PFS in MBC patients. In our study cohort, ctDNA showed high sensitivity (100%) in plasma derived from MBC patients. In previous reports, ctDNA was detectable in > 75% of patients with advanced malignancies. 22 On one hand, the high tumor burden in our MBC patient cohort (47.06% patients had more than one metastatic site) contributed to the high sensitivity. On the other hand, the rapid development of sequencing techniques in recent years has greatly improved the sequencing depth and coverage of cfDNA assays. The commercial panel we applied in our study covered 1021 genes with a target region of 1.1 Mb, which also contributed to the high sensitivity. In 2013, Dawson et al. successfully detected ctDNA in 29 out of 30 women (97%) in whom somatic genomic alterations were pre-identified in tumor tissue. 5 However, in our study we achieved high sensitivity without prior knowledge of tissue mutation using a broad-coverage 1021-gene panel. Although only five tissue samples were available because of the difficulties of sampling, we found that tissue mutation and ctDNA mutation is highly concordant in synchronous paired tissue and plasma samples. In contrast, tissue mutation and ctDNA mutation detected in asynchronous tissue and plasma samples were only partly matched or even completely discordant. These findings suggest that ctDNA can reflect real-time tumor mutation profiles and shows potential tumor clonal evolution during disease progression or under the pressure of treatment.
Another important strength of our study is the high frequency of druggable mutations independently detected in plasma samples without the need for biopsy. In our study cohort, 41.18% of patients (n = 7) had druggable mutations; if we include ERBB2 amplification, actionable genomic alterations were detected in the blood samples of 47.06% of patients (n = 8). Druggable genomic alterations in tumor tissue have been investigated by many studies across different cancer types. 23,24 In 2017, a large-scale study evaluated druggable mutations in 10 000 metastatic cancer tissue samples of different cancer types. 25 The study revealed that BC ranked third in terms of the prevalence of actionable mutations at 63%, which indicates the importance of genomic profiling in MBC. Our study further shows the utility of ctDNA analysis as a noninvasive method to depict genomic alterations in MBC. The high frequency of druggable mutations, mainly located in PIK3CA and PTEN, suggests that the PI3K/AKT/mTOR pathway plays an important role in MBC. The PI3K/AKT/mTOR pathway can be targeted by the clinically available mTOR inhibitor everolimus, as shown in BOLERO-2. Currently, the PI3K inhibitor buparlisib has shown promising results in penetrating endocrine-resistant HR+/HER2+ MBC in the phase III clinical trial, BELLE-2. 26 In the BELLE-3 trial, buparlisib plus fulvestrant also showed longer PFS compared to a fulvestrant plus placebo group in HR+/HER2− MBC. 27 Thus, monitoring the presence and dynamics of these mutations is of clinical importance. Notably, ESR1 mutations were also detected at a high frequency in ER-positive MBC patients. ESR1 mutations are commonly detected after therapy for metastatic disease and the presence of ESR1 mutations indicates the development of endocrine resistance, especially aromatase inhibitors. 28 Schiavon et al. reported that patients with ESR1 mutations in their ctDNAs had substantially shorter PFS on subsequent aromatase inhibitor-based therapy. 29 Given these results, although there is currently no targeted therapy for ESR1 mutations, they should be carefully considered in disease management. 29 The ctDNA level is reported to be associated with PFS in other cancer types, such as lung cancer. 10 The cfDNA tumor fraction is also reported to be associated with survival in MBC. 5,30 Our study confirms that a higher ctDNA level is associated with shorter PFS in a small cohort of MBC Chinese patients. Our results shed light on the potential of ctDNA as liquid biopsy to depict genomic alterations and identify druggable mutations, which can complement and substitute multiple biopsies in the management of MBC. | 2019-03-11T17:19:37.445Z | 2019-02-21T00:00:00.000 | {
"year": 2019,
"sha1": "18349209478f823a00307df41599e57db6b6fdb0",
"oa_license": "CCBYNC",
"oa_url": "https://www.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.13002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18349209478f823a00307df41599e57db6b6fdb0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270140114 | pes2o/s2orc | v3-fos-license | Effect of Tranexamic Acid on Hidden Blood Loss in Percutaneous Endoscopic Transforaminal Lumbar Interbody Fusion: A Retrospective Study
Purpose Percutaneous endoscopic transforaminal lumbar interbody fusion (PE-TLIF) has become one of the most popular minimally invasive surgeries today. However, the issue of hidden blood loss (HBL) in this surgery has received little attention. This study aims to examine the HBL in PE-TLIF surgery and the effect of tranexamic acid (TXA) on blood loss. Methods In our research, We conducted a retrospective analysis of 300 patients who underwent PE-TLIF from September 2019 to August 2023. They were divided into 2 groups based on whether they received intravenous TXA injection before surgery. The variables compared included: demographic data, pre-and postoperative hemoglobin (HB), hematocrit (HCT), platelets (PLT), red blood cells (RBC), total blood loss (TBL), visible blood loss (VBL), HBL, operation time, postoperative hospital stay, inflammatory markers, coagulation parameters, and adverse events. Results Regarding demographic characteristics, besides the operation time, no significant differences were observed between the two groups. Compared with the control group, the TXA group showed a significant reduction trend in TBL, HBL, and VBL (P < 0.05). On the first day after surgery, there were significant differences in prothrombin (PT), activated partial thromboplastin time (APTT), and D-dimer (D-D) levels between the two groups. Similarly, HCT also found similar results on the third day after surgery. No adverse events occurred in either group. Conclusion Research has found that there is a significant amount of HBL in patients undergoing PE-TLIF. Intravenous injection of TXA can safely and effectively reduce perioperative HBL and VBL. Additionally, compared to the control group, the TXA group shows a significant reduction in operation time.
Introduction
With the increase in the aging population, the incidence of degenerative lumbar disease has been rising year by year. 1,24][5] However, the bleeding situation of this minimally invasive surgery is often overlooked, especially HBL.The recent research 6 found that the HBL of endoscopic transforaminal lumbar interbody fusion (Endo-TLIF) can reach 91% of TBL, with an average of approximately 717.9 ± 220.1 mL.Similarly, Zhou L et al 7 found that the HBL during oblique lateral interbody fusion (OLIF) accounted for 92.4% of the TBL, averaging around 809.0 ± 358.8 mL.Massive HBL can increase the risk of infection, anemia, blood transfusion, and other complications, which significantly affect the speed of postoperative recovery, increase medical expenses, and even endanger the patient's life. 8herefore, finding effective ways to reduce HBL has become a major concern in PE-TLIF surgery.
Tranexamic acid (TXA) is a synthetic analogue of lysine that reduces intraoperative bleeding by inhibiting plasminogen activation and blocking fibrinolysis. 9,102][13] Some studies have shown that TXA can effectively reduce blood loss and blood transfusion requirements in surgical procedures, such as total hip arthroplasty and total knee arthroplasty. 14,15Furthermore, multiple studies [16][17][18] have demonstrated that the intravenous injection of TXA in spinal surgery can effectively and safely reduce blood loss without increasing related complications.Although TXA is widely used in surgical procedures, its impact in PE-TLIF surgery has not been clearly confirmed.
TXA has multiple routes of administration, including preoperative intravenous infusion, local administration, and intravenous combined with local administration. 19,20However, preoperative intravenous injection of TXA can more effectively inhibit the dissolution of blood clots, thereby reducing bleeding. 21In addition, TXA can pass through physiological barriers about 15 minutes after intravenous injection and accumulate at surgical and trauma sites. 22Therefore, the purpose of this study is to explore the impact of intravenous TXA on HBL in patients undergoing PE-TLIF through retrospective research methods.
Study Design
This is a retrospective study following the Helsinki Declaration principles and approved by the Ethics Committee of Zhejiang Provincial People's Hospital (QT2024043).Considering the retrospective nature of the study, the committee has decided to waive the requirement for written informed consent.The aim of this study is to analyze patients who underwent PE-TLIF surgery in our hospital from September 2019 to August 2023.Of the 300 patients initially screened, 40 lacked follow-up data and 72 did not meet the inclusion criteria.The patients were divided into the TXA group and the control group based on whether TXA was used preoperatively.Specifically, 1g of TXA or an equal amount of 0.9% saline was intravenously injected into the two groups of patients 15 minutes before skin incision.If the surgery lasted more than 2 hours, another dose was administered.Of note, this study used a double-blind design, that is, patients, surgeons, and anesthesiologists were not informed of the specific treatment received by patients.Ultimately, there were 44 patients in the TXA group and 38 patients in the control group (Figure 1).
Data Collection
The patient's age, gender, height, weight, body mass index (BMI), bone condition, patient's blood volume (PBV), disease type, surgical segment, operation time, and postoperative hospital stay were record in the two groups.Record relevant data before surgery, as well as on the first and third days following the procedure, including HB, RBC, PLT, HCT, C-reactive protein (CRP), PT, APTT, TT, and fibrinogen (FIB).Furthermore, blood transfusion and adverse events were documented.
DovePress
Therapeutics and Clinical Risk Management 2024:20 Since no blood transfusion was given during and after the surgery, HBL = TBL -VBL.Meanwhile, no drainage was performed in any patients post-operatively, VBL = intraoperative blood loss.The intraoperative blood loss includes the drainage volume in the drainage bottle minus the volume of flushing fluid during the operation, as well as the net increase in weight of the hemostatic gauze.In conclusion, HBL can be preliminarily estimated.
Surgical Technique
After the anesthesia is done, place the patient in a prone position and raise both sides of the abdomen to facilitate surgery.C-arm fluoroscopy is used to locate the surgical segment and mark the decompression incision, usually 4-6 cm from the midline.This incision is also a pedicle screw skin incision that can be adjusted according to the patient's condition.Subsequently, puncture localization was performed under fluoroscopy, and the soft tissue around the articular process was separated using the gradually expanding catheter, which was then inserted into the endoscope.Extensive bone decompression was performed using the endoscopic circular saw and laminar rongeur to remove the upper articular process and part of the vertebral lamina, thereby enlarging the intervertebral foramen.The ligamentum flavum was separated and removed to fully expose the dural sac and nerve roots, ensuring adequate decompression.Replace the fusion sleeve to safeguard the nerve root, and expose and remove the intervertebral disc to handle the intervertebral space.Next, introduce autologous or allogeneic bone into the intervertebral space, and position a suitable interbody fusion Cage.Meanwhile, use a C-arm machine to ensure the correct position.Then, double check if the nerve root has been fully decompressed.Finally, percutaneous placement of pedicle screws and bilateral connecting rods was performed.All patients undergo surgery by the same spine surgeon.In addition, all patients adopted a similar perioperative management plan.
Statistical Methods
This study employed SPSS 23.0 software for statistical analysis.Normally distributed data was presented as mean ± standard deviation and analyzed using independent samples t-test.Non-normally distributed data was presented as M [P25; P75] and compared using Mann-Whitney U-test.Furthermore, chi-square test was utilized to determine the relationship between categorical variables.P < 0.05 was deemed statistically significant.
Results
During the study, a total of 82 patients who met the inclusion criteria were enrolled.Out of these, 44 individuals were in the TXA group, while the control group consisted of 38 people.Statistical analysis shows that the TXA group required less surgical time compared to the control group, with statistical significance.Additionally, there were no statistically significant differences in the remaining baseline characteristics between the two groups (Table 1).No statistically difference in coagulation indicators was observed between the two groups of patients before and on the third day after surgery.However, the PT, APTT, and D-D levels in the group receiving TXA treatment on the first day after surgery were significantly lower than those in the control group (Table 2).In addition, TXA did not have a significant impact on changes in CRP.
No statistically significant difference of preoperative level of HB, RBC, PLT, and HCT levels were detected between the two groups.However, on the third day post-surgery, a notable difference in HCT was observed between the two groups.Moreover, patients receiving TXA treatment exhibited a slower decline in HB, RBC, and HCT compared to the control group (Figure 2).
During the perioperative period, the TBL, VBL, and HBL of the TXA group were significantly lower than those of the control group (774.702± 309.244 vs 937.523 ± 244.714 mL, 66.136 ± 23.049 vs 93.158 ± 28.674 mL, 708.565 ± 307.985 vs 844.366 ± 237.963 mL, P < 0.05) (Figure 3).In addition, no adverse events or TXA related side effects occurred among all participants during the study.tissue invasion, less blood loss, faster postoperative recovery, and shorter hospital stay. 24,25This technique enters the intervertebral space through Kambin's triangle and can clearly display the dura mater and nerve roots under endoscopy.This allows for safer and more effective decompression and endplate treatment, while avoiding the limited vision of the minimally invasive transforaminal lumbar interbody fusion (Mis-TLIF) technique. 26However, in clinical practice, many patients who undergo this minimally invasive surgery still suffer from anemia or related diseases.Meanwhile, the severity of anemia in postoperative patients does not match the amount of blood loss.The HBL proposed by Sehat et al in 2000 may be related to these outcomes. 27HBL is a special form of blood loss that cannot be directly observed and accurately estimated in clinical practice, so it is often overlooked. 28Many studies have shown that neglecting HBL can not only lead to an increase in blood transfusion requirements, but also result in various complications, including anemia, delayed wound healing, prolonged postoperative recovery, and increased risk of infection. 29,30It seriously affects the safety and recovery of patients during the perioperative period.Therefore, HBL has become a growing concern for surgeons and patients.Currently, the specific cause of HBL is still unclear.The cause of HBL after spinal fusion surgery may be related to residual blood entering the cavity, bone surface leakage after decompression, and storage of internal fixation systems composed of pedicle screws and rod instrumentation. 31Previous studies have shown that spinal fusion surgery involves a significant amount of HBL.Smorgick Y et al reported that the HBL accounts for 42% of the TBL in posterior lumbar fusion. 32The hidden blood loss of Mis-TLIF reported by Zhou Y et al was 488.4 ± 294.0 mL, which accounted for 52.5% of the total blood loss. 33A similar result was also found in a study on extreme lateral interbody fusion (XLIF) research. 34nterestingly, Zhang H et al found that minimally invasive spinal surgery has more HBL compared to open spinal surgery. 35In our PE-TLIF study, HBL also reached 90% of the TBL, which is consistent with the research results of Ge M et al. 6 Therefore, how to effectively reduce HBL has become an urgent problem to be solved in PE-TLIF surgery.
PE-TLIF, as a minimally invasive lumbar fusion surgery technique, has received increasing attention from doctors and patients. Compared to traditional open surgery, minimally invasive surgery has the advantages of smaller incisions, less soft
TXA is extensively utilized in surgical procedures due to its proven ability to effectively reduce blood loss and transfusion rates throughout the perioperative period. 36,37During the start of the surgery, the process of fibrinolysis is activated, which is an important factor leading to intraoperative and postoperative bleeding. 38This phenomenon is most pronounced 6-12 hours after surgery. 39By targeting the lysine binding site on plasminogen, TXA effectively inhibits the interaction between plasminogen and fibrin, thereby achieving hemostatic effects. 40,41Moreover, Research has found that the route of administration of tranexamic acid is also another important factor affecting intraoperative and postoperative bleeding. 42Preoperative intravenous injection of TXA can quickly act on the surgical or traumatic site and more effectively inhibit the blood clot breakdown. 21,43Therefore, in this study, preoperative intravenous TXA was used to evaluate the impact on perioperative blood loss in PE-TLIF patients.
Our research has found that in PE-TLIF surgery, HBL is a problem that requires urgent attention and cannot be ignored.The results showed that the HBL of the TXA group was 708.565 ± 307.985mL, VBL was 66.136 ± 23.049 mL, and TBL was 774.702 ± 309.244 mL, which were significantly lower than those of the control group.This result is consistent with the study by Hao S et al in PLIF, both of which significantly reduced HBL. 44Furthermore, TXA is also beneficial for shortening surgical operation time.We also found the trend of HCT changes in the TXA group were relatively small, which is consistent with the research results of Dong W et al and Kelly M et al 45,46 D-D can be a reliable indicator of the pre-thrombotic state. 47nterestingly, the average level of D-D at postoperative day 1 in TXA group is significantly lower than that in control group [2815 (1717.5-4497.5)ug/L VS 6415 (3917.5-8482.5)ug/L, P < 0.001)].This indicates that TXA can effectively reduce postoperative D-D levels, which is supported by Dong et al. 46 In addition, neither group of patients experienced any postoperative adverse events or TXA related complications.Therefore, it can be seen that TXA can safely and effectively reduce the HBL and VBL of PE-TLIF, thereby enabling patients to gain greater benefits.
Of course, our research also has some limitations.Firstly, this is a single center study with a relatively small sample size.Secondly, detailed subgroup analysis was not conducted.In addition, there is no unified standard for postoperative activity and dietary nutrition of patients.
Conclusion
This research indicates that there is a large amount of HBL in patients undergoing PE-TLIF.Preoperative intravenous injection of TXA not only effectively reduces HBL and VBL but also does not increase the risk of complications.Additionally, compared to the control group, the TXA group shows a significant reduction in operation time.
Figure 1
Figure 1 Patient flow chart.
Figure 3
Figure 3Comparing bleeding conditions in two groups of patients.Note: *Represents P < 0.05.Abbreviations: BL blood loss, HBL hidden blood loss, VBL visible blood loss, TBL total blood loss.
Table 1
The Baseline Data of the Two Groups Note: *Represents P < 0.05.Abbreviations: TXA tranexamic acid, PBV patient blood volume, BMI body mass index, LSS lumbar spinal stenosis, LDH lumbar disc herniation, LS lumbar spondylolisthesis.
Table 2
Coagulation and Inflammatory Parameters of the Two Groups | 2024-05-31T15:27:28.830Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "13c8af15a35ffcd5e9b45b72cc8625376a488ae4",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=99531",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecc32cd04dfda146c0a3a6a6b1ccbbb202588904",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236198883 | pes2o/s2orc | v3-fos-license | Radial endobronchial ultrasound-assisted transbronchial needle aspiration for pulmonary peripheral lesions in the segmental bronchi adjacent to the central airway
Background Tissue samples from lesions located in the 3rd to 5th segmental bronchi are challenging to obtain. In this retrospective study, we aimed to evaluate the diagnostic rate of pulmonary peripheral lesions located in the 3rd to 5th segmental bronchi, near the inner field of lung on the computed tomography (CT) image and outside the bronchus, using radial endobronchial ultrasound (REBUS) followed by transbronchial needle aspiration (TBNA). Methods This retrospective study enrolled patients whose preoperative CT examinations showed a lesion located in the segmental bronchi (3rd to 5th), yet adjacent to the inner field of lung on the CT image. REBUS followed by TBNA was used to acquire tissue samples from these lesions. A bronchoscope was used to reach the bronchi surrounding the lesion, and an ultrasound probe was used to determine the lesion’s location. Then, the ultrasound probe was withdrawn, and puncture was performed at the location that was determined by ultrasound. The tissue specimens obtained were subjected to pathological examination. Results Nineteen patients were enrolled in this study including 15 males and 4 females with an average age of 55 years old. Of the enrollees, 8 patients (42.1%) were successfully diagnosed with samples obtained through TBNA, including 6 cases of lung cancer, 1 case of non-specific inflammation, and 1 case of cryptococcal infection. The diagnostic rate was 42.1%. No post-procedural complications were observed among the patients. There was no significant difference in nodule diameter between patients with a diagnostic sample and those in whom TBNA failed to provide a diagnosis (2.99±0.96 vs. 2.26±1.27 cm, P=0.20). Conclusions With the assistance of REBUS, TBNA can acquire sufficient samples to achieve a reasonably diagnostic rate for parenchymal lung lesions located near the inner field of lung on the CT image without intrabronchial invasion.
Introduction
Lung cancer ranks first out of all malignancies for morbidity and mortality in most countries (1,2). Early diagnosis and treatment are key to curing lung cancer and prolonging survival. The American National Lung Screening Trial (NCT00047385) demonstrated that screening with lowdose computed tomography (CT) lead to a relative reduction in mortality of 20% among patients with lung cancer (3). Consequently, this screening method has been accepted by the academic community and included in clinical guidelines. With large-scale CT screening, an increasing number of unidentified lung lesions appears to be occurring (4,5). Therefore, the optimal diagnostic approach for patients with suspicious nodules urgently needs to be determined.
Malignant diseases cannot be accurately distinguished from benign conditions by noninvasive examination methods, such as fluorodeoxyglucose positron emission tomography or dynamic contrast-enhanced CT (6). For such indeterminate lung nodules, many invasive methods for obtaining tissue exist. Probably the most widely available, CT-guided lung biopsy is associated with a high incidence of complications (up to 40%), with the incidence of pneumothorax reaching up to 25% (7,8).
Many minimally invasive flexible bronchoscopic procedures are currently available. These include convex probe endobronchial ultrasound (CP-EBUS), radial endobronchial ultrasound (REBUS), virtual bronchoscopy (VB), and electromagnetic navigation bronchoscopy (ENB). However, the availability of ENB and VB remains extremely limited in many parts of the world, mostly due to the elevated costs of these procedures. The access to these lesions in the vicinity of the 3rd to 5th segmental bronchi by CP-EBUS may be limited by the scope's large size (6.9 mm in diameter). Finally, when no endobronchial component or extrinsic compression is present, adequate sampling is generally limited to transbronchial needle aspiration (TBNA). TBNA has been widely used in diagnosing both benign and malignant pulmonary diseases. EBUS is a good approach for visualizing lesions to guide the sample biopsy and EBUS-TBNA has been suggested to have a sensitivity around 0.9 and a specificity to 1.0 in lung cancer diagnosing and staging (9,10). Currently, REBUS has been developed as a novel technique to guide TBNA sampling (11). In this retrospective study, we aimed to evaluate the diagnostic rate and complications when using REBUS followed by TBNA in the evaluation of lesions located in the 3rd to 5th segmental bronchi, near the inner lung field on the CT image and outside the bronchus. We present the following article in accordance with the STROBE reporting checklist (available at https://dx.doi.org/10.21037/tlcr-21-490).
Study participants
Patients who underwent REBUS-TBNA in Shanghai Pulmonary Hospital between August 2016 and May 2020 were included in this retrospective study. During this time, 900 patients underwent REBUS in the hospital. Because of the specific location of the lesion, nineteen out of 900 patients conformed to the inclusion criteria ( Figure S1). Clinical and pathological parameters including the patient's age, sex, smoking history, the lesion location and the lesion size were recorded from patients' medical records and medical images.
The inclusion criteria for patients were: (I) aged 18 to 80 years old; (II) clinically suspected lung lesion located in the segmental bronchi (3rd to 5th) in addition to its location near the inner lung field on the CT image; (III) no contraindications to bronchoscopy; (IV) signed an informed consent form and approval for medical record reviewing. The study received approval from the Ethics Committee of Tongji University Affiliated Shanghai Pulmonary Hospital (No. 18Q016NJ) in accordance with the guidelines of the Helsinki Declaration (as revised in 2013). Written informed consents were obtained from all patients for reviewing their medical records and images for scientific research.
REBUS-TBNA
Preoperative investigations included an enhanced CT scan of the lungs, complete blood count (CBC), liver and renal function examination, electrolytes examination, electrocardiogram (ECG), and coagulation tests. Immediately before the procedure, the operator reviewed the chest CT images and determined the lesions size and the bronchus in which was located. The patient was instructed to lie on the examination table in the supine position and received nebulized lidocaine (administered oropharyngeally) as local anesthesia. Then oxygen was provided via nasal cannula for the entire length of the procedure and monitored for vital signs. In our current cohort intravenous sedation or general anesthesia was not applied.
The procedure was performed with the Olympus BF-P260F bronchoscope (Olympus Co. Ltd., Tokyo, Japan). After a systematic inspection of the tracheobronchial tree ( Figure 1A), the REBUS (UM-S20-20R, Olympus Co. Ltd., Tokyo, Japan) 20-MHz probe was inserted into the target bronchus on the bases of the CT-scan images as discussed above. Once the best possible ultrasound image of the peribronchial lesion was obtained with the REBUS probe ( Figure 1B), the segmental bronchus was recorded, and the optimal puncture location was selected. Before the withdrawal of the ultrasound probe, the probe depth was marked. Subsequently, a 19-gauge WANG TBNA needle (MWF-319, ConMed Company, New York, USA) was inserted through the biopsy channel at the same length of the mark done on the REBUS probe, and a needle biopsy was performed at the location that was selected under ultrasound image ( Figure 1C). Then, a 60-mL syringe was attached to the end of the needle with negative pressure aspiration applied. The process was repeated to obtain 5 to 6 samples. Brush cytology was performed routinely to avoid missing any mucosal involvement by the lesion not macroscopically identified on the airway exam. Biopsy tissues would be sent for histopathological examination and smear cytological examination. If an accurate diagnosis could be obtained from the biopsy sample, the sample would be considered as a diagnostic sample and the patient would be defined as patient with positive diagnosis.
The specimens were fixed in a 10% formalin solution for histopathological examination. The molecular testing was polymerase chain reaction (PCR)-based assay. DNA or RNA would be extracted from biopsy samples according to the protocol. Reverse transcription would be done to convert RNA into cDNA for further PCR. The aberrations of EGFR/ALK/ROS1/KRAS/BRAF were detected by Multi-Gene Mutations Detection Kit (Amoy, Xiamen, China) according to manufacturer's protocol.
After the procedure, if bleeding occurred, local hemostasis was carried out using diluted epinephrine and cold saline. After confirmation of no active bleeding, the procedure was completed. Intraoperative and postoperative adverse events were recorded including bleeding, chest pain, hypoxia, and postoperative infection. All procedures were performed by the same bronchoscopist, who was aided by the same nurse, to avoid the potential bias.
Statistical analyses
Data were statistically analyzed with IBM SPSS Statistics 22 (IBM, Armonk, NY, USA). Continuous variables were expressed as mean ± standard deviation, and categorical variables as counts and percentages. Fisher's exact test was used to compare the distributions of sex, age, smoking history, and nodule diameters between groups. Nodule diameters for patients with different diagnoses ("positive" or "negative") were compared using the t-test. Statistical significance was indicated by a P value of less than 0.05. The diagnostic yield was calculated by dividing the number of successful diagnoses by the total number of lesions.
Results
This study included 19 patients [15 males (78.9%)], with an average age of 55 (±12.1) years old ( Table 1). All patients had their reports of pathological examination and were included for further analysis. Eight nodules were located in the left lung, of which only one was in the left lower lobe.
Of the eleven right lung nodules, six were located in the right upper lobe, one in the right middle lobe and four in the right lower lobe ( Table 1). Lesion size was obtained from the CT images and the average diameter of all lesions was 2.57±1.18 cm ( Table 1). No bronchus sign was observed on CT images in any patients. All lung nodules were visualized under REBUS during the procedure. In total, 8 patients (42.1%) were successfully diagnosed with TBNA samples and most of these were malignant ( Table 1). Six cases were ultimately diagnosed with lung cancer, one with cryptococcal infection, and one with nonspecific inflammation. Two brush samples (10.5%) were diagnostic and showed lung adenocarcinoma. The overall diagnostic rate was 42.1% [8/19]. Among the patients with a positive diagnosis, 3 lesions were in the left upper lobe, 4 lesions in the right upper lobe, and 1 lesion in the right lower lobe. Meanwhile, in the 6 samples diagnosed as lung cancer, 3 samples were also suitable for detecting driver gene alteration (EGFR/ALK/BRAF/KRAS/ROS1). One of these was found to have BRAF V600E mutation and no driver gene alteration was detected in the other two. Among Table 2). Furthermore, no postoperative complication was recorded among the study participants.
Discussion
The present study included 19 patients with peribronchial lesions located in the vicinity of the 3rd to 5th segmental bronchi, near the inner field of the lung on the CT image and outside the airway lumen. Obtaining tissue samples from such lesions can be challenging. The airway diameter may not be large enough to be able to fit a CP-EBUS for real-time sampling. Also, as the lesions are often completely outside the airway, endobronchial forceps biopsy and brush techniques are not good options. As a result, conventional TBNA (C-TBNA) may turn out to be the only available tool for obtaining pathological tissue samples in many of these cases (11). C-TBNA was first invented by Dr. Eduardo Schieppati in 1949 and was further developed by Ko-Pen Wang (12). It has been applied successfully in the clinical setting for more than 20 years. With the guidance of REBUS described herein, C-TBNA remains a valuable tool.
In the current study, the diagnostic yield of REBUS for nodules with an average diameter of 2.57 cm near the inner field of the lung was 42.1%. A retrospective analysis of 177 patients revealed diagnostic rates of 14% and 31% for lesions of <2 cm located in the peripheral third and the inner two-thirds of the lung, respectively (13), which almost corresponded with our results. Ost and colleagues have described the diagnostic yield of REBUS to be 57.0% with a higher diagnostic yield for lesions greater than 2 cm in longest diameter (14). A prospective trial with 54 patients with nodules that could not be visualized by fluoroscopy reported 48 lesions (89%) were localized by REUBS, and in 38 cases (70%) diagnoses were established by biopsy. One pneumothorax occurred in this series (15). In another study, the diagnostic yield of REBUS was 70.6% (16), which was superior to that previously reported for routine bronchoscopy (13,17). Wang Memoli et al. also reported the diagnostic yield of REBUS to be 71.1% in a meta-analysis (18). Moreover, the currently reported diagnostic yield of REBUS was comparable to that reported by Gildea (20). It suggests that REBUS-TBNA is an efficient and costeffective diagnostic method. In our study, we only included nodules located in the inner field of the lung. The prevalence of such lesions in previous studies is not well described. Thus, the relatively low diagnostic yield in our study might be related to the very specific location of the lesions. Further studies are indispensable to evaluate the diagnostic performance of REBUS for lesions at different segmental bronchi. Meanwhile, according to our current study, the sample obtained from REBUS-TBNA may also suitable for molecular testing, which is corresponding to previous study (21). The diagnostic sensitivity of trans-bronchoscopic biopsy for peripheral lung cancer has been shown to be related to lesion size (13,22). With traditional EBUS, the larger the lesion, the greater the likelihood of obtaining a diagnostic sample (23). A recent meta-analysis of 57 studies indicated that the diagnostic rate was significantly higher for lesions with a diameter of >2 cm (16). In this study, we found that the nodule size did not differ significantly between patients with diagnostic and non-diagnostic samples. However, because of the small sample size of our study, these findings need to be interpreted with caution and larger studies are needed.
In a study of 846 CT-guided percutaneous procedures, the mean lesion size was 3.02 cm and the complications rate was 30%, with the incidence of pneumothorax and hemoptysis reaching 27% and 3%, respectively (24). The risk of pneumothorax is 11 times higher in patients with lesions <2 cm than in patients with lesions >4 cm (25). The rate of pneumothorax during routine bronchoscopy is reported to be 4% (26,27). The risk of complications from REBUS is quite low, with an approximate risk of pneumothorax of only 1% (28), which was in line with our research findings. Furthermore, the incidence of adverse events requiring intervention is only 0.7% (18).
Although REBUS-TBNA can improve the diagnosis of those lesions located in 3rd-5th segmental bronchus, it has certain shortcomings. For instance, radial ultrasound is not self-navigating or localizing, and its path of exploration depends on the operator's individual understanding of lesion localization. Previous studies reported that approximately 25.0% to 33.3% of lesions were undefined (29,30).
Our study also had some limitations. First, the small sample size, the single-center and retrospective nature of the present study. Second, the patient without identical lesion under REBUS was not included in the study. Third, due to the lack of long-term follow-up, true sensitivities could not be calculated. Also, there was a higher number of non-smokers in our cohort which might suggest a larger than usual number of benign etiologies. This could have impacted our diagnostic yield as a TBNA technique is not as sensitive for benign disease as it is to malignant lesions.
Finally, sampling of these nodules with CP-EBUS was not attempted. It has been shown that some of these lesions can be sampled with CP-EBUS even when the nearby airway diameter is smaller than the scope (31). Perhaps the CP-EBUS scope dilates the airway to a certain extent. So, it is possible REBUS and CP-EBUS may be complementary for lesions such as the ones described in our report. Especially for upper lobe lesions where the currently available CP-EBUS does not perform well.
Conclusions
REBUS-TBNA is a minimally invasive biopsy technique with a reasonable diagnostic rate for more central lesions and a low incidence of complications. TBNA can be used with new guided instrumentation, and more flexible dedicated peripheral tools may further improve diagnostic rates. In addition, REBUS-TBNA has emerged in the field of tumor precision therapy and it may eventually play a significant role in accurate concomitant tumor detection, typing, and treatment (21,32,33). Our future work will continue to explore and improve the clinical value of REBUS-TBNA. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by Ethics Committee of Tongji University Affiliated Shanghai Pulmonary Hospital (No. 18Q016NJ) and individual consent for this retrospective analysis had been obtained from all participants.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/. | 2021-07-24T06:16:52.323Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "3bc5495238e582c5f0b6da75a6bc5d4f7130a348",
"oa_license": "CCBYNCND",
"oa_url": "https://tlcr.amegroups.com/article/viewFile/53407/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "782fd00f545100e0dc669afa635bad9be3083522",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229465438 | pes2o/s2orc | v3-fos-license | Enhanced Mechanical Properties of Carbon Nanotube/Aluminum Composites Fabricated by Powder Metallurgical and Repeated Hot ‐ Rolling Techniques
: This research aimed to fabricate lightweight and high ‐ strength carbon nanotube (CNT)/aluminum (Al) composites by powder metallurgical and repeated hot ‐ rolling techniques. The fabrication was conducted in three steps: (1) CNT dispersion, (2) preparation of CNT/Al compacts by powder metallurgical slurry methods, and (3) strengthening and refining of CNT/Al composites by repeated hot rolling. The processes of dispersion of CNTs were carried out with dimethylacetamide as a solvent and potassium carbonate as a dispersing agent, which is an inorganic salt, under ultrasonic sonication conditions. Effect of sonication time on dispersion states and mechanical properties was also examined.
Introduction
Recently, in order to cope with environmental issues such as CO2 emissions reduction, replacing various structural materials like heavy metals such as iron (Fe) materials, comprising the major parts in automobiles, buildings, bridges, etc., with light metals such as aluminum (Al), magnesium (Mg), and titanium (Ti) has been highly desirable. However, the weight reduction of structural materials by light metals often leads to a decrease in strength, which causes problems related to safety and sustainability [1,2].
Carbon nanotubes (CNTs) have received broad scientific and industrial attention in the past decades due to their excellent physical and chemical properties such as high elastic modulus, high strength, high thermal conductivity, low density, and so on. CNTs are tube-like materials with a diameter of a nanometer scale, which is made up of carbon. Depending on the number of carbon layers, CNTs can be classified into two categories: single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). Due to these interesting properties, CNTs can play a significant role in the fields of nanotechnology, electronics, optics, and materials science [2][3][4]. CNTs have been regarded as a powerful candidate for the reinforcement of metal matrix composites (MMCs) [5,6].
Attempts to develop CNT/Al matrix composites with enhanced strength are highly attractive, as they can be suitable structural materials in aerospace and automobile industries. However, at present, there are very few practical examples due to the difficulty in obtaining uniform dispersion and wetting of CNTs with the matrix. Most bulk CNT/Al composites exhibit poorer mechanical properties than expected. Many efforts have been made to prepare such CNT/metal composites with homogeneous distribution as well as high volume fraction of CNT simultaneously. Namely, the processing difficulty is represented by the uniform dispersion of the reinforcements into the matrix without damaging the nanotubes. To achieve this issue, chemical and mechanical treatments have mainly been conducted [5][6][7].
CNTs can be subject to some surface treatment to improve dispersion and wetting, which is a kind of chemical treatment. It has been reported that there is a possibility that properties of CNT may be impaired by surface treatments. The processing difficulty is represented by the uniform dispersion of the reinforcements into the matrix without damaging the nanotubes [8]. It was also proposed that in order not to reduce the advantage in the properties of CNT, dimethylacetamide as a solvent and potassium carbonate, which is an inorganic salt, acting as a dispersing agent, were used for dispersion under sonication treatments [9].
Mechanical treatments also contribute to improving the dispersibility of CNTs in MMCs. Uniform dispersion of CNT can be improved by plastic deformation after powder metallurgical processing. The post-processing could improve the interfacial bonding through the elimination of the residual pores and voids, enhancement of metal-CNT interfacial bonding, breakage of CNT clusters, and strengthening the alignment of CNTs [10]. It is considered that porosity in the composites can degrade mechanical properties, so it is effective to strengthen the interfacial bonding and enhance the densification by post-processing. Some researchers [11][12][13] have reported that hot extrusion and equal channel angular pressing (ECAP) were useful post-sintering processes to enhance the bonding strength between Al powders and improve dispersibility of the CNTs in Al matrix. Hot rolling can also be a candidate for this purpose to obtain flat or plate-shape samples [10].
In this study, high-performance lightweight CNT/Al-based composites were fabricated by combining powder metallurgy and repeated hot-rolling techniques. Dimethylacetamide as a solvent and potassium carbonate, which is an inorganic salt, as a dispersing agent can be used to reduce agglomeration of CNTs to bring out the inherent ability of CNTs in the composites. Microstructures and mechanical properties were investigated through scanning electron microscope (SEM) observations, micro Vickers hardness, and tensile tests to be related to the fabrication processes.
Fabrication
Al powder with a particle size of 30 μm and purity of 99.8% (supplied by Nilaco Co. Ltd., Nilaco Bldg., 1-20-6 Ginza, Chuo-ku, Tokyo 104-0061, Japan) and MWCNTs with a diameter of 10-15 nm and an aspect ratio of 1000 (supplied by CNano Technology Co. Ltd. through Marubeni Information Systems (MSYS) Co. Ltd., Shinjuku Garden Tower, 3-8-2, Okubo, Shinjuku-ku, Tokyo 169-0072, Japan) were mixed. Potassium carbonate was used as the dispersant, and dimethylacetamide was used as the solvent. The CNT powder and potassium carbonate were put together into dimethylacetamide, followed by sonication using ultrasonic equipment. Figure 1 shows a schematic illustration of the CNT dispersion process. The mixture was filtered using filter paper, and the CNT powder was taken out. A mixed powder was prepared from the CNT powder and Al powder, poured into a mold, and dried using a heat gun. After that, compression molding was performed at 150 kPa using a press machine. Then, it was sintered at 500 °C for 2 h in an electric furnace.
Hot Rolling
The sintered pieces were heated in an electric furnace at 400 °C for 15 min, and repeated hot rolling was performed with a reduction ratio of 3% for one rolling at the beginning. After rolling ten times, the reduction ratio was changed from 4 to 5% for one rolling. Then, the rolling was repeatedly conducted to a total reduction ratio of 30%. Figure 2 shows the repeated processes of hot rolling.
Characterization
After preparation of the composite materials, evaluation of the dispersibility of CNTs in the composites, Vickers hardness test, three-point bending test, and tensile test were carried out. Crosssectional observation of the CNT/Al composites carried out with a scanning electron microscope (SEM, SU8020, Hitachi High-Tech Co. Ltd., Toranomon Hills Business Tower, 1-17-1 Toranomon, Minato-ku, Tokyo 105-6409, Japan) and energy dispersive x-ray spectrometry (EDX). For Vickers hardness test, the cross section of the test piece was polished, and the Vickers hardness test (at five points) was performed. For three-point bending tests, the test pieces were prepared by cutting the outside edges from rolled samples. Displacement rate was 1.0 mm/min. The support span was 25 mm. For tensile tests, Instron-type tensile equipment was used, and the strain rate was set at 5.6 × 10 −6 /s. Elastic modulus, tensile strength, maximum strain could be estimated and anisotropy also examined.
Results
The SEM backscattered electron image and EDX mapping results for the cross section of 1.0 mass.%. CNT/Al composites with an ultrasonic treatment of 1 h are shown in Figure 3. Cylinder-like aggregation of CNT can be seen with the length of more than 100 μm. Figure 4 shows the crosssectional SEM images for the CNT/Al composites, which covered CNT contents of 0.5, 1.0, and 2.0 mass.%. and ultrasonic treatment time of 1, 3, and 5 h. It can be seen that the CNTs were relatively uniformly dispersed, even though some aggregations of CNT could be observed.
Next, we focused on estimating the CNT dispersion states. CNT dispersion states are represented by the combination of the length and number of CNT aggregations. Length and number of CNT aggregations were measured in the cross-section area of 1 mm 2 . Figure 5 shows the influence of sonication time on CNT dispersion states for 1.0 mass.% CNT composites. It can be seen in Figure 5 that with increasing sonication time, the number of CNT aggregation decreased, in particular, long CNT aggregation significantly decreased. Figure 6 shows the influence of CNT content on CNT dispersion states for a sonication time of 3 h. It can be seen in Figure 6 that with decreasing CNT content, the number of CNT aggregation decreased, for most of the length of the CNT aggregations. The photos of the test pieces before and after repeated hot rolling are shown in Figure 7. It can be seen that the edge parts had rolling cracks, which were cut off for making bending and tensile test pieces. Figure 8 shows the relationship between bending stress and strain (at the top surface of test pieces) derived from three-point bending load and displacement for a sonication time of 0, 3, and 4 h for 0.5 mass.% CNT/Al composites. Elastic analysis was applied to the whole deformation. It can be seen that the samples at a sonication time of 0 h and 4 h showed large deformation, but there was not as much difference in the maximum bending strength among the samples. Figure 9 shows the relationship between flexural modulus and sonication time for Al and 0.5 mass.% CNT/Al composites. It was confirmed that the flexural modulus increased with sonication time up to a sonication time of 3 h. In the case of 4 h, the value dropped sharply, which was almost the same as the value for 0 h. The reason for the low value of the flexural modulus at 0 h was due to the least dispersion of CNTs, that is, CNTs aggregated and formed into lumps. Next, we move to the results of the tensile test. Figure 10 shows the stress-strain curves for pure Al and 0.5 mass.% CNT/Al composites for sonication times of 0, 1, 2, 3, and 4 h. For elongation, compared with Al, the 0.5 mass.% CNT/Al composites with a sonication time of 3 and 4 h showed similar or larger elongation. The Young's modulus of pure Al is around 90 GPa, and the average values of Young's modulus of the composites were between 30 and 92 GPa. Maximum value was achieved for the composites with a sonication time of 2 h, and the minimum for the composites with a sonication time of 4 h. There can be a high value of Young's modulus for the composites with a sonication time of 2 h. For tensile strength, with an increase in sonication time, tensile strength tended to increase. The composites with a sonication time of 4 h showed the highest value of tensile strength. It can also be seen in Figure 10 that the composites with a sonication time of 4 h showed the highest tensile strength as well as the highest deformation ability. Figure 12 shows the Vickers hardness test results. The CNT content of the composites was 0.5 mass.%. The literature value of the Vickers hardness of pure Al (1000 series) is around 30 (HV). In this study, the rolled Al showed a hardness of more than four times. This is because work hardening occurs with plastic deformation by rolling. It was also seen that the influence of sonication time was not that high, and containing CNTs did not have much of an effect on Vickers hardness.
Discussion
In this study, chemical treatments were carried out to disperse CNTs in an Al matrix. Dimethylacetamide was used as a solvent and potassium carbonate, which is an inorganic salt, was used as a dispersing agent. Ultrasonic sonication was conducted for the slurry-like mixture of CNT and Al powders. The process of sonication is very important to make CNTs uniformly dispersed. Sonication time highly affects the dispersibility of CNTs in an Al matrix. It was shown in Figure 5, that with increasing sonication time, the amount of CNT aggregation decreased, particularly, long CNT aggregation decreased in the case of a sonication time longer than 2 h. Even though this result was prior to repeated hot rolling, a similar tendency in the relationship between the dispersibility of CNTs and sonication time can expected to be confirmed after repeated hot-rolling samples. It can be considered that the mechanical properties of the composites after repeated hot-rolling can be experimentally connected to sonication time, as shown in Figures 8-12, which are possibly also connected to the dispersibility of CNTs in Figure 5.
From Figures 5, 9, and 11a, the flexural modulus and (tensile) Young's modulus increased to some extent with increasing sonication time up to 2 or 3 h, then decreased. This may be due to the effect caused by a combination of states of aggregation of CNTs and rolling. Even so, flexural and Young's moduli were not so much affected by sonication time, which means that the dispersibility of CNTs did not have much influence on these moduli.
Meanwhile, from Figures 5 and 11b, with increasing sonication time, which led to higher dispersibility of CNTs, the tensile strength of the composites after repeated hot rolling increased. Repeated hot-rolling processes can also facilitate the increase in the tensile strength of the composites. As described in the Introduction, it is considered that post-sintering processing improves bonding strength between Al powders and CNTs, and the dispersion uniformity of CNTs in an Al matrix. The enhanced strength of the nanocomposites can be attributed to the stronger diffusional bonds and homogeneous distribution of CNTs in the Al matrix. It has been reported that homogenously distributed CNTs in an Al matrix can act as reinforcements to effectively prevent dislocation movement [10].
It can be seen in Figure 10 (tensile stress-strain curves) that the composites with sonication time of 4 h showed the largest elongation in the tensile test samples. This result is of much interest because some literature has reported that higher CNT content leads to lower toughness and lower ductility of the composites [10]. It is considered that plastic deformation such as rolling, extrusion, and drawing forms a strong texture, while hot rolling possibly causes dynamic recovery and recrystallization in such composites. Repeated hot rolling processes may contribute to higher tensile strength as well as high deformability of the composites.
Conclusions
CNT/Al composites were fabricated by combining powder metallurgy and repeated hot rolling techniques. Fabrication processes, microstructures including dispersibility of CNTs, and mechanical properties were examined. The following summary was obtained.
1. Chemical treatments using potassium carbonate as the dispersant and dimethylacetamide as the solvent followed by ultrasonic sonication were effective at avoiding agglomeration of CNTs and uniformly dispersed the CNTs in the Al matrix. 2. The sonication time had a great influence on the dispersing state of CNTs. With increasing sonication time, the amount of CNT aggregation decreased. 3. Vickers hardness of the composites was not particularly influenced by sonication time and CNT content. 4. It was demonstrated from the three-point bending test that flexural modulus increased with sonication time increasing up to 3 h. 5. It was demonstrated from the tensile test that with increasing sonication time, which led to higher dispersibility of the CNTs, the tensile strength of the composites after repeated hot rolling increased. The composites with a sonication time of 4 h (longest one in the current study) showed the highest tensile strength as well as the highest deformation ability. | 2020-11-26T09:07:34.832Z | 2020-11-20T00:00:00.000 | {
"year": 2020,
"sha1": "fcbad3df4246bec448418ca3075682b9322788e0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-477X/4/4/169/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6e0a15ba65052dfd6a8ca928846f70bf28cf530e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
238580994 | pes2o/s2orc | v3-fos-license | Single-Task or Dual-Task? Gait Assessment as a Potential Diagnostic Tool for Alzheimer’s Dementia
Background: A person’s gait performance requires the integration of sensorimotor and cognitive systems. Therefore, a person’s gait may be influenced by concurrent cognitive load such as simultaneous talking. Although it has been known that gait performance of people with Alzheimer’s dementia (AD) is compromised when they attempt a dual-task walking task, it is unclear if using a dual-task gait performance during an AD assessment yields higher diagnostic accuracy. Objective: This study was designed to compare the predictive power for AD of dual-task gait performance in an AD assessment to that of single-task gait performance. Methods: Participants (14 with AD and 15 healthy controls) walked across the GAITRite© Portable Walkway mat under three different cognitive load conditions: no simultaneous cognitive load, walking while counting numbers by ones, and walking while completing category naming. Results: Multiple logistic regression revealed that the gait performance under a dual-task condition (i.e., concurrent counting or category naming) increased the proportion of variance explained by the FAP, SL, and DST, of the incidence of AD. Conclusion: Dual-task walking and talking may be a more effective diagnostic feature than single-task walking in a comprehensive AD diagnostic assessment.
INTRODUCTION
It has long been believed that the motor system is separate from the cognitive system both functionally and anatomically. Based on this perception, walking has been considered an over-learned and automatic activity. However, the motor and cognitive systems are interwoven at the cerebrum level [1], thus, gait coordination in walking cannot be completely spontaneous automatic. Rather, it involves continuous control of the body position [2] as well as highermental processes such as attention, working memory, decision-making, and problem-solving [3]. In other words, successful gait performance requires appropriate integration of the sensorimotor and cognitive systems [4]. When a person fails to combine the two systems successfully, the risk of falls may be elevated. Previous studies have shown that a concurrent cognitive load can affect postural control such as gait and result in injurious falls [1,[5][6][7][8].
As gait performance requires the coordination of sensorimotor and cognitive systems, gait instability is commonly observed in people with Alzheimer's dementia (AD) [9][10][11][12][13][14]. The high incidence of gait 1184 C. Oh / Dual-Task Gait Assessment for AD Diagnosis instability among people with dementia has resulted in recommendations for providing fall-prevention training [15,16]. In addition, gait compromise in people with AD may be an early sign of the disease [9,13,[17][18][19]. To date, the majority of studies investigating gait performance in people with AD have employed a single-task paradigm, which involves walking only. However, dual-task gait assessment such as walking and talking testing can provide more practical insight given that people commonly combine the two activities in their daily lives [14]. This combination creates a commonly occurring dual-task activity that requires both motor control and cognitive performance [20]. The manipulation of the concurrent cognitive load (i.e., talking) affects gait [21] and thus, the gait assessment under different levels of cognitive load (i.e., different speech tasks) may be useful to show the impact of cognitive load on gait. Concurrent walking and talking is more taxing for people with AD because of their cognitive impairments [14,20,21].
Given the relationship between cognitive and walking performances, authors of recent studies have started to employ dual-task walking assessment and have suggested adopting the dual-task paradigm as a screening tool for cognitive impairment. For example, Rosso and colleagues proposed that dual-task walking assessment can be used as a risk assessment tool for mild cognitive impairment (MCI) [22]. Similarly, Mancioppi and colleagues stated that a dualtask assessment incorporating cognitive and motor tasks is effective for MCI diagnosis [23]. Finally, de Oliveira Silva and colleagues suggested that poor dual-task gait performance should be considered as a functional screening tool for dementia [24].
Despite the usefulness of a dual-task walking assessment to screen cognitive impairment, it is still unclear whether the dual-task assessment significantly improves the accuracy in detecting AD, compared to the single-task paradigm. In order to suggest the inclusion of dual-task gait performance in AD screening/diagnosis tools, it should be clear that the dual-task assessment is more desirable than the single-task assessment. Therefore, this study was designed to investigate the predictive power of gait performance both as a single-task and when gait is paired with a concurrent cognitive load in a dual-task. It is hypothesized that 1) gait performance of people with AD will deteriorate when completing a dual-task of walking and talking in comparison to the singletask of walking without a cognitive load, showing greater decrements when the dual-task demand is increased, and 2) the addition of a concurrent cog-nitive load will yield a significant improvement in AD detection.
Participants
Fourteen individuals with AD and fifteen healthy older adults participated in this study, approved by the institutional review board at Ohio University. The individuals with AD were those who 1) were diagnosed with probable AD per the NINCDS-ADRDA Work Group procedures [25] but with no additional neurological diagnosis, 2) obtained a score representing mild to moderate dementia (a score between 80 and 129) on the Dementia Rating Scale 2 (DRS-2) [26], and 3) had no history of injurious falls within 12 months of participation. The healthy older adults were eligible to participate in this study if they obtained scores in the normal range (≥26) using the Montreal Cognitive Assessment (MoCA) [27]. A MoCA score of 26 and higher is considered not pathological [27]. In addition, the healthy participants were required to have no history of injurious falls in the past 12 months.
All participants exhibited sufficient vision and hearing to respond appropriately to the orally delivered study instructions. In addition, they were able to walk 580 cm with or without assistance. The average age of the people with AD was 78.03 years (standard deviation [SD] = 12.06) with an average of 15.81 years (SD = 2.48) of education. This group obtained a mean score of 88.14 (SD = 7.07) on the DRS-2. For healthy older adults, the mean age was 72.71 years (SD = 11.86) with 15.92 years (SD = 2.08) of education. On average, the healthy older adults scored a 27.73 (SD = 1.29) on the MoCA, which was above the cutoff score (26) for possible cognitive impairment. The two groups did not differ significantly in age (t(25) = 1.08, p = 0.23) or education (t(25) = 0.61, p = 0.53). The participants' demographic information can be found in Table 1.
Instrumentation
The GAITRite© Portable Walkway System from CIR Systems, Inc., was employed to obtain quantified gait performance from each participant. The walkway system consists of an electronic walkway mat and the gait analysis software. The walkway mat is 580 cmlong and 88 cm-wide and encapsulates 18,432 sensors in an active area 488 cm-long and 61 cm-wide with 1.27 cm between the sensors. The sensors record 24 temporal and 14 spatial gait parameters from each footfall at a 120 samples per second and a temporal resolution of 18.75 ms. The GAITRite© system is allows walkers to be tested using walking aids of their choice. The gait analysis software isolates traces of walking aids that are separate from footfalls and erases them. When signs of walking aids are not identified correctly, they can be manually removed. The GAITRite© Portable Walkway System exhibits strong validity and reliability. McDonough and colleagues found excellent intraclass correlation coefficient (ICC) between paper-and-pencil and GAITRite©-measured spatial parameters (ICC > 95) and between video-based and GAITRite©-measured temporal parameters (ICC > 93) [28]. Bilney and colleagues reported that GAITRite© system had strong validity and test-retest reliability for specific gait parameters when compared to the Clinical Stride Analyzer, a valid and reliable tool for gait assessment (e.g., ICC for velocity, stride length, and cadence between the two systems = 0.99) [29]. Parallel results were found by Webster and colleagues who stated that the GAITRite© system had excellent validity and reliability for specific gait parameters (e.g., velocity, cadence, step time variables, etc.) in comparison to the Vicon-512, a 3-demensional motion analysis system. (e.g., ICC > 92 and repeatability coefficients between 1.0% and 5.9% of mean values for velocity) [30].
Procedures
This study followed the study protocol used in a previous study [14]. Participants were directed to complete walks on the 980 cm-long GAITRite© mat including 200 cm off either end for acceleration and deceleration under three different cognitive load conditions: the baseline (single-task), low cognitive load (dual-task), and high cognitive load (dual-task) conditions. In the baseline condition, participants were asked to walk along the GAITRite© walkway mat normally without talking. The low cognitive load condition was defined as walking while counting numbers by ones. For this task, a two-digit number was randomly selected as a starting number and given to each individual. Lastly, the high cognitive load condition consisted of walking while generating as many words in a given category (e.g., animal) as possible. This task was drawn from the category fluency test by Benton [31].
Prior to completing the three walking conditions, each participant had an opportunity(es) to practice the dual-task conditions while seated and walking to ensure that s/he understood the tasks. Participants were instructed to use any kind of walking assistance (e.g., human assistance or a cane) as needed to represent their daily ambulation. All participants were able to complete the dual-task walking. On average, individuals with AD counted 10 numbers (SD = 1.6) and generated 4 words (SD = 2.0) while healthy controls counted 12 numbers (SD = 2.1) and generated 8 words (SD = 1.3) under the cognitive load conditions. Each condition was repeated two times, and the three walking conditions were distributed randomly to account for potential order effects. Breaks were offered to the participants as many times and at any time during the experiment session.
Analyses
For statistical analyses, four gait parameters were selected as dependent variables: functional ambulation profile (FAP), stride length (SL), velocity, and double support time (DST). The FAP is a composite gait score that ranges from 0 to 100; a score between 95 and 100 is typical for healthy adults. First described by Nelson [32], the FAP is widely used to evaluate the stability of gait. Although FAP is an aggregate score considering various gait parameters, other important parameters such as SL and DST are not factored into the FAP calculation. In addition, for the FAP computation, a person's gait velocity is normalized after taking the person's leg length into account. Therefore, SL, DST, and the direct measurement of gait velocity were selected for separate analyses.
In order to determine the predictive power of gait performance with three different task settings (baseline, low cognitive load, and high cognitive load) to the incidence of AD, multiple logistic regression analyses were conducted with the incidence of AD as the dependent variable (Responses: Yes/No), and one of the four selected gait parameters combined with single-or dual-task conditions as the independent variables (e.g., FAP under the baseline condition, FAP under the low cognitive load condition, and FAP under the high cognitive load condition, etc.). Furthermore, to examine whether the gait performance under the dual-task conditions can account for greater variance in the incidence of AD, the performance at the baseline, low and high cognitive load conditions was entered sequentially for each gait parameter, resulting in several different models. This was followed by a model comparison based on residual deviation with the model with only single-task performance as the reference. These analyses were conducted for each of the four gait parameters separately. The analyses were conducted using R version 3.6.3.
Functional ambulation profile
All models of the Omnibus test for multiple logistic regression indicated that FAP both under the singleand dual-task conditions accounted for a significant amount of variability between the two groups of participants (Table 2). Furthermore, Table 2 shows a 15% proportional improvement in pseudo R 2 from model 1 (reference model) to model 4 (full model). Thus, the predictive power increased significantly when the FAP score under the high cognitive load condition was added to the reference model with or without the score under the low cognitive load condition. Similarly, Table 3 shows a model comparison analysis based on residual deviance. Specifically, model 3 utilizing the FAP scores under the baseline and high cognitive load conditions (p = 0.001) and the model 4 using the FAP scores under all of the three conditions (p = 0.004) were significantly better than the reference model, in which only FAP under the baseline condition was included, or the model 2 where the FAP scores under the baseline and low cognitive load conditions were taken into account. The results are also illustrated in Fig. 1. Figure 2 shows that both groups walked slower when taxed with a higher cognitive load. On average, people with AD walked at a rate of 48.47 cm/s (SD = 17.35) under the baseline condition, 42.76 cm/s Similar to FAP, both single-and dual-task velocity explained significant amount of variability in predicting AD (for all models, p < 0.001, see Table 4). Although adding the dual-task velocity improved the model fit, the improvement was not statistically significant as detailed in Table 5.
Stride length
In accordance with FAP and velocity, participants' SL decreased for cognitive load (see Fig. 3). People with AD walked with a mean SL of 84.26 cm (SD = 24.66) under the baseline condition, which was shortened to a mean of 76.85 cm (SD = 23.65) under the low cognitive load condition, then to 68.53 cm (SD = 14.99) under the high cognitive load condition. The healthy controls' SL was more persistent across cognitive load conditions than the AD group's as the baseline condition SL was 119.26 cm (SD = 16.01), which decreased to 118.54 cm (SD = 15.92) and to 112.25 cm (SD = 10.10) under low and high cognitive load conditions respectively.
The logistic regression analysis showed that the single-task SL alone did not predict the AD status of the participant, but any combination of single-and dual-task SL did (i.e., Models 2 and 4, p < 0.001; Model 3, p = 0.001, as in Table 6). Similarly, when compared to the reference model, the single-task SL combined with the SL under low cognitive load (p < 0.01), with the SL under high cognitive load (p < 0.001), or with the SL under both low and high cognitive load (p < 0.001) conditions significantly improved the model fit (Table 7).
Double support time
Given the nature of this parameter, participants' DST was in an inverse pattern to the other three parameters. Namely, DST of both people with AD and healthy older adults increased with higher cognitive load. People with AD needed a mean DST of 0.950 s (SD = 0.784) to complete walking along the 580 cm walkway mat under the baseline condi- Fig. 4). Similar to FAP and velocity, significant improvements in identifying people with AD occurred for both single-and dual-task DSTs (all p < 0.001, see Table 8). The model compar- ison revealed that adding one or both of the dual-task DST to the reference model significantly improved the model fit (Table 9). More specifically, adding DST under low cognitive load condition to the baseline model yielded a significant improvement (p < 0.01), so did adding DST at under the high cognitive load condition (p < 0.001), and adding both (p < 0.001).
DISCUSSION
The present investigation was designed to demonstrate the usefulness of adopting the dual-task gait assessment as a component for AD diagnosis. Two hypotheses were developed in alignment with the purpose: First, gait performance of people with AD will be compromised when they are cognitively taxed, and second, the dual-task gait assessment will improve the accuracy for screening AD than the single-task gait assessment. The findings of the current study fully confirmed the first hypothesis and partially the second hypothesis.
It was evident that concurrent cognitive load affected gait in people with AD: all of the four gait parameters (i.e., FAP, velocity, SL, and DST) were compromised as cognitive demands increased. As Kahneman proposed, the simultaneous activity of walking and talking requires cognitive resource allocation [33], specifically executive function and attention divided between gait and cognition [34][35][36][37][38][39]. The ability to share the cognitive resource is particularly impaired in people with AD [40][41][42].
Neuroimaging studies have shown that brain networks are shared by cognitive and motor control in the frontal and temporal areas, which results in poorer gait performance such as slow gait (velocity) [21,[43][44][45]. Due to the increased load on cortical activity for complicated situations when gait stability is affected, people with dementia have difficulty allocating sufficient cognitive resources available in the frontal or temporal lobes [44]. These situations may result in a higher risk and prevalence of injurious falls among people with AD [3]. This claim reinforces studies showing poorer gait performance associated with damage/atrophy in the prefrontal and hippocampal regions [45][46][47][48]. More specifically, these authors found that reduced walking speeds share neural substrate such as smaller hippocampus [48][49][50][51]; accumulation of amyloid- [51]; hyperintensities in the subcortical regions [52]; and stride length variability with Apolipoprotein E4 [53,54].
In the present study adoption of a dual task in gait assessment appeared to be more useful for AD screening than a single task of walking without a concurrent cognitive load. In general, the addition of the concurrent cognitive load improved the probability of detecting AD significantly. Three of the four selected gait parameters (i.e., FAP, SL, and DST) explained a larger proportion of the variance and had higher odds ratios under the high cognitive load condition of simultaneous walking and category naming. For DST, factoring in counting numbers by ones also improved the model fit. However, for velocity neither the low nor high cognitive load increased the probability of detecting AD. It should be noted that people with AD in this study slowed their gait when cognitively taxed. This finding is consistent with findings of previous studies showing slow gait speed in people with AD or other dementias when performing a dual-task activity [20,21,36,37,58]. This current investigation revealed that including the dual-task gait velocity, either under the low or high cognitive load condition, did not significantly improve the model fit over the single-task gait velocity. This non-significant effect of adding the dual-task gait velocity on predicting incidence of AD may be attributed to the small sample size or the experimental tasks (i.e., counting numbers and category naming) that do not represent daily living activity. Alternatively, the short walkway mat might have not been adequate to induce effects of high cognitive load on walking, as people often recall category members well in the beginning but struggle after they name several [59]. As an option, future studies may consider weighing various gait parameters differently for a more accurate AD screening/diagnosis. To date, there has been no study investigating how to weigh different gait parameters as a component of an AD diagnosis. However, when Meilán et al. [60] weighed different speech parameters to discriminate AD, they reported 84.8% accuracy. Therefore, scrutinizing impacts of different gait parameters on AD screening/diagnosis may yield a more accurate result.
Several limitations exist in this study: First, the small sample size (total n = 29) makes this study underpowered for the logistic regression analyses and the specific subpopulation recruited for this study (i.e., people with mild to moderate AD with limited consideration of functional differences) reduces the generalizability of the current results. The insufficient power may lead to a reduced reproducibility of the findings or may even result in yielding unstable and/or null findings. Therefore, future studies including more participants from more diverse populations are warranted. In fact, a number of studies have proposed that poorer gait performance may be associated with non-Alzheimer's type dementia [5,61,62]. Future studies that are sufficiently powered for logistic regression analyses will provide stronger evidence to support the findings of previous studies. In addition, studies that consider functional differences in people with different levels of cognitive functioning will be useful. Second, participants of this study were instructed to use any kind of walking aids that they typically used for their daily ambulation. When a participant needed human assistance, the requested assistance was provided to the degree that does not impede or benefit the person's walking performance. Although wheeled-walker may increase walking speed, [e.g., 63,64], stride length [64] and swing time [64], the increase in walking speed is not significant when a person walks with a cane or crutch [63]. The limited data on the effects of walking aids on parameters of gait make it difficult to draw a generalized conclusion. Thus, future dual-task studies should include samples of people who use a variety of walking aids, including human assistance. Third, walking along the short walkway mat while count-ing numbers or completing a naming task may not be sufficient to represent daily walking performance. Thus, it is recommended that future investigations adopt more real-life dual-task activities such as spontaneous conversation and consider assisted walking and independent walking groups separately. Finally, investigations of gait parameters other than the four selected ones for this investigation may add valuable information for determining the optimum gait measurements to adopt for dual-task assessments for people with dementia.
Despite the limitations, the current investigation adds theoretical and empirical evidence to the literature emphasizing the role of dual-task gait assessments in an AD assessment battery. The current results indicate the challenges experienced by people with AD whose cognitive reserve is less available when walking and completing a simultaneous oral task. These data indicate that the dual-task gait assessment may supplement the current AD screening/diagnostic tools.
ACKNOWLEDGMENTS
I gratefully acknowledge the contribution of Dr. Elizabeth Madden for providing partial data. I also would like to thank Dr. Richard Morris and Xianhui Wang for their careful readings of the manuscript and statistical advice. | 2021-10-12T06:23:26.273Z | 2021-10-08T00:00:00.000 | {
"year": 2021,
"sha1": "f6ddd49c4a0d8d575843c5b281492ba84087766d",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7c1ee8c2b6eb71de2fbb3de2adb94e4d2ce2cc6f",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229721694 | pes2o/s2orc | v3-fos-license | Cyclodextrins as a Key Piece in Nanostructured Materials: Quantitation and Remediation of Pollutants
Separation and pre-concentration of trace pollutants from their matrix by reversible formation of inclusion complexes has turned into a widely studied field, especially for the benefits provided to different areas. Cyclodextrins are non-toxic oligosaccharides that are well known for their host–guest chemistry, low prices, and negligible environmental impact. Therefore, they have been widely used as chiral selectors and delivery systems in the pharmaceutical and food industry over time. However, their use for extraction purposes is hampered by their high solubility in water. This difficulty is being overcome with a variety of investigations in materials science. The setting-up of novel solid sorbents with improved properties thanks to the presence of cyclodextrins at their structure is still an open research area. Some properties they can offer, such as an increased selectivity or a good distribution along the surface of a solid support, which provides better accessibility for guest molecules, are characteristics of great interest. This systematic review reports the most significant uses of cyclodextrins for the adsorption of pollutants in different-origin samples based on the works reported in the literature in the last years. The study has been carried out indistinctly for quantitation and remediation purposes.
The Environmental Problem
The reduction of environmental pollution is one of the highest challenges worldwide for global ecological preservation. Since industry and transport have become an essential part of modern society, waste production is an inevitable outcome of human developmental activities [1]. In recent years, the release of various legacy and non-regulated harmful compounds into the environment has attracted great attention because of their toxicity and widespread use [2]. The pollutants emitted from different sources contaminate air, water, and soil environments, as well as food crops and other scenes with an impact on both human health and the ecological system [1]. The more frequently detected pollutants cover such a broad range of organic and inorganic compounds [3] such as trace metals, polycyclic aromatic hydrocarbons (PAHs), volatile organic compounds (VOCs), pesticides, dyes, pharmaceutical residues, and other emerging pollutants whose specific effects are in the majority of cases still poorly known. Indeed, the large number of pollutants of potential environmental concern poses a challenge for regulatory agencies [4].
Over the years, different regulations have established a legislative framework for the presence of pollutants all around. In Europe, there exist directives and recommendations regarding the quality of water [5,6], air [7,8], soil [9], and even food products [10,11]. Some of them establish specific concentration limits for harmful compounds in the respective samples. Since the effects that some of these pollutants may have are increasingly better known, the concentration limits established are lower every time, frequently reaching the trace level.
On the one hand, the growing demand for using the available natural resources has prompted rapid developments in waste management by introducing cleaning, recycle, and reuse. Recycling human-affected natural sources requires competent methods for the removal of habitual and emerging pollutants from them [12]. Over the last decades, there have been significant research and engineering advances in remediation. In this sense, treatment technologies include extraction, transformation/degradation, or sequestration and immobilization by sorption, either used individually or in a combined way [13]. On the other hand, it is mandatory to develop analytical methods that allow detecting such low concentrations of the compounds of interest to monitor them and implement the appropriate corrective measures if necessary. Considerable efforts have been made in recent decades towards the identification and quantitation of the more relevant contaminants of emerging concern in the environment, certain types of nourishments, or for health control [4], among others.
Sorption techniques are presented as a worthy opportunity both for remediation and for monitoring purposes. A variety of studies have shown that the sorption of trace-level pollutants in aqueous matrices and through air monitoring presents problems such as the lack of accuracy and precision of the results due to the low concentrations being treated [14]. It is also critical to avoid analytes' losses due to the wrong adsorption or chemical and photochemical degradation. In this sense, it is crucial to develop simple, rapid, and efficient methods for adsorbing pollutants [15].
In monitoring, a separation step that eliminates matrix-origin interferences and preconcentrates analytes is mandatory [3] to improve the detection limits of the analytical methodologies applied. Despite the latest improvements in the sensitivity and selectivity of modern detection systems, conventional separation techniques are frequently used to overcome interferences [2]. They all present advantages and disadvantages [16]. Concretely, batch and column techniques in which analytes are adsorbed on water-insoluble materials and then eluted have been widely used among the existing enrichment techniques. For the materials to be beneficial in the extraction of these pollutants, the collection of analytes should also be quantitative and repeatable, and they should be eluted with minimum efforts exerted in the experimental procedure [2].
Several adsorbents are commercially accessible for the adsorption of pollutants in different-origin matrices, mainly water, air, and food. The most widely used adsorbent is undoubtedly C18 [15] due to its high availability, affordable price, and the good results it usually offers. Other commercially available solid phases showing high request for the described purposes are hydrophilic-lipophilic balance cartridges, ionic exchange columns, or carbon-based materials, such as activated carbon or carbon molecular sieves [17,18]. Some criteria have been established to guarantee accurate cleaning and/or determination of the specific analytes under study. Among them, the correct enrichment of the analytes, their complete and fast desorption, a homogeneous and inert surface to avoid artifact formation, irreversible adsorption, and catalytic effects, low affinity to water, low competition with other constituents of the sample, high stability, and multiple uses can be mentioned [18].
However, the lack of selectivity and the frequent competition of analytes with water in aqueous matrices are some drawbacks presented by commercial solid phases [19]. For this reason, the investigation and development of new materials with enhanced properties for their application in sorption processes is a challenge that must be overcome. Improving selectivity through structural variations in the adsorbents is recently being developed with increasing force [20,21]. Oppositely, the formation of inclusion complexes between adequate molecules in the solid phase and the analytes of interest is also an area of ongoing research.
Host-Guest Adsorption: Cyclodextrins
Inclusion complexes, or host-guest complexes, are non-covalent reversible structures of two or more molecules with superior physicochemical properties than those exhibited by the molecules individually [22]. Although historically developed in solution, there exists increasing interest in implementing these principles to systems assembling solid surfaces, since the presence of a solid surface not only ensures a high degree of crystallinity in the host network, thus enabling efficient capture of guests, but also provides additional stability to the resultant host-guest complex via molecule-surface interactions. In this sense, host-guest interactions are already being exploited for the reversible adsorption of analytes in solid materials, which can provide an improvement in some features of analytical methods [23]. To this end, several types of compounds that are able to act as host molecules have been synthesized and used in sorbents, including crown ethers, cryptands, carcerands, cucurbiturils, paracyclophanes, calixarenes, and cyclodextrins.
Cyclodextrins (CDs) are a family of cyclic oligosaccharides obtained from the union of glucose monomers linked by α-1,4 glycosidic bonds. The natural occurrence of CDs can be classified into α-, β-, and γ-cyclodextrins, which are composed of 6, 7, and 8 glucose units, respectively ( Figure 1). Thus, their diameters increase with the number of glucose units. They are shaped as truncated cones due to the constitutional asymmetry of the glucopyranose rings. The hydroxyl groups are oriented to the outer space flanking the upper and lower rims, with the primary hydroxyl groups towards the narrow edge of the cone and the secondary hydroxyl groups towards the wider edge [24]. The central cavity of the cone is lined with the skeletal carbons and ethereal oxygen of the glucose residues, which produce a hydrophobic zone. Therefore, they exhibit the ability to trap guest molecules inside them through the formation of host-guest complexes [25]. This feature, together with the possibility of adapting the type and size of the analyte to be encapsulated taking into account the cyclodextrin used, as well as to the medium in which analytes are contained [26], has positioned CDs as promising nanoscale carriers. They are capable of improving stability while decreasing the reactivity of the guest compound. adequate molecules in the solid phase and the analytes of interest is also an area of ongoing research.
Host-Guest Adsorption: Cyclodextrins
Inclusion complexes, or host-guest complexes, are non-covalent reversible structures of two or more molecules with superior physicochemical properties than those exhibited by the molecules individually [22]. Although historically developed in solution, there exists increasing interest in implementing these principles to systems assembling solid surfaces, since the presence of a solid surface not only ensures a high degree of crystallinity in the host network, thus enabling efficient capture of guests, but also provides additional stability to the resultant host-guest complex via molecule-surface interactions. In this sense, host-guest interactions are already being exploited for the reversible adsorption of analytes in solid materials, which can provide an improvement in some features of analytical methods [23]. To this end, several types of compounds that are able to act as host molecules have been synthesized and used in sorbents, including crown ethers, cryptands, carcerands, cucurbiturils, paracyclophanes, calixarenes, and cyclodextrins.
Cyclodextrins (CDs) are a family of cyclic oligosaccharides obtained from the union of glucose monomers linked by α-1,4 glycosidic bonds. The natural occurrence of CDs can be classified into α-, β-, and γ-cyclodextrins, which are composed of 6, 7, and 8 glucose units, respectively ( Figure 1). Thus, their diameters increase with the number of glucose units. They are shaped as truncated cones due to the constitutional asymmetry of the glucopyranose rings. The hydroxyl groups are oriented to the outer space flanking the upper and lower rims, with the primary hydroxyl groups towards the narrow edge of the cone and the secondary hydroxyl groups towards the wider edge [24]. The central cavity of the cone is lined with the skeletal carbons and ethereal oxygen of the glucose residues, which produce a hydrophobic zone. Therefore, they exhibit the ability to trap guest molecules inside them through the formation of host-guest complexes [25]. This feature, together with the possibility of adapting the type and size of the analyte to be encapsulated taking into account the cyclodextrin used, as well as to the medium in which analytes are contained [26], has positioned CDs as promising nanoscale carriers. They are capable of improving stability while decreasing the reactivity of the guest compound. Further, CDs can be modified by means of their hydroxyl groups in the external hydrophilic zone. In fact, different derivatives have been synthesized by amination, esterification, or etherification [27,28] to suit the application. This possibility has produced the study of new synthetic methods for producing cyclodextrin-based materials, since using the well-known host-guest chemistry of natural CDs is a logical stepping-stone to form Nanomaterials 2021, 11, 7 4 of 28 more complex materials. A variety of research on materials synthesized from CDs that allow the extraction of different compounds from urine, water, soil, or food sample has been described. They have been extensively used as adsorbents in analytical chemistry, not only in the form of cross-linked cyclodextrins [29], but also for the functionalization of other supports [30].
This review aims to offer an overview of the different types of materials containing cyclodextrin that have been used to date for the adsorption of pollutants in differentorigin samples, either with quantitation or remediation purposes, based on the most relevant works reported in the literature in the last 20 years. The benefits of the reversible encapsulation of analytes by the formation of inclusion complexes are emphasized.
The Relevance of the Support
It is well known that CDs have certain limitations for their use as individual sorbents [31] or simply as a part of the mentioned hybrid materials. Their solubility in water makes their losses during the pollutant-caption procedure in aqueous samples significant, which is reflected in a decrease in the repeatability of analytical methods and the loading capacity of the solid phases. For example, native CDs were used as sorbents to perform solid-phase extraction (SPE) of pesticides from water samples, tomato juice, and orange juice [32]. Additionally, cyclodextrin-hybrid materials were tested to extract PAHs from water and VOCs from air samples [33][34][35]. These studies showed their usages, but also their limitations. In this sense, the key to make CDs suitable for extraction purposes is to increase their insolubility [31] by chemically connecting them to water-insoluble supports.
These supports can be of a very varied chemical nature: inorganic, organic, and also hybrid solids. Regardless of their composition, they all provide a fundamental feature: a good dispersion and accessibility of CDs, thus maximizing the possible interactions with analytes to be retained in them, is necessary. This premise generally implies maximizing the surface/mass ratio of the support. However, the irregular distribution of CDs and the frequently low cyclodextrin loading of these types of phases can limit their adsorption capability [36]. For this reason, the materials involved must provide high surface areas, which are achieved through the existence of pores or by reducing particle sizes. Therefore, microporous (<2 nm), mesoporous (2 to 50 nm), and macroporous (>50 nm) (nano)materials [37] are commonly synthesized.
Silica-Based Supports
Commercial amorphous silica can be classified into wet-or dry-type silica [38] depending on the preparation method used. On the one hand, silica gel is a granular, porous form of SiO 2 manufactured on a large scale from sodium silicate by working under aqueous alkaline media (Figure 2b). The relatively high inter-particle condensation leads to void formation. Then, the resulting solid shows a high porosity and surface areas up to 800 m 2 g −1 . On the other hand, fumed silica is known as pyrogenic silica, because it is produced in a flame. It consists of nano/micrometric primary particles of amorphous silica fused into branched chain-like aggregates at the submicron scale (Figure 2a). Its main particle size is established between 5-50 nm, and the grains are non-porous, with surface areas in the 50-600 m 2 g −1 range. From the structural point of view, the main difference between both types lies in the aggregation level of the primary particles, with greater compactness in the case of silica gel when compared to fumed silica due to the preparative method used. Thus, the proportion of silanol groups (Q 2 and Q 3 ) is much higher in the silica gel, which converts it into highly hydrophilic, as well as an appropriate candidate for the functionalization or anchoring of modified CDs to it. Contrary, the condensed Si species (Q 4 ) dominate in the fumed silica, which provides them with a marked hydrophobicity. Additionally, it is possible to use already shaped siliceous supports such as commercial capillary silica (untreated or later modified), which are normally obtained by pyrolysis and can be therefore classified as fumed silica. component constitutes an important step that can modify its textural properties. The most common way is through a mild heat treatment that usually induces moderated collapse of the structure (Figure 2b). However, the liquid component for the gel is replaced with a gas (through supercritical drying or freeze-drying); a lower collapse of the gel structure occurs (Figure 2c). The result is a solid with extremely low density called aerogel [40], which usually shows larger pores in comparison with xerogels. In 1992, a revolution in porous materials occurred when scientists from the Mobil Company published the synthesis and characterization of the material called MCM-41, the first ordered mesoporous silica [41]. This solid and many others described since then are synthesized taking advantage of the template effect generated by the surfactant micelles [42]. The condensation of the inorganic component in the inter-micellar space of the surfactant-silica self-assembly leads to solid that can be considered as mineral replicas of Oppositely to commercial products, sol-gel chemistry strategies using other different Si sources such as tetraethylorthosilicate (TEOS) or other modified alkoxides have made it possible to synthesize a great variety of silica gels (pure or hybrid), which can generate silica or organo-silica xerogels [39] after the extraction of the solvent used. Depending on various preparative parameters such as pH, temperature, reaction medium, the proportion of TEOS compared to other silanes with organic groups, the size and nature of the organic groups, etc., a wide variety of porous silica-based xerogels are available in a range of sizes, from microporous to macroporous. The porous structure can be tuned by properly choosing the experimental parameters. The key to designing the desired porosity is to achieve fine control of the hydrolysis and condensation processes (highly pH-dependent) of siliceous species. Xerogels prepared at a pH of around the silica isoelectric point are microporous. As the pH increases, an evolution through mesoscale to macroscale pores occurs. Along with the aging time of the gel, the extraction way of the liquid component constitutes an important step that can modify its textural properties. The most common way is through a mild heat treatment that usually induces moderated collapse of the structure (Figure 2b). However, the liquid component for the gel is replaced with a gas (through supercritical drying or freeze-drying); a lower collapse of the gel structure occurs (Figure 2c). The result is a solid with extremely low density called aerogel [40], which usually shows larger pores in comparison with xerogels.
In 1992, a revolution in porous materials occurred when scientists from the Mobil Company published the synthesis and characterization of the material called MCM-41, the first ordered mesoporous silica [41]. This solid and many others described since then are synthesized taking advantage of the template effect generated by the surfactant micelles [42]. The condensation of the inorganic component in the inter-micellar space of the surfactant-silica self-assembly leads to solid that can be considered as mineral replicas of liquid crystal phases. The surfactant removal through thermal or chemical treatment generated the mesopores (Figure 2d). Thus, controlling the size of the micelles is the key to modulate the dimensions of the mesopores. The possibility of using different surfactants (ionic, anionic, neutral), as well as the use of swelling agents, makes it possible to regulate pore sizes between ca. 2 and 50 nm. These solids reach surface areas around 1000 m 2 g −1 .
A large variety of materials have been described [43], not only with different pore sizes but also with different mesopore arrays (hexagonal or cubic symmetry). The most common ones are solids MCM-41, MSU-H, SBA-15, MCM-48, and SBA-1. The differences in their symmetry can affect the degree of accessibility of pollutants to active centers such as CDs, with cubic arrays being in principle more favorable due to the interconnection of the mesopores in 3D. Regardless of symmetry and unlike xerogels (which normally generate cage-like pores), the mesopores generated thanks to the surfactant micelles are cylindrical and with very homogeneous sizes.
Polymeric Supports
The variety of chemical reactions involved in the formation of polymeric supports is much wider than in the case of silica supports. Polymeric supports are an extensive and common family used for various applications, including those related to environmental problems [44]. Two synthesis strategies can be differentiated in a simple approach: one of two-pot type, involving the post-modification of an already formed support, and the other of one-pot type, where the CD can be incorporated simultaneously to the formation of the polymer.
To increase the final area of the material, it is possible to use fibers or layers as the phase where the polymer is deposited. Polymers such as poly(dimethylsiloxane) can be used for this purpose. When the thickness of the polymeric layer is small (<30 µm), the material does not usually present porosity, so only the CDs located on the surface are exposed to the analytes or pollutants of interest (Figure 3a,d). When the layer thickness increases (<50 µm), a certain porosity can be generated, which is associated with the globular growth of the polymer. In this case, the pores formed are in the macropore range due to the micrometric size of the polymeric globules. Additionally, nanosponges are an extensive family of nanomaterials synthesized using one-pot methods. The term was first used in 1999 by Min Ma and De Quan Li [46] to refer to novel nanoporous polymers made up of CDs connected with diisocyanate linkers. However, the history of cross-linked insoluble CD polymers dates back to 1965, when Solms and Egli published the preparation of polymeric networks made up of cross-linked CDs with epichlorohydrin [47]. Now, the term nanosponge (NS) refers to a class of insoluble materials with distinctive nanometric porosity that can be synthesized using either organic or inorganic compounds. A recent review on the subject classifies nanosponges into four categories [48]. The first generation of nanosponges comprises urethane, carbonate, ether, and ester NSs synthesized by reacting CDs with a cross-linking agent. The addition of specific functionalities to the first-generation nanosponges allowed them to extend their field of application and gave rise to the second generation. In this sense, three strategies can be used to incorporate the new functional groups to them: post-cross-linking functionalization, pre-cross-linking modification of CDs, or addition of the functionalizing agent simultaneously to the cross-linking step. The third generation contains, therefore, stimuli-response NSs whose behavior can be modified according to changes in the environment. Finally, the fourth generation includes molecularly imprinted nanosponges with high selectivity towards specific guest molecules. The synthesis of molecularly imprinted polymers (MIPs) is based on the incorporation of a template molecule In other cases, an additional substrate is not necessary. It is possible to synthesize the polymer and perform a post-functionalization process. Thus, polymers derived from methacrylate, such as poly(glycidyl-co-ethylene dimethacrylate) [45], can be synthesized from glycidyl methacrylate to obtain functionalizable solids. Different alternatives are possible to connect different modifiers, including click-chemistry reactions. The globular growth of the polymer allows the formation of large pores (macropores) that will also be dominant in the materials containing CDs (Figure 3b).
One-pot strategies are the most used for the incorporation of CDs. In some cases, materials similar to those prepared in two different stages are obtained. In those cases, the morphology and porosity of the final solid containing the CD are similar to that of the pure polymer. This occurs when acryl-type polymers are prepared in the presence of CD-acryloyl functional moieties acting as monomers. The result obtained is a macroporous polymer with bounded CD molecules.
Additionally, nanosponges are an extensive family of nanomaterials synthesized using one-pot methods. The term was first used in 1999 by Min Ma and De Quan Li [46] to refer to novel nanoporous polymers made up of CDs connected with diisocyanate linkers. However, the history of cross-linked insoluble CD polymers dates back to 1965, when Solms and Egli published the preparation of polymeric networks made up of cross-linked CDs with epichlorohydrin [47]. Now, the term nanosponge (NS) refers to a class of insoluble materials with distinctive nanometric porosity that can be synthesized using either organic or inorganic compounds. A recent review on the subject classifies nanosponges into four categories [48]. The first generation of nanosponges comprises urethane, carbonate, ether, and ester NSs synthesized by reacting CDs with a cross-linking agent. The addition of specific functionalities to the first-generation nanosponges allowed them to extend their field of application and gave rise to the second generation. In this sense, three strategies can be used to incorporate the new functional groups to them: post-cross-linking functionalization, pre-cross-linking modification of CDs, or addition of the functionalizing agent simultaneously to the cross-linking step. The third generation contains, therefore, stimuliresponse NSs whose behavior can be modified according to changes in the environment. Finally, the fourth generation includes molecularly imprinted nanosponges with high selectivity towards specific guest molecules. The synthesis of molecularly imprinted polymers (MIPs) is based on the incorporation of a template molecule during the polymerization process [49]. Contrary to what occurs in siliceous materials, where the functional groups (including CDs) are not an intrinsic part of the essential backbone of the support, these groups are essential in the case of nanosponges. Cyclodextrins are a fundamental part of the structure in addition to the new functionalities they provide the support. Taking into account the size of the CD monomers and the common linkers used, the resulting materials are normally in the range of micropores and small mesopores.
Covalent Organic Frameworks
Perhaps covalent organic frameworks (COFs) are one of the newest families of porous materials [50]. COFs represent an emerging class of crystalline solids entirely composed of light elements and connected by covalent bonds in two and three dimensions ( Figure 4a). These materials were first described in 2005 [51] and combine diverse interesting properties such as a high specific surface area and low framework density, homogeneous pore size distribution, and stable structures that give them special applicability in a wide range of fields. Pre-designable topologies and tunable pore sizes, usually in the range of micro and small mesopores, can be achieved by selecting the adequate experimental conditions during the synthesis process, including the nature and size of the linking units used [52,53]. To date, more than twenty different linkages have been described. Between them, boronic esters, triazines, CQC bond, and imines can be mainly mentioned. range of micro and small mesopores, can be achieved by selecting the adequate experimental conditions during the synthesis process, including the nature and size of the linking units used [52,53]. To date, more than twenty different linkages have been described. Between them, boronic esters, triazines, CQC bond, and imines can be mainly mentioned.
Although they bear certain similarities with nanosponges (the whole skeleton has organic nature and the pores are in the same size domain), they present important differences. COFs are crystalline, while polymeric nanosponges are unordered materials. Furthermore, while nanosponges necessarily require modified CDs for their preparation, in the case of COFs they are only an option. Recently, COFs containing CDs in their crystalline structure have been described. The CD molecules can be incorporated through onepot strategies, thus taking part in the crystalline COF skeleton (Figure 4c) [54], or through a post-functionalization or two-pot procedures ( Figure 4b) [55].
Metal-Organic Frameworks
Along with COFs, metal-organic frameworks (MOFs) also constitute an extensive family of porous materials, in which a great variety of metal ions or clusters take part together with organic ligands. The discovery and development of MOFs occurred in the 1990s thanks to several pioneering groups led by Robson, Moore, Yaghi, Kitagawa, and Ferey [56]. MOFs are nanoporous materials, also referred to as porous coordination poly- Although they bear certain similarities with nanosponges (the whole skeleton has organic nature and the pores are in the same size domain), they present important dif-ferences. COFs are crystalline, while polymeric nanosponges are unordered materials. Furthermore, while nanosponges necessarily require modified CDs for their preparation, in the case of COFs they are only an option. Recently, COFs containing CDs in their crystalline structure have been described. The CD molecules can be incorporated through one-pot strategies, thus taking part in the crystalline COF skeleton (Figure 4c) [54], or through a post-functionalization or two-pot procedures (Figure 4b) [55].
Metal-Organic Frameworks
Along with COFs, metal-organic frameworks (MOFs) also constitute an extensive family of porous materials, in which a great variety of metal ions or clusters take part together with organic ligands. The discovery and development of MOFs occurred in the 1990s thanks to several pioneering groups led by Robson, Moore, Yaghi, Kitagawa, and Ferey [56]. MOFs are nanoporous materials, also referred to as porous coordination polymers, that show one-, two-, or three-dimensional structures. Among the possible metallic species used in the structures, alkaline and alkaline earth metals, p-block elements, lanthanides, and actinides can be mentioned. Similarly, a variety of organic linkers such as carboxylate, phosphonate, sulfonate, pyridyl, imidazole, and azolate functional groups has been used for their obtaining. The number of coordination compounds that could be considered as MOFs is enormous, and some authors indicate that it is around one million [57]. Diversity turns out to be a label for MOFs. They can reach surface areas in the 1000 to 10,000 m 2 g −1 range, much higher than other porous materials.
To date, CDs have been included in the MOFs' structure based on alkaline or alkaline earth metals as inorganic counterparts through different synthetic strategies, such as hydrothermal or solvothermal methods, vapor diffusion, or microwave irradiation [58]. Regardless of the specific method used for their obtaining, the CD incorporation takes place during the MOF formation without additional functionalization treatments, which constitutes a great benefit.
Complex Nanocomposites
The use of nanoparticles as support is also a versatile strategy to enhance the active surface are where the CDs must be located. Moreover, in order to favor and ease the separation, the designed composites can incorporate magnetic nanoparticles, usually Fe 3 O 4 . There exist different well-established protocols for the isolation and stabilization of magnetite nanoparticles with fine control of the size and shape (usually spherical) [21]. However, these particles cannot be used for many applications without a protective layer due to their chemical reactivity. Then, it is necessary to cover the Fe 3 O 4 nanoparticles with more stable and less reactive materials, such as silica or polymers. The resulting core-shell nanoparticles [59] preserve the magnetic properties and can incorporate CD molecules in the external shell ( Figure 5). The silica shell can be massive or porous depending on the preparative conditions. In the case of polymeric shells, the previously mentioned preparative strategies for polymeric supports can be adapted for the CD anchoring, including the use of MIPs. more stable and less reactive materials, such as silica or polymers. The resulting core-shell nanoparticles [59] preserve the magnetic properties and can incorporate CD molecules in the external shell ( Figure 5). The silica shell can be massive or porous depending on the preparative conditions. In the case of polymeric shells, the previously mentioned preparative strategies for polymeric supports can be adapted for the CD anchoring, including the use of MIPs.
Deeping in: The Environmental Benefits of Using Cyclodextrins
An increase in the use of CDs as everyday commodities in separation sciences is evident for some years, when we have witnessed a revival in their interest through the progressive increase in the number of inventions related to cyclodextrin-based solid supports ( Figure 6). Chemical aspects such as their structures or their intercalation mode were already studied some years ago. Their properties have been extensively used not only in the pharmaceutical industry, but also in the food one to improve the availability of poorly aqueous soluble or biodegradable compounds. However, the structural aspects of CDs that enable the improvement of separations and the enhancement of sensitivity and accuracy in analytical methods have been currently more deeply discussed [60,61]. The increasing number of publications on the complexation of pollutants and development or improvement of remediation technologies using CDs shows that there is a significant interest in the applications of CDs for environmental depollution [1]. Thus, the integration of cyclodextrin molecules and their chemical derivatives in supporting structures is being more widely studied each time. This includes the optimization of the preparation, the characterization, and the finding of their potential applications as the most important issues of the research reported [36].
Deeping in: The Environmental Benefits of Using Cyclodextrins
An increase in the use of CDs as everyday commodities in separation sciences is evident for some years, when we have witnessed a revival in their interest through the progressive increase in the number of inventions related to cyclodextrin-based solid supports ( Figure 6). Chemical aspects such as their structures or their intercalation mode were already studied some years ago. Their properties have been extensively used not only in the pharmaceutical industry, but also in the food one to improve the availability of poorly aqueous soluble or biodegradable compounds. However, the structural aspects of CDs that enable the improvement of separations and the enhancement of sensitivity and accuracy in analytical methods have been currently more deeply discussed [60,61]. The increasing number of publications on the complexation of pollutants and development or improvement of remediation technologies using CDs shows that there is a significant interest in the applications of CDs for environmental depollution [1]. Thus, the integration of cyclodextrin molecules and their chemical derivatives in supporting structures is being more widely studied each time. This includes the optimization of the preparation, the characterization, and the finding of their potential applications as the most important issues of the research reported [36].
As has been glimpsed previously, the synthesis of the cyclodextrin-containing materials can be carried out mainly in two ways: on the one hand, by chemical bonding through grafting or coating reactions using previously functionalized CDs, or on the other hand, by inclusion through sol-gel or self-assembly processes, hereafter referred to as cyclodextrin-hybrid materials, by using either native or previously modified cyclodextrins.
It has been mentioned that cyclodextrins present certain limitations for their application as individual sorbents [31] or as a part of the mentioned hybrid materials. Their solubility in water makes their losses during the caption procedure in aqueous samples significant, which is reflected in a decrease in the repeatability of analytical methods and the loading capacity of the tested solid phases. As an advantage, grafted or coated materials with CDs offer improved accessibility to form the directing inclusion complexes, since the molecules are on the external surface of the material. However, the irregular distribution of CDs and the frequently low cyclodextrin loading of these types of phases can limit the adsorption capacities reached [36].
Regarding the nature of the backup materials, both organic-and inorganic-origin supports have been used thus far. Among the polymeric-natured ones, varied solid phases with attached CDs, as well as cross-linked cyclodextrins by using polymeric reagents as couplers can be found [62,63]. These materials have been used in quite heterogeneous contexts due to their insoluble nature. Compared to the use of polymeric materials such as polyurethane or dimethacrylate, other inorganic ones such as those derived from silica have some virtues. Their physical robustness, their enhanced chemical inertness, or their large surface area [64] must be highlighted. In fact, the use of silica as a support for CDs has spread due to the variety of siliceous structures that can be obtained through the control of the reactions through their low reactivity and their ease of incorporating new groups [65]. As has been glimpsed previously, the synthesis of the cyclodextrin-containing materials can be carried out mainly in two ways: on the one hand, by chemical bonding through grafting or coating reactions using previously functionalized CDs, or on the other hand, by inclusion through sol-gel or self-assembly processes, hereafter referred to as cyclodextrin-hybrid materials, by using either native or previously modified cyclodextrins.
It has been mentioned that cyclodextrins present certain limitations for their application as individual sorbents [31] or as a part of the mentioned hybrid materials. Their solubility in water makes their losses during the caption procedure in aqueous samples significant, which is reflected in a decrease in the repeatability of analytical methods and the loading capacity of the tested solid phases. As an advantage, grafted or coated materials with CDs offer improved accessibility to form the directing inclusion complexes, since the molecules are on the external surface of the material. However, the irregular distribution of CDs and the frequently low cyclodextrin loading of these types of phases can limit the adsorption capacities reached [36].
Regarding the nature of the backup materials, both organic-and inorganic-origin supports have been used thus far. Among the polymeric-natured ones, varied solid phases with attached CDs, as well as cross-linked cyclodextrins by using polymeric reagents as couplers can be found [62,63]. These materials have been used in quite heterogeneous contexts due to their insoluble nature. Compared to the use of polymeric materials such as polyurethane or dimethacrylate, other inorganic ones such as those derived from silica All the supporting materials reported have both advantages and disadvantages, which makes an in-depth analysis of their obtaining and application convenient. Concretely, a wide variety of supports used for the inclusion of cyclodextrins can be mainly divided into silica-based materials, polymeric-based materials, and nanomaterials and metallic nanoparticles such as carbon-based materials and phases with magnetic properties, among others.
Cyclodextrin-Silica Materials
The use of silica as support offers a wide perspective of functionalities, which can be enhanced when cyclodextrin units are added to the structure of the solid phase. There exist several studies on the synthesis, characterization, and applications of cyclodextrin-based silica materials in separation technologies [36] (Table 1).
First, cyclodextrin-silica materials can be divided into those using commercial silica during their synthesis and those whose source of silica is not commercial, but it is obtained from the co-reaction of silica-based substances such as TEOS.
Some examples use minimally modified commercial silica. These include the incorporation of vinyl groups on the surface of silica gel or the immobilization of polymerizable derivatives of CD on silica gel [66], among others. The synthesis and application for analytical purposes of this type of phases had greater success in the first decade of this century. For example, Fan et al. used commercial silica gel (20-30 µm) to bound β-CD and then used it as a selective SPE sorbent for extracting 4-nitrophenol and 2,4-nitrophenol from lake-coming water samples [67,68]. Moreover, commercial fused silica fibers subsequently coated with β-CD were also reported to extract and quantify phenolic compounds from water samples through SPME [69]. Faraji et al. quantified phenolic compounds in water samples with the help of β-cyclodextrin-bonded silica synthesized from purchased irregular silica gel. They optimized the extraction of these compounds, first using SPE [70], then SBSE [71], and finally LPME [72], while maintaining the synthesis procedure and thus the basic properties of the adsorbent used. More recently, a silica adsorbent (40-63 µm) containing β-CD was developed and used for the separation and purification of epigallocatechin gallate from green tea extracts [73]. In this case, the batch adsorption experiments demonstrated that the CD-bonded silica adsorbent possessed enhanced selectivity towards this compound compared to other tea catechins and caffeine.
Other materials use non-commercial silica and are a feasible alternative to the previous ones. Sawicki et al. obtained a mesoporous silica solid phase with chemically attached CD in a two-step process, and then applied it in remediation, for cleaning water from pesticides through their adsorption in the developed material [74]. As it is known, remains of pesticides can reach drinking or superficial waters affecting human health, since they show carcinogenic and mutagenic effects. That is the reason why they must be controlled and monitored according to water guidelines [75]. Mauri et al. presented a one-pot synthetic process for the obtaining of silica-cyclodextrin xerogels and applied them to air sampling of VOCs [33,35]. Additionally, β-CD functionalized silica-coated magnetic graphene oxide materials (Fe 3 O 4 @SiO 2 @GO-β-CD) using TEOS as a silica source were used for the entrapment of tetracycline, oxytetracycline, and doxycycline coming from bovine milk samples [76]. These compounds are broad-spectrum antibiotics widely used in human and veterinary medicine. The environmental problem they represent resides in their presence in animal-based food, which poses a serious threat to consumer health (allergic reactions, chronic toxicity, and antimicrobial resistance). In this case, the use of TEOS allowed for greater flexibility when designing the material, which in turn can supply a better extraction performance.
Another possible classification of cyclodextrin-silica materials is by paying attention to their physical parameters, such as the pore size or the order that the supporting structure presents. Between them, non-ordered hybrid materials and mesoporous silica ones can be mentioned.
Due to the large internal surface area, high porosity, and high amount of silanol groups in the mesoporous materials, they have attracted considerable attention for applications in catalysis, filtration and separation, adsorption, and storage of gases. A large number of studies have been carried out concerning the use of surfactants as a template to obtain ordered silica structures such as the synthesis of mesoporous oxides through the called atrane route [77,78]. Cyclodextrin molecules are in this case linked into the well-constructed cavities of the materials and have demonstrated being effective as adsorbents in both aqueous and gaseous media [79]. In these cases, the X-ray diffraction measurements are consistent with the preservation of an ordered mesophase, as expected [80,81]. Xu et al. applied imprinting technology to mesoporous silica materials by using SBA-15 as support and linked β-CD to it through molecule templated during the synthetic procedure. Specifically, they used cholesterol as a template [82] and observed that the binding amount of these molecules was enhanced in chromatography and for SPE in water. In addition, 2,4-dichlorophenoxyacetic (2,4-D) served as a template in silica-cyclodextrin mesoporous hybrid materials [83]. In this case, the molecularly imprinted material was prepared by using 2,4-D as the template molecule, alkyne-modified β-CD and propargyl amine as the combinatorial multifunctional monomers, and SBA-15 as the supporter. The results of the equilibrium binding experiments and selective tests demonstrated that the material had binding affinity and specificity for a group of analytes with similar size and shape to those of the template, and the binding kinetic experiments showed an enhancement of the mass transfer rate through the imprinting approach described. Moreover, SPE recoveries for this compound in aqueous samples were around 80%. Figure 7 shows a schematic representation of the synthesis of the molecularly imprinted material with CD and mesoporous silica described.
addition, 2,4-dichlorophenoxyacetic (2,4-D) served as a template in silica-cyclodextrin mesoporous hybrid materials [83]. In this case, the molecularly imprinted material was prepared by using 2,4-D as the template molecule, alkyne-modified β-CD and propargyl amine as the combinatorial multifunctional monomers, and SBA-15 as the supporter. The results of the equilibrium binding experiments and selective tests demonstrated that the material had binding affinity and specificity for a group of analytes with similar size and shape to those of the template, and the binding kinetic experiments showed an enhancement of the mass transfer rate through the imprinting approach described. Moreover, SPE recoveries for this compound in aqueous samples were around 80%. Figure 7 shows a schematic representation of the synthesis of the molecularly imprinted material with CD and mesoporous silica described. However, there exist certainly few studies in which the mesoporous order has been combined with the presence of CDs and their analytical use has been verified. Despite the interest that the mesoporous solids have aroused, there exists some controversy regarding the virtues associated with their order. In short, an ordered mesoporous structure does not present great advantages over other types of materials in some specific applications However, there exist certainly few studies in which the mesoporous order has been combined with the presence of CDs and their analytical use has been verified. Despite the interest that the mesoporous solids have aroused, there exists some controversy regarding the virtues associated with their order. In short, an ordered mesoporous structure does not present great advantages over other types of materials in some specific applications such as catalysis, remediation, or analytical determination. For this reason, a growing interest in preparative alternatives for porous materials in the absence of surfactants, which also implies additional costs, has also recently been observed [84]. In some cases, it is necessary to go back to classical sol-gel synthesis ideas, which made it possible to prepare porous materials such as xerogels and aerogels in the absence of surfactants. In fact, a variety of synthetic strategies with analytical applications has been described to obtain silica gels with no structural order. The versatility of sol-gel chemistry allows the synthesis of a great variety of siliceous and organosiliceous materials with controlled structure, composition, morphology, and porosity, generally with simple procedures and at low temperatures. This type of silica-based sol-gel derivatives has been given a priority place in several research areas, since it is greatly versatile in controlling the porosity, the hydrophobic-hydrophilic balance, and its reactivity. Fan et al. [85] used sol-gel chemistry to link CD to commercial capillary silica and then used the developed phase for in-tube SPME of non-steroidal anti-inflammatory drugs in urine samples. A year later, Zhou et al. proposed the sol-gel technology to obtain a novel fiber from hydroxyl-terminated silicone oil coated with CD. This fiber was used to extract ephedrine and methamphetamine in human urine [86] and polybrominated diphenyl ethers in soil [87] by carrying it out through headspace SPME. Zhang et al. described the development of β-CD-modified silica for SPE of methyl jasmonate in aqueous and plant samples [88], and Chen et al. reported the analysis of forchlorfenuron and thidiazuron in fruits and vegetable by surfaceenhanced Raman spectroscopy after selective SPE with 3,5-dimethyl phenyl carbamoylated β-CD bonded silica gel [89]. Moreover, a study on different hydrophobic-hydrophilic natures of xerogels and aerogels to understand the dominant adsorption interactions of phenolic compounds with silica-based adsorbents was carried out. The functionalization of aerogels with cyclodextrin was compared with the previously cited solid phases [90]. As the authors describe, the sol-gel synthesis followed a one-step catalyzed procedure and the subsequent drying of the gels was accomplished in this case by supercritical fluid drying and extraction with CO 2 to obtain aerogels, and evaporative drying to produce xerogels. In recent times, Mauri et al. obtained silica-based xerogels with covalently attached βand γ-CD to isolate PAHs and phenolic compounds from water [91] and aroma incense cones [26], and polychlorinated biphenyls (PCBs) in environmental water samples [92]. PAHs and PCBs are ubiquitous environmental pollutants that tend to be very persistent and bioaccumulate in different ecosystems. For this reason, their monitoring in environmental matrices seems to be a good share of global ecological and health preservation. Also newly, Chen et al. obtained an acryloyl β-CD-silica hybrid monolithic column by applying a sol-gel polymerization method in their synthesis. These materials have been demonstrated to be useful for pipette-tip SPE of parathion and fenthion [93]. The determination of carbendazim and carbaryl in leafy vegetables was also carried out with the same material through SPME [94], with limits of detection of 1.0 µg kg −1 for carbendazim and 1.5 µg kg −1 for carbaryl, respectively. In addition, recoveries ranged from 93% to 110%. Figure 8 shows the synthesis procedure of the materials mentioned. Finally, other approaches have been also reported regarding the improvement of the analytical performance in sorptive supports based on silica in presence of cyclodextrin molecules. As an example, attapulgite modified with glycidoxypropyltrimethoxysilane and modified β-CD showed to be effective to adsorb fluoroquinolones from honey through dispersive SPE [95] with high extraction efficiency and selectivity. At the same time, Gao et al. used functionalized silica gel modified with cyclodextrin and vinyl groups to obtain surface molecularly imprinted materials. These were used in the selective determination of (-)-epigallocatechin gallate by applying a SPE methodology in toothpaste samples [96]. The work reported a promising approach for the purification of complex samples. Additionally, functionalized β-CD was grafted with silica gel in the presence of salicylamide for the adsorption of UO2 2+ . Uranium plays an important role in the modern energy industry. For this reason, large amounts of wastewater containing uranium have been discharged into the environment, which has resulted in widespread environmental contamination and can contribute to severe damage to health. In this study, the developed material was demonstrated to be effective in the presence of interfering ions [97]. Finally, other approaches have been also reported regarding the improvement of the analytical performance in sorptive supports based on silica in presence of cyclodextrin molecules. As an example, attapulgite modified with glycidoxypropyltrimethoxysilane and modified β-CD showed to be effective to adsorb fluoroquinolones from honey through dispersive SPE [95] with high extraction efficiency and selectivity. At the same time, Gao et al. used functionalized silica gel modified with cyclodextrin and vinyl groups to obtain surface molecularly imprinted materials. These were used in the selective determination of (-)-epigallocatechin gallate by applying a SPE methodology in toothpaste samples [96]. The work reported a promising approach for the purification of complex samples. Additionally, functionalized β-CD was grafted with silica gel in the presence of salicylamide for the adsorption of UO 2 2+ . Uranium plays an important role in the modern energy industry. For this reason, large amounts of wastewater containing uranium have been discharged into the environment, which has resulted in widespread environmental contamination and can contribute to severe damage to health. In this study, the developed material was demonstrated to be effective in the presence of interfering ions [97]. Air [26] 2018 Acryloyl β-CD-silica hybrid monolithic columns
Organic-Based Supports with Cyclodextrin Units
Polymers have been used as drug delivery systems, although in more recent times many of them have found application in SPE and other extraction methods with analytical purposes [31] (Table 2).
On the one hand, the polymeric-natured solid phases with longer synthetic procedures can be mentioned. In this case, cyclodextrin molecules, which should be previously functionalized, are anchored to an already existing polymer-based support (frequently fibers or columns, but also batch materials) using an appropriate binding agent. A poly(dimethylsiloxane)/β-cyclodextrin coating was prepared in the form of a membrane to extract phenolic compounds and PAHs from water [98] and in the form of a fiber to reversibly adsorb phenolic compounds and amines from aqueous samples [99] by SPME. The coating demonstrated to have a porous structure that provided high surface areas and allowed for high extraction efficiency in both cases, together with a low cost of preparation. Another example is the preparation of an acryloyl β-CD polymeric monolithic column for the SPEM of carbofuran and carbaryl in rice. These pesticides have manifested to be hazardous for humans and animals due to their accumulation and potentially toxic effects on living organisms, which involves food safety as a part of an environmental problem. An advantage of this work is revealed by its "one-step" polymerization method [100]. Recently, Liu et al. reported a SPME procedure with cyclodextrin molecularly imprinted fibers of polymeric nature for the selective recognition of polychlorophenols in water [101]. Additionally, a poly(glycidyl-co-ethylene dimethacrylate) hybrid modified with β-CD was used as a sorbent for the SPE of phenols [63]. Although the results obtained were satisfactory from the analytical point of view, the two-step synthesis was still improvable.
On the other hand, some works describe cross-linked cyclodextrin units in the form of polymers for the adsorption of a diversity of analytes. Epichlorohydrin has been frequently used as a linker. For example, Yu et al. described a β-cyclodextrin epichlorohydrin copolymer as a SPE adsorbent for aromatic compounds in water [102], and Zhu et al. used a β-CD cross-linked polymer as SPE material for the separation of trace Cu 2+ [103] and Co 2+ [104]. As it is known, metal contamination in the water stream from industries is a major problem, since the effects of acute poisoning in humans and plants are very serious, potentially leading to liver damage with prolonged exposure. For this reason, the determination of trace metals in the environment constitutes a contribution to the field. Moreover, cyclodextrin-cross-linked copolymers were examined in terms of the sorption towards p-nitrophenol and methyl chloride, two model agrochemical pollutants [30]. Other different linkers reported in the literature are bifunctional isocyanate linkers [105] and 1,4-phenylenediisocyanate [106], both used to obtain cyclodextrin-based polymeric materials as supramolecular sorbents for environmental remediation in aqueous samples. SPE of pollutants such as diphenylphthalate, phenolic compounds, glycyrrhizic acid, and pyrethroids was achieved by using molecularly imprinted polymers of allyl-β-cyclodextrin and methacrylic acid [107], β-CD functionalized ionic liquid polymers [108], molecularly imprinted polymers with bismethacryloyl-β-cyclodextrin and methacrylic acid as double functional monomers [109], and a namely hyperbranched polymer functionalized with cyclodextrin [110]. Ibuprofen is a drug of environmental concern, since it has been found that pharmaceutical substances are commonly found in the environment and cause negative impacts on aquatic life. In this sense, Shang et al. developed an immobilized poly(vinyl alcohol)/cyclodextrin eco-adsorbent, which has also been described for the removal of ibuprofen from pharmaceutical sewage [111] in the form of a transparent and easy-handle film, with entrapment efficiencies of around 90%.
A group of special interest inside the use of cross-linking agents is cyclodextrin-based nanosponges. They can comprise inorganic and organic materials and, subsequently, are not only limited to polymeric-natured solid phases, although they constitute a majority inside the group. Nanosponges are insoluble materials that, despite being micro-or macrosized objects, have been classified as nanomaterials by virtue of their internal cavities, pores, or voids in the nanometer range [31]. A good illustration of this type of solid phases can be found in the β-cyclodextrin-polyurethane polymer used as SPE material for the analysis of carcinogenic aromatic amines in water described by Bhaskar et al. [112], or in the β-cyclodextrin polymers for the extraction of steroidal compounds from urine [113] and BTEX from aqueous solutions [114], also based on the use of epichlorohydrin as cross-linker. Important is to mention the work of Alsbaiee et al. [29], where a porous β-cyclodextrin polymeric network for remediation of micropollutants in environmental water samples was described. Specifically, β-CD units were cross-linked with rigid aromatic groups, providing thus a high surface area, which supposed an advantage in comparison to other nanosponge-type materials reported. The mesoporous polymer of β-CD showed to be able to sequester a variety of organic micropollutants with adsorption rate constants greater than those of non-porous β-CD adsorbents. Moreover, the reusability of the material permitted the rapid removal of a complex mixture of organic micropollutants at environmentally relevant concentrations for several times. This material gained such a lot of attention that it was described afterward for different applications such as the dispersive SPE of quinolones from water [115] or the SPE of bisphenols in water and orange juice [116,117].
Finally, a new family of organic-based supports, COFs, has been combined with CDs to improve their properties. However, the environmental applications of these novel materials remain mostly unexplored, and the works reported in this sense are limited to date. A β-CD covalent organic framework has been described as a chiral stationary phase for the separation of antibiotics [118] as a proof of concept. The interest of this work resides in the proven capability of cyclodextrins in COFs to encapsulate analytes of environmental interest for separation purposes. In this sense, the described material can be applied in the future to the extraction of the same trace pollutants from complex environmental matrices. Additionally, Yang et al. [119] have reported a β-CD-AuNPs-functionalized COF as a magnetic sorbent for the SPE of sulfonamides, reaching limits of detection in the range of 0.8-1.6 µg kg −1 and recoveries from 79% to 112%. Table 2. An overview of the reported studies on the use of organic-based supports with cyclodextrin for the adsorption of environmentally concerning compounds.
Year
Material Analytes Sorption Technique Matrix Ref.
Nanomaterials and Nanoparticles Combined with Cyclodextrin
Nanomaterials and nanoparticles present some advantages in comparison with supports based on micro-sized materials. In general, they present a superior extraction capability and selectivity due to a higher surface-area-to-volume ratio and easily modifiable surface functionality. Among them, magnetic nanoparticles (Fe 3 O 4 , Fe 2 O 3 , etc.), metallic nanoparticles (Al 2 O 3 , MnO, etc.), or carbonaceous nanomaterials (graphene, carbon nanoparticles, etc.) are the main focus of a great number of the existing studies [120,121] (Table 3). Depending on their dimensionality, nanomaterials are classified in zero-dimensional (nanoparticles), one-dimensional (nanotubes), and two-dimensional (nanowalls, nanodiscs, etc.). Numerous nanomaterials have been combined with cyclodextrins to obtain composites with improved sorbent properties for analytical uses due to the benefits there can be obtained. For example, an enhanced extraction capability and selectivity are attributable to the heterogeneity of the composites and so to the different interactions carried out. Moreover, the influence of CDs is essential when the analyte molecule size plays an important role [31].
One group to be mentioned is the nanomaterials or nanoparticles combining magnetic properties with the advantages of host-guest chemistry. In this sense, the liquid-solid separation is facilitated due to the magnetism of the material used, for example in the use of a magnetic SPE procedure (MSPE). Ghosh et al. described magnetic Fe 3 O 4 silica-coated nanoparticles whose surface grafted with carboxymethyl-β-cyclodextrin via carbodiimide activation [122]. Taking profit of the enantiomeric properties of CDs, these nanoparticles were used to adsorb chiral aromatic amino acid enantiomers as a proof of concept of the adsorption advances they can provide. A similar procedure based on cyclodextrin functionalization of magnetic nanoparticles with the participation of silica was described with remediation purposes for the removal of carcinogenic azo dyes from water [123] with favorable results regarding the sorption ability reached, which was reported around 98-99%. A different remediation achievement was carried out by Badruddoza et al. for the selective removal of Pb 2+ , Cd 2+ , and Ni 2+ from water by substituting the silica part with polymer participation in the synthesis procedure [124]. Specifically, epichlorohydrincross-linked carboxymethyl-β-CD was used to coat magnetic iron nanoparticles, and the adsorption process was found to be dependent on pH, ionic strength, and temperature. From 2014 onwards, the studies describing the analytical applications of these magnetic-CD approaches are more frequent each time. In this way, there can be mentioned the magnetic Fe 3 O 4 nanoparticles previously coated with silica as support of cyclodextrin molecules for the SPE of 5-hydroxy-3-indole acid from urine [125]. Carboxymethyl-hydroxypropyl-β-CD and carboxymethyl-β-CD were also used to modify magnetite nanoparticles with the help of polymer modification [126] and amino groups [127] for the adsorption of rutin from plants and PCBs from the soil. Karimnezhaz et al. reported the use of magnetic chitosan nanoparticles grafted with β-CD for the dispersive SPE of Zn 2+ and Co 2+ from water followed by quantitation by adsorption spectrometry [128,129]. In both cases, the loading capacity of the sorbent was demonstrated to be quite good. Over time, the presence of silica as a facilitator for the anchoring of CD can be seen in the work of Wang et al. [130], who described a new approach of Fe 3 O 4 @β-CD superparamagnetic composites for the hostguest adsorption of PCBs, and Chen et al. [131], who functionalized a graphene oxide (GO) network containing linked CD with the advantages of silica-modified magnetic nanoparticles to obtain Fe 3 O 4 @SiO 2 @GO/β-CD for the dispersive SPE of plant growth regulators through the formation of inclusion complexes with CD in plant residues. In this case, the merits of superparamagnetism were combined with antioxidation, high surface area, and high supramolecular recognition in an environmentally friendly methodology. The work has been recently improved with the same end [132]. The variety of works is so large that a wide selection of nanoparticles and nanomaterials structures for the adsorption of different types of analytes has been reported. Zhang et al. carried out the separation of erythromycin-A from wastewater with imprinted magnetic nanoparticles containing β-CD [133], and Liu et al. reported the advantages of using an ionic liquid-coated CD-functionalized magnetic core dendrimer for the dispersive SPME of pyrethroids in juice samples [134]. The importance of this achievement is found in the fact that pyrethroids residues are an important source of pollution in agriculture and a potential public health threat. Indeed, it has been proved that pyrethroids intoxication can alter the nerves' function. Additionally, the combination of the advantages of polymer-natured parts and silica for the obtaining of a namely magnetic porous cyclodextrin polymer (Fe 3 O 4 @SiO 2 @P-CDP) was applied for the magnetic SPE extraction of microcystins from environmental water samples, with limits of detection in the ppt order and good extraction efficiencies [135]. Microcystins have raised a concern, making their detection at trace levels in drinking water necessary, because they are a family of monocyclic heptapeptide toxins produced by cyanobacteria. In this sense, they can produce acute poisoning and promote cancer through chronic exposure. Recently, Yazdanpanah et al. have reported the use of cyclodextrin onto iron oxide/silica core-shell nanoparticles obtained through a polydopamine-assisted synthesis procedure for the magnetic SPE of aromatic molecules from environmental samples [136], and Moradi et al. have studied the simultaneous magnetic SPE of malachite green and crystal violet from aqueous samples with a poly(β-CD-ester) functionalized silica-coated magnetic nanoparticles [137], reporting recoveries in the range of 92-100%. Additionally, a MOF with functionalized β-CD, which was prepared by creating metal-organic framework layers on the surface of a Fe 3 O 4 -graphene oxide nanocomposite and bonding them with β-CD molecules, was applied for the efficient extraction and determination of prochloraz and triazole fungicides in vegetable samples [138]. In this case, the functionality granted by the MOF is mainly related to the magnetic activity of the solid phase for an easier separation, but it is not due to its structural properties as porous support.
Making the difference from the rest of the reported works based on MOFs, the one presented by Wang et al. described an efficient γ-CD-MOF-K + for the adsorption of formaldehyde molecules from the air with high selectivity, speed, and capacity at 293 K and 1 atm [139]. The excellent properties showed by the material are due both to its porous structure and to a synergistic effect of hydrogen bonding and host-guest interactions. Since formaldehyde is a major indoor pollutant due to its use in adhesives for construction and furnishing and plays, therefore, a very important role in human health, the environmental interest of this research is completely justified.
Other nanomaterials in the literature with the carbon intervention in their structures as a remarkable point have been informed. Song et al. described the application of a hollow fiber-based on carbon nanotubes modified with β-CD for the efficient and environmentally friendly SPME of plant hormones to overcome the lack of selectivity of hollow fibers [140]. Moreover, a novel U(VI)-imprinted graphitic carbon nitride composite for the selective and efficient removal of U(VI) from seawater was reported to break with the side effects of the future long-term development of nuclear energy in the world [141]. The adsorption capacity was calculated to be 860 mg g −1 at 25 • C, and the selectivity factors were high enough to affirm the high selectivity of the material for the purpose. Finally, Tejerzi et al. have newly described a facile one-pot green synthesis of a porous graphene nanohybrid decorated with cyclodextrin units as a highly efficient adsorbent for extraction aflatoxins from maize and animal feeds [142] through SPE. In this case, the large specific surface area of the porous graphene and high recognition and enrichment capability of CD moieties helped the nanohybrid to be an effective adsorbent for integrated sample clean-up, extraction, and pre-concentration of aflatoxins, which are a current issue as secondary metabolites in a wide variety of agricultural products and plantations [143,144], being a risk to both human and animal health. The synthesis procedure of the graphene-cyclodextrin nanohybrid can be observed in Figure 9. and furnishing and plays, therefore, a very important role in human health, the environmental interest of this research is completely justified.
Other nanomaterials in the literature with the carbon intervention in their structures as a remarkable point have been informed. Song et al. described the application of a hollow fiber-based on carbon nanotubes modified with β-CD for the efficient and environmentally friendly SPME of plant hormones to overcome the lack of selectivity of hollow fibers [140]. Moreover, a novel U(VI)-imprinted graphitic carbon nitride composite for the selective and efficient removal of U(VI) from seawater was reported to break with the side effects of the future long-term development of nuclear energy in the world [141]. The adsorption capacity was calculated to be 860 mg g −1 at 25 °C, and the selectivity factors were high enough to affirm the high selectivity of the material for the purpose. Finally, Tejerzi et al. have newly described a facile one-pot green synthesis of a porous graphene nanohybrid decorated with cyclodextrin units as a highly efficient adsorbent for extraction aflatoxins from maize and animal feeds [142] through SPE. In this case, the large specific surface area of the porous graphene and high recognition and enrichment capability of CD moieties helped the nanohybrid to be an effective adsorbent for integrated sample clean-up, extraction, and pre-concentration of aflatoxins, which are a current issue as secondary metabolites in a wide variety of agricultural products and plantations [143,144], being a risk to both human and animal health. The synthesis procedure of the graphenecyclodextrin nanohybrid can be observed in Figure 9. Figure 9. Schematic representation of the experimental procedure to prepare β-CDgraphene nanohybrid and its interactions with an aflatoxin molecule. Reproduced from [142], with permission from Copyright 2020 Elsevier.
Critical Analysis: Cyclodextrin-Containing Solid Phases
Once the studies with the greatest impact on the use of cyclodextrins for the adsorption of pollutants have been analyzed in detail, a critical comparison between them is possible. Thus, tendencies, benefits, and disadvantages can be analyzed with respect to other materials, and some general conclusions can be extracted. Overall, the determining factors for choosing one or another type of support, the type of cyclodextrin used, the way CDs will be found in the support (that is, simply included or chemically anchored to it), as well as their accessibility, are very varied.
First, it can be highlighted that no significant differences are observed regarding the use of one or another type of material for certain analytes or sample types. Instead, an important difference between one and the other can be, for example, the price. Thus, the type of material we are looking for must be selected based on the application we want to give it. For example, more affordable solid phases can be chosen for remediation actions, since what really matters in this case is not specifically the structure or the functionalities of the material (e.g., high porosity, CD anchoring to the support), but mainly that it is capable of performing its function, that is, the environmental cleaning, efficiently. Moreover, it has been observed that the chemical anchoring of the CD to the support is better for certain types of samples, but not decisive. In aqueous samples, the significant solubility [31] of CDs causes them to be lost little by little during adsorption processes, so the solid phase will lose little by little its expected capabilities and benefits offered by the presence of CD units in it. Oppositely, in air samples, there exist different examples of hybrid materials where the CD is not anchored, but the material developed also shows good functionality [33,35]. Indeed, similar experiments were carried out with a support containing anchored CD [26] with similar recovery results. Thus, for air samples, the advantages between anchoring the CD (which implies besides an increase in the price and in the time invested) and not anchoring it are hard to find, since the CD losses by lixiviation are not as clear as with aqueous samples. In short, it can be emphasized that the choice of the complexity of the material based on the application that is going to be given is a necessary previous step when developing a new environmental application.
In addition, it has been observed that the materials containing CD units are very versatile platforms as long as cyclodextrins, which have the leading role in adsorption processes, are accessible to the compounds we want to trap. Some studies have shown the importance of the size of the CD cavity to encapsulate analytes [26], since it can influence not only the capability of the CD to host the pollutant molecules, depending on the size of the last ones, but also in the porosity of the supporting material. However, it is remarkable that the more commonly used CD is β-CD, probably due to its intermediate size, as well as its cheaper price, which makes it the most flexible of the three native CDs. On the one hand, it may be that the pollutant does not comfortably fit in the CD cavity if its molecules are too large, but it may also be that the CD cavity is too big for our analyte, being the directing apolar interactions with it more fragile. Therefore, the retention would not occur so strongly. On the other hand, the porosity of the material influences the ease with which the analytes diffuse through the support.
In the case of porous silica, different strategies can be proposed to favor the accessibility of the analytes to the active centers by modulating both the size of the pores and their shape and organization. The cage-like pores that xerogels usually present have to guarantee a pore window with enough size for the passage of pollutants. This aspect can be controlled with the final silica preparative method. For example, in ordered mesoporous materials, it is advisable to avoid mesoporous blocking by the anchored CD. Materials with larger pores, such as SBA-15 silica, may be more accessible than typical MCM-41 materials [83]. On the other hand, interconnected mesoporous systems (either ordered or disordered) such as MCM-48 may have advantages over one-dimensional and non-interconnected pores [41][42][43]. Hierarchical porous materials can provide certain accessibility advantages over unimodal pore systems [145]. An example of hierarchical systems is bimodal silicas (meso and macroporous of the UVM-7 type [146,147]) formed by aggregation of mesoporous nanoparticles. These combine the typical porosity of MCM-41 with excellent capabilities for detection and pre-concentration of several pollutants. For these reasons, bimodal UVM-7-type silicas may represent a field of great interest in combination with CD for its application in environmental analysis and can constitute a new and attention-grabbing research field.
In the case of polymeric supports, larger CDs can also lead to a certain clogging of the pores, which prevents the accessibility of the pollutants of interest to the CD units, and the hydration of the material may be necessary. It is the case of some types of the reported nanosponges, being their effectivity is enhanced when they are applied in combination with aqueous samples. Then, the use of rigid connectors for the nanosponges [29] should be valued, since they allow greater accessibility of the CDs in the support with an adaptable porous system to different types of environmental samples. In other words, while for the analysis or remediation of air samples the use of materials with anchored CD units is not so determining, the selection of the base porous system is important. In general, there is greater accessibility in open or larger pore systems of interconnected ones. Additionally, those with cubic-shaped porosity tend to give better adsorption results than those with a hexagonal shape.
There still exist novel porous systems showing diverse virtues in terms of structure and porosity, but whose combination with CD and application for reversible adsorption purposes remains relatively unexplored. For example, as far as we know, only one example of MOF containing CD in which its porous crystalline structure is used to adsorb pollutants [139] has been described in the literature. A family of nanomaterials as extensive and versatile as MOFs surely offers multiple opportunities for the design of concentration and remediation systems for analytes based on the presence of CD in their porous structures.
Conclusions
A variety of materials, nanomaterials, and nanoparticles with cyclodextrin units on their structures has shown great potential in analytical chemistry and remediation actions in the last decades. In this review, an outlook on the extensive use of different types of cyclodextrins for the preparation of composite materials for food, environmental, and bioanalytical applications has been shown. The last section of the review summarizes the main types of CD-based sorbents while highlighting the main uses and advantages offered in each case. However, an accurate division of the reported solid phases is difficult to carry out, since in many cases the advantages offered by different types of support sorbents are used in a combined way.
As mentioned, cyclodextrins can provide a wide range of advantages in separation techniques due to their ability to form host-guest complexes with appropriate compounds. They are oligosaccharides, so obtaining native CDs is easy and environmentally friendly. Additionally, their capacity to be functionalized through their external hydroxyl groups makes their applications expand every time, mainly intending to bind them to solid supports that significantly reduce or eliminate their high solubility in water, which is an important drawback reported, expanding at the time their applications.
This work reviews promising alternatives to conventional commercial materials usually used for the objectives described. However, additional efforts should be aimed in the future at translating the achievements reported into practical environmental applications, contributing in this sense further to the environmental benefits of nanotechnology. For this reason, future studies on the development of new CD-based reversible adsorbents for remediation and quantitation in analytical methods may focus in the following areas: (1) greater sophistication of CD-based materials with more flexible and efficient synthetic methods of supporting materials, nanomaterials, and nanoparticles as carriers for cyclodextrins; (2) improved distribution of CD units along the solid support used to increase the accessibility to the analytes to be captures; (2) development of faster, easier, more affordable, greener, and smarter separation techniques with better abilities for the isolation of the compounds of interest thanks to a progress in the structure of the adsorbents used; and (4) use of more efficient analytical methodologies with better analytical parameters obtained, including higher extraction recoveries, selectivity, and sensitivity, especially in the cases of methods regarding the quantitation of emerging pollutants in complex samples at trace level.
While major progress has been accomplished in the creation of new opportunities in the field, the demonstration of the possibilities of the adsorbents reported in their use on an industrial scale with a promising capacity of reusability remains still pending. Therefore, further research on developing and selecting the most promising types of CDbased materials is still necessary. In this sense, we hope that this review will motivate higher efforts in environmental applications of cyclodextrins in the nanotechnology field along with the scientific community.
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-12-24T09:04:58.013Z | 2020-12-23T00:00:00.000 | {
"year": 2020,
"sha1": "596d2d875d5f355ad46854081560b4c6322aa9e7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/1/7/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "670ed9991ce773355aceb500b9a80d01e7e1fdf9",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
114506044 | pes2o/s2orc | v3-fos-license | Optimisation of the vehicle transmission and the gear-shifting strategy for the minimum fuel consumption and the minimum nitrogen oxide emissions
The paper outlines a computationally efficient analytical method for evaluating the fuel consumption and the nitrogen oxide emissions during manoeuvres pertaining to the New European Driving Cycle. An integrated optimisation procedure is also included in the analyses with minimisation of the brake specific fuel consumption and minimisation of the nitrogen oxide emissions as objective functions. A set of optimum gear ratios are determined for a four-speed transmission, a five-speed transmission and six-speed transmission as the governing parameters in the optimisation process. The analysis highlights the determination of gear-shifting objective-driven strategies based on the minimisation of either of the declared objective functions. A reduction of 7.5% in the brake specific fuel consumption and a reduction of 6.75% in nitrogen oxide emissions are attainable in the best-case scenario for a six-speed transmission and a gear-shifting strategy based on the lowest brake specific fuel consumption for the case of an engine. The novel integrated analytical simulations and multi-objective optimisation have not been hitherto reported in literature. It provides the opportunity for an objective intelligent-based approach to the use of gear shift indicator technology. The results of this study also show that transmission optimisation can act as an effective and inexpensive mean to enhance the fuel efficiency and to reduce the emissions.
Introduction
The exhaust emissions associated with burning fossil fuels in internal-combustion engines is a growing environmental concern. Many of the constituents of these emissions contribute to greenhouse gases, which absorb heat in the atmosphere, leading to increased temperatures and thus global warming. 1 The increase in environmental greenhouse gases can result in flooding, droughts, population displacement and significant damage to the ecosystem. 2 The exhaust emissions also affect the quality of air with health-related implications, particularly an increase in the incidence of respiratory diseases. 3 Burning fossil fuels such as petrol and diesel not only affect the environment but also lead to their depletion. There are significant difficulties in estimating how long reserves of fossil fuels will last. 4 For road transport, alternatives for fossil fuel as a source of energy are emerging rapidly, such as hybrid or electrical propulsion systems. However, for the foreseeable future and at least until the middle of the twenty-first century, internal-combustion energy is expected to play the major role as the means of propulsion for road transport. Therefore, improved fuel efficiency and reduced emissions from internal-combustion engines and powertrain systems remain important research activities.
Legislation and directives regarding levels of emissions are progressively becoming more stringent as the automotive manufacturers strive for improved fuel efficiency with new innovative solutions or practical palliations. There have been many emergent technologies to reduce the brake specific fuel consumption (BSFC). 5 They include downsizing of powertrain systems, improved output power-to-weight ratio, turbocharging, cylinder deactivation and stop-start in congested urban driving.
Fraser et al. 6 carried out driving-cycle simulations with a class D vehicle to investigate the fuel consumption benefits that can be accrued through downsizing. The original vehicle engine was a 2.0. It turbocharged gasoline direct-injection engine and the 'aggressively' downsized selected engine was the 1.2 l MAHLE downsized engine. The simulations reported by Fraser et al. 6 showed a fuel saving of almost 15%.
Douglas et al. 7 investigated the effects of cylinder deactivation (CDA) the air controlled autoignition (CAI) on the fuel consumption and the emissions. CDA is used during low-load conditions. When a number of cylinders are deactivated, this constitutes an effective engine downsizing. To maintain the engine torque with fewer cylinders, the fuel and the air supply need to be increased by using an increased throttle. Therefore, the combustion pressure in the active cylinders is increased, resulting in more efficient combustion. The closed valves of the deactivated cylinders reduce the pumping losses of the engine, thus increasing its overall efficiency. Additional fuel savings can also be accrued with a reduction in the effective surface area of the cylinders. Therefore, less heat is lost through conduction.
CAI is a combustion strategy in which fuel and air are premixed and ignited through air-fuel compression. Ignition occurs at multiple points, resulting in a rapid burn rate. This controlled ignition leads to lower cylinder temperatures owing to internal exhaust gas recirculation. The benefits of CAI are increased efficiencies, lower nitrogen oxide (NO x ) emissions, lower carbon dioxide (CO 2 ) emissions and lower particulate emissions. The results from driving-cycle simulations on engines using both CDA and CAI showed a fuel consumption saving of 10% and a reduction of 28% in the NO x emissions during the New European Driving Cycle (NEDC) 8 (which consists of four repeated Economic Commission for Europe R15 (ECE R15) urban driving cycles and one Extra-Urban Driving Cycle (EUDC)).
Hybrid powertrains are an alternative approach. They also make use of energy recovery systems to store some of the otherwise parasitic energy loss and to recover the same for useful purposes, including for propulsion. Three different hybrid systems with stored energy in a battery, or a flywheel or as high-pressure fluid in a hydraulic system were analysed by Dingel et al. 9 The simulation results showed decreases in the fuel consumption for the three systems during the NEDC of approximately 31%, 33% and 27.5% respectively. 9 Gear shift indicators are devices which are designed to indicate to the driver when a gear shift should be made. The on-board computer calculates the fuel consumption when in any gear and suggests a shift in accordance with the lowest attainable fuel consumption and emissions. Vagg et al. 10 showed through simulations that the vehicle fuel consumption can be reduced by 4.3% following this approach. The CO 2 emissions can also be reduced by 4.5% during the NEDC. 10 Norris et al. 11 showed that the gear shift indicator is able to reduce the fuel consumption by 4% and 7% for a Mini Cooper and a Ford Transit van respectively. However, the Volkswagen Golf tested in the same paper showed little improvement. 11 The fuel savings depend greatly on the vehicle and the gear-shifting strategy. The use of shift indicators does not require any significant modifications to the vehicle or engine. Thus, they make a simple, inexpensive and effective way to reduce the fuel consumption and the emissions.
This paper investigates the gear ratios and the gearshifting strategy in a simultaneous manner in order to obtain an optimum design, which has not hitherto been studied in combination. These factors can shift the engine operating point to a more efficient region, reducing the fuel consumption and the NO x emissions. A numerical method is developed to calculate the firstgear ratio to provide adequate gradeability and a topgear ratio to reduce the fuel consumption in highway driving. The intervening gear ratios are initially equally spaced. Subsequently, a range of new gear ratios instead of the initial intervening values are calculated. The fuel consumption, the NO x emissions and the 0-60 mile/h acceleration times are calculated for each gear ratio combination. A multi-objective optimisation approach is used to find the optimum gearbox configuration for the specified range of gear ratios. The optimum gearbox design can provide the lowest fuel consumption, the lowest NO x emissions or a trade-off between these objective functions. The 0-60 mile/h acceleration times are intended to show how the vehicle performances are affected by the optimum gearbox configurations, as this is an important driveability metric. The optimum gearbox concepts do not consider any design constraints. Therefore, they are intended to be used as a target and a starting point for transmission designers.
Simulations are carried out using the NEDC, and the savings made are compared with the original gearbox fitted to the studied vehicle. The results show that, with the addition of another gear pair, optimisation of the gear ratios and changes to the gear-shifting strategy, the fuel consumption and the NO x emissions can potentially be reduced by up to 7.52% and 7.6% respectively. The 0-60 mile/h acceleration times remain almost unchanged, and so the vehicle transient performance can be maintained with the optimum designs. The results reveal that optimisation of the transmission can be considered as an effective and inexpensive alternative approach to reduce the fuel consumption and the emissions.
Longitudinal dynamics
The equation of motion is derived from a longitudinal force balance 12 ( Figure 1) where F x is the tractive (motive) force, F is the aerodynamic drag, F R is the rolling resistance and F G is any gradient loading. The vehicle traction force 12 includes the effects of the inertias of the drivetrain components. As the vehicle accelerates, the drivetrain components also need to accelerate which leads to a lower acceleration value as given by A transaxle front-wheel-drive vehicle is considered ( Figure 2). For this configuration, the effective inertia is 12 The aerodynamic drag acting on the front projected area A f of the vehicle at the forward speed v is 12 The rolling resistance and the coefficient of friction are calculated as 12 where m = 0.01(1 + 2.236 94v/147) and where positive angles correspond to uphill travel and negative angles represent downhill manoeuvres. 13 The gradient force is Selection of first gear ratio The first-gear ratio is selected in order to ensure an adequate vehicle hill start capability. It is also selected to provide a low creeping speed to avoid excessive clutch use in congested traffic. 14 For these reasons the first gear ratio is fixed and no further optimisation is carried out. A hill with a 1-in-3 gradient (33%) is usually used to test the vehicle hill start capabilities, which is the approach adopted here. In hill climb, the applied wheel torque to maintain the required acceleration and to overcome the resistive forces is determined. The acceleration of the vehicle is assumed to be constant with its value taken as the lowest starting acceleration 8 in NEDC conditions (0.534 m/s 2 ). The initial engine speed is assumed to be 1000 r/min (by the clutch) with the engine torque at full load. Initially no drag, rolling resistance or inertia effects are taken into account, with iterations undertaken thereafter according to 12 The velocity of the vehicle is calculated with this gear ratio at 1000 r/min, so that the maximum resistive force at the start of the manoeuvre is The resistive and inertial forces are thus obtained as 12 This process is repeated iteratively until the first-gear ratio converges to within an error tolerance of 10 -4 .
Selection of the top-gear ratio
Traditionally, the top-gear ratio for a vehicle is selected to provide the maximum speed. This is limited by the engine power and the resistive forces, predominantly the aerodynamic drag, when travelling at high speeds. 14 The aim of optimisation is to reduce the fuel consumption and the NO x emissions. Therefore, the selection criteria for the selection of the top-gear ratio are changed to achieve these aims, noting the maximum legislated speed limit. Here, the top-gear ratio is selected so that it provides the maximum efficiency at the maximum motorway legal speed. For the UK, the legal speed limit is 70 mile/h (31.3 m/s). Generally, the maximum efficiency region (the lowest BSFC) for an engine is for engine speeds between 2000 r/min and 3000 r/min. Therefore, the top-gear ratio can be selected as Intervening-gear ratios After the first-gear ratio and the top-gear ratio are determined, it is possible to estimate a set of intervening gears, followed by an optimisation process. These ratios are initially set at discrete equal spacings between the first-gear ratio and the top-gear ratio. A range of intervening-gear ratios can then be initially calculated.
For each of these ratios, a range is defined as a percentage above and below the initially estimated range.
Gear-shifting strategies
Fixed engine speed. In this gear-shifting strategy, the gear is changed once the engine has reached a defined speed. The defined speeds are different for different situations. For city and highway driving, most drivers aim to keep the engine speed relatively low (below 2500 r/min) as this generally attains a better fuel consumption by early upshifting. 16 This type of fixed-speed gear change is ideal for driving-cycle simulations, where the fuel consumption and the emissions are the most important. For situations where an increased acceleration is required, such as overtaking or joining a highway, drivers tend to allow the engine to reach higher speeds before upshifting (greater than 3000) as this results in a higher output at the wheels. This type of fixed-speed gear changing is ideal for simulations of an accelerative manoeuvre.
Minimum fuel consumption and minimum NO x emissions (driving cycle). To ensure the minimum fuel consumption or the minimum NO x emissions, a gear should be selected to achieve these outcomes. Each potential gear should be analysed to predict the repercussions for the fuel consumption and/or the NO x emissions according to the instantaneous prevailing conditions. The optimum prediction should also keep the engine speed between the idle and the maximum with the engine torque not exceeding the full load. In practice, the driver does not know the required gear selection for the lowest fuel consumption or the lowest NO x emissions a priori. Therefore, the vehicle needs to be fitted with a gear shift indicator device or an automated shifting system. Gear shift indicator devices are already in use in some road vehicles in order to reduce the fuel consumption and to achieve lower emissions. The vehicle's on-board computer is used to calculate the best gear, depending on the current speed, load and throttle position. Then, the most suitable gear is indicated on the indicator. 11 For simulation purposes, it is assumed that the driver follows the gear shift indicator or that an automatic shifting system is employed.
Acceleration manoeuvre
A model for the acceleration of the vehicle is needed in order to analyse the effects of each set of gear ratio combinations on the performance of the vehicle. An acceleration manoeuvre consists of a vehicle driven at full throttle along a straight flat road until a certain criterion is encountered. Most manufacturers quote a 0-60 mile/h acceleration time. This is the criterion used in the current study.
A vehicle start model is used with the vehicle travelling at its lowest forward velocity in first gear with an engine speed of 1000 r/min. The first-gear ratio is fixed at this speed with an adequate hill-start capability, as already mentioned. Therefore, the same starting procedure is used in all the reported simulations.
Simulation methodology
1. The accelerative manoeuvre is carried out at full throttle, taking the engine torque from the fullload torque curve. The time histories of the acceleration, the velocity, the displacement and the traction force are obtained by successive integrations of the equation of motion (equation (1)). 2. During the simulations, the gear-shifting strategy should be monitored to ascertain whether the gear needs to be changed. For the maximum engine torque gear-shifting strategy, the engine torque at the next gear is calculated. If the calculated torque exceeds the current torque, then a change in gear is required. For a fixed-engine-speed gear-shifting strategy, a gear change is necessary, if the engine speed is greater than that defined. 3. Having calculated the time histories of the engine speed and torque, the BSFC and the NO x values corresponding to these conditions can be obtained from three-dimensional engine maps. The mass of burned fuel and the mass of NO x produced during the specified manoeuvre can be calculated as Driving-cycle analysis Driving cycles are a set of vehicle conditions which attempt to replicate actual road driving conditions. They are used to compare the fuel consumption and the emissions for various road vehicles. All vehicles destined for the European market must adhere to the Euro legislation on emissions. The emissions measurements are taken from an NEDC test ( Figure 3). The testing is usually carried out on a chassis dynamometer, because it is difficult to achieve consistent results in a road test although, from 2017, new testing rules require that a road test is also carried out, using a new driving cycle called the World Harmonised Light Vehicles Test Cycle (WLTC). 17 During simulations, it is assumed that the vehicle follows the driving cycle exactly and that the throttle response is instantaneous. The fuel consumption and the NO x emissions produced during the cycle are calculated. These values are used in the optimisation process in order to find the optimum gear ratios for best fuel economy or the lowest NO x emissions.
Simulation methodology
1. As the driving cycle needs to be followed precisely, the vehicle velocity is known a priori at each step of the simulations. Therefore, the required acceleration can be found simply as a = Dv Dt 2. The required engine torque to propel the vehicle at the required velocity and acceleration can simply be calculated by rearranging the equation of motion (equation (1)). 3. The engine speed at any prevailing gear is obtained as 4. For the minimum fuel consumption and/or the minimum NO x emissions, the gear-shifting strategy needs to predict the upcoming conditions in all potential gears at a point in the driving cycle. It is important to select the lowest gear ratio, but one which maintains the engine speed with the lowest torque. This forms the basis of the approach highlighted here. However, it will be necessary in the future to ensure that no sudden change in the torque surge or fade occurs as this can lead to impulsive action, inducing a plethora of drivetrain noise, vibration and harshness issues such as driveline clonk or exacerbated gear rattle. [18][19][20] 5. With the engine speed and torque evaluated, the corresponding BSFC and NO x level can be obtained by using two-dimensional interpolation of the engine map. The burned mass of fuel and the mass of NO x produced during the time history can be calculated, using equations (11) and (12).
Results and discussion
Vehicle and engine data The vehicle considered in this study is a front-wheeldrive five-speed manual transmission C-segment 1.6 l vehicle with a four-cylinder petrol engine. The pertinent data are listed in Tables 1 and 2 r/min, with each gear change duration of 0.5 s. Both these criteria were used in the simulation study. Figure 6 shows the velocity-time graph comparison between the measured (experimental) results and the simulated results. The 0-60 mile/h acceleration time for the experiment was reported 7 as 10.89 s and that for the simulation study is 10.27 s. The results show good correlation with a 5.7% deviation from the measured data.
Validation for the NEDC (the fuel consumption and the NO x emissions). An NEDC measurement was also presented by Douglas et al. 7 This is used to validate the model predictions for the fuel consumption and the NO x emissions. In this baseline experimental test, 7 the fixedengine-speed gear-shifting strategy was used in the NEDC experimental test with a gear upshift when an engine speed of 2450 r/min was reached.
The testing of an NEDC requires a cold start. The engine is not running at its optimum temperature, which leads to increased friction and thus increased fuel consumption. The tests used for producing the engine maps are normally carried out on a 'hot' engine operating at its optimum temperature. Therefore, a difference in the results is expected in the cold-start region of the It should be noted that this is only an approximation of the higher fuel consumption in order to compensate for the deviation in the results due to the cold start. A more precise calibration equation or use of specifically designed engine maps can be employed in order to obtain closer values. Temperature calibration was not applied to the NO x model as the results were quite similar.
The flow rate of fuel and the produced NO x at idle are estimated using the experimental graphs obtained by Douglas et al. 7 The average fuel flow rate at idle is 0.156 g/s, and the average rate of NO x generation is 0.001 264 g/s. Figure 7 shows the comparison of the predicted fuel consumption with the aforementioned experimental data, both instantaneous and in the accumulative form. The measured value 7 of the cumulative fuel consumed was 711 g, whereas the predicted value from the current analysis is 641.2 g. There is a difference of 10%, which constitutes an acceptable degree of predicted accuracy. The difference is probably due to certain simplifying assumptions in the model, such as the quasi-static tyre model, as well as engine maps which are constructed from steady-state test conditions, rather than in the transient conditions of the driving cycle. These transient conditions result in lower efficiencies which cause an increase in the fuel consumption. 23 Figure 8 shows a comparison between the NO x emissions (pre-catalyst) predicted here and the measured values 7 for the instantaneous amounts and the total cumulative amounts over the NEDC. The measured total NO x emissions are 28.3 g, whereas the predicted value is obtained as 29.3 g, which is a difference of 3.5% (a higher predicted level). Again the correlation is acceptable and the difference is expected to be due to the use of a steady-state NO x map.
Optimisation process. The first task in the optimisation process is to determine an optimum set of gear ratios which reduces the fuel consumption and the NO x emissions, while still maintaining the vehicle acceleration performance.
In addition, the number of gear stages in the transmission system is also altered to ascertain whether any additional reductions in the fuel consumption or the NO x can be accrued by considering an additional gear pair. As the number of gears increases, the gearbox cost, the compactness, the weight and the complexity also increase. A four-speed gearbox was tested to see whether removal of one set provides any tangible benefit. A six-speed gearbox was also tested to see whether sufficient reductions in the fuel consumption and in the NO x emissions occur to justify the disadvantages arising from the aforementioned issues of the cost and the compact light weight.
The first-gear ratio and the top-gear ratio are fixed as the values based on the hill-start capability and efficient motorway driving (previously noted). The intervening-gear ratios are initially assumed to be equally interspaced. The optimisation process is applied to these intervening-gear ratios. A spread of these ratios of 610% of their equally spaced values is used (five increments for a six-speed transmission). The percentage difference between the performances (for the BSFC and the NO x emissions) of any combination of the chosen intervening-gear ratios and the performances of the initial set is used as the optimisation objective function(s).
The accuracy of predictions is dependent on the time step which is used in the analysis. To keep the simulation time to a minimum, while still maintaining a good degree of accuracy, a conservative time step of 0.5 s is used. A time-step sensitivity analysis is carried out, the outcome of which shows that changing the time step near the selected value had little effect on the final outcome of the optimum set of gear ratios. With these optimal configuration(s) (depending on the set objective functions of the BSFC and the NO x emissions), acceleration manoeuvres were carried out in order to show how much compromise is made in terms of the acceleration to optimise the fuel consumption and the NO x emissions. All the acceleration simulations used a fixed-engine-speed gear-shifting strategy. The gears were shifted once the engine speed reached 6700 r/min.
Fixed-speed gear-shifting strategy. A number of drivingcycle simulations using the NEDC are carried out. A fixed-speed gear-shifting strategy, similar to that presented by Douglas et al., 7 is employed in order to compare the results with the results obtained from the original transmission configuration. A gear-shifting speed of 2450 r/min was used. Simulations were carried out to find the optimum number of gear pairs. Additionally, the intervening-gear ratios are allowed to alter in the prespecified range to obtain the most optimal configuration. Two optimum sets of selected ratios correspond to the minimum fuel consumption and the minimum NO x emissions. The purpose of this is to ascertain whether the fuel consumption and the NO x emissions can be reduced by using a four-speed gearbox, a five-speed gearbox or a six-speed gearbox.
The results in Table 3 show the optimum combination of gear ratios for a four-speed transmission, a five-speed transmission and a six-speed transmission yielding the lowest BSFC. All these transmission configurations reduce the fuel consumption and the NO x emissions during the NEDC in comparison with those of the original configuration of the vehicle. The fourspeed gearbox gives the greatest reductions in the BSFC and the NO x emissions, namely 3.74% and 2.28% respectively.
The results also show that the 0-60 mile/h acceleration time is increased for all the optimum gearbox configurations. Therefore, there is a trade-off between the acceleration performance and the improved BSFC and NO x emissions. However, the percentage deterioration in the acceleration performance is negligible, with the worst case adding a mere 0.5 s to the current installed vehicle configuration.
The optimum second-gear ratio and third-gear ratio for the six-speed transmission and five-speed transmission are very close. This suggests that one of these gear pairs should ideally be removed to reduce the manufacturing and assembly costs.
The European emissions legislation is becoming more stringent on NO x emissions. As an approach, the transmissions may be redesigned to reduce these emissions and to meet the requirements of the directives. The results in Table 4 show the optimum combination of gear ratios for a four-speed transmission system, a five-speed transmission system and a sixspeed transmission system with the lowest NO x emissions. The results show that all the optimum alternatives can significantly improve on the current vehicle transmission and all have reduced fuel consumption values. The four-speed alternative shows the greatest improvement in reducing the NO x emissions and improving the BSFC by 3.03% and 2.89% respectively.
The acceleration performance (0-60 mile/h) has increased in time, when compared with the current configuration, for the five-speed transmission and the sixspeed transmission, but the optimum four-speed version in fact shows an improved acceleration performance.
Although various four-speed configurations show the lowest fuel consumption and the lowest NO x emissions (based on the fixed-speed gear-shifting strategy), they can have other disadvantages from the viewpoints of driveability and comfort. With widely apart gear ratios, large variations in the performance can ensue, which makes timely and smooth gear shifting difficult, somewhat putting the onus on the driver of a manual system.
Metric-based gear-shifting strategies. The previous section dealt with alternative transmission configurations (i.e. fixed-ratio gear shifting of n-speed transmissions, n = 4, 5 or 6). Two different four-speed gearboxes were found to provide an optimum performance based on fixed gear-shifting strategies. These two alternatives are not necessarily the most optimum, since other shifting Table 4. Optimum combination of fixed gear ratios for a four-speed transmission, a five-speed transmission and a six-speed transmission with the lowest NO x emissions. strategies such as that based on the metric of minimum fuel consumption or on the metric of minimum NO x emissions should be investigated.
Case (i): optimising for minimum fuel consumption. The approach adopted here is to determine a new gearbox configuration which yields the lowest BSFC.
Driving-cycle simulations are carried out using the minimum BSFC as the adopted performance metric for the gear-shifting strategy rather than upshifting according to a predetermined vehicle or engine speed.
The results are shown in Table 5, indicating that the six-speed transmission achieves the lowest BSFC during the NEDC. Therefore, the gear ratio optimisation process is carried out for the six-speed transmission system.
Thus far, all the simulation studies have used a discrete set of gear ratios and combinations. The simulation results in this section can be used in an optimisation process for a continuous range of gear ratios. AVL CAMEO was selected as the optimisation tool. 24 It is a multi-objective optimisation tool and thus suitable for the current study. It is based on a genetic algorithm. 24 The matrix of the results for all the gear ratios and combinations from the driving-cycle simulations are imported into CAMEO, which fits the data to a continuous range of variables. An accurate data fit is essential for the quality of the optimisation process as these fitted values are used within the optimisation itself. Figure 9 shows the fitted results versus the discretely obtained simulation results. A goodness of fit is achieved.
A single-objective optimisation was carried out with AVL CAMEO to find the gear ratio set with the lowest fuel consumption. An optimum set of gear ratios is found as presented in Table 6.
The results show that a further improvement in the BSFC can be achieved when the gear-shifting strategy is based on metric-based gear ratios rather than merely on the vehicle speed. The fuel consumption can be further reduced by 7.52%. The NO x emissions can also be reduced by 6.73%, and the vehicle acceleration performance deteriorates only slightly by 0.45% in comparison with the values for the original vehicle transmission and the original gear-shifting strategy. As a six-speed transmission is more expensive to design and manufacture than the four-speed alternatives are (described in the previous section), then any reductions in the BSFC and reductions in the NO x emissions should justify the increased costs. Figure 10 shows the fuel consumption (g/s) map and a comparison of the engine operating points during the NEDC for the original gearbox configuration and the optimum six-speed gearbox with the gear-shifting strategy based on the lowest BSFC. It can be seen that the optimum gearbox configuration shifts the engine operating points to a more efficient region of the map.
Case (ii): optimising for the minimum NO x emissions. For this optimisation the selected metric is the minimised NO x emissions. This is a desired outcome for meeting the stringent European emissions legislation. NEDC simulations are carried out using the minimum NO x emissions to determine the new gear-shifting strategy for various speed transmissions. Table 7 shows that the lowest NO x emissions during the NEDC can be achieved with a six-speed transmission using the minimum-NO x -emissions gearshifting strategy. Therefore, gear ratio optimisation is carried out for the six-speed transmission.
The matrix of discrete results for all the gear ratios and combinations from the driving-cycle simulations are imported into AVL CAMEO so that an optimum result can be determined. CAMEO is then used to model and fit the simulation-based data. Figure 11 shows the satisfactory goodness of fit to the discrete simulation results.
A single-objective optimisation based on the lowest NO x emissions is carried out to find the optimum set of gear ratios. The results are presented in Table 8.
The results show that the NO x emissions can be reduced by 7.6% and the fuel consumption by 4.65% in this case, as well as the acceleration performance by 0.3%. Figure 12 shows the engine NO x emissions rate (g/s) map and a comparison of the engine operating points during the NEDC for the original gearbox configuration and the optimum six-speed gearbox. It can be seen that the optimum gearbox configuration shifts the engine operating point to more efficient regions of the map with lower NO x emissions (g/s).
Case (iii): trade-off between the minimum fuel consumption and the minimum NO x emissions. Both the BSFC and the NO x emissions are clearly important from a commercial viewpoint as well as a legal perspective. Therefore, multi-objective optimisation is the ideal approach, in this case with both these metrics as objective functions. The results of case (i) and case (ii) clearly show that, with a unitary objective function and a given transmission configuration, an appropriate gear shift indicator can be developed. In the case of multi-objective problems, clearly a degree of trade-off or priority weighting should be used between the intended outcomes. In the case studied here, Tables 6 and 8 yield two sets of gear ratio outcomes. However, the car can be driven with Figure 11. NO x emissions measured and predicted data obtained using AVL CAMEO.
NOx: nitrogen oxides. only a unique gear-shifting strategy. In this case, the minimum-BSFC objective is chosen as the primary objective because larger reductions in both the fuel consumption and the NO x emissions can be attained.
The previous results have shown that the lowest fuel consumption and the lowest NO x emissions during the NEDC are achieved with a six-speed transmission for the vehicle under consideration. The matrix of all the NEDC simulation results for all the gear ratios are imported into the optimisation process. The software is then able to model and fit the fuel consumption and the NO x emissions data. Predictions are made for both the fuel consumption and the NO x emissions for any set of gear ratios with a high degree of confidence. This is shown for both set of results fitted by the optimisation routine against the discrete simulated values in Figure 13, showing high degrees of conformance.
A multi-objective optimisation is then carried out using CAMEO to find the optimal gear ratios with the lowest fuel consumption and the best attainable NO x emissions as both criteria cannot be fully optimized. A 'Pareto front' graph is shown in Figure 14. All the predicted fuel consumption levels and the corresponding NO x emissions for the various determined gear ratios are shown in the figure. The dark curve at the bottom highlights the Pareto front outcome.
The two optimum gear ratio sets with the minimum BSFC and the lowest simultaneous NO x emissions are presented in Table 9.
The results show that, for the minimum fuel consumption and the lowest NO x emissions on the Pareto front, there is a difference of 0.25% in the fuel consumption and 0.037% in the NO x emissions. The optimum gear ratio set depends on the importance attached to these criteria in the optimisation process. However, there is only a small reduction in the NO x emissions compared with the possible reduction in the fuel consumption, meaning that the gear ratio set corresponding to the minimum BSFC yields the best outcome in the case studied. NOx: nitrogen oxides.
Conclusion
The paper outlines a novel computationally efficient analytical method to evaluate the fuel consumption and the NO x emissions during simulations of the driving cycle. It also provides a good test of the vehicle acceleration performance. The vehicle performance during an NEDC is assessed and verified against measured experimental tests reported elsewhere. 7 The method sets the first-gear ratio based on an adequate vehicle hill climb performance. The top-gear ratio is selected for the lowest BSFC in highway driving conditions. The intervening-gear ratios for various fourspeed, five-speed and six-speed transmission configurations are calculated on the basis of the optimal BSFC or NO x emissions and optimised using a genetic-algorithm-based optimisation routine CAMEO.
It is shown that with the minimum BSFC as the primary objective function, choosing a determined set of optimum gear ratios and altering the gear-shifting strategy results in a reduction in the BSFC of 7.52% and a reduction in the NO x emissions of 6.73% relative to the original fixed-speed gear shifts. With the NO x emissions level as the primary objective, optimisation of gear ratios leads to a reduction of 7.6% in the NO x emissions with a decrease of 4.65%in the BSFC. In the optimised cases a six-speed transmission shows the best outcome in comparison with those for the four-speed transmissions and five-speed transmission, but clearly with increased manufacturing costs.
The computationally efficient analytical simulation as well as rapid scenario-building optimisation enable application of the methodology to gear shift indicator technology, thus embedding a certain degree of inherent intelligent feedback to the drivers of manual transmission. For other transmissions, this action can be automated.
The current study concentrated on the fuel consumption and the NO x emissions. In order to show the ultimate potential, maximum flexibility and fewer constraints were considered in the model. In reality, the efficiency and the emissions may show contradictory behaviour to the ride comfort of the vehicle. For example, the optimum gear ratios for the best fuel economy and emissions may cause the driveability or the shifting quality to deteriorate. The shifting quality can be mitigated by using technologies such as an automated manual transmission in order to reduce the side effects of the performed optimisation. The presented model and conclusions are based on the overall drivetrain ratios as shown in Tables 3 to 8. Therefore, engineers will have the flexibility to optimise further the specific ratio of the transmission and the final drive to achieve a desired configuration such as a direct-drive transmission or an overdrive transmission. This is important since the direct-drive configuration provides a potentially simpler and lighter design.
Considering the importance of implementing realworld driving cycles such as the WLTC, the same model can be used to optimise the transmission on these new driving cycles in the future. | 2019-04-15T13:11:12.561Z | 2017-04-26T00:00:00.000 | {
"year": 2017,
"sha1": "90c51abf6b393b5cb17a877a299521581b104f02",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0954407017702985",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a2867c27afa90485f2f9164ca3219e864b52755c",
"s2fieldsofstudy": [
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
242063982 | pes2o/s2orc | v3-fos-license | The Role of CD4+ T Cells and Microbiota in the Pathogenesis of Asthma
Asthma, a chronic respiratory disease involving variable airflow limitations, exhibits two phenotypes: eosinophilic and neutrophilic. The asthma phenotype must be considered because the prognosis and drug responsiveness of eosinophilic and neutrophilic asthma differ. CD4+ T cells are the main determinant of asthma phenotype. Th2, Th9 and Tfh cells mediate the development of eosinophilic asthma, whereas Th1 and Th17 cells mediate the development of neutrophilic asthma. Elucidating the biological roles of CD4+ T cells is thus essential for developing effective asthma treatments and predicting a patient’s prognosis. Commensal bacteria also play a key role in the pathogenesis of asthma. Beneficial bacteria within the host act to suppress asthma, whereas harmful bacteria exacerbate asthma. Recent literature indicates that imbalances between beneficial and harmful bacteria affect the differentiation of CD4+ T cells, leading to the development of asthma. Correcting bacterial imbalances using probiotics reportedly improves asthma symptoms. In this review, we investigate the effects of crosstalk between the microbiota and CD4+ T cells on the development of asthma.
Introduction
Asthma is a common respiratory disease involving chronic airway inflammation, primarily caused by allergens such as house dust mites (HDMs), pollen, and animal dander [1]. In general, the prevalence of asthma is approximately 15-20%, but this varies by country [2]. Chronic inflammation resulting from continuous inhalation of allergens can lead to airway remodeling, which in turn can induce various symptoms associated with asthma, such as cough, dyspnea, and wheezing due to airway narrowing [1].
Steroids are often prescribed to control airway inflammation and represent the gold standard for asthma treatment [3]. Although steroid use has improved the quality of life of many asthma patients [4], some patients with severe asthma are refractory to current steroid treatment protocols [3]. These severe asthma patients have poorer quality of life due to a higher frequency of asthma attacks [5]. A variety of drugs for treating severe asthma have been developed in recent years, including mepolizumab, reslizumab, benralizumab and dupilumab [6]. However, these drugs were developed for patients with T helper (Th)2 asthma, and unfortunately, no drugs for patients with non-Th2-asthma are currently available. Thus, novel therapeutic targets for drugs to treat non-Th2-asthma are needed, but the development of such drugs will require elucidation of the mechanism underlying the role of CD4 + T cells in asthma pathogenesis.
Two asthma phenotypes have been described, Th2 and non-Th2, which are determined by CD4 + T cells [7]. The asthma phenotype can change depending on which type of CD4 T cell is differentiated; consequently, the response to asthma drugs can change accordingly [8]. Th2-asthma (i.e., eosinophilic asthma) is characterized by eosinophilic infiltrate in the sputum [7]. The pathogenesis of eosinophilic asthma is characterized by secretion of high levels of interleukin (IL)-4, IL-5 and IL-13 by Th2 cells [1]. In general, eosinophilic asthma is responsive to steroid treatment, and severe eosinophilic asthma is effectively treated by various newly developed drugs [5]. Non-Th2 asthma (i.e., neutrophilic asthma), by contrast, is characterized by neutrophilic infiltrate in the sputum [7] and secretion of high levels of interferon gamma (IFN-γ) and IL-17 by Th1 and Th17 cells. In contrast to Th2asthma, non-Th2 asthma does not respond steroids or newly developed asthma drugs [7]. As the disease progression pattern and asthma treatment options differ depending on the differentiation of CD4 + T cells, elucidating the biological roles of CD4 + T cells in the pathogenesis of asthma is critical for developing effective asthma treatments and predicting patient prognosis.
Although CD4 + T cells and other immune cells play key roles in the pathogenesis of asthma, several studies have reported a relationship between the host microbiota and asthma [9][10][11]. Commensal bacteria, which constitute a subtype of the microbiota, are symbiotic bacteria [12]. An adult male weighing 70 kg reportedly harbors approximately 3.8 × 10 13 commensal bacteria [13]. Approximately 29% of commensal bacteria reside in the gastrointestinal tract, 26% in the oral cavity, 21% on the skin, 14% in the airways, 9% in the urogenital tract, and 1% in the blood [12]. Commensal bacteria perform a variety of biological functions important to the host, including fermentation of undigested dietary carbohydrates, synthesis of bile acids and vitamins, and immune surveillance [14]. Importantly, alterations in the composition of commensal bacteria have been associated with various chronic inflammatory diseases, such as asthma, inflammatory bowel disease, and obesity [15]. Recent literature indicates that the composition of beneficial and harmful bacteria in the host determines the disease pattern of asthma [16]. The same study revealed that various environmental factors that affect these bacteria also affect the differentiation of CD4 + T cells, resulting in the development of asthma [16].
In this review, we discuss the detailed mechanism of the pathogenesis of asthma as it relates to Th2-asthma and non-Th2 asthma, with a particular focus on CD4 + T cells. In addition, we discuss the role of the bacterial microbiota in the induction of asthma and its effect on CD4 + T cells in asthma.
Th2-Asthma with Eosinophilic Inflammation
Th2 cells play a central role in the development of Th2-asthma [17]. The hallmark of Th2-asthma is infiltration of the airways by eosinophils. Eosinophilic asthma is diagnosed when the proportion of eosinophils in the sputum is >3% [17]. Th2-asthma can be caused by allergens and non-allergens, including pollutants, microbes, and glycolipids [18]. Approximately 50% of asthmatic adults have Th2-asthma [5]. Although various immune cells are involved in the pathogenesis of Th2-asthma, the Th2, Th9, and T follicular helper cell (Tfh) CD4 + T cell subtypes play particularly key roles ( Figure 1).
Th2 cytokines play a major role in airway eosinophilic infiltration in Th2-asthma [21]. Compared with healthy control subjects, expression of the Th2 cytokine-related genes IL-5, GPR55, and ELAVL1 is upregulated in peripheral blood mononuclear cells (PBMCs) of asthma patients [22]. Figure 1. Pathogenesis of eosinophilic asthma mediated by T helper (Th)2, Th9 and T follicular helper (Tfh) cells. The development of eosinophilic asthma is associated with the Th2, Th9 and Tfh subtypes of CD4 + T cells. Th2 cells play roles in eosinophilic infiltration, goblet cell hyperplasia, airway hyperresponsiveness, immunoglobulin (Ig)E production, and upregulation of endothelial molecules, including vascular cell adhesion molecule (VCAM)-1 and intercellular adhesion molecule (ICAM)-1. GATA-binding protein 3 (GATA3) and signal transducer and activator of transcription (STAT)6 are transcriptional factors in Th2 cells. Th9 cells mediate mast cell infiltration and IgE production. PU and Irf4 are transcriptional factors in Th9 cells. Bcl6-expressing Tfh cells mediate isotype switching and IgE production. Text color: Black, cytokine. LN, lymph node; DC, dendritic cell; IL, interleukin; Irf4, interferon regulatory factor 4; CXCR5, C-X-C chemokine receptor type 5. Figure created using BioRender.com (accessed on 6 September 2021).
Th2 cytokines play a major role in airway eosinophilic infiltration in Th2-asthma [21]. Compared with healthy control subjects, expression of the Th2 cytokine-related genes IL-5, GPR55, and ELAVL1 is upregulated in peripheral blood mononuclear cells (PBMCs) of asthma patients [22]. IL-4 secreted by Th2 cells binds to the IL-4 receptor (IL-4R) in an autocrine manner to continuously initiate Th2 differentiation [23]. Th2-derived IL-4 also promotes allergen- IL-4 secreted by Th2 cells binds to the IL-4 receptor (IL-4R) in an autocrine manner to continuously initiate Th2 differentiation [23]. Th2-derived IL-4 also promotes allergenspecific immunoglobulin (Ig)E class switching in B cells [24], and upregulates the expression of intercellular adhesion molecule-1 and vascular cell adhesion molecule (VCAM)-1 in endothelial cells in the lungs, resulting in eosinophil recruitment [24]. IL-5 also plays a critical role in eosinophilic inflammation [25]. Foster et al. reported that IL-5-deficient mice exhibit reduced airway eosinophilia despite allergen-induced allergic inflammation [26]. In the bone marrow, IL-5 promotes the differentiation of myeloid precursor cells to mature eosinophils [27]. Circulating mature eosinophils that were triggered to differentiate by IL-5 then adhere to VCAM-1 on endothelial cells and migrate to the bronchial lumen [28]. Accumulation of mature eosinophils in the bronchial lumen exacerbates eosinophilic asthma because activation of Jak2 and Raf-1 inhibits eosinophil apoptosis [25]. In addition, eosinophil survival is prolonged due to the upregulation of mitogen-activated protein kinase genes [25].
IL-13 plays an important role in airway remodeling [29]. IL-13-STAT6 signaling in human epithelial cells induces goblet cell hyperplasia via the upregulation of the mucin 5AC gene [30]. In addition, IL-13 promotes airway hyperresponsiveness, which is aggravated narrowing of the airways in response to external stimuli, by upregulating smooth muscle cell contractility and pulmonary fibrosis [31].
As Th2 cytokines have a marked effect on the occurrence of eosinophilic asthma, various therapeutic agents targeting Th2 cytokines have been developed [6]. Current Th2 cytokine-targeted therapies approved by the Food and Drug Administration can be divided into two classes: drugs that target cytokines (e.g., mepolizumab and reslizumab), and drugs that target cytokine-binding receptors (e.g., benralizumab and dupilumab).
Mepolizumab, an IgG1 monoclonal antibody targeting IL-5, is administered via subcutaneous injection of 100 mg every 4 weeks [6]. Compared with the placebo, mepolizumab reduced glucocorticoid and asthma exacerbation in patients with eosinophilic asthma [32,33]. Reslizumab, an IgG4 monoclonal antibody targeting IL-5, is administered via intravenous injection of 3 mg/kg every 4 weeks [6]. Reslizumab also reduces the number of acute exacerbations and the amount of maintenance steroids required in patients with moderate to severe eosinophilic asthma [34].
Benralizumab, a humanized IgG1 monoclonal antibody targeting IL-5 receptor α, is administered via subcutaneous injection of 30 mg every 8 weeks [6]. Compared with the placebo, benralizumab decreased glucocorticoid use by 75% and decreased the number of asthma exacerbations by 70% in patients with severe eosinophilic asthma [35]. Dupilumab, an IL-4Rα antagonist, is administered via subcutaneous injection every 2 weeks [36]. Compared with the placebo, dupilumab decreased the number of asthma exacerbations by 47.7% in patients with moderate to severe uncontrolled asthma [36]. Furthermore, dupilumab improves lung function, which has not been demonstrated with the other Th2 cytokine-targeted therapies [36]. After 12 weeks of dupilumab use, an improvement in forced expiratory volume in 1 s (FEV1) was observed, with an average increase in FEV1 of 0.32 L [36].
In addition to these cytokines and cytokine-binding receptor-targeted therapy, drugs targeting Th2 transcription factors are also under development [37]. For example, SB010, a GATA3-specific DNAzyme that inhibits transcription of the GATA3 gene, improved lung function and decreased plasma IL-5 levels compared with the placebo [38]. However, that study had several limitations, such as the small study group involving only 40 asthma patients [38]. Large-scale studies of SB010 targeting patients with severe eosinophilic asthma are thus needed.
Th9 Cells
Recent reports suggest that Th9 cells induce allergic reactions and inflammatory responses [39]. Th9 cells are a subset of CD4 + T cells that secrete IL-9 and were initially thought to be a subtype of Th2 cells [40]. However, research has revealed that Th9 cells do not produce IL-4, Il-5, or IL-13 and only secrete IL-9 [41]. In addition, Th9 cells express PU.1 and interferon regulatory factor 4 (Irf4) as transcription factors [42]. Th9 cells are therefore recognized as a new subtype of CD4 + T cells due to differences compared with conventional Th2 cells in terms of the cytokines and transcription factors produced [41].
Th9-derived IL-9 plays an important role in the development of eosinophilic asthma by assisting the action of Th2 cells [21]. For example, IL-9 enhances IgE production by B cells in conjunction with Th2-derived IL-4. Petit-Frere et al. reported that simultaneous administration of IL-4 and IL-9 exhibited synergistic effects that resulted in upregulation of IgE production [43]. McLane et al. reported that serum IgE levels were elevated in IL-9 transgenic mice compared with normal mice [44]. Analyses of PBMCs isolated from patients with allergen-induced asthma revealed a positive correlation between the number of Th9 cells and plasma IgE level [45]. Other studies found that IL-9 exacerbates eosinophilic inflammation by amplifying the effects of Th2 cytokines. Temann et al. found that compared with normal mice, transgenic mice overexpressing IL-9 exhibited increased production of Th2 cytokines, including IL-5 and IL-13 [46]. The increased levels of IL-5 and IL-13 resulting from IL-9 stimulation increase eosinopoiesis in the bone marrow and enhance goblet cell metaplasia of epithelial cells [47,48]. Chang et al. reported that mice with T cell-specific deletion of PU.1 exhibit reduced OVA-induced eosinophilic inflammation compared with wild-type mice [49].
A unique role of IL-9 compared with Th2 cytokines is the effect on the infiltration of mast cells in lungs. It was previously thought that Th2 cytokines, including IL-4 and IL-13, were responsible for mastocytosis [50]. However, Sehra et al. demonstrated that IL-9 derived from Th9 cells regulates mast cell infiltration in the lungs [51]. Using adoptive Th9 transfer, they found that only IL-9 blockade-and not IL-13 blockade-effectively reduced the infiltration of mast cells in the lungs [51].
Several murine studies examining IL-9 blockade demonstrated effective improvement in eosinophilic asthma factors such as inflammation, suggesting that IL-9 is a novel therapeutic target for treating eosinophilic asthma [52,53]. Unfortunately, however, a randomized controlled trial involving over 300 asthma patients did not find any beneficial improvement in asthma symptoms and lung function compared with the placebo group in patients treated with MEDI-528, a humanized IgG1 monoclonal antibody that inhibits the function of IL-9 [54]. JQ1, a bromodomain-containing protein 4 inhibitor that suppresses chromatin looping, resulting in reduced IL-9 transcription, has attracted recent attention for its potential in Th9 cell-targeted therapies [55]. In a murine study performed by Xiao et al., JQ1 alleviated OVA-induced allergic inflammation [56]. However, the short half-life of JQ1 currently poses an obstacle to clinical use [57]. Therefore, it will be necessary to develop improved Th9 cell-targeted drugs that can be used in asthma patients.
Tfh Cells
Tfh cells constitute a subset of CD4 + T cells that localize primarily in lymphoid tissues and function as key regulators of B-cell functions, including proliferation, cytokine production, and isotype switching [58]. When DCs secrete IL-6 in lymphoid tissues after allergen binding, naïve CD4 + T cells differentiate into C-X-C chemokine receptor type 5 (CXCR5)-expressing Tfh cells [59]. Regulated by the transcription factor B-cell lymphoma 6 (Bcl6), Tfh cells then secrete IL-4 and IL-21 [60].
Tfh-derived cytokines are major stimulators of IgE production by B cells. Previous studies indicated that IL-4 and IL-9 are involved in IgE production [61]. Kobayashi et al. reported reduced levels of serum IgE in T cell-specific Bcl6-depleted mice compared with control mice, despite no changes in levels of Th2 cytokines such as IL-4, IL-5, and IL-13 [62]. Noble and Zhao reported abnormalities in class switching of IgG as well as IgE in T cellspecific IL-6R mutant mice [63]. A study in humans reported a positive correlation between circulating Tfh cells and HDM-specific IgE [64]. These results suggest that Tfh cells-but not Th2 cells-play an important role in IgE production.
Tfh cells also play a role in amplifying the effects of Th2 cytokines during the induction of Th2-asthma. Two hypotheses have been proposed to explain this phenomenon. The first hypothesis holds that peripheral Tfh cells, which do not express CXCR5, migrate directly from the mediastinal lymph nodes to the lungs. The second hypothesis holds that Tfh cells are transformed into pathogenic Th2 cells. Using IL-21-green fluorescent protein reporter mice, Coquet et al. concluded that IL-21-producing cells presumed to be of Tfh origin localize in lungs and amplify Th2 cell responses via the binding of IL-21 to IL-21R on Th2 cells [65]. In contrast, Ballesteros-Tato et al. reported that IL-4-producing Tfh cells can differentiate into precursors of pathogenic Th2 cells [66].
Two types of therapeutics targeting Tfh cells have been developed: an inducible T-cell costimulatory (ICOS) ligand-targeted antibody, and a CXCR5-targeted therapy. Uwadiae et al. reported that the ICOS ligand-targeted antibody alleviated HDM-induced eosinophilic inflammation in a murine model [67]. Using PBMCs isolated from asthma patients and healthy controls, Zhang et al. reported that miR-192, a small, non-coding RNA that regulates CXCR5 expression, inhibits the function of Tfh cells [68]. Because Tfh cell-targeted therapies are still in the experimental stage, clinical trials of the ICOS ligand-targeted antibody and miR-192 are in progress.
Non-Th2 Asthma with Neutrophilic Inflammation
Non-Th2 asthma refers to asthma involving <3% eosinophilic infiltration in the sputum [7]. Fewer than 50% of asthma patients are diagnosed with non-Th2 asthma, which primarily occurs in adulthood [69]. Non-Th2 asthma is induced by non-allergenic factors such as smoking, air pollution, inhaled ozone, and infection [7]. Patients with non-Th2 asthma suffer from poor asthma control and experience frequent exacerbations of asthma symptoms due to the development of medication resistance [70]. Neutrophil infiltration is a key characteristic of patients presenting with non-Th2-asthma. Among the CD4 + T cell subsets, Th17 and Th1 cells reportedly play important roles in neutrophil infiltration of the airways (Figure 2)
Th17 Cells
Th17 cells exert a significant effect on neutrophilic inflammation during the development of asthma [7]. Th17 cells secrete IL-17A, IL-17F, and IL-22 as part of the response against extracellular pathogens and fungi. In addition, Th17 cells express the
Th17 cytokines such as IL-17A, IL-17F, and IL-22 promote neutrophil recruitment in the airways. Studies in human cell lines reported that exposure to IL-17 enhances the secretion of neutrophil chemotaxis factors such as C-X-C motif chemokine ligand (CXCL)1 and CXCL8 by stimulating epithelial cells and fibrocytes [71][72][73]. Newcomb et al. found reduced neutrophil infiltration in the airways of IL-17A-knockout mice [74]. Camargo et al. reported that blockade of IL-17 reduces lipopolysaccharide-induced neutrophilic inflammation in the airways of mice [75].
Th17 cytokines are also involved in airway remodeling and hyperresponsiveness via binding to IL-17RA and IL-17RC on airway smooth muscle cells [76][77][78]. In an animal model study of airway remodeling, Ramakrishnan et al. demonstrated that IL-17-induces autophagy in fibroblasts, which initiates mitochondrial dysfunction that results in collagen deposition [79]. In a study examining hyperresponsiveness, Chiba et al. reported that the complex formed by the binding of IL-17A to the IL-17R on smooth muscle cells stimulates increased production of RhoA protein, which plays a role in upregulating intracellular calcium concentrations, resulting in enhanced smooth muscle cell contractility [80]. These data from murine studies suggest that antibodies targeting IL-17A would reduce airway remodeling and airway hyperresponsiveness [75,81].
Several other studies have reported a link between steroid resistance and Th17 [82,83]. Two hypotheses have been proposed to explain this possible relationship. The first hypothesis involves the steroid resistance of Th17 cells, whereas Th2 cells are sensitive to steroids. The second hypothesis suggests that steroids promote Th17 cell differentiation. Nanzer et al. examined PBMCs of asthma patients and showed that steroids did not inhibit cytokine synthesis by Th17 cells, in contrast to PBMCs of healthy controls [82]. However, Chambers et al. reported that steroid dose-dependent Th17 cytokine synthesis plays a role in in vitro activation of human PBMCs [83]. These data explain the high proportion of Th17 cells in asthma patients with steroid resistance.
Unfortunately, antibody-based therapy targeting IL-17A did not improve asthma symptoms in clinical trials [84]. However, treatment of a patient with chronic psoriasis and asthma with ustekinumab, a humanized IgG1 monoclonal antibody targeting both IL-12 and IL-23, resulted in improvement in asthma symptoms and a reduction in asthma maintenance medication [85]. Collectively, the above results suggest that alleviating Th17related asthma requires the control of not just one Th17 cytokine pathway but all pathways that simultaneously regulate Th17 cytokines.
According to the hygiene hypothesis, Th1 cells inhibit the development of eosinophilic asthma, whereas Th2 cells promote the development of eosinophilic asthma [87]. However, recent studies reported that Th1 cells play an important role in the pathogenesis of severe non-Th2-asthma. Cui et al. reported that administration of OVA-specific Th1 aggravated neutrophilic inflammation in the lungs [88]. Raundhal et al. reported increased levels of the Th1 cytokine IFN-γ in bronchoalveolar lavage fluid of non-Th2-asthma patients [89]. Additionally, increased neutrophilic infiltration and IFN-γ mRNA expression in the sputum were observed in patients with severe asthma compared with patients with mild to moderate asthma [90]. These data suggest that Th1 cells play a role in the pathogenesis of severe non-Th2-asthma.
Th1 cell-derived IFN-γ is associated with airway hyperresponsiveness and pathologic changes in the lungs. Raundhal et al. found that IFN-γ reduces the expression of secretory leukocyte peptidase inhibitor, which neutralizes proteases in epithelial cells, thus aggravating airway hyperresponsiveness [89]. IFN-γ transgenic mice expressing high levels of IFN-γ developed emphysematous lungs, which is frequently observed in asthma-chronic obstructive pulmonary disease (COPD) overlap [91]. In the future, it will be necessary to develop new asthma treatments targeting Th1 cells.
Beneficial and Harmful Bacteria in the Pathogenesis of Asthma
Many species of bacteria live in symbiosis with hosts and play an important role in the development of asthma [92]. Beneficial species of bacteria suppress asthma, whereas harmful bacteria induce asthma [93]. In this section, we summarize the roles of these two types of bacteria in the pathogenesis of asthma.
Beneficial Bacteria with Anti-Asthmatic Effects
Beneficial bacteria include symbiotic species of the genera Lactobacillus, Bifidobacterium, Lachnospira and Akkermansia. Fermented foods such as yogurt and kimchi contain numerous beneficial bacteria [94,95]. Recently, probiotic products incorporating these beneficial bacteria have been used to reduce the risk of asthma [16].
Members of the genus Lactobacillus are gram-positive anaerobic bacteria that play a protective role in the pathogenesis of asthma. Spacova et al. reported that intranasal administration of Lactobacillus rhamnosus alleviated pollen-induced eosinophilic inflammation in the lungs [96]. According to Li et al., butyrate, a short-chain fatty acid (SCFA) generated from the fermentation of fiber by L. reuteri, exhibits anti-inflammatory activity in patients with asthma [97]. In a randomized, placebo-controlled study, the asthma patients group who received L. gasseri A5 daily for 2 months exhibited higher lung function scores (peak expiratory flow rate) and lower clinical symptom scores, indicating improvement in asthma compared with patients who received the placebo [98].
Members of the genus Bifidobacterium are Gram-positive anaerobic bacteria that exert immunomodulatory effects that suppress the development of asthma. In a study by Raftis et al., Bifidobacterium breve strain MRx0004 suppressed HDM-induced inflammation and the number of eosinophils and neutrophils [99]. Administration of Bifidobacterium upregulates IL-10-producing regulatory T cells (Tregs), a type of CD4 + T cell that suppresses hyperactivation of immune responses [100]. In a randomized controlled study of pediatric asthma patients, administration of a Bifidobacterium mixture resulted in improvement in clinical symptoms and quality of life compared with patients who received the placebo [101].
Members of the genus Lachnospira are gram-positive anaerobic bacteria that function as major producers of SCFAs such as acetate, propionate, and butyrate [102]. These SCFAs bind to G-protein-coupled receptor (GPR) 43 on the surface of naïve CD4 + T cells [103]. The SCFA-GPR43 complex, in turn, promotes acetylation of the Treg transcription factor Foxp3 by suppressing histone deacetylase (HDAC) in naïve CD4 + T cells [104]. Arrieta et al. found that fecal transplantation with a mixture of Lachnospira reduced OVA-induced neutrophilic inflammation to a greater degree than the control [105]. It is possible that increased levels of SCFAs produced by Lachnospira enhance Treg differentiation and suppress pathogenic immune cells.
Members of the genus Akkermansia are gram-negative anaerobic bacteria that inhibit the development of asthma by promoting differentiation of Treg [106]. In a study by Kuczma et al., Akkermansia-derived antigenic peptide-induced anergy of T cells and increased the peripheral Treg population [107]. Michalovich et al. showed that oral administration of A. muciniphila reduced OVA-induced eosinophilic inflammation [106]. In a cross-sectional case-controlled study, A. muciniphila was decreased in the stool of asthma patients compared to healthy controls [108]. In addition, the fecal concentration of A. muciniphila was negatively correlated with asthma severity [106]. These results suggest that Akkermansia plays a protective role in the development of asthma.
Other bacteria also reportedly exert beneficial effects in inhibiting the induction of asthma, including members of the genera Veillonella, Faecalibacterium and Rothia [93].
Harmful Bacteria with Pro-Asthmatic Effects
Bacteria that exert harmful effects with respect to asthma include pathogens of the genera Clostridium, Staphylococcus and Pseudomonas. Under certain conditions, these harmful bacteria reportedly exacerbate enterocolitis and pneumonia [109,110]. Additionally, colonization by harmful bacteria reportedly increases the risk of asthma development [16].
Members of the genus Clostridium are gram-positive anaerobic bacteria that reportedly aggravate asthma. Nimwegen et al. reported that colonization by Clostridium difficile within 1 month after birth is associated with an increased risk of developing childhood asthma [111]. In a pediatric cohort study, asthma patients exhibited higher numbers of C. neonatale [112]. Colonization by Clostridium species was positively correlated with fecal IgE levels in a childhood asthma study, indicating that the presence of Clostridium increases the risk of asthma [113]. Although the detailed mechanism of the role of Clostridium in asthma pathogenesis has not been elucidated, infections involving Clostridium could cause excessive inflammation and increase the pathologic immune cells, thereby worsening asthma.
Members of the genus Staphylococcus are gram-positive bacteria that induce eosinophilic asthma. Stentzel et al. demonstrated that serine protease-like proteins (Spls), extracellular proteases expressed by Staphylococcus aureus, exacerbate eosinophilic asthma [114]. Proteases such as Spls bind to the protease-activated receptor-2 on epithelial cells, which then secrete alarmins such as IL-33 and TSLP, which in turn activate ILC2 and induce a Th2 response [115]. According to the National Health and Nutrition Examination Survey (NHANES), nasal colonization by S. aureus is associated with increased severity of asthma symptoms [116].
Members of the genus Pseudomonas are gram-negative bacteria known as opportunistic pathogens that cause respiratory diseases such as asthma, COPD, and bronchiectasis. Pseudomonas aeruginosa is the second most common bacteria in sputum cultures of patients with severe asthma [117]. According to Tuli et al., planktonic exo-proteins isolated from P. aeruginosa damage the mucosal barrier, thereby exacerbating asthma and chronic rhinosinusitis [118]. Flagellin isolated from P. aeruginosa was shown to increase secretion of the potent neutrophil chemoattractants IL-6 and IL-8 in human epithelial cells [119]. In a human study conducted by Green et al., asthma patients in which P. aeruginosa was the dominant pathogenic bacteria exhibited more severe neutrophilic inflammation and steroid resistance than patients in which other species were dominant [120].
In addition, nasopharyngeal colonization by members of the genera Streptococcus, Moraxella, and Haemophilus within the first year of life is associated with an increased risk of childhood asthma [121].
Dysbiosis-Induced Asthma
Dysbiosis is a disruption of the immune system caused by a dysregulation of microbiota homeostasis [122]. Several factors can initiate dysbiosis, including the use of antibiotics in the prenatal or neonatal periods, cesarean section, consumption of a low-fiber diet by the mother, or formula feeding [123]. Dysbiosis reportedly aggravates asthma by decreasing the number of Tregs and increasing the numbers of pathologic Th2 and Th17 cells [108,124]. In this section, we discuss how alterations in CD4 + T cells during dysbiosis affect the pathogenesis of asthma (Figure 3).
Antibiotics
Several reports have indicated that antibiotic use can induce asthma [125][126][127]. The use of antibiotics before and after pregnancy reportedly increases the incidence of childhood asthma [128]. The functions of CD4 + T cells can be altered by antibiotic use, subsequently provoking the development of eosinophilic asthma. Murine studies demonstrated that antibiotic-induced dysbiosis exacerbates Th2-driven allergic inflammation by reducing numbers of Tregs in the colon [129] and lungs [129,130]. Hong et al. reported abnormal immune responses to undigested food in antibiotic-treated mice, resulting in increases in food antigen-driven IL-4-producing Tfhs and IgE production [131]. In a prospective cohort study, infants who received antibiotics between birth and 1 year of age had a 50% increased risk of childhood asthma [127].
Cesarean Section
Children delivered by cesarean section are reportedly at increased risk of asthma. According to Shao et al., delivery mode is the most influential factor in the formation of the neonatal gut microbiota [132]. Babies born via vaginal delivery obtain commensal bacteria from the mother's vagina, whereas babies born via cesarean section receive commensal bacteria from the mother's skin [133]. Kim et al. reported that infants born via cesarean section harbor fewer asthma-suppressing Bifidobacterium, Lactobacillus, and Lachnospira and more asthma-promoting Pseudomonas in the gut [134]. In a murine study conducted by Zachariassen et al., mice born via cesarean section had fewer Tregs and
Antibiotics
Several reports have indicated that antibiotic use can induce asthma [125][126][127]. The use of antibiotics before and after pregnancy reportedly increases the incidence of childhood asthma [128]. The functions of CD4 + T cells can be altered by antibiotic use, subsequently provoking the development of eosinophilic asthma. Murine studies demonstrated that antibiotic-induced dysbiosis exacerbates Th2-driven allergic inflammation by reducing numbers of Tregs in the colon [129] and lungs [129,130]. Hong et al. reported abnormal immune responses to undigested food in antibiotic-treated mice, resulting in increases in food antigen-driven IL-4-producing Tfhs and IgE production [131]. In a prospective cohort study, infants who received antibiotics between birth and 1 year of age had a 50% increased risk of childhood asthma [127].
Cesarean Section
Children delivered by cesarean section are reportedly at increased risk of asthma. According to Shao et al., delivery mode is the most influential factor in the formation of the neonatal gut microbiota [132]. Babies born via vaginal delivery obtain commensal bacteria from the mother's vagina, whereas babies born via cesarean section receive commensal bacteria from the mother's skin [133]. Kim et al. reported that infants born via cesarean section harbor fewer asthma-suppressing Bifidobacterium, Lactobacillus, and Lachnospira and more asthma-promoting Pseudomonas in the gut [134]. In a murine study conducted by Zachariassen et al., mice born via cesarean section had fewer Tregs and increased numbers of IL-4-producing invariant natural killer T cells in mesenteric lymph nodes [135]. As a result, cesarean section-induced dysbiosis increases the risk of childhood asthma by 3-fold [136].
Low-Fiber Diet
A high-fiber diet protects against asthma. Fiber, a component of plants, is a complex carbohydrate structure composed of β-glycoside-linked glucose monomers [137]. Plant fibers are degraded into SCFAs, including acetate, via fermentation by gut bacteria [138]. Thorburn et al. reported that pregnant mother mice fed a high-fiber diet exhibited increased acetate production, which in turn increased the number of Tregs via HDAC9 inhibition; this led to alleviation of HDM-induced eosinophilic inflammation [139]. Fetal mice provided increased acetate via the placenta exhibited asthma-resistant lung maturation [139]. On the other hand, a low-fiber diet increased Th2 differentiation which led to eosinophilic airway inflammation [140]. Trompette et al. showed that reduction of SCFA by low-fiber diet affected hematopoiesis and increased Th2 cell response [140]. Using data from the 2007-2012 NHANES, Saeed et al. showed that low fiber intake is associated with a higher incidence of asthma as compared with high fiber intake [141].
Formula Feeding
Breast milk contains a variety of components that suppress the development of asthma in children. Mosconi et al. reported that allergen-specific IgG contained in breast milk binds to the Fc receptor of intestinal epithelial cells of the fetus, resulting in allergen-specific Treg induction and reduction of Th2 response [142]. Other breast milk components, including IL-7, cortisol, and microRNAs, aid in thymus development [143]. Ultrasound analyses comparing the size of the thymus of breastfed infants with that of formula-fed infants, the thymus size was reduced by >50% in 4-month-old formula-fed infants [144]. A murine study conducted by Nakajima et al. showed that SCFAs contained in breast milk bind to GPR41 in the fetal thymus and enhance Treg differentiation in both the thymus and peripheral organs [145]. Analyses of PBMCs from formula-fed and breastfed babies showed that formula feeding leads to a reduction in the number of Tregs, resulting in increased levels of pro-inflammatory cytokines such as IFN-γ and IL-17 [146]. A cross-sectional study including 31,049 children reported that formula-fed children had a higher incidence of asthma than breastfed children [147].
Conclusions
Asthma is a heterogenous disease that can be largely classified as either eosinophilic asthma or neutrophilic asthma. CD4 + T cells play important roles in determining the asthma phenotype. Th2, Th9, and Tfh cells are involved in the development of eosinophilic asthma, whereas Th1 and Th17 cells are involved in the development of neutrophilic asthma. Proper classification of the asthma phenotype based on CD4 + T cells is essential to determine the optimal asthma treatment and accurately predict the prognosis.
Crosstalk between the microbiota and host immune system is another important factor in asthma development. Beneficial bacteria play a protective role in the pathogenesis of asthma, whereas harmful bacteria exacerbate asthma symptoms. Dysbiosis caused by an imbalance in the microbiota homeostasis alters the differentiation of CD4 + T cells, resulting in asthma aggravation. Dysbiosis can be corrected using various probiotic products that were developed to improve asthma symptoms [148,149]. However, these probiotics still play an adjuvant role in the treatment of asthma. To evaluate the microbiota as a potential therapeutic target in greater detail, a precise mechanistic study will be necessary to fully elucidate the effects of the microbiota and CD4 + T cells on the pathogenesis of asthma. | 2021-11-04T15:12:42.079Z | 2021-10-31T00:00:00.000 | {
"year": 2021,
"sha1": "178051f38eaf2930f03978e58387251762b8e185",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/21/11822/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9e7174e332137f94c12b8b2718c74897d8b3f9c",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238587233 | pes2o/s2orc | v3-fos-license | TTPAL promotes gastric tumorigenesis by directly targeting NNMT to activate PI3K/AKT signaling
Copy number alterations are crucial for gastric cancer (GC) development. In this study, Tocopherol alpha transfer protein-like (TTPAL) was identified to be highly amplified in our primary GC cohort (30/86). Multivariate analysis showed that high TTPAL expression was correlated with the poor prognosis of GC patients. Ectopic expression of TTPAL promoted GC cell proliferation, migration, and invasion in vitro and promoted murine xenograft tumor growth and lung metastasis in vivo. Conversely, silencing of TTPAL exerted significantly opposite effects in vitro. Moreover, RNA-sequencing and co-immunoprecipitation (Co-IP) followed by liquid chromatograph-mass spectrometry (LC-MS) identified that TTPAL exerted oncogenic functions via the interaction of Nicotinamide-N-methyl transferase (NNMT) and activated PI3K/AKT signaling pathway. Collectively, TTPAL plays a pivotal oncogenic role in gastric carcinogenesis through promoting PI3K/AKT pathway via cooperating with NNMT. TTPAL may serve as a prognostic biomarker of patients with GC.
Introduction
Gastric cancer (GC) represents an important global health problem. GC is the fifth commonest cancer worldwide and the third leading cause of cancer-related death [1]. GC is a complex and highly heterogeneous disease, largely because it arises from multiple interaction of genetic alterations, epigenetic changes, infection, and the tumor microenvironment. Pathways that contribute to gastric carcinogenesis and underline driver genes have been identified based on the molecular characteristics and are potential therapeutic targets [2]. The identification of the novel oncogene and its related signaling pathway will help to recognize novel therapeutic target. Copy number alterations are common somatic changes in cancer featured with gain or loss in copies of DNA sections [3]. Recurrent gain and amplification of the long arm of chromosome 20 (20q) has been observed in 70% of primary GC [4]. Using whole genome sequencing we identified that tocopherol alpha transfer protein-like (TTPAL), located at 20q13.12, was amplified in colorectal cancer. TTPAL promoted colorectal tumorigenesis by activating Wnt/β-catenin signaling [5]. By analysis the public The Cancer Genome Atlas (TCGA) database, we found that TTPAL was frequently amplified in GC and positively correlated with its upregulated copy number variation. However, whether the contribution of TTPAL in the progression of GC is still unclear. In this study, we revealed that TTPAL exerted oncogenic role in GC, elucidated the regulatory context of TTPAL by activating PI3K/AKT signaling pathway. TTPAL high expression was associated with poorer survival of GC patients. Moreover, TTPAL was a potential therapeutic target in GC.
Results
TTPAL was frequently amplified in GC primary tissues and associated with poor survival of GC patients TTPAL mRNA expression was significantly upregulated in GC as compared with their adjacent non-tumor tissues in our cohort (N = 96, p = 0.0164) and was confirmed in paired GC tumor tissues as compared to adjacent normal controls (N = 28) and in TCGA cohort in GC tumor tissues (N = 375) compared with normal controls (N = 28) (Fig. 1A, B). TTPAL mRNA expression was positively correlated with DNA copy number (R = 0.6735, p < 0.0001) (Fig. 1C). TTPAL mRNA expression was higher in TTPAL DNA copy number amplification group compared with no amplification group (N = 412, p < 0.0001) (Fig. 1D). In keeping with mRNA expression, TTPAL protein expression was also significantly higher in primary gastric tumors as compared to adjacent non-tumor tissues by immunohistochemical (IHC) staining (N = 86, p < 0.0001) (Fig. 1E).
We further evaluated the clinicopathological and prognostic significance of TTPAL expression in patients with GC. High TTPAL expression predicted a higher risk of cancer-related death by univariate Cox regression analysis (relative risk (RR) = 1.832, 95% CI: 1.075-3.122, p = 0.026) (Supplementary Table 1). Multivariate Cox regression analysis showed that TTPAL expression was an independent poor prognostic factor for GC patients (RR = 1.831, 95% CI: 1.056-3.174, p = 0.031) (Fig. 1F). Kaplan-Meier survival curves showed that GC patients with high TTPAL expression had significantly shorter survival than those with low TTPAL expression (p = 0.0072) (Fig. 1G). This finding was further validated in GC samples from Kaplan-Meier plotter (p < 0.0001) (Fig. 1H). Moreover, after stratified by tumor staging, TTPAL overexpression predicted poor prognosis in stages I-III GC patients, but not for stage IV GC patients (Fig. 1H). No correlation was found between TTPAL expression and other clinicopathological features such as age, gender, tumor differentiation, and tumor nodes metastasis (TNM) stage (Supplementary Table 2).
TTPAL promoted migration and invasion of GC cells
Ectopic expression of TTPAL promoted the cell migration and invasion by wound healing assay (p < 0.01) (Fig. 3A) and matrigel invasion assays in BGC823 (p < 0.0001) and MGC803 (p < 0.001) cells (Fig. 3B), respectively. Conversely, TTPAL knockdown in AGS and MKN74 cells exerted opposite effects on cell migration (p < 0.0001) (Fig. 3C) and invasion (p < 0.0001) (Fig. 3D). TTPAL promoted the epithelial-mesenchymal transition (EMT) through upregulation of mesenchymal markers (N-cadherin and Snail) and downregulation of epithelial markers (Ecadherin), as shown by western blot (Fig. 3E1). In contrast, knockdown of TTPAL showed the opposite effect on these EMT markers (Fig. 3E2). These findings demonstrated that Fig. 1 TTPAL was overexpressed in GC tissues and associated with poor survival of patients. A TTPAL mRNA expression was upregulated in GC compared to paired adjacent normal tissues as shown by RT-PCR (Our cohort I, from Beijing). B TTPAL mRNA expression was upregulated in GC compared to paired adjacent normal tissues as shown by qRT-PCR (Our cohorts II, from Shijiazhuang) and RNA-seq data from TCGA study also showed upregulation of TTPAL in GC as compared to adjacent normal tissues (paired and unpaired samples). C TTPAL copy number was positively correlated with its mRNA expression in TCGA cohort by the Pearson correlation coefficient analysis. D TTPAL mRNA expression was higher in TTPAL DNA copy number amplification group compared with no amplification group. E TTPAL protein expression was significantly higher in primary GC as compared to adjacent normal tissues as shown by IHC staining (Our cohort Ш, from Shanghai). F Multivariate Cox regression analysis showed that TTPAL expression was an independent poor prognostic factor for GC patients (relative risk (RR) = 1.831, 95% CI: 1.056-3.174, p = 0.031). G Kaplan-Meier survival analysis showed GC patients with high TTPAL expression had poorer survival than those with low TTPAL expression at protein level from Our cohort Ш. H Prognostic value of TTPAL expression was validated in GC patients from Kaplan-Meier plotter (http://kmplot.com/).
To test whether the oncogenic function of TTPAL was depended on PI3K/AKT activation, GC cells with or without ectopic expression of TTPAL were treated with PI3K/AKT inhibitors GDC-0941 [8]. GDC-0941 treatment inhibited AKT phosphorylation (Fig. 4E1) and abolished growth promoting effect induced by TTPAL expression, as evidenced by cell viability (Fig. 4E2) and colony formation assays (Fig. 4E3). Meanwhile, silence of AKT (Fig. 4E1) can also abolished growth promoting effect of TTPAL on cell viability (Fig. 4E2) and colony formation (Fig. 4E3), indicating that TTPAL promotes GC by activating PI3K/ AKT pathway.
NNMT directly interacted with TTPAL
To further determine the downstream target of TTPAL, we performed Co-IP followed by liquid chromatograph-mass spectrometry (LC-MS). TTPAL-binding candidates were then identified by comparing the anti-TTPAL-Flag IP products of TTPAL (Flag)-overexpressed cells with those of control cells, Top 5 identified candidate genes were UGDH, DNM2, DDX46, CUL4A, and NNMT ( Fig. 5A and Supplementary Table 3). Among them, NNMT was involved in PI3K/AKT signaling pathway and was the most interest target candidate of TTPAL [9,10].
To validate the interaction between TTPAL and NNMT, IP assays were performed in the BGC823 and MGC803 cells stably transfected with TTPAL (Flag-tagged) expression. TTPAL with Flag and NNMT could be co-precipitated by each other in both cells (Fig. 5B), indicating the directly interaction between TTPAL and NNMT. The localization of the two proteins was further confirmed by confocal microscopy TTPAL co-localized with NNMT both in the cytoplasm of GC cells (BGC823 and MGC803) and GC tissues (Fig. 5C1). Western blotting of membrane, cytoplasmic and nuclear fractions, further validated that TTPAL and NNMT was mainly localized in the cytoplasm of BGC823 and MGC803 cells (Fig. 5C2). NNMT protein expression was upregulated by ectopic expression of TTPAL in BGC823 and MGC803 cells (Fig. 5D1). However, NNMT mRNA level was not changed by overexpression of TTPAL in BGC823 and MGC803 cells ( Supplementary Fig. 3). We therefore assessed whether TTPAL regulated the stability of NNMT protein. We treated TTPAL or control vectortransfected cells with the protein synthesis inhibitor cycloheximide. As shown in Fig. 5D2, NNMT was more stable in the presence of TTPAL in BGC823 and MGC803 cells.
The oncogenic role of TTPAL is partially dependent on NNMT
To investigate the effect of NNMT on the TTPAL-mediated cell proliferation and metastasis, BGC823 and MGC803 cells stably transfected with TTPAL or control vectors were co-transfected with siRNA against NNMT (Fig. 5E). NNMT knockdown significantly abolished the promoting effect of TTPAL on cell viability (Fig. 5F), clonogenicity (Fig. 5G), and invasion ( Fig. 5H) abilities in both BGC823 and MGC803 cells. We further examined if TTPAL activated PI3K/AKT signaling pathway through mediating NNMT. As expected, NNMT knockdown inhibited AKT phosphorylation (Fig. 5I) and blunted the TTPAL activated PI3K/AKT signaling as evidenced by luciferase reporter assay in BGC823 and MGC803 cells (Fig. 5J). These results collectively suggested that the oncogenic role of TTPAL promoted tumorigenicity and metastasis by inducing NNMT and activating PI3K/AKT signaling in vivo To validate our in vitro findings, we subcutaneously injected MGC803 cells stably transfected with TTPAL expression vector or empty vector into the left and right dorsal flanks of nude mice, respectively. As shown in Fig. 6A, TTPAL markedly promoted the growth of tumor volume and increased tumor weight in subcutaneous xenograft models. The efficient ectopic expression of TTPAL in xenograft tumors was confirmed by immunohistochemistry (Fig. 6B). Ki-67 staining showed MGC803-TTPAL xenografts significantly promoted cell proliferation as compared to controls (Fig. 6B). Moreover, the expression of NNMT and p-AKT was dramatically increased in TTPALoverexpressed xenografts by IHC (Fig. 6B) and western blotting (Fig. 6C), validating the molecular mechanisms identified in vitro. We further evaluated the effect of TTPAL in GC metastasis. MGC803 cells stably transfected with TTPAL vector or empty vector were injected through tail vein of nude mice. After 4 weeks, the number of lung metastatic tumors which was confirmed histologically was significantly increased in TTPAL group as compared with control group (p < 0.01) (Fig. 6D), suggesting that TTPAL promotes metastasis in GC. The protein expression of Ki64, NNMT, and p-AKT was also increased in TTPALoverexpressed metastatic tumors (Fig. 6E) validating that TTPAL promoted gastric metastasis by inducing NNMT and activating PI3K/AKT signaling.
Knockdown of TTPAL synergized with 5-Fluorouracil and paclitaxel
With the observation that TTPAL functions as an oncogenic factor in GC, we examined if TTPAL knockdown could synergize the chemotherapeutic effects of 5-Fluorouracil and paclitaxel. As shown in Fig. 6F, knockdown of TTPAL significantly synergized 5-Fluorouracil and paclitaxel in suppressing GC cell proliferation, inferring that TTPAL might be a potential therapeutic target in GC patients.
Discussion
In this study, we demonstrated that TTPAL was significantly upregulated in human GC at mRNA level and protein level. TTPAL is located on chromosome 20q13.12, a common region of DNA copy number gain in GC and is associated with gastric carcinogenesis [4]. TTPAL mRNA overexpression was positively correlated with its DNA copy number gain, inferring that TTPAL gene amplification contributes to its upregulation in GC. We investigated the clinical implication of TTPAL expression in GC and found that TTPAL high expression was associated with poor survival of GC patients, especially TNM stages I-III GC patients. TTPAL was an independent predictor for poor survival of GC patients (p < 0.05). In this connection, we investigated the function of TTPAL in GC both in vitro and in vivo. Ectopic expression of TTPAL in GC cells (BGC823 and MGC803) promoted cell proliferation and colony formation; while knockdown of TTPAL in AGS and MKN74 cells had opposite effects. TTPAL facilitated the G1-S phase transition by upregulation of the protein expression of cyclin D1 and CDK4, which are the key regulators of the transition through the G1 phase of the cell cycle. The significantly promoted cell proliferation by TTPAL was also confirmed by increased S phase cells, upregulated proliferation markers of PCNA and Ki-67 index.
In addition to growth promotion effect, ectopic expression of TTPAL significantly promoted cell migration and invasion abilities. TTPAL positively regulated EMT through upregulation of mesenchymal markers (N-cadherin and Snail) and downregulation of epithelial markers (Ecadherin). In accordance with in vitro findings, TTPAL promoted tumor growth in mouse subcutaneous xenograft models and promoted lung metastasis in tail vein injection mouse models. Moreover, we revealed that knockdown of TTPAL synergized the chemotherapeutic effects of 5-Fluorouracil and paclitaxel in GC cells. Collectively, these results indicated that TTPAL exerts oncogenic properties in GC via promoting cell proliferation and increasing metastatic abilities.
We next examined the molecular mechanism of TTPAL acting as an oncogenic factor in GC. We identified PI3K/AKT as the major downstream signaling mechanism underlying the oncogenic effect of TTPAL in GC. Ectopic expression of TTPAL activated PI3K/AKT pathway, induced phosphorylation of AKT and GSK-3β, [11][12][13][14], invasion, metastasis [15][16][17], and drug resistance of human cancers [18,19]. Emerging evidence indicate that proteins regulate signaling pathways as part of multi-protein complexes. Here, we performed Co-IP of TTPAL followed by protein sequencing for the identification of the TTPAL interacting partner. NNMT was identified as a potential functional partner in PI3K/AKT signaling pathway [9,10]. The direct interaction between TTPAL and NNMT was confirmed by Co-IP assays. TTPAL was co-localized in the cytoplasm with NNMT by confocal immunofluorescence assay and western blot analysis. NNMT was found upregulated following overexpression of TTPAL. These results collectively suggested that NNMT was a direct downstream interacting partner of TTPAL. Moreover, NNMT knockdown abrogated TTPAL-mediated activation of PI3K/AKT signaling and GC cell growth. Hence, TTPAL-NNMT-PI3K/AKT axis is a novel signaling cascade that cooperatively promotes GC progression (Fig. 6G).
In conclusion, we demonstrated that TTPAL was a novel oncogenic factor, which promoting GC tumorigenesis and metastasis. The oncogenic function of TTPAL was mediated by direct interaction with NNMT to activate PI3K/ AKT signaling. High expression of TTPAL predicts poor prognosis for GC patients.
GC cell lines
Six GC cell lines (AGS, BGC823, MKN45, MKN74, MGC803, HCG27) were used in this study. AGS and MKN45 were obtained from American Type Culture Collection (ATCC Manassas, VA). BGC823, HCG27, and MGC803 were obtained from Cell Research Institute, Shanghai, China. MKN74 was obtained from Japanese Collection of Research Bioresources Cell Bank, Japan. These cell lines were obtained between 2014 and 2015 and cells authentication were confirmed by short tandem repeat profiling. Cells were cultured and maintained in Dulbecco's Modified Eagle's medium (Gibco BRL) supplemented with 10% heat inactivated fetal bovine serum (Gibco BRL) and 1% penicillin/streptomycin according to the ATCC protocols. Cells were maintained at a 37°C in a humidified incubator with 5% CO 2 . Routine Mycoplasma testing was performed by PCR. Cells were grown for no more than ten passages in total for any experiment.
RNA extraction, semi-quantitative RT-PCR, and realtime PCR analyses
Total RNA was extracted from cells and tissues using TRIzol TM Reagent (Thermo Fisher Scientific). cDNA was synthesized from 1 μg of total RNA using Transcriptor Reverse Transcriptase (Roche). Semi-quantitative PCR was performed by Ampli Taq Gold DNA polymerase (Applied Biosystems; Thermo Fisher Scientific). Quantitative realtime PCR was performed by SYBR Green PCR Master Mix (Applied Biosystems; Thermo Fisher Scientific) on 7500HT Fast Real-Time PCR System (Applied Biosystems; Thermo Fisher Scientific). The primers used were listed in Supplementary Table 4. Gene expression was normalized to β-actin and calculated using 2 −ΔΔCt method. Fig. 5 The oncogenic role of TTPAL was partially dependent on NNMT. A Co-immunoprecipitation (Co-IP) followed by liquid chromatography-mass spectrometry (LC-MS) identified NNMT to be a TTPAL-binding protein. B Co-IP followed by western blot analyses confirmed the binding between TTPAL and NNMT in BGC823 and MGC803 cells. C TTPAL and NNMT are mainly co-localized in cytoplasm as demonstrated by confocal immunofluorescence analysis and western blot of membrane, cytoplasmic, and nuclear fractions in BGC823 and MGC803 cells. D Ectopic expression of TTPAL increased the protein expression of NNMT in BGC823 and MGC803 cells by western blot. TTPAL increased the stability of NNMT in BGC823 and MGC803 cells. E Knockdown of NNMT in BGC823 and MGC803 cells with stable TTPAL overexpression was confirmed by RT-PCR and western blot.
TTPAL gene overexpression or knockdown
The full-length ORF of TTPAL was cloned into PLV-puro vectors (OriGene). Cell lines stably expressing TTPAL were obtained after selection with puromycin (Sigma) for at least 2 weeks. Lentivirus particles expressing TTPAL shRNA or control shRNA were produced by Gene Pharma (Shanghai, China) and then utilized to transduce cells. Table 4. Transfection efficiency was confirmed by RT-qPCR and Western blot.
Western blotting
Proteins were separated on 10-12% SDS-polyacrylamide gel electrophoresis and transferred onto PVDF membrane. After BSA blocking, the protein-loading membrane was incubated with the primary antibody and secondary antibody. The antibodies used in this study were listed in Supplementary Table 5.
In vivo subcutaneous xenograft and lung metastasis mouse models MGC803 cells (5 × 10 6 cells in 0.1 ml phosphate-buffered saline) stably transfected with TTPAL expression vector or empty vector were injected subcutaneously into the right and left dorsal flanks of 4 to 6-week-old male Balb/c nude mice (n = 5 per group). Tumor volumes were measured every 2 days using a caliper and calculated using the formula, W 2 × L/2 (L = the longest diameters and W = the shortest diameters of the tumor). The mice were sacrificed after 2 weeks and the tumor size and tumor weight were measured. The excised tissues were either fixed in 10% neutral-buffered formalin which used for histological examination or snap frozen for molecular analyses. For lung metastasis model, MGC803 cells stably transfected with TTPAL expression vector or empty vector (5 × 10 6 cells in 0.1 ml PBS) were injected intravenously via the tail vein (n = 5). After 4 weeks, mice were sacrificed and their lungs were harvested. The lungs were sectioned and stained with HE and IHC. The number of lung metastases were counted. All experimental procedures were approved by the Animal Ethics Committee of the Chinese University of Hong Kong.
PCR array
Human PI3K/AKT signaling pathway (Qiagen) PCR array were performed according to manufacturer's instructions. Data analysis was performed using the RT 2 Profiler PCR Array Data Analysis Version 3.5 software (http://pcrdataana lysis.sabiosciences.com).
Co-immunoprecipitation (Co-IP) and liquid chromatography-mass spectrometry
Co-immunoprecipitation (Co-IP) assays were carried out as previously described [5]. Briefly, total protein from MGC803 cells stably transfected with TTPAL (Flag-tagged) expression vector or empty vector was extracted in radioimmunoprecipitation assay (RIPA) buffer supplemented with proteinase inhibitor (Novagen, Darmstadt, Germany). Immunoprecipitation was performed using anti-Flag M2 antibody (A2220, Sigma Aldrich, St. Louis, MO). The immune complexes were precipitated by Pure Pro-teome™ Protein A/G Mix magnetic beads (LSKMA-GAG02, Millipore, Burlington, MA) overnight at 4°C. Beads with extracted proteins were washed three times by 50 mm ammonium bicarbonate buffer and subjected to digestion by trypsin at 37°C for 2 h (Promega, Madison, WI). Tryptic peptides were then extracted for LC-MS analysis.
Co-immunoprecipitation of TTPAL and NNMT in GC cells
The total protein of BGC823 and MGC803 cells stably transfected with TTPAL (Flag-tagged) expression vector was extracted in RIPA buffer. Lysate (100 μg protein) and Co-IP precipitant by anti-Flag-tag, anti-NNMT antibody, or Fig. 6 TTPAL promotes tumorigenicity and metastasis by regulating NNMT and PI3K/AKT signaling in vivo. A MGC803 cells stably expressing TTPAL expression promoted subcutaneous tumor growth as compared to control vector, both in terms of tumor volume over the entire assay period and tumor weight at the end point. B IHC staining confirmed TTPAL overexpression in MGC803 subcutaneous xenografts, which enhanced cell proliferation (by Ki-67 staining). IHC staining results also showed that TTPAL increased NNMT and p-AKT expression in xenografts. C Western blot analysis further confirmed that TTPAL expression in MGC803 xenografts increased the expression of NNMT and phospho-AKT. D Ectopic expression of TTPAL promoted experimental metastasis of MGC803 cells in vivo. Representative images of lungs and H&E staining of lung tissues from nude mice injected with TTPAL or control vector-transfected MGC803 cells. Quantitative analysis showed that TTPAL expression significantly increased the number of metastatic lesions. E IHC staining results also showed that TTPAL enhanced cell proliferation (by Ki-67 staining) and increased NNMT and p-AKT expression in lung metastasis. F Knockdown of TTPAL significantly enhanced the inhibition of cell proliferation which mediated by 5-Fluorouracil (5 μmol/l) and paclitaxel (5 nmol/l) in AGS and MKN74 cells as indicated by MTT assay. G. Schematic illustration of the molecular mechanism of TTPAL in PI3K/AKT signaling of GC.
IgG were immunoblotted with either anti-NNMT or anti-TTPAL antibody to confirm the interaction of TTPAL and NNMT. The lysate (1% input, 10 μg protein) was also used as a control.
RNA-sequencing
The total RNA from MGC803 cells stably transfected with TTPAL expression vector or empty vector was isolated using TRIzol TM Reagent (Thermo Fisher Scientific). The total amount of 3 μg RNA per sample was used as input material for the RNA sample preparations. All samples had RIN values above 6.8. Sequencing libraries were generated using Illumina TruSeqTM RNA Sample Preparation Kit (Illumina, San Diego, CA). The libraries were sequenced on an Illumina HiSeq X-ten platform (Berry Genomics, Beijing, China).
Dual-luciferase reporter assay
Briefly, cells were seeded into a 24-well plate and cotransfected with FOXO reporter and Renilla (internal control) reporter. Two days after transfection, cells were harvested, and the Firefly and Renilla luminescence were measured by the dual-luciferase reporter assay system (Promega). Reporter activity was determined as the ratio of Firefly to Renilla luciferase activity.
Immunohistochemistry and immunofluorescence
Immunohistochemistry staining were conducted according to the procedure mentioned previously [21]. Briefly, anti-TTPAL, anti-NNMT, Anti-Phospho-AKT, and anti-Ki-67 were incubated overnight at 4°C. The positive percentage was scored as follows: 0, no positive staining; 1, in between 1 and 25% cells; 2, in between 26 and 50% cells; 3, in between 51 and 75% cells; 4, in more than 75% cells. The staining intensity was scored as follows: 0, negative; 1, weak; 2, moderate; and 3, high intensity [21]. The final staining score was calculated as staining intensity score × percentage of positive cells. The results were evaluated blindly by two independent observers. For immunofluorescence staining, secondary fluorescent antibodies were applied for 1 h at 37°C and sections counterstained with DAPI.
Statistical analysis
Statistical analyses were performed using GraphPad Prism software 7.0 (GraphPad Software, CA, USA) and SPSS software (Version 22.0, IL, USA). Paired t-test was used to compare mRNA and protein expression of TTPAL between tumor tissues and adjacent normal tissues. Independent samples t-test was utilized to analyze the difference between two groups. One-way analysis of variance was used to compare means of three or more experimental groups. Crude RRs of death associated with TTPAL expression were estimated by univariate Cox proportional hazards regression model first. Multivariate Cox model was then constructed to estimate the adjusted RR for TTPAL expression. Overall survival in relation to expression was evaluated by the Kaplan-Meier survival curve and the log rank test. We analyzed the TCGA stomach adenocarcinoma dataset using the UCSC Xena tool (https://xena.ucsc.edu) for gene expression and Kaplan-Meier plotter (http:// kmplot.com) for overall survival. Data were expressed as mean ± SD. p values < 0.05 were taken as statistical significance.
Author contributions WL involved in study design, conducted experiments, and drafted manuscript; HG and XW involved in study design and revised the paper; XL and XH performed experiments; XH helped bioinformatics analysis; XL, SL, and XW provided human samples; SL commented on the study; JY and SL designed, supervised the study, and revised the manuscript.
Compliance with ethical standards
Conflict of interest The authors declare no competing interests.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/. | 2021-10-12T13:43:23.087Z | 2021-10-12T00:00:00.000 | {
"year": 2021,
"sha1": "20856cd78b8acb38ba783f734e76866a71874da2",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41388-021-01838-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "20856cd78b8acb38ba783f734e76866a71874da2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235739531 | pes2o/s2orc | v3-fos-license | Multi-Dimensional Interpretations for Termination of Term Rewriting
. Interpretation methods constitute a foundation of termination analysis for term rewriting. From time to time remarkable instances of interpretation methods appeared, such as polynomial interpretations, matrix interpretations, arctic interpretations, and their variants. In this paper we introduce a general framework, the multi-dimensional interpretation method, that subsumes these variants as well as many previously unknown interpretation methods as instances. Employing the notion of derivers, we prove the soundness of the proposed method in an elegant way. We implement the proposed method in the termination prover NaTT and verify its significance through experiments
Introduction
Term rewriting [2] is a formalism for reasoning about function definitions or functional programs. For instance, a term rewrite system (TRS) R fact [7] consisting of the following rewrite rules defines the factorial function: fact(0) → s(0) fact(s(x)) → mul(s(x), fact(p(s(x)))) p(s(x)) → x assuming that s, p, and mul are interpreted respectively as the successor, predecessor, and multiplication functions.
Analyzing whether a TRS terminates, meaning that the corresponding functional program responds or the function is well defined, has been an active research area for decades. Consequently, several fully automatic termination provers have been developed, e.g., AProVE [10], T T T 2 [20], CiME [5], MU-TERM [23], and NaTT [34], and have been competing in the annual Termination Competitions (TermCOMP) [11].
Throughout their history, interpretation methods [25] have been foundational in termination analysis. They are categorized by the choice of well-founded carriers and the class of functions as which symbols are interpreted. Polynomial interpretations [22] use the natural numbers N as the carrier and interpretations are monotone polynomials, i.e., every variable has coefficient at least 1. Weakly monotone polynomials, i.e., zero coefficients, are allowed in the dependency pair method [1]. Negative constants are allowed using the max operator [15]. General combinations of polynomials and the max operator are proposed in both the standard [37] and the dependency pair settings [9]. Negative coefficients and thus non-monotone polynomials are also allowed, but in a more elaborated theoretical framework [15,9].
These methods share the common carrier N. In contrast, matrix interpretations [16,8] choose vectors over N as the carrier, and interpret symbols as affine maps over it. Although the carrier is generalized, matrix interpretations do not properly generalize polynomial interpretations, since not all polynomials are affine. This gap can be filled by improved matrix interpretations, that further generalize the carrier to square matrices [6], so that natural polynomial interpretations can be subsumed by matrix polynomials over 1 × 1 matrices. In arctic interpretations [19], the carrier consists of vectors over arctic naturals (N ∪ {−∞}) or integers (Z ∪ {−∞}), and interpretations are affine maps over it, where affinity is with respect to the max/plus semiring.
Having this many variations would be welcome if you are a user of a termination tool in which someone else has already implemented all of them. It would not be so if you are the developer of a termination tool in which you will have to implement all of them. Also, to ultimately trust termination tools, one needs to formalize proof methods using proof assistants and obtain trusted certifier that validates outputs of termination tools, see, e.g., IsaFoR/CeTA [31] or CoLoR/Rainbow [4] frameworks. Although some interpretation methods have already been formalized [28,30], adding missing variants one by one would cost a significant effort.
In this paper, we introduce a general framework for interpretation methods, which subsumes most of the above-mentioned methods as instances, namely, (max-)polynomial interpretations (with negative constants), (improved) matrix interpretations, and arctic interpretations, as well as a syntactic method called argument filtering [1,21]. Moreover, we obtain a bunch of previously unexplored interpretation methods as other instances.
After preliminaries, we start with a convenient fact about reduction pairs, a central tool in termination proving with dependency pairs (Section 3).
The first step to the main contribution is the use of derivers [24,33], which allow us to abstract away the mathematical details of polynomials or maxpolynomials. We will obtain a key soundness result that derivers derive monotone interpretations from monotone interpretations (Section 4).
The second step is to extend derivers to multi-dimensional ones. This setting further generalizes (improved) matrix interpretations, so that max-polynomials, negative constants, and negative entries are allowed (Section 5). It will also be hinted that multi-dimensional derivers can emulate the effect of negative coefficients, although theoretical comparison is left for future work. We also show that our approach subsumes arctic interpretations by adding a treatment for −∞ (Section 6). Although the original formulation by Koprowski and Waldmann [19] has some trickiness, we will show that our simpler formulation is sufficient.
As strict monotonicity is crucial for proving termination without dependency pairs, and is still useful with dependency pairs, we will see how to ensure strict monotonicity (Section 7). At this point, the convenient fact we have seen in Section 3 becomes crucial.
Finally, the proposed method is implemented in the termination prover NaTT, and experimental results are reported (Section 8). We evaluate various instances of our method, some corresponding to known interpretation methods and many others not. We choose two new instances to integrate to the NaTT strategy. The new strategy proved the termination of 20 more benchmarks than the old one, and five of them were not proved by any tool in TermCOMP 2020.
Preliminaries
We start with order-sorted algebras. Let S = S, be a partially ordered set, where elements in S are called sorts and is called the subsort relation. An S-sorted set is an S-indexed family A = {A σ } σ∈S such that σ τ implies A σ ⊆ A τ . We write A (σ1,...,σn) for the set A σ1 × · · · × A σn . A sorted map between S-sorted sets X and A is a mapping f , written f : An S-sorted signature is an S * × S-indexed family F = {F σ,τ } σ,τ ∈S * ×S of function symbols. 1 When f ∈ F (σ1,...,σn),τ , we say f has rank (σ 1 , . . . , σ n ) → τ and arity n in F. We may also view sorted sets and signatures as sets: having a : σ ∈ A means a ∈ A σ , and f : σ → τ ∈ F means f ∈ F σ,τ . For an S-sorted signature F, an F-algebra A, [·] consists of an S-sorted set A called the carrier and a family [·] of mappings called the interpretation such Example 2. We consider the following standard interpretation · : Notice that N, · is an N *+max -algebra and Z, · is a Z *+max -algebra. Here, the {Nat}-sorted set N is defined by N Nat := N and the {Nat, Neg, Int}-sorted set Z is defined by Z Nat := N, Z Neg := {0, −1, −2, . . . } and Z Int := Z.
Sorted Terms: Given an S-sorted signature F and an S-sorted set V of variables, the S-sorted set T (F, V) of terms is inductively defined as follows: . . , s n ) ∈ T (F, V) σ , and τ ρ.
An interpretation [·] is extended over terms as follows: given α : n ]α). The Falgebra T (F, V), · (which interprets f as the mapping that takes (s 1 , . . . , s n ) and returns f (s 1 , . . . , s n )) is called the term algebra, and a sorted map θ : V → T (F, V) is called a substitution. The term obtained by replacing every variable x by θ(x) in s is thus sθ.
Term Rewriting: This paper is concerned with termination analysis for plain term rewriting. In this setting, there is only one sort 1, and we may identify a {1}-sorted set A and the set A 1 . The set of variables appearing in a term s is denoted by Var(s). A context C is a term with a special variable occurring exactly once. We denote by C[s] the term obtained by substituting by s in C. A rewrite rule is a pair of terms l and r, written l → r, such that l / ∈ V and Var(l) ⊇ Var(r). A term rewrite system (TRS) is a set R of rewrite rules, which induces the root rewrite step − → R and the rewrite step − → R as the least relations such that lθ − → R rθ and C[lθ] − → R C[rθ], for any rule l → r ∈ R, substitution θ, and context C. A TRS R is terminating iff no infinite rewriting The dependency pair (DP) framework [1,14,13] is a de facto standard among automated termination provers for term rewriting. Here we briefly recapitulate its essence. The root symbol of a term s = f (s 1 , . . . , s n ) is f and is denoted by root(s). The set of defined symbols in R is D R := {root(l) | l → r ∈ R}. We assume a fresh marked symbol f for every f ∈ D R , and write s to denote the term f (s 1 , . . . , s n ) for s = f (s 1 , . . . , s n ). A dependency pair of a TRS R is a rule l → r such that root(r) ∈ D R and l → C[r] ∈ R for some context C. The set of all dependency pairs of R is denoted by DP(R). A DP problem P, R is just a pair of TRSs.
A number of techniques called DP processors that simplify or decompose DP problems are proposed; see [13] for a list of such processors. Among them, the central technique for concluding the finiteness of DP problems is the reduction pair processor, which will be reformulated in the next section.
Notes on Reduction Pairs
A reduction pair is a pair , of order-like relations over terms with some conditions. Here we introduce two formulations of reduction pairs, one demanding natural assumptions of orderings, and the other, reduction pair seed, demanding only essential requirements. The first formulation is useful when proving properties of reduction pairs, while the latter is useful when devising new reduction pairs. We will show that the two notions are essentially equivalent: one can always extend a reduction pair seed into a reduction pair of the former sense. Existing formulations of reduction pairs lie strictly in between the two.
Definition 1 (reduction pair). A (quasi-)order pair
, is a pair of a quasi-order and an irreflexive relation ⊆ satisfying compatibility: ; ; ⊆ . The order pair is well-founded if is well-founded. A reduction pair is a well-founded order pair , on terms, such that both and are closed under substitutions, and is closed under contexts. Here, a relation is closed under substitutions (resp. contexts) iff s t implies sθ tθ for every substitution θ (resp. C[s] C[t] for every context C).
The above formulation of reduction pairs is strictly subsumed by standard definitions (e.g., [1,14,13]), where is not necessarily a subset of , and compatibility is weakened to either ; ⊆ or ; ⊆ . Instead, is required to be transitive but this follows from our assumptions ⊆ and compatibility: ; ⊆ ; ⊆ . On one hand, this means that we can safely import existing results of reduction pairs into our formulation.
Theorem 2 (reduction pair processor [14,13]). Let P, R be a DP problem and , be a reduction pair such that P ∪R ⊆ . Then the DP problem P, R is finite if and only if P \ , R is. Example 3. Consider again the TRS R fact of the introduction. Proving that R fact terminates in the DP framework boils down to finding a reduction pair , satisfying (considering usable rules [1]): On the other hand, one may wonder whether Definition 1 might be too restrictive. We justify our formulation by uniformly extending general "reduction pairs" into reduction pairs that comply with Definition 1. This is possible for even more general pairs of relations than standard reduction pairs. Definition 2 (reduction pair seed). A well-founded order seed is a pair W, S of relations such that S is well-founded and S; W ⊆ S + . A reduction pair seed is a well-founded order seed on terms such that both W and S are closed under substitutions, and W is closed under contexts. Now we show that every reduction pair seed W, S can be extended to a reduction pair , such that W ⊆ and S ⊆ . Before that, the assumption S; W ⊆ S + of Definition 2 is generalized as follows.
Lemma 1. If W, S is a well-founded order seed, then S; W * ⊆ S + .
Proof. By induction on the number of W steps. Proof. It is trivial that is a quasi-order and ⊆ by definition. We show the well-foundedness of as follows: Suppose on the contrary we have an infinite sequence: Then using Lemma 1 (S; W * ⊆ S + ) we obtain a 1 W * b 1 S + b 2 S + · · · , which contradicts the well-foundedness of S. Now we show compatibility. By definition we have ; ⊆ , so it suffices to show ; ⊆ . By induction we reduce the claim to ; (W ∪ S) ⊆ , that is, both ; W ⊆ and ; S ⊆ . Using S; W ⊆ S + = S; S * we have The other case ; S ⊆ is easy from the definition.
Now we obtain the following corollary of Theorem 2 and Theorem 3. Corollary 1. Let P, R be a DP problem and W, S a reduction pair seed such that P ∪ R ⊆ W . Then P, R is finite if and only if P \ S, R is.
Notice that Definition 2 does not demand any order-like property, most notably transitivity. This is beneficial when developing new reduction pairs; for instance, higher-order recursive path orders [17] are known to be non-transitive, but form a reduction pair seed with their reflexive closure. Throughout the paper we use Definition 1, since it provides more useful and natural properties of orderings, which becomes crucial in Section 7.
Interpretation Methods as Derivers
Interpretation methods construct reduction pairs from F-algebras, where F is the {1}-sorted signature of an input TRS or DP problem, and the carrier is a mathematical structure where a well-founded ordering > is known. In the DP framework, weakly monotone F-algebras play an important role.
and an order pair ≥, > such that every [f ] is monotone with respect to ≥.
To ease presentation, from now on we assume that F is a {1}-sorted signature, while G is an S-sorted signature. It is easy nevertheless to generalize our results to an arbitrary order-sorted signature F.
Moreover, using the term algebra any reduction pair , on T (F, V) can be seen as a well-founded F-algebra T (F, V), ·, , .
Example 5. Continuing Example 4, ≥ , > forms a reduction pair for signature N *+max . Notice that it does not for Z +max ∪ N * , essentially because > is not well-founded in Z.
In order to prove the finiteness of a given DP problem, we need a weakly monotone F-algebra for the signature F indicated by this problem, rather than for a predefined signature like N *+max . We fill the gap by employing the notion of derivers [24,33] to derive an F-algebra from one of another signature G.
Definition 4 (deriver). An F/G-deriver is a pair of a sort δ ∈ S and a mapping d, such that d(f ) ∈ T (G, {x 1 : δ, . . . , x n : δ}) δ when f has arity n in F. Note that d(p) has sort Nat, thanks to the rank (Int, Nat) → Nat of max in Z max . The order pair d ≥ , d > satisfies the constraints given in Example 3.
Now we show that an F/G-deriver yields a weakly monotone F-algebra if the base G-algebra is known to be weakly monotone. Thus, Example 6 proves that R fact is terminating. The next result about monotonicity is folklore: Proof. The "if" direction is due to the reflexivity of ≥, and the "only if" direction is easy by induction on n and the transitivity of ≥.
Then monotonicity is carried over to the interpretation of terms, in the following sense. For two sorted maps α : X → A and β : X → A, we write α ≥ β to mean that α(x) ≥ β(x) for any x ∈ X σ and sort σ. Proof. Suppose that f has arity n in F, and for every i ∈ {1, . . . , n} that a i , b i ∈ A δ and a i ≥ b i . Then from Lemma 3, With Lemma 2 we conclude that every d[f ] is monotone with respect to ≥, and hence A δ , d[·], ≥, > is a weakly monotone F-algebra.
Thus we conclude the soundness of the deriver-based interpretation method: is a reduction pair.
Proof. Immediate consequence of Lemma 4 and Theorem 4.
It should be clear that Theorem 5 with G = Z +max ∪ N * subsumes the polynomial interpretation method with negative constants [15,Lemma 4]. Their trick is to turn integers into naturals by applying max(·, 0), as demonstrated in Example 6 in a syntactic manner. Theorem 5 gives a slightly more general fact that one can mix max and negative constants and still get a reduction pair. As far as the author knows, this fact has not been reported elsewhere, although natural max-polynomials without negative constants are known to yield reduction pairs [9, Section 4.1].
In addition, a syntactic technique known as argument filtering [1,21] is also a special case of Theorem 5. In the context of higher-order rewriting, Kop and van Raamsdonk generalized argument filters into argument functions [18, Definition 7.7], which, in the first-order case, correspond to derivers with G being a variant of F. In these applications, base signatures and algebras are not a priori known, but are subject to be synthesized and analyzed.
Multi-Dimensional Interpretations
The matrix interpretation method [8] uses a well-founded weakly monotone algebra N m , [·] Mat , ≥ ≥, over natural vectors, with an affine interpretation: [f ] Mat ( a 1 , . . . , a n ) = C 1 a 1 + · · · + C n a n + c where C 1 , . . . , C n ∈ N m×m and c ∈ N m , and the following ordering: Definition 5 ( [8,19]). Given an order pair ≥, > on A and a dimension m ∈ N, we define the order pair ≥ ≥, on A m as follows: Improved matrix interpretations [6] consider square matrices instead of vectors, and thus, in principle, matrix polynomials can be considered. Now we generalize these methods by extending derivers to multi-dimensional ones.
can be shown terminating by the following 2-dimensional matrix interpretation: The 2-dimensional {f, g}/N + -deriver (Nat, Nat), d defined by Now we prove a counterpart of Theorem 5 for multi-dimensional derivers. The following lemma is one of the main results of this paper, which is somewhat surprisingly easy to prove.
Lemma 5.
For an m-dimensional F/G-deriver δ, d and a weakly monotone is a weakly monotone F-algebra.
Proof. Let f have arity n in F and a 1 , . . . , a n , b 1 , . . . , b n ∈ A δ satisfy a i ≥ ≥ b i . Define α and β by α(x i,j ) := ( a i ) j and β(x i,j ) := b i j . By assumption we have α ≥ β, and with Lemma 3 we have . . . , b n ), and this concludes the proof due to Lemma 2.
It should be clear that every m-dimensional (improved) matrix interpretation can be expressed as an m-dimensional (or m 2 -dimensional) F/N *+ -deriver. There are two more important consequences of Theorem 6: First, we can interpret symbols as non-affine maps even including max-polynomials; and second, since > is not required to be well-founded in A ( δ)2 , . . . , A ( δ)m , examples that previously required non-monotone interpretations-and hence a stronger condition than Theorem 2-can be handled.
Example 8 (Excerpt of AProVE 08/log). Consider the TRS R / consisting of which defines (for simplicity, rounded up) natural division. Proving R / terminating using dependency pairs boils down to finding a reduction pair , such that (again considering usable rules) x -0 x s(x) -s(y) xy s(x) / s(y) (s(x) -s(y)) / s(y)
A polynomial interpretation [·]
Pol with negative coefficients such that satisfies the above constraints, but one must validate the requirements of [15,Theorem 11]. In our setting, an F/Z +max -deriver (Nat, Neg), d such that yields a reduction pair satisfying the above constraints.
The intuition here is that the two dimensional interpretation of s n (0) records n in the first coordinate and −n in the second. Hence, one does not have to reconstruct −n from n using the non-monotonic minus operation.
It seems plausible to the author that negative coefficients can be eliminated using the above idea; however, the increase of the dimension leads to more freedom in variables (the variable introduced to represent −n may take values other than that) and so the ordering over terms may be different. It is left for future work to investigate whether this idea always works or not.
Arctic Interpretations
An arctic interpretation [19] [·] A is a matrix interpretation on the arctic semiring; that is, every interpretation [f ] A ( x 1 , . . . , x n ) is of the form where ⊗ and ⊕ denote the matrix multiplication and matrix addition in which the scalar addition is replaced by the max operation, and the scalar multiplication by addition; and entries of C i and c are arctic naturals (N −∞ := N∪{−∞}) or arctic integers (Z −∞ := Z ∪ {−∞}). In addition, (1) must be absolute positive: forms a well-founded weakly monotone algebra.
The above formulation deviates from the original [19] in two ways. First, we do not introduce the special relation such that −∞ −∞. Koprowski and Waldmann demanded this to ensure closure under general substitutions, but such a comparison cannot occur as we only need to consider substitutions that respect the carrier N×Z m−1 −∞ . Second, for arctic natural interpretations they relax absolute positiveness to somewhere finiteness: ( c) 1 = −∞ or (C i ) 1,1 = −∞ for some i. However, the two assumptions turn out to be equivalent. and extend the standard interpretation · accordingly. We omit the easy proof of the following fact and the counterpart for arctic integer interpretations.
Notice that, in practice, this requires us to deal with −∞ by ourselves since there is no standard SMT theory [3] that supports arithmetic with −∞.
Strict Monotonicity
Before the invention of dependency pairs [1], strictly monotone algebras were necessary for proving termination by interpretation methods, and they constitute a sound and complete method for proving termination of TRSs.
Definition 7.
A strictly monotone F-algebra is a weakly monotone F-algebra A, [·], ≥, > such that A, [·] is monotone with respect to both ≥ and >.
Theorem 7 (cf. [36]). A TRS R is terminating if and only if there is a strictly monotone well-founded F-algebra A, Moreover, strict monotonicity is a desirable property in the DP framework as it allows one to remove not only dependency pairs but also rewrite rules.
We now state a criterion that ensures the strict monotonicity of multidimensional interpretation obtained via derivers. Below we write d i to mean the mapping defined by Proof. We only prove strict monotonicity as we already know weak monotonicity by Lemma 5. So suppose that f has arity n in F, a 1 , . . . , a i , . . . , a n , a i ∈ A δ and a i a i . For the first coordinate, define α by α(x k,j ) := ( a k ) j . Then, first using the assumption, and then Lemma 3, we conclude a 1 , . . . , a i , . . . , a n ) For the other coordinates, thanks to the "new" assumption > ⊆ ≥ in Definition 1 we have a i ≥ ≥ a i . Then the weak monotonicity ensures d[f ]( a 1 , . . . , a i , . . . a n ) ≥ ≥ d[f ]( a 1 , . . . , a i , . . . , a n ), from which we deduce for each j ∈ {2, . . . , m}, a 1 , . . . , a i , . . . , a n ) ≥ d j [f ]( a 1 , . . . , a i , . . . , a n ) Although the above result and proof do not look surprising, it would be worth noticing that the statement is false in the standard formulation allowing > ⊆ ≥ (as even in [8]).
Example 9. Consider the following apparently monotone matrix interpretation: So [f] would not be monotone with respect to .
Implementation and Experiments
Multi-dimensional interpretations are implemented in the termination prover NaTT version 2.0 2 , using a template-based approach.
Definition 8. An m-dimensional F/G-deriver template δ, d with S-sorted set W of template variables is defined as in Definition 6, but allowing d(f ) ∈ T (G, W ∪ X ) δ . Its instance according to a substitution θ : In the implementation, we fix G = Z +max ∪ N * and the base weakly monotone G-algebra Z, · , ≥, > . Given an m-dimensional deriver template δ, d with W, our interest is now to find θ : W → Z such that dθ[s] ≥ dθ[t] for every (s, t) ∈ P ∪ R for the DP problem P, R of concern, thanks to Theorem 6. NaTT reduces this problem into an SMT problem and passes it to a backend SMT solver. The page limit is not enough to detail the reduction; in short, the constraint dθ[s] ≥ ≥ dθ[t] is reduced into a Boolean formula over atoms of form a * v 1 , i 1 * · · · * v n , i n ≥ b * v 1 , i 1 * · · · * v n , i n , where a, b ∈ T (G, W), and v 1 , i 1 . . . , v n , i n ∈ (Var(s) ∪ Var(t)) × {1, . . . , m} are seen as variables. Internally NaTT uses a distribution approach [30], whose soundness crucially relies on the fact that the only rank of * is (Nat, Nat) → Nat in the signature G. Then each atom is further reduced to (1) a = b if ( δ) ij = Int for some j, (2) a ≥ b if {j | ( δ) ij = Neg} is even, and (3) a ≤ b otherwise. Due to the last step, having coordinates of sort Int leads to a stronger constraint when ordering terms. Finally, the resulting formula, containing only template variables, is passed to the SMT solver Z3 4.8.10 [26] and a satisfying solution θ : W → Z is a desired substitution.
To verify the practical significance of the method, we evaluated various templates in a simple dependency pair setting. For a function symbol f of arity n ≥ 2, the k-th coordinate of template d(f ) is chosen from , and a heuristic choice [35] between sum-sum and max-sum, where b and w introduce fresh template variables, b ranges over {0, 1} and the sort of w is up to further choice. The sort of the first coordinate is turned to Nat by applying max(·, 0) if necessary.
Experiments are run on the StarExec environment [29], with timeout of 300 seconds. The benchmarks are the 1507 TRSs from the TRS Standard category of the termination problem database 11 [32]. Due to the huge search space, we evaluate templates of dimensions up to 2. A part of the results are summarized in Table 1. Full details of the experiments are made available at http://www. trs.cm.is.nagoya-u.ac.jp/NaTT/multi/.
In the table, each coordinate is represented by the template and the sort of w. In terms of the number of successful termination proofs indicated in the "YES" column, the classical matrix interpretations (row #3) are impressively strong. Nevertheless, it is worth considering a negative coordinate (#4) as it gives 10 termination proofs that the previous version of NaTT could not find, indicated in the "New" column. In contrast, considering whole integers in the second coordinate (#5) does not look promising as the runtime grows significantly. Concerning "max", we observe that its use in the second coordinate (#6) degrades the performance. Using "max" in both coordinates a la arctic interpretations (#8, #9) gives a few new termination proofs, but the impact in the runtime is significant in the current implementation. The runtime improves by replacing some occurrences of "max" by "sum" (#10-12), while the power does not seem defected. In terms of the number of termination proofs, the heuristic choice of "sum-sum" and "max-sum" in the first coordinate (#13) performed the best among the evaluated templates.
From these experiments, we pick templates #4 and #13 to incorporate in the NaTT default strategy. The final results are summarized in Table 2. Although the runtime noticeably increases, adding both #4 and #13 gives 20 more examples solved, and five of them (AProVE 09 Inductive/log and four in Transformed CSR 04/) were not solved by any tool in the TermCOMP 2020.
Conclusion
In this paper we introduced a deriver-based multi-dimensional interpretation method. The author expects that the result makes the relationships between existing interpretation methods cleaner, and eases the task of developing and maintaining termination tools. Moreover, it yields many previously unknown interpretation methods as instances, proving the termination of some standard benchmarks that state-of-the-art termination provers could not.
Theoretical comparison with negative coefficients is left for future work, and the use of −∞ is not implemented yet. Also since this work broadens the search space, it is interesting to heuristically search for derivers rather than fixing some templates. Derivers of higher dimensions seem also interesting to explore. Finally, although the proposed method is implemented in the termination prover NaTT, there is no guarantee that the implementation is correct. In order to certify termination proofs that use multi-dimensional derivers, one must formalize the proofs in this paper, extend the certifiable proof format [27], and implement a verified function to validate such proofs. | 2021-07-05T22:30:59.226Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "5e1e22af2f616e448127c966cd823e5fd7b106dc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-79876-5_16.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5e1e22af2f616e448127c966cd823e5fd7b106dc",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
13320292 | pes2o/s2orc | v3-fos-license | Digital Image Stabilization
The ongoing development and miniaturization of consumer devices that have image acquisition capabilities increases the need for robust and efficient image stabilization solutions. The need is driven by two main factors: (i) the difficulty to avoid unwanted camera motion when using a small hand-held device (like a camera phone), and (ii) the need for longer integration times due to the small pixel area resulted from the miniaturization of the image sensors in conjunction with the increase in image resolution. The smaller the pixel area the less photons/second could be captured by the pixel such that a longer integration time is needed for good results.
Introduction
The problem of image stabilization dates since the beginning of photography, and it is basically caused by the fact that any known image sensor needs to have the image projected on it during a period of time called integration time. Any motion of the camera during this time causes a shift of the image projected on the sensor resulting in a degradation of the final image, called motion blur.
The ongoing development and miniaturization of consumer devices that have image acquisition capabilities increases the need for robust and efficient image stabilization solutions. The need is driven by two main factors: (i) the difficulty to avoid unwanted camera motion when using a small hand-held device (like a camera phone), and (ii) the need for longer integration times due to the small pixel area resulted from the miniaturization of the image sensors in conjunction with the increase in image resolution. The smaller the pixel area the less photons/second could be captured by the pixel such that a longer integration time is needed for good results.
It is of importance to emphasize that we make a distinction between the terms "digital image stabilization" and "digital video stabilization". The latter is referring to the process of eliminating the effects of unwanted camera motion from video data, see for instance Erturk & Dennis (2000); Tico & Vehviläinen (2005), whereas digital image stabilization is concerned with correcting the effects of unwanted motions that are taking place during the integration time of a single image or video frame.
The existent image stabilization solutions can be divided in two categories based on whether they are aiming to correct or to prevent the motion blur degradation. In the first category are those image stabilization solutions that are aiming for restoring a single image shot captured during the exposure time. This is actually the classical case of image capturing, when the acquired image may be corrupted by motion blur, caused by the motion that have taken place during the exposure time. If the point spread function (PSF) of the motion blur is known then the original image can be restored, up to some level of accuracy (determined by the lost spatial frequencies), by applying an image restoration approach Gonzalez & Woods (1992); Jansson (1997). However, the main difficulty is that in most practical situations the motion blur PSF is not known. Moreover, since the PSF depends of the arbitrary camera motion during the exposure time, its shape is different in any degraded image as exemplified in Fig. 1. Another difficulty comes from the fact that the blur degradation is not spatially invariant over the image area. Thus, moving objects in the scene may result in very different blur models in certain image areas. On the other hand, even less dynamic scenes may contain different blur models in different regions in accordance to the distance between the objects and the camera, i.e., during a camera translation close objects have larger relative motions than distant objects, phenomenon known as "parallax".
In order to cope with the insufficient knowledge about the blur PSF one could adopt a blind de-convolution approach, e.g., Chan & Wong (1998); You & Kaveh (1996). Most of these methods are computationally expensive and they have reliability problems even when dealing with spatially invariant blur. Until now, published research results have been mainly demonstrated on artificial simulations and rarely on real world images, such that their potential use in consumer products seems rather limited for the moment.
Measurements of the camera motion during the exposure time could help in estimating the motion blur PSF and eventually to restore the original image of the scene. Such an approach have been introduced by Ben- Ezra & Nayar (2004), where the authors proposed the use of an extra camera in order to acquire motion information during the exposure time of the principal camera. A different method, based on specially designed high-speed CMOS sensors has been proposed by Liu & Gamal (2003). The method exploits the possibility to independently control the exposure time of each image pixel in a CMOS sensor. Thus, in order to prevent motion blur the integration is stopped selectively in those pixels where motion is detected.
Another way to estimate the PSF has been proposed in Tico et al. (2006); Tico & Vehviläinen (2007a); Yuan et al. (2007), where a second image of the scene is taken with a short exposure. Although noisy, the secondary image is much less affected by motion blur and it can be used as a reference for estimating the motion blur PSF which degraded the principal image.
In order to cope with the unknown motion blur process, designers have adopted solutions able to prevent such blur for happening in the first place. In this category are included all optical image stabilization (OIS) solutions adopted nowadays by many camera manufactures. These solutions are utilizing inertial senors (gyroscopes) in order to measure the camera motion, following then to cancel the effect of this motion by moving either the image sensor Konika Minolta Inc. (2003), or some optical element Canon Inc. (2006 in the opposite direction. The miniaturization of OIS systems did not reach yet the level required for implementation in a small device like a camera phone. In addition, most current OIS solutions cannot cope well with longer exposure times. In part this is because the inertial motion sensors, used to measure the camera motion, are less sensitive to low frequency motions than to medium and high frequency vibrations. Also, as the exposure time increases the mechanism may drift due to accumulated errors, producing motion blurred images (Fig. 2).
An image acquisition solution that can prevent motion blur consists of dividing long exposure times in shorter intervals, following to capture multiple short exposed image frames of www.intechopen.com the same scene. Due to their short exposure, the individual frames are corrupted by sensor noises (e.g., photon-shot noise, readout noise) Nakamura (2006) but, on the other hand, they are less affected by motion blur. Consequently, a long exposed and motion blur free picture can be synthesized by registering and fusing the available short exposed image frames (see Tico (2008a;; Tico & Vehviläinen (2007b)). Using this technique the effect of camera motion is transformed from a motion blur degradation into a misalignment between several image frames. The advantage is that the correction of the misalignment between multiple frames is more robust and computationally less intensive than the correction of a motion blur degraded image.
In this chapter we present the design of such a multi-frame image stabilization solution, addressing the image registration and fusion operations. A global registration approach, described in Section 2, assists the identification of corresponding pixels between images. However the global registration cannot solve for motion within the scene as well as for parallax. Consequently one can expect local misalignments even after the registration step. These will be solved in the fusion process described in Section 3.
Image registration
Image registration is essential for ensuring an accurate information fusion between the available images. The existent approaches to image registration could be classified in two categories: feature based, and image based methods, Zitova & Flusser (2003). The feature based methods rely on determining the correct correspondences between different types of visual features extracted from the images. In some applications, the feature based methods are the most effective ones, as long as the images are always containing specific salient features (e.g., minutiae in fingerprint images Tico & Kuosmanen (2003)). On the other hand when the number of detectable feature points is small, or the features are not reliable due to various image degradations, a more robust alternative is to adopt an image based registration approach, that utilizes directly the intensity information in the image pixels, without searching for specific visual features.
In general a parametric model for the two-dimensional mapping function that overlaps an "input" image over a "reference" image is assumed. Let us denote such mapping function by t stands for the coordinates of an image pixel, and p denotes the parameter vector of the transformation. Denoting the "input" and "reference" images by h and g respectively, the objective of an image based registration approach is to estimate the parameter vector p that minimizes a cost function (e.g., the sum of square differences) between the transformed input image h(t(x; p)) and the reference image g(x).
The minimization of the cost function, can be achieved in various ways. A trivial approach would be to adopt an exhaustive search among all feasible solutions by calculating the cost function at all possible values of the parameter vector. Although this method ensures the discovery of the global optimum, it is usually avoided due to its tremendous complexity.
To improve the efficiency several alternatives to the exhaustive search technique have been developed by reducing the searching space at the risk of losing the global optimum, e.g., logarithmic search, three-step search, etc, (see Wang et al. (2002)). Another category of image based registration approaches, starting with the work of Lucas & Kanade (1981), and known also as gradient-based approaches, assumes that an approximation to image derivatives can be consistently estimated, such that the minimization of the cost function can be achieved by applying a gradient-descent technique (see also Baker & Matthews (2004); Thevenaz & Unser (1998)). An important efficiency improvement, for Lucas-Kanade algorithm, has been proposed in Baker & Matthews (2004), under the name of "Inverse Compositional Algorithm" (ICA). The improvement results from the fact that the Hessian matrix of the cost function, needed in the optimization process, is not calculated in each iteration, but only once in a precomputation phase. In this work we propose an additional improvement to gradient-based methods, that consists of simplifying the repetitive image warping and interpolation operations that are required during the iterative minimization of the cost function. Our presentation starts by introducing an image descriptor in Section 2.1, that is less illumination dependent than the intensity component. Next, we present our registration algorithm in Section 2.2, that is based on matching the proposed image descriptors of the two images instead their intensity components.
Preprocessing
Most of the registration methods proposed in the literature are based on matching the intensity components of the given images. However, there are also situations when the intensity components do not match. The most common such cases are those in which the two images have been captured under different illumination conditions, or with different exposures. In order to cope with such cases we propose a simple preprocessing step aiming to extract an illumination invariant descriptor from the intensity component of each image. Denoting by H(x) the intensity value in the pixel x, and with avg(H) the average of all intensity values in the image, we first calculateH(x)=H(x)/avg(H), in order to gain more independence from the global scene illumination. Next, based on the gradient ofH we calculate H g (x)= |H x (x)| + |H y (x)| in each pixel, and med(H g ) as the median value of H g (x) over the entire image. www.intechopen.com
&KIKVCN +OCIG 5VCDKNK\CVKQP
Finally, the actual descriptor that we are using in the registration operation is given by the following binary image
Registration algorithm
In the following we describe an image based registration method that is using a multiresolution coarse to fine strategy. Typically in such algorithm, at each iteration step one of the images should be warped in accordance to the parameters estimated so far. In our method this warping operation is highly simplified on the expense of increase memory usage.
The levels of the multi-resolution representation are over-sampled, and they are obtained by iteratively smoothing the original image descriptor h, such that to obtain smoother and smoother versions of it. Leth ℓ denotes the smoothed image resulted after ℓ-th low-pass filtering iterations (h 0 = h). The smoothed image at next iteration can be calculated by applying one-dimensional filtering along the image rows and columns as follows: where w k are the taps of a low-pass filter.
The registration approach takes advantage of the fact that each decomposition level (h ℓ )i s over-sampled, and hence it can be reconstructed by a subset of its pixels. This property allows to enhance the efficiency of the registration process by using only a subset of the pixels in the registration algorithm. The advantage offered by the availability of over-sampled decomposition level, is that the set of pixels that can be used in the registration is not unique. A broad range of geometrical transformations can be approximated by simply choosing a different set of pixels to describe the sub-sampled image level. In this way, the over-sampled image level is regarded as a "reservoir of pixels" for different warped sub-sampled versions of the image, which are needed at different stages in the registration algorithm.
Let x n,k =[ x n,k y n,k ] t , for n, k integers, denote the coordinates of the selected pixels into the smoothed image (h ℓ ). A low-resolution version of the image (ĥ ℓ ) can be obtained by collecting the values of the selected pixels:ĥ ℓ (n, k)=h ℓ (x n,k ). Moreover, given an invertible geometrical transformation function t(x; p), the warping version of the low resolution image can be obtained more efficiently by simply selecting another set of pixels from the area of the smoothed image, rather than warping and interpolating the low-resolution imageĥ ℓ . This is: ) .
The process described above is illustrated in Fig.3, where the images shown on the bottom row represent two low-resolutions warped versions of the original image (shown in the topleft corner). The two low-resolution images are obtained by sampling different pixels from the smoothed image (top-right corner) without interpolation.
The registration method used in our approach is presented in Algorithm 1. The algorithm follows a coarse to fine strategy, starting from a coarse resolution level and improving the parameter estimate with each finer level, as details in the Algorithm 2. The proposed algorithm relies on matching image descriptors (1) derived from each image rather than image intensity components.
Algorithm 2 presents the registration parameter estimation at one resolution level. In this algorithm, the constant N 0 , specifies the number of iterations the algorithm is performing www.intechopen.com 4GEGPV #FXCPEGU KP 5KIPCN 2TQEGUUKPI Fig. 3. Low-resolution image warping by re-sampling an over-sampled image decomposition level.
Algorithm 1 Global image registration
Input: the input and reference images plus, if available, an initial guess of the parameter vector p =[p 1 p 2 ⋅⋅⋅ p K ] t . Output: the parameter vector that overlaps the input image over the reference image.
1-Calculate the descriptors (1) for input and reference images, denoted here by h and g, respectively.
3-For each level ℓ between ℓ max and ℓ min , do Algorithm 2.
after finding a minima of the error function. This is set in order to reduce the chance of ending in a local minima. As shown in the algorithm the number of iterations is reset to N 0 , every time a new minima of the error function is found. The algorithm stops only if no other minima is found in N 0 iterations. In our experiments a value N 0 = 10 has been used.
Algorithm 2 Image registration at one level
Input: the ℓ-th decomposition level of the input and reference images (h ℓ ,g ℓ ), plus the parameter vector p =[p 1 p 2 ⋅⋅⋅ p K ] t estimated at the previous coarser level. Output: a new estimate of the parameter vector p out that overlapsh ℓ overg ℓ . Initialization: set minimum matching error E min = ∞, number of iterations N iter = N 0 1-Set the initial position of the sampling points x n,k in the vertex of a rectangular lattice of period D = 2 ℓ , over the area of the reference imageg ℓ .
3-For each parameter p i of the warping function calculate the image
whereĝ x ,ĝ y denote a discrete approximation of the gradient components of the reference image.
4-Calculate the first order approximation of the K × K Hessian matrix, whose element (i, j) is given by: 5-Calculate a K × K updating matrix U, as explain in the text.
7-Determine the overlapping area betweenĥ andĝ, as the set of pixel indices Ψ such that any pixel position (n, k) ∈ Ψ is located inside the two images.
12-Update the parameter vector p = p + Uq
The parameter update (i.e., line 12 in Algorithm 2) makes use of an updating matrix U calculated in step 5 of the algorithm. This matrix depends of the functional form of the geometrical transformation assumed between the two images, t(x; p). For instance, in case of affine trans-formation t(x; p)= the parameter update matrix is whereas in case of a projective transformation we have U = diag (D, D,1,1,1,1,1/D,1/D) H −1 .
In our implementation of multi-resolution image decomposition (2), we used a symmetric filter w of size 3, whose taps are respectively w −1 = 1/4, w 0 = 1/2, and w 1 = 1/4. Also, in order to reduce the storage space the first level of image decomposition (i.e.,h 1 ), is subsampled by 2, such that any higher decomposition level is half the size of the original image.
Fusion of multiple images
The pixel brightness delivered by an imaging system is related to the exposure time through a non-linear mapping called "radiometric response function", or "camera response function" (CRF). There are a variety of techniques (e.g., Debevec & Malik (1997); Mitsunaga & Nayar (1999)) that can be used for CRF estimation. In our work we assume that the CRF function of the imaging system is known, and based on that we can write down the following relation for the pixel brightness value: where x =[ xy ] T denotes the spatial position of an image pixel, I(x) is the brightness value delivered by the system, g(x) denotes the irradiance level caused by the light incident on the pixel x of the imaging sensor, and Δt stands for the exposure time of the image.
Let I k , for k ∈{ 1,...,K} denote the K observed image frames whose exposure times are denoted by Δt k . A first step in our algorithm is to convert each image to the linear (irradiance) domain based on knowledge about the CRF function, i.e., We assume the following model for the K observed irradiance images: where where x =[xy] T denotes the spatial position of an image pixel, g k is the k-th observed image frame, n k denotes a zero mean additive noise, and f k denotes the latent image of the scene at the moment the k-th input frame was captured. We emphasize the fact that the scene may change between the moments when different input frames are captured. Such changes could be caused by unwanted motion of the camera and/or by the motion of different objects in the scene. Consequently, the algorithm can provide a number of K different estimates of the latent scene image each of them corresponding to a different reference moment.
In order to preserve the consistency of the scene, we select one of the input images as reference, following to aim for improving the selected image based on the visual data available in all captured images. In the following, we denote by g r ,( r ∈{ 1, . . . , K}) the reference image observation, and hence the objective of the algorithm is to recover an estimate of the latent scene image at moment r, i.e., f = f r . The restoration process is carried out based on a spatiotemporal block processing. Assuming a division of g r in non-overlapping blocks of size B × B pixels, the restored version of each block is obtained as a weighted average of all blocks located in a specific search range, inside all observed images.
Let X B x denote the sub-set of spatial locations included into a block of B × B pixels centered in the pixel x, i.e.: where the inequalities are componentwise, and Ω stands for the image support. Also, let g(X B x ) denote the B 2 × 1 column vector comprising the values of all pixels from an image g that are located inside the block X B x . The restored image is calculated block by block as followŝ where , y), is a normalization value, X S x denotes the spatial search range of size S × S centered in x, and w k (x, y) is a scalar weight value assigned to an input block X B y from image g k . The weight values are emphasizing the input blocks that are more similar with the reference block. Note that, at the limit, by considering only the most similar such block from each input image we obtain the block corresponding to the optical flow between the reference image and that input image, as in Tico & Vehviläinen (2007b). In such a case the weighted average (11) comprises only a small number of contributing blocks for each reference block. If more computational power is available, we can chose the weight values such that to use more blocks for the restoration of each reference block, like for instance in the solution presented in Tico (2008a), where the restoration of each reference block is carried out by considering all visually similar blocks found either inside the reference image or inside any other input image. Although the use of block processing is more efficient for large images, it might create artifacts in detailed image areas. In order to cope with this aspect, the solution presented in Tico (2008a), proposes a mechanism for adapting the block size to the local image content, by using smaller blocks in detail areas and larger blocks in smooth areas of the image. We conclude this section by summarizing the operations of a multi-frame image stabilization solutions in Algorithm 3.
Algorithm 3 Stabilization algorithm
Input: multiple input images of the scene. Output: one stabilized image of the scene.
1-Select a reference image either in a manual or an automatic manner. Manual selection can be based on preferred scene content at the moment the image frame was captured, whereas automatic selection could be trivial (i.e., selecting the first frame), or image quality based (i.e., selecting the higher quality frame based on a quality criteria). In our work we select the reference image frame as the one that is the least affected by blur. To do this we employ a sharpness measure, that consists of the average energy of the image in the middle frequency band, calculated in the FFT domain.
2-Convert the input images to a linear color space by compensating for camera response function non-linearity.
3-Register the input images with respect to the reference image.
4-Estimate the additive noise variance in each input image. Instead using a global variance value for the entire image, in our experiments we employed a linear model for the noise variance with respect to the intensity level in order to emulate the Poisson process of photon counting in each sensor pixel. 5-Restore each block of the reference image in accordance to (11).
6-Convert the resulted irradiance estimatef (x), of the final image, back to the image domain,Î(x)=CRF(f (x)Δt), based on the desired exposure time Δt. Alternatively, in order to avoid saturation and hence to extend the dynamic range of the captured image, one can employ a tone mapping procedure (e.g., Jiang & Guoping (2004)) for converting the levels of the irradiance image estimate into the limited dynamic range of the display system. www.intechopen.com
Examples
A visual example of the presented method is shown in Fig. 4. In total a number of four short exposed image frames (like the one shown in Fig. 4(a)) have been captured. During the time the individual images have been captured the scene was changed due to moving objects, as reveal by Fig. 4 (b). Applying the proposed algorithm we can recover a high quality image at any moment by choosing the reference frame properly, as exemplified by Fig. 4 (c) and (d).
The improvement in image quality achieved by combining multiple images is demonstrated by the fragment in Fig. 5 that shows significant reduction in image noise between one input image Fig. 5(a) and the result Fig. 5(b).
Two examples using images captured with a mobile phone camera are shown in Fig. 6 and Fig. 7. In both cases the algorithm was applied onto the Bayer RAW image data before image www.intechopen.com pipeline operations. A simple linear model for the noise variance with respect to the intensity level was assumed in order to emulate the Poisson process of photon counting in each sensor pixel Nakamura (2006), for each color channel. Fig. 6(a), shows an image obtained without stabilization using the mobile device set on automatic exposure. Due to unwanted camera motion the resulted image is rather blurry. For comparison, Fig. 6(b), shows the image obtained with our proposed stabilization algorithm by fusing several short exposed images of the same scene. An example when the proposed algorithm is applied onto a single input image is shown in Fig. 7. In this case the algorithm acts as a noise filtering method delivering the image Fig. 7(b), by reducing the noise present in the input image Fig. 7(a).
Conclusions and future work
In this chapter we presented a software solution to image stabilization based on fusing the visual information between multiple frames of the same scene. The main components of the algorithm, global image registration and image fusion have been presented in detail along with several visual examples. An efficient coarse to fine image based registration solution is obtained by preserving an over-sampled version of each pyramid level in order to simplify the warping operation in each iteration step. Next the image fusion step matches the visual similar image blocks between the available frames coping thereby with the presence of moving objects in the scene or with the inability of the global registration model to describe the camera motion. The advantages of such a software solution against the popular hardware opto-mechanical image stabilization systems include: (i) the ability to prevent blur caused by moving objects in a dynamic scene, (ii) the ability to deal with longer exposure times and stabilized not only high frequency vibrations but also low frequency camera motion during the integration time, and (iii) the reduced cost and size required for implementation in small mobile devices. The main disadvantage is the need to capture multiple images of the scene. However, nowadays most camera devices provide a "burst" mode that ensures fast capturing of multiple images. Future work would have to address several other applications that can take advantage of the camera "burst" mode by fusing multiple images captured with similar of different exposure and focus parameters. Fax: +86-21-62489821
Recent Advances in Signal Processing
The signal processing task is a very critical issue in the majority of new technological inventions and challenges in a variety of applications in both science and engineering fields. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian.
They have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity. | 2015-12-31T08:38:17.981Z | 2009-11-01T00:00:00.000 | {
"year": 2009,
"sha1": "e1ecc8b4212e421650e7a31db61ee2fa7a30e814",
"oa_license": "CCBYNCSA",
"oa_url": "https://cdn.intechopen.com/pdfs/9242.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "eb02a8c798d3e3462ee4e2cb764c9df9ed8d6fb5",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
55895879 | pes2o/s2orc | v3-fos-license | Nutrient Evaluation of Different Buckwheat Genetic Resources
Common buckwheat (Fagopyrum esculentum Moench) and tartary buckwheat (Fagopyrum tataricum Gaerth.) are underutilized pseudo-cereals and both considered nutritional food. Eight common and eleven tartary buckwheat accessions acquired from Slovenian plant gene bank were grown at the experimental fields of the Agricultural Institute of Slovenia in 2014. Dried grains were homogenised and analysed for several nutrient parameters: moisture content (11–14% dry weight, DW), total proteins (11–16 % DW), dietary fibre (15–19 % DW), ash (2–6 % DW) and total fats (1.8–2.6 % DW). The fatty acids (C14:0, C16:0, C18:0, C18:1n9, C18:2n6, C18:3n3, C20:0) were determined using gas chromatography, free amino acids (Gly, Glu, Arg, Lys, Asp, Ser, Phe, Ala, Val, Thr, Pro, Ile, Met, His, Cys, Leu, Tyr) by the highperformance liquid chromatography and multi-mineral analysis (K, P, Si, S, Ca, Fe, Cl, Ti, Zn) using X-ray fluorescence spectroscopy. The results show significant differences between two buckwheat species, and their gene bank accessions for investigated nutritional parameters.
Introduction
Buckwheat has played an important role in diets around the world over the last 8000 years, mainly in Eastern Europe and Asia (Rana et al. 2016).The genus Fagopyrum (family Polygonaceae) includes several different species, among which common buckwheat (Fagopyrum esculentum Möench) and tartary buckwheat (Fagopyrum tartaricum L. Gaerth) are cultivated and used for food worldwide.
Common and tartary buckwheat are short-season crop species, requiring only moderate soil fertility and 10 to 12 weeks to mature.Both species are considered important functional food crops, containing several important nutritional constituents in many countries around the world.Consumption of food from common and tartary buckwheat, as part of an everyday diet, has increased over the past few years due to the number of health-beneficial properties (Bonafaccia et al., 2003).It is well established that both buckwheat types represent a rich source of high quality proteins, with a balanced amino acid composition, dietary fibre, retrograded starch, high quality lipids, vitamins, essential minerals and antioxidants, including phenolic compounds (Pongrac et al., 2010).Additionally, both buckwheat species are gluten-free, and thus provide an important alternative nutritious food for people with celiac disease (Giménez-Bastida et al., 2015).
The aim of the present study was to determine the composition of several nutrients (total proteins, dietary fibre, ash and total fats), fatty acids composition and multi-mineral content of common and tartary buckwheat from Slovenian plant gene bank collection.
Material and Methods
Eight common (Fagopyrum esculentum Moench) and eleven tartary buckwheat (Fagopyrum tartaricum L. Gaerth) accessions provided from Slovenian plant gene bank were grown as a main crop in the experimental field of the Infrastructure Centre Jablje, Agricultural Institute of Slovenia, Slovenia (304 m above sea level; 46.151°N 14.562°E).The mature grains were harvested in September 2014.The dried grains, containing on average 12.8 % and 11.5 % of moisture for common and tartary buckwheat respectively, were milled using a laboratory mill (Retsch ZM 200) and further homogenised using ball mill (Retsch MM 400).
Moisture content was determined by heating the samples to 103°C for 4 hours (EC 152/2009 App.III A).
Total proteins were analysed using method ISO 5983:2, using factor 6,25; modified method ISO 6865 using FiberCap was used for the determination of dietary fibre, for ash ISO 5984 was used, and total fats were analysed with petroleum ether extraction (152/2009 App.III H).Fatty acid composition was determined using gas chromatography of fatty acid methyl esters (FAMEs).NaOH and BF3 in methanol were used for transesterification and heptadecanoic acid as internal standard for quantification of fatty acids.Identification of fatty acids was carried out using a reference standard mixture of methyl esters of higher fatty acids (Lipid standard Sigma 189-19).The multielement analysis was performed non-destructively using energy dispersive Xray fluorescence (EDXRF) spectroscopy.Pellets made from 0.5 g to 1.0 g of powdered sample material were analysed using an EDXRF spectrometer composed of a Si (Li) detector, a spectroscopy amplifier, an analog to digital converter and a PC-based multichannel analyser (Canberra).The analysis of complex X-ray spectra was performed using the AXIL (Nečemer et al., 2008) spectral analysis program.Quantification was performed using the in-house developed QAES (Quantitative Analysis of Environmental Samples) software (Nečemer et al., 2011).The estimated uncertainty of the analysis was 5 % to 10 %.The content of free amino acids was determined according to ISO 13903 (ISO 13903, 2005) adapted for plant materials.Amino acids were determined in oxidized samples and hydrolyzated with 6M HCl in the presence of phenol.The dry residue was dissolved in dilute HCl and derivatized with N-aminoquinolyl succinate.High-performance liquid chromatography (HPLC) coupled with fluorescence detector (FLD) have been used for the analyses.
Results and Discussion
The average content of total proteins, dietary fibre, ash and total fats in grains of common and tartary buckwheat species is presented in Fig. 1.All results are calculated as % of dry weight (DW).The average total protein content was 14.1 % DW for common and 12.2 % DW for tartary buckwheat grains and the average dietary fibre content 16.6 % DW for common and 18.1 % DW for tartary buckwheat grains.Common buckwheat grains contained more proteins (+1.9%) and less dietary fibre (-1.5%) compared to tartary buckwheat.Ash content was on average 1 % higher for tartary buckwheat grains.Grains of tartary buckwheat contained on average 0.2 % more total fats compared to common buckwheat.Previous reports on chemical composition of buckwheat grains showed similar protein content, and somewhat higher dietary fibre and fat content (Bonafaccia et al., 2003;Eggum et al., 1980).The fatty acid analysis with gas chromatography revealed the presence of the following seven fatty acids in buckwheat species: saturated myristic (C14:0), palmitic (C16:0), stearic (C18:0) and arachidic (C20:0); and unsaturated oleic (C18:1n9), linoleic (C18:2n6) and α-linolenic (C18:3n3).Fatty acids content and total amount of all fatty acids in common and tartary buckwheat grains is reported in Tab. 1. Fatty acids content is expressed as the mass ratio of all of the fatty acids analysed and total fatty acid content as mg/100 g fresh weight (FW).
Prevailing fatty acid in both buckwheat species was linoleic acid (40.7 %), followed by the oleic (35.6 %), palmitic (16.1 %), α-linolenic (3.2 %), arachidic (2.3 %), stearic (1.9 %) and myristic acid (0.3 %).The total fatty acid content varied considerably, from 200 to 316 mg/100 g FW.The data showed differences between two buckwheat species and representing gene bank accessions for fatty acid profiles and total fatty acids content (Tab.1).Bonafaccia et al. (2003) found comparable results on fatty acids distribution to ours on one common and one tartary buckwheat cultivar.Gulpinar et al. (2012) reported lower contents of palmitic and linoleic acid in their study on common buckwheat variety.
Mineral concentrations of common and tartary buckwheat grains are expressed as mg/kg DW and presented in Tab. 2. Nine different minerals were monitored in this study and can be divided into two groups: the macro-minerals (>1 g/kg DW) of K, P, Si, S, and Ca, and the micro-minerals (>1 mg/kg DW) of Fe, Cl, Ti, and Zn.The highest levels among these minerals were measured for K (4560-6570 mg/kg DW), P (3410-4850 mg/kg DW), Si (675-10400 mg/kg DW), which varied the most among all minerals, and S (753-1620 mg/kg DW).The less abundant minerals were Ca (an average content 744 mg/kg DW), Fe (an average content 301 mg/kg DW), Cl (an average content 111 mg/kg DW), Ti (an average content 48 mg/kg DW) and Zn (an average content 20 mg/kg DW).
Common buckwheat grains contained more S, Ca and Cl, and less K, P, Si, Fe and Ti compared to tartary buckwheat.The content of Zn was similar for both buckwheat species.Mota et al. (2016) reported much lower content of minerals Fe (29 mg/kg DW) and Ca (180 mg/kg DW) in common buckwheat compared to our results.
These amino acids can be divided into two groups: the essential amino acids of Ile, Leu, Val, Phe, His, Lys, Thr and Met, and non-essential amino acids of Ala, Gly, Pro, Tyr, Asp, Glu, Arg, Ser and Cys.The highest content in common buckwheat grains was shown for Glu (> 14 % of total proteins), followed by Arg (> 8 % of total proteins) and Gly (> 7 % of total proteins).In tartary buckwheat the most abundant was Glu (> 10 % of total proteins), followed by Arg (> 8 % of total proteins) and Ser (> 7 % of total proteins).Bonafaccia et al. (2003) reported similar amino acid profiles to ours on common and tartary buckwheat bran and flour.
Conclusion
The focus of this paper was a quantitative determination of several nutrients in grains of different accessions of common and tartary buckwheat, which are typically consumed in Slovenia.There is still little information available on nutritive composition of different Fagopyrum spp.and their genetic resources.
Obtained data on the content of different nutritional parameters for analysed buckwheat species can be the basis for proposition that buckwheat should be introduced in our daily diet, in order to overcome various health problems.These data can also represent the basis for breeding cultivars with a higher nutritional quality. | 2018-12-11T14:20:41.178Z | 2017-02-03T00:00:00.000 | {
"year": 2017,
"sha1": "36a991dc9c82f8dc9c04efea506df07df1d1c1f4",
"oa_license": "CCBYNC",
"oa_url": "http://doisrpska.nub.rs/index.php/agroznanje/article/download/2969/2826",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "36a991dc9c82f8dc9c04efea506df07df1d1c1f4",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54831227 | pes2o/s2orc | v3-fos-license | Hurst's Rescaled Range Statistical Analysis for Pseudorandom Number Generators used in Physical Simulations
The rescaled range statistical analysis (R/S) is proposed as a new method to detect correlations in pseudorandom number generators used in Monte Carlo simulations. In an extensive test it is demonstrated that the RS analysis provides a very sensitive method to reveal hidden long run and short run correlations. Several widely used and also some recently proposed pseudorandom number generators are subjected to this test. In many generators correlations are detected and quantified.
I. INTRODUCTION
p r e p r i n t Random numbers are the essential ingredient of all stochastic simulations. A great many algorithms in Monte-Carlo (MC) simulations and other non-physical computational fields rely crucially on the statistical properties of the random numbers used. High precision calculations on nowadays computer hardware typically involve the generation of billions of random numbers.
Today the most convenient and most reliable method of obtaining random numbers in practice is the use of a deterministic algorithm. Such a numerical method produces a sequence of pseudorandom numbers (PRN) which mimic the statistical properties of true random numbers as good as possible. Usually the pseudorandom number generator (PRNG) is assumed to generate a sequence of independent and identically distributed continuous U(0, 1) random number, that means uniformly distributed over the interval (0, 1). Other distributions can be obtained by transformation methods [1]. Since the state space of the generator is finite the sequence of PRNs will be eventually periodic. Therefore the expected properties of "true" random variables can only be approximated.
True random numbers can only be produced by physical devices that generate events which are principally unpredictable in advance, such as noise diodes or gamma ray counters. But such devices are inconvenient to use and Marsaglia reported that several commercial products fail standard statistical tests spectacularly [2,3]. An alternative could be the archiving of random numbers of high quality on a CDROM [2], although such a source is by far not as convenient to handle as a simple function call.
While theoretical test methods [4,5], such as the analysis of the lattice structure [6] of linear congruential generators, are certainly the starting point for constructing a good PRNG there is also a strong need for so-called empirical tests. These view the PRNG under consideration as a black box and statistically analyze sequences of numbers for various types of correlations, regardless of the generation method. There is a large battery of stan-dard tests [3][4][5]7,8,2] which every candidate to be used in "serious" simulations has to pass. PRNGs that have succeeded in all of these tests seemed to work reliable in apparently all physical simulations until the last few years. But the rapid development of computer hardware and improved simulation algorithms have caused the demands on the quality of the random number sequences to greatly increase. As a consequence erroneous results have been found in recent high precision MC calculations. The errors could be related to the use of popular PRNGs in combination with some specialized algorithms [9][10][11][12][13] which revealed hitherto undetected correlations in the pseudorandom sequences.
Thus there is a strong need to enlarge the tool box of empirical tests to gain confidence in newly proposed PRNGs [14][15][16][17] and to check whether traditionally used PRNGs are still reliable in modern applications. Any good statistical test should have an idiosyncracy for unwanted correlations and detect defects before they show up in an application. Newly developed and highly specialized algorithms may be sensitive to structural defects in PRNGs which are not evident in the standard tests. As different tests detect different types of defects it is desirable to develop application specific tests [18][19][20][21] that are especially sensitive to the features of the random numbers which are probed in simulations in current fields of research. But often this cannot be assessed in advance and the only way to reassure oneself of the correctness of a suspicious (or very important) result is to perform an in situ test and to repeat the simulation with some different PRNGs. Enlarging the set of test methods therefore can help to save precious time and to avoid painful recalculations.
In section II a new test method is proposed which is applied to a set of several popular generators described in section III. In section IV the results of the numerical experiments are discussed illustrating the capability of the new test. Following the conclusions, section V, additional results are tabulated in the appendices.
II. THE RS ANALYSIS
In the following I describe a new technique for judging the quality of PRNGs in at least several physically relevant situations. It will be demonstrated that the rescaled range statistical analysis (RS analysis) provides an extremely sensitive method for revealing hidden correlations in PRNGs.
As this method is based on general statistical properties expected for an independent Gaussian process it should also be useful as a general tool to test the suitability of a PRNG in a wide class of stochastic simulations. In the sequel it will be shown that it is especially effective for testing the presence of long run statistical dependence and in cases where such a correlation is present, for estimating its intensity. In addition it is shown that also short run cyclic components in a pseudorandom sequence are easily made evident using the RS statistic.
Hydrology is the oldest discipline in which noncyclic long run dependence has been reported. In particular the RS analysis has been invented by Hurst [22,23] when he was studying the Nile in order to describe the long term dependence of the water level in rivers and reservoirs. Later his method has gained much attention in the context of fractional Brownian motion [24].
The RS statistic for a series ξ t in the discrete integer valued time is conventionally defined as follows:
RS(s) = R(s)/S(s)
Viewing the ξ t as spatial increments in a onedimensional random walk then s t=1 ξ t is the distance of the walker from the starting point at time s. In the quantity X(t, s) the mean over the time lag s − 1 is subtracted to remove a trend if the expectation value of ξ t is not zero. In the sequel the difference between the final time s and the initial time 1 of the stochastic process will be termed the lag τ = s − 1. R(τ ) is the self-adjusted range of the cumulative sums and RS(τ ) is the self-rescaled self-adjusted range, which is the quantity of our interest. Feller [25] has proved that the asymptotic behavior for the expectation value of any independent random process with finite variance is given by The combination R(τ )/S(τ ) has a better sampling stability than R(τ ), in the sense that the relative deviation of RS, defined as ∆RS(τ ) = Var[RS(τ )] / E[RS(τ )], is smaller [26]. For an independent Gaussian process the limiting standard deviation is On the other hand Hurst had found empirically that many time series of natural phenomena are described by the scaling relation where H differs significantly from 1/2. In the context of fractional Brownian motion [24,26] a Hurst exponent of H = 1/2 corresponds to the vanishing of correlations between past and future spatial increments in the record.
For H > 1/2 one has persistent behavior, that means a positive increment for some time in the past will on the average lead to a positive increment in the future (if the increments are distributed symmetrically around zero). Correspondingly the case of H < 1/2 denotes antipersistent behavior. Thus almost all long run correlations in the stochastic process should show up in deviations from the asymptotic (3), (4).
Furthermore, Mandelbrot and Wallis have demonstrated that the value of the asymptotic prefactor π/2 is not robust with respect to short run statistical dependence [26]. This value can be arbitrarily modified by cyclic components in the random process. The superposition of a white noise (with zero mean and unit variance) and a purely periodic process, for instance, leads to an asymptotic value of τ π/2 (1 + A/2) −1/2 , with A being the amplitude of a sine wave. Moreover, the transient to the asymptotic is not smooth, but typically shows a series of oscillations, resembling the case of a purely oscillatory process [26].
Therefore the RS statistic is perfectly suited to analyze a stochastic process for correlations on all scales.
In the following section several types of PRNGs will be used to generate U (0, 1)-distributed random numbers ξ t . The sequence of ξ t will then be analyzed according to the RS statistic. It will be demonstrated that various PRNGs produce sequences of numbers that show deviations from the asymptotic behavior (3), (4). Moreover, it is found that for finite lags τ the value of RS(τ ) differs significantly between the tested PRNGs being indicative of short range correlations. This way it is possible to obtain a complete "fingerprint" of correlations of a PRNG and to measure their intensity as a function of the lag.
III. RANDOM GENERATORS
Because of the vast number of different PRNGs currently employed in simulations only a small fraction can be selected in this work.
The generators of the first group, labeled G1 to G7, are included as they are in general use -either because of traditions, because they are recommended in popular books, or because they can be found in many commercial software packages. Some of them have documented defects (G1,G2,G3,G5). These are considered here to study how their deviations show up in the RS statistics. The generators in the second group, G8 to G11, have been proposed recently to match also future requirements on period length and quality. But there is little documented experience about their behavior in physical simulations. As there are many good reviews and books on the various generation methods and the performance in the standard tests [3][4][5]7,8,[27][28][29] only a brief outline of the considered algorithms is given in the next section.
A. Generation Methods
Most of the commonly used PRNGs are based on the linear congruential method.
In general a multiple recursive generator of order k, denoted by MRG(a 1 , . . . , a k ; c; m), is based on the kth-order linear recurrence where the order k and the modulus m are positive integers and the coefficients are integers in the range {−(m − 1), . . . , m − 1}. The numbers x n of the sequence are then scaled to the interval (0, 1) by u n = x n /m. The special case, where k = 1, is the well-known linear congruential generator LCG(a; c; m) introduced by Lehmer [30], or in the homogeneous case, c = 0, the multiplicative linear congruential generator, denoted by MLCG(a; m). It can be shown that a recursion of order k with a non-zero constant c is equivalent to some homogeneous recurrence of order k + 1 [5,28]. All congruential generators show a pronounced lattice structure. That means, if n subsequent numbers are used to form vectors in the n-dimensional space all points that can be generated within the period lie on a family of equidistant parallel hyperplanes [6]. Tables with good choices for the constants can be found in recent reviews [3,28,31,32].
A lagged Fibonacci generator, LF(l 1 , . . . , l k ; m; •), with k lags is obtained for c = 0 and k coefficients a i being set to unit modulus, the others being set to zero, The binary operator • is usually either addition or subtraction.
The Linear feedback shift register or Tausworthe method, LFSR(p, q), generates a sequence of binary digits (bits) b n from the recurrence relation where the exclusive-or operation ⊕ is equivalent to a bitwise addition modulo two [8,33]. A sequence of pseudorandom numbers is then obtained by taking an appropriate number of consecutive bits to form an integer number.
Generalized feedback shift register generators [34], denoted by GFSR(l 1 , . . . , l k ; m), which can be considered as a generalization of the Tausworthe generator, are related to the lagged Fibonacci method, but use the exclusive-or operation instead of the arithmetic operators to combine computer words w w n = w n−l1 ⊕ · · · ⊕ w n−l k .
A generator of this type with two lags (103 and 250) has been made popular by Kirkpatrick and Stoll and is known as R250 [35,36] (see also [9]). A particular realization with four lags has been given by Ziff [37] (for test results see [18][19][20][21]). A recently proposed special variant with huge period is the twisted GFSR generator, TGFSR [17]. The multiply-with-carry generator, denoted by MWC(a 1 , . . . , a k ; c; m), is defined by the recurrence relation The div denotes an integer division. Here, in contrast to the MRG a carry (or borrow) c n is propagated to the next iteration step.
Special cases of the MWC are the the add-withcarry, AWC(l 1 , l 2 ; m), and the subtract-with-borrow, SWB(l 1 , l 2 ; m), generators, which are obtained by setting two coefficients a i to unit modulus and all others equal to zero [14,38]. This basically results in a LF generator with two lags, but with an extra addition of a carry In the case of an AWC the bracket indicates the value of the carry which is equal to 1 if the inequality is true, and equal to 0 otherwise. In the case of an SWB the addition operations accordingly have to be replaced by subtractions and the borrow is equal to 1 if the result of the subtractions becomes negative. These generators can produce much longer periods than the underlying LF generators, but have a bad lattice structure in dimension l + 1, (l being the larger of the lags) [3,5,39].
The subtraction method, SUB(c; m), is based on a simple arithmetic sequence This method is not suitable by itself, but it may be included in combination generators [7,40]. The multiplicative quadratic congruential method, MQC [4,8], the cryptographic BBS [41] and DES [42] generators, or the inversive congruential generator, ICG [43] are only mentioned for completeness, as these have received considerable theoretical attention recently. These new methods have promising features, but the generators are currently not in common use as there is little practical experience with them.
In general the PRNGs with several lags require an initial set of seeds x 1 , . . . , x k the number of which is determined by the largest lag k. While most generators do not require a special initialization procedure care has to be taken with the GFSR generators. Here an improper selection of the seeds can severely affect the quality of the sequence of PRNs [44]. Often a congruential generator is used to generate the initial state.
Tausworthe and LFSR generators which are based on the theory of primitive trinomials form unfavourable structures similar to the lattice structure of LCGs and have bad statistical properties [16,29]. Such simple generators should be avoided and combined generators should be used instead.
There is strong empirical support that the combination of different pseudorandom sequences in general leads to an improved statistical behavior [4,45]. The two wellknown methods are the shuffling of one sequence with another or with itself [4,8] or the combination by modular addition [28,32]. Hybrid generators based on the first method are still not well understood from the theoretical viewpoint [3,5]. The latter method is better understood and is suited to obtain very long periods. Adding two sequences modulo the modulus of either of them the period obtained is the least common multiple of the component periods. Generators based on such combination methods currently provide us with the "best" PRNs. Many different kinds of combined generators have been proposed, see Refs. [4,5,7,[14][15][16]28,32,40] and references given therein.
Another common method which can lead to an improvement of a generator is a decimation strategy, that means a number of PRNs is thrown away before the next random number is delivered. This approach is taken for instance in the RANLUX generator [46,47] which significantly improves the defective SWB generator RCARRY [7,38]. But neither shuffling nor the decimation method may be desirable if speed considerations are very important (see Appendix B for timing results).
In the following the generators subjected to the RS statistical analysis are described in brief.
B. Tested Generators
G1 is the well-known MLCG(7 5 ; 2 31 −1), which has been proposed as the "mimimal standard' against which all other generators should be judged [27,31,48]. It is also known as GGL [31], CONG [9], ran0 [42,49], SURAND (IBM computers), RNUM (IMSL library), or RAND (MATLAB software). It has the serious drawback of a short period, 2 31 − 1, and a pronounced lattice structure in low dimensions. Multiplier and modulus are not the optimal choice considering several figures of merit, see for instance [3]. This generator should only be considered as a toy for experimenting with new test methods like all other simple congruential and LFSR generators.
G2 is identical to G1, but additionally Bays-Durham shuffling in a table of size 32 is used to improve the low-order serial correlations. Here the implementation ran1 of Ref. [42,49] has been applied. It is included in this test to show the influence of shuffling on the RS statistic.
G3 is a LF(55, 24; 2 31 ; −) generator which has a period of 2 55 − 1. It has been devised by Mitchell and Moore in 1958 and is described by Knuth [4] (originally using an add operation). This generator (a version of which is implemented in [42] as ran3) is reported to have significant correlations on the bitlevel and to fail several physical tests [11,[18][19][20][21]. It is included to demonstrate the effect of short range correlations on the RS statistic.
G4 is a modification of the above generator G3. If a decimation strategy is used, that means, if only every k-th number of the sequence is used, the generator passes all of the physical tests in Ref. [18][19][20] (for k = 2 and k = 3). In this work only the case of k = 3 is considered.
G6 The combination generator RANMAR proposed by Marsaglia and Zaman [7,40] has a period of about 2 144 . It is based on the subtraction modulo 2 24 of a simple arithmetic sequence SUB(7654321; 2 24 − 3) and a subtractive Fibonacci generator LF(97, 33; 2 24 ; −) The initial state is generated by another combination of LCG(53; 1; 169) and a multiplicative threelag Fibonacci sequence. The implementation of James [7] tested here is in wide-spread use and has been recommended as a "universal generator". G7 combines the two congruential sequences MLCG(40014; 2 31 − 85) and MLCG(40692; 2 31 − 249) by modular addition and applies an additional shuffling in a table of 32 entries. The period is approximately 2 62 . This algorithm has been invented by L'Ecuyer [32] and implemented by James [7] (called RANECU). The additional shuffling has been added in the version ran2 of Press et al. [42,49]. Many recommendation for the improvement (for instance of the speed) of the later version have been given by Marsaglia and Zaman [14]. They reported that this generator passes all standard tests. Because of its popularity the implementation of Ref. [42,49] has been used in the following RS analysis.
G8 is the recently proposed PRNG mzran13 of Marsaglia and Zaman. It combines LCG(69069, 1013904243; 2 32 ) and SWB(2, 3; 2 32 − 18) by modular addition and has a period of about 2 125 [14]. Although the published program takes advantage of the inherent modulo 2 32 arithmetic of modern CPU's it can easily be made portable to CPUs with any larger word size by using bit-masks.
G10 This generator is the maximally equidistributed three-component Tausworthe generator taus88 developed by L'Ecuyer [16] with a period of approximately 2 88 .
G11
The twisted GFSR generator TT800 proposed by Matsumoto and Kurita [17] has a huge period of 2 800 − 1 and is reported to have excellent equidistribution properties up to a dimension of 25. This generator is recommended in [3]. The tested version includes Matsumoto's code change of 1996 which improves the lower bit correlations.
A. The Test Setup
A few additional words have to be said about the generation of the initial seeds for the PRNGs. As these are (possibly) the only truly random part when generating pseudorandom numbers some care should be taken.
The following method has been applied, as it corresponds to a common way random generators are used in practice: The initial seed is calculated from a combination of some obviously truly random events, such as the time and the date when the program is started, several system specific (unique) process identifiers, and the processor clock state. For this initial seed a sequence of 10 9 to 10 10 random numbers is generated and analyzed according to (1). Then for some new random seed another sequence is obtained and analyzed. This procedure has been iterated until the statistical error for the average of RS(τ ) was considered small enough. For each of the generators this amounted to 10 11 to 10 12 generated PRNs.
As this approach does not ensure that the generated substreams are disjoint it might look safer to split the period into disjoint parts. This could be done for almost all generators, but there are several cases known where these (typically) equidistantly spaced seeds introduce even worse correlations [5]. One should also bear in mind that for the long period generators there is only a very small probability that, for instance, ten or twenty sequences of 10 10 numbers selected by a random seed are not disjoint (of course the period of the "toy" generators is exhausted immediately).
In the case of generators requiring more than one seed one initial seed has been generated and mixed into the default seeds of the original source code. For instance, the 25 published seeds that define the state of the TGFSR generator G11 have all been exclusive-or-ed with a new random seed every time a new sequence has been generated.
All calculations necessary to evaluate the RS statistic have been performed in double precision using IEEE 754 standard floating point arithmetics.
The number of PRNs generated in the test of each generator is comparable to the number of random variates typically required in a nowadays high precision Monte-Carlo simulation. Such a number may seem large for a mere test, but it comprises the current state of the art in research fields like percolation, random walks, diffusion limited aggregation, and many others [9,11,13]. Considering the speed of the advances in computer technology much larger simulations will be in reach within the next few years posing increased demands on precision to the PRNGs. Correspondingly the stringency of the empirical tests has to increase too.
In the following section it will be shown that several current thought-to-be-reliable PRNGs show pronounced correlations in the RS statistic. This does not mean that a large scale simulation inevitably produces erroneous results with such a PRNG, but it just means that in some types of simulations deviations are not unlikely if high precision is required. Moreover, the main purpose of this paper is to demonstrate that the RS statistic is a candidate to enrich the toolbox of empirical tests for random number generators.
B. Analysis of the RS Data
In Fig. 1 the diagram of log RS(τ ) versus log τ is shown for all tested random generators. RS(τ ) has been calculated for all powers of two in the range from τ = 2 up to τ = 2 23 ≈ 8 × 10 6 as indicated by the dots. After To resolve differences between the PRGNs it is convenient to remove the asymptotic trend. In Fig. 2 the reduced function RS(τ )(πτ /2) −1/2 − 1 is displayed for a generator with known correlations, G1 (•), and the combination generator G9 (✷). On this scale of magnification it can be seen that the simple LCG spectacularly fails to approach the expected asymptotic. The relative deviation becomes as large as 1% corresponding to a reduced asymptotic prefactor (which appears to be approximately 1.243 instead of π/2 = 1.253). For comparison the data for the highly reliable composite MRG G9 are shown. In this case the asymptotic expectation value is approached smoothly. Due to the large statistical ensemble the error bars appear as single lines.
The distribution of the numerical RS values for all lags is well described by the slightly right-skewed asymptotic density as given by Feller [25]. The half width of the error bars for the estimate of the mean (in this and the following figures) is given by two standard deviations according to the asymptotic analytical result (4). This corresponds to a confidence level of about 95%. The numerical results for the mean together with the standard deviation of the mean are tabulated in Appendix A for all generators of this test.
As with several other test statistics where only the asymptotic distribution is available one is limited to compare the generators. Comparing the estimate of the mean for finite lags with the asymptotic expectation one could always enforce a rejection of a generator if the the number of samples is sufficiently increased. In the following a method is described which facilitates the comparison of
RS(τ ) for the different generators.
It can be safely assume that the asymptotic limit is approached smoothly with increasing τ . Therefore any apparent local and non-monotone structure in the transient will be indicative of correlations. Analyzing the functional form of the transient a simple and smooth interpolation can be found which gives an accurate approximation for all lags within a range of more than 6 orders of magnitude. The transient of RS(τ ) can be parametrized by +γe −δτ ε .
Using only two parameters α, β the first two terms suffice to approximate the transient with a relative precision of ≈ 10 −5 for all lags larger than τ = 32. The last term in (13) has been introduced to approximate the transient for lags as small as τ = 4. The coefficients have been obtained from a numerical adjustment using the mean values obtained from the stronger generators G8, G9, and G10 with τ in the range from 4 to 2 14 . In this range the individual results agree to a high precision. The values of the coefficients in (13) used in the following are α ≈ 1.0319941 γ ≈ 0.10516938 ε ≈ 0.61775533 β ≈ 0.42091184 δ ≈ 0.90187633 (14) The smooth interpolation R(τ ) of the transient now allows an unbiased comparison of the various PRNGs. As the expectation values for finite τ are not known the approximation (13), (14) is used instead. The generators can now be compared with the approximate transient. This approach has been found to be superior to comparing the generators individually. In particular the influence of statistical fluctuations of the mean are minimized compared to a pairwise comparison of the generators at a given value of τ . In the following it will become clear that the important point is not to have a precise approximation of the transient for truly random numbers. The detection of a deviation is insensitive to the exact form of the approximation: in all cases a defect showed up as a pronounced wiggle in RS(τ ) around the monotone transient. Therefore the subtraction of any monotone and slowly varying function would suffice to reveal a characteristic "fingerprint" of correlations in the PRNG. All systematic deviations of R(τ ) from zero are indicative of the presence of correlations and the amplitude at lag τ can be considered as a measure of the strength of correlations for the given lag. Hence the various PRNGs can be compared quantitatively.
C. Discussion of the Results
In Fig. 3 the semi-logarithmic plots of R(τ ) versus log τ for the toy generators G1 (•) and G2 ( * ) are shown for lags between 4 and 2 21 ≈ 2 × 10 6 (inset). Serious deviations are evident for lags larger than 10 3 . Magnifying on the vertical axis by a factor of 25 the plot of R(τ ) reveals deviations also at small lags (main figure). In generator G2 additional shuffling in a small table has been introduced to improve low order serial correlations of generator G1. For lags up to τ ≈ 128 the deviations are indeed strongly reduced. As expected there is no improvement for lags which are much larger than the size of the shuffling table. In Fig. 4 the results for the lagged Fibonacci generator G3 (△) are shown. This generator is known to fail several tests (see Ref. [18][19][20][21] and appendix C). It is reassuring to see that the RS statistic easily reveals the onset of disastrous correlations at τ corresponding to the larger lag of the generator (l = 55). The deviations show up as a crossover of R(τ ) (upper figure) to a "shifted asymptotic" reflecting a modified asymptotic prefactor. This gives evidence to the presence of some strong cyclic components in the pseudorandom process of G3. This is the only generator in this test showing also deviations of ∆RS(τ ) from the asymptotic value (Fig. 4 lower graph).
If a decimation strategy with k = 3 is applied, corresponding to generator G4 (✸), the correlations are strongly suppressed (Fig. 5).
The GFSR generator G5 (Fig. 6) uses larger lags than G3 shifting the onset of correlations to larger τ . The magnitude of the deviation is even twice as large as that of generator G3. These dramatic deviations are obviously indicators for the poor behaviour of G5 in some Monte-Carlo (MC) simulations [18].
Pseudorandom numbers of much better quality are expected from combination generators which can overcome the weakness of generators which are structurally too simple.
In Fig. 7 the performance of the popular combination generator G6 (+) can be estimated. When τ is somewhat larger than the lags of the LF component of the generator significant deviations in R are observed (similar to G3 and G5). These are presumably due to the deficient LF component of the composite generator. But compared to G5 the deviation is about 10 times smaller. For the time being there are no documented failures in physical simulations that use this generator [19]. But comparing Fig. 7 with Figs. 4,6 one can conclude that deviations in MC simulations are not implausible if higher precision is demanded.
PRNGs which are as fast, but which have better longrange properties are discussed in the following. In the next figure, Fig. 8, the results for the combined congruential generator G7 (×) are shown. Compared to the previous generators the amplitude of the deviations is drastically decreased. But for lags in the range τ = 2 5 to 2 9 a structure being indicative of correlations can be resolved (see inset of figure and Tab. III) on a high level of significance. Although G7 is doubtlessly one of the better generators within this test it should be immediately evident that it cannot come up to the expectations of Press and Teukolsky [42,49] to provide perfect random numbers (within the limits of its floating point precision). Thus their proposed "practical" definition of perfect should at least be put into perspective.
Random numbers of much better quality (at least in the RS statistics) are generated by the recently proposed composite generators G8 to G11. For all lags in the range 2 2 to 2 21 there are no significant differences. These four PRNGs are based on four different generation methods. Generator G8 applies a combination of generators with different algebraic structure while the two-component MRG G9 and the three-component Tausworthe generator G10 combine generators of the same class. Finally, G11 is a TGFSR generator which distinguishes itself by an extraordinary long period [51]. The fact that four generators of completely different algebraic structure and with theoretically favourable properties give consistent results reassures that the observed deviations of the other generators are indicators of real defects.
It should be noted that RS(τ ) necessarily has been sampled on a coarse grid on the logarithmic scale. Therefore it is possible that several types of correlations which would have shown up as a narrow structure have not been recognized. Nevertheless the observed deviations are intriguing.
V. CONCLUSIONS
The sensitivity for correlations on all scales and the robustness predestinates the RS statistic as a tool to catch up defects in pseudorandom number generators. A prac-tical method has been described which makes it easy to obtain a characteristic fingerprint of the correlations in a pseudorandom sequence. The deviations can be described quantitatively and the performance of generators for some given range of lags can be compared.
To illustrate the capability of the RS statistical analysis several popular generators have been subjected to an extensive test. The randomness of all tested PRNGs whith known defects could be refuted. Moreover deviations in several generators which are thought to be reliable have been quantified. Thus the RS analysis has to be considered more stringent than many of the previously suggested tests in the sense that more generators fail it.
The selection of a PRNG for a specific simulation depends on the required level of precision and on the range of the correlations which may have an impact on the quantity of interest -although this often cannot be assessed in advance. But no generator showing a performance inferior to another generator in several tests should be used any longer if it doesn't even distinguish itself at least by speed. Weak correlations in a current state-of-the-art generator (like some of this test) can lead to erroneous results in a tomorrow high-precision calculation. I. The numerical values of R(τ ) are tabulated in columns for the generators G1, G2, G3. The value of one standard deviation (σ) of the mean is given in parenthesis. If the deviation is larger than 2σ the value is framed and the deviation in units of σ is attached to the right.
APPENDIX A: NUMERICAL RESULTS
The numerical results for the mean of R(τ ), as depicted in previous figures, are reported in tables I to IV. The value of one standard deviation of the mean is given in parenthesis. Values which differ from zero by more than two standard deviations are framed and the deviation in units of standard deviations is printed behind the box.
APPENDIX B: TIMING RESULTS
In table V the typical execution times relative to the generator G1 are given. All generators have been configured to deliver one PRN per function call and no function code has been inlined. Although the figures may scatter between different architectures, compilers and optimization options they should be indicative for the relative performance on work station type computers. It should be mentioned that in the case of combined MLCGs and combined MRGs (G7,G9) a floating point implementation is often much faster than an integer implementation on many modern CPUs. These versions can compete with the fastest generators of table V. [50]
APPENDIX C: ADDITIONAL RESULTS
For comparison the performance of the generators G1-G11 in the recently proposed n-block test and the random walk test [18][19][20] has been calculated. For the group of PRNGs which have already been considered in Ref. [18][19][20] the results were reproduced. The figures for all generators tested newly are reported in Tab. VI. According to Ref. [18][19][20] the limit of acceptance in the χ 2 -test has been chosen χ 2 < 7.815 in the case of the random walk test and χ 2 < 3.841 for the n-block test. A generator is assumed to pass the test if in at least two of three independent runs the value of χ 2 is below the given limit.
The only PRNGs which shows significant deviations from the expected distributions are generators G3 and G5. If the decimation strategy is used then G3 also passes these tests (corresponds to G4).
These results have to be contrasted with the performance of the PRNGs in the RS statistical analysis which is much more stringent in the sense that more generators fail it.
From the presented figures it is obvious that the walk length (block size) in these tests is too small (by orders of magnitude) to catch the severe defects at lags that correspond to the large walk lengths in realistic simulations.
It is also evident that it is not sufficient to consider only a fixed lag as the amplitude of the deviations can vary strongly with the lag. Finally the RS statistic appears to be superior considering its sensitivity for correlations. Results for three runs of the random walk test (walk length N = 750 using 10 6 samples) and of the n-block test (block size N = 500 using 3 × 10 6 samples) [18][19][20]. The framed figures indicate a failure in this test.
ACKNOWLEDGMENTS
I would like to thank Pierre L'Ecuyer for many valuable discussions and a critical reading of the manuscript. Stimulating talks with Eckhard Pehlke and Ferdinand Evers are also acknowledged. | 2018-12-12T17:28:27.573Z | 1997-08-07T00:00:00.000 | {
"year": 1997,
"sha1": "ba3fac2ca53c3f5a09e7f0536822bc7f72161456",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/9708009",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ba3fac2ca53c3f5a09e7f0536822bc7f72161456",
"s2fieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
156600156 | pes2o/s2orc | v3-fos-license | Economic Effects of Tax Evasion on Jordanian Economy
This study aims to clarify the economic effects of tax evasion especially on the Jordanian economy. The researchers has depended on the literature to highlight the negative effects of tax evasion on the economy of states. And by analyzing some statistics published by the Economic and Social council of Jordan, the results showed the tax evasion in Jordan forms a real problem because it amounted a high rate of revenue. Therefore the government should treat the evaders by more rigor.
Introduction
The tax is considered one of the financial policy of the state tools, also it considered as one of the most important tributaries of the public treasury to local revenues.The states always seek to achieve many goals by applying the tax system.These goals can be either financial or social goals, or economic goals.The financial goal is one of the significant goals of the tax system, where taxes increase revenues of state treasury from internal sources.While the social goals of tax could be seen as the social responsibility of the government as a big organization.This is because tax is working to present concentration of wealth in the hands of a limited number of members of the community, also it has a role in shaping the policy of birth control in countries, such as China and India, or in encouraging the birth control policy as in Scandinavian Countries.Moreover, tax contributes to reduction of housing crisis by exempting materials used in the housing sector (Fleurbaey et al., 2006).The economic goals of tax are as follows: 1) Encouraging specific productive activities through tax exemption in whole or in part, where this method is used to encourage industrial and tourism investment in many countries.
2) Reduction of Economic recession, where economic recession leads to lower purchases and accumulation of products, the role of the states in such cases is to increase purchasing power of individuals through lowering tax rates and increase tax exemptions.
3) Reduce the phenomenon of concentration of economic projects by taxing the merged projects.4) Encourage investment and savings through the exemption the returns of treasury bonds from tax in order to encourage their purchase (Leigh, 2010).This study provides the evidence of the negative effects of tax evasion on the national economy.
Problem Statement and Questions
Based on the rules calling for the imposition of the tax, which were set by Adam Smith in the book "The Wealth Of Nations" in the late eighteenth century (Blundell, 2006).Tax evasion becomes the stumbling block preventing the collection of taxes and spending in accordance with the tax rules, led by the Justice and Equality rule.Therefore, the problem of this study takes the following statement.
Tax Evasion is an unethical common practice, which leads to prevent governments from the implementation of vital projects, besides harming the taxpayers for the benefits of evaders.
3) What are the most effective tools to reduce tax evasion?
The Study Objectives
This study aims to achieve the following objectives: • Determining the volume of tax evasion in Jordan.
•
Exploring the causes and the used tools in tax evasion.
•
Identifying the economic effects of tax evasion.
The Study Importance
Income tax represents the contribution of individuals and firms in the development of the country in which they live and work in it to gain several benefits such as; income, peace and security, education, healthcare, and etc.
Where the tax system imposes the deduction of a reasonable percentage from the earned income by individuals and firms to fund the services mentioned above, where a large percent of the community members benefit from these services.The importance of this study stems from this logic which requires a shared contribution to activate such governmental services which will lose their effectiveness in the case of funding shortage represented by the tax evasion.
Methodology
The descriptive approach was used in conducting this study through reviewing the related literature to shed light on the economic effects of tax evasion.In the next stage, the researchers has analyzed the published statistics issued by the Jordanian economic and social council, for the years 2010-2015 to identify the tax evasion volume and its negative impacts on Jordanian economy.
Literature Review
There are a lot of studies and researches that address the tax theme from different aspects in the literature, but because the purpose of this study is to explore the economic effects of tax evasion, the researchers has chosen the following studies: Study by Lin and Yang (2001).Under the title: "A Dynamic Portfolio Choice Model of Tax Evasion: Comparative Statics of Tax Rates and its Implication for Economic Growth." This study aims to examine the effects of shifting from static model to dynamic model in tax evasion.The study was conducted through computation of the size of tax evasion according to the two models.The main results of the study showed that higher tax rates reduce tax evasion in the static model, while they encourage tax evasion in the dynamic model.
Study by Alm (2012).Under the title: "Measuring, Exploring, and Controlling tax Evasion: Lessons from Theory, Experiment, and Field Studies." The study aimed to evaluate the public understanding about tax evasion since what was stated by Allingham and Agnar Sandmo, who were launched the modern analysis of tax evasion in 1972.The study was conducted on information on individual compliance for a random sample of 50000 individual from the "Taxpayer Compliance Measurement Program" in the U.S.A. the researcher focused on three questions and their answers to assess the understanding of tax evasion.First, how do we measure the extent of tax evasion?Second, how can we explain these patterns of behavior?Third, how can we use these insights to control evasion?.The main results showed that the people who are interested have learned many things in the last 40 years, but there are still many gaps in their understanding, such as; how much evasion really occurs on the national and local levels?Do higher tax rates encourage/discourage Compliance?What is the audit role in tax evasion?Then the researchers recommended to develop the tax theory because one theory may not fit all individuals at all times.
Study by Agnar Sandmo (2004).Under the title: "The Theory of Tax Evasion: A Retrospective View." This study aimed to shed light on some themes in the theory of tax evasion through examining the related studies starting from Allingham and Sandom 1972.The analysis of comparative statistics, were placed in the study as a measure of tax evasion in the original model of individual behavior, where the tax evasion decision is similar to portfolio choice.
The results showed that tax evasion is not an over whelming problem, and the marginal tax rate should be governed by effective measures and equity concerns.Firms may also be from the evaders of income tax because it pertains to human behavior.
This study aimed to link tax evasion with the standard "AK" Growth model and the public capital In this model, the government optimizes the tax rates, while individuals optimize tax evasion.The study examined the effects of three government policies; tax rates, tax evasion, and economic growth.The results showed that the three policies have a quantitative effects on discouraging tax evasion, while their effects on economic growth are very limited.
Theoretical Background
Tax evasion can be defined as an denial of the individual tax due to be paid either by provision of inaccurate or deceptive financial statements for tax departments or any other means legal or illegal to get rid of tax payment (Munther, 2006).Other researchers have defined tax evasion (Mousa, 2010), pointed out that tax evasion is an attempt by the tax payer to get rid of tax payment partially or completely, without being reflected as a burden on others.
Tax evasion has negative effects on the economy Rami, (2014); Cobham (2005) has argued that tax avoidance and tax evasion affect negatively on development funding which may lead the country to borrowing and bearing a high cost.Other researchers, for example (Slemrod, 2007) pointed out that tax evasion produces a tax gap that means how much tax should be paid, but is not paid voluntarily in a timely way.From here one can summarize the effects of tax evasion on the economics of any state as follows: Decline in government investment, and the lower the frequency of public spending due to lower volume of public revenues earned by the state from taxpayers.And this may lead to increase the rate of poverty and unemployment from the point of view of the current researcher.
Because of the tax gap that resulted from tax evasion, the current researcher believes that such action overwhelms abiding citizens.
Expanding of tax evasion leads to internal and external borrowing to cover the shortage in the public revenue, and this means that the state became under the pressure of interest payments (Gorodnichenko et al., 2007).
With tax evasion, the base of tax justice will not be achieved due to non-payment of tax by evaders.
The tax evasion effects the moral side because it means corruption and lack for honesty that may inherit successive generations.
In order to illustrate why some tax payers used to practice the tax evasion, the researcher believes that the reasons of this wrong practice may be the tax regulations or the high tax rates or the weakness of tax awareness.For Jordan, the main cases of tax evasion are the following (Economic and Social Council, 2014): Complexity and non-stability of the tax law in Jordan in terms of many of modifications, which leads to misunderstanding to the tax law.
Complacency in the imposition of sanctions on evaders.
The lack for data base about the activities of many of taxpayers such as; doctors, engineers, and advocators and others.
The lack for qualified employees in tax departments such as auditors, accountants.
Weak control procedures.
Accordingly, Table 1 below shows the estimated volume of tax evasion in Jordan for the years 2010-2015.Tabular analysis for the figures above.
Percentage of tax revenue to total revenue.This means that tax revenue forms the greater proportion of revenue, so, it is very important for the government treasury.The ratios above show that tax revenue still high, but not exceed half revenue as happened in the actual period.
For the tax evasion, the percentage of tax evasion to total revenue amounted to 22% and 21% for the years 2010, 2011 respectively.While, for the estimated period these ratios, were 21% 22%, 23% and 25% for the years 2012-2015 respectively.And stability of these ratios may due to the absence of government sanctions to combat tax evasion.
Rsults
The main results of this study are:
•
Tax evasion is unethical practice whether by individuals or firms.
• Tax evasion has negative impact on the economy in terms of reducing the government investments and financing vital projects.
• Reducing government investments leads to increase of unemployment rates.
• Tax evasion may push the state to rely on borrowing internally and externally, which put the state under new obligations of interest.
Recommendations
• Tax law in Jordan should be stable to achieve good understanding by taxpayers.
•
The government should impose serious sanctions on evaders.
•
The tax department should pay more attention to the human capital in terms of qualifications and experience, and moral courage.
Table 2 .
Actual figures | 2018-12-15T01:52:37.636Z | 2016-06-23T00:00:00.000 | {
"year": 2016,
"sha1": "dc9b848e76d0d0f66e347bd3a16e8111157f1547",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/ijef/article/download/60889/32636",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dc9b848e76d0d0f66e347bd3a16e8111157f1547",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
270613663 | pes2o/s2orc | v3-fos-license | Effect of Private Deliberation: Deception of Large Language Models in Game Play
Integrating large language model (LLM) agents within game theory demonstrates their ability to replicate human-like behaviors through strategic decision making. In this paper, we introduce an augmented LLM agent, called the private agent, which engages in private deliberation and employs deception in repeated games. Utilizing the partially observable stochastic game (POSG) framework and incorporating in-context learning (ICL) and chain-of-thought (CoT) prompting, we investigated the private agent’s proficiency in both competitive and cooperative scenarios. Our empirical analysis demonstrated that the private agent consistently achieved higher long-term payoffs than its baseline counterpart and performed similarly or better in various game settings. However, we also found inherent deficiencies of LLMs in certain algorithmic capabilities crucial for high-quality decision making in games. These findings highlight the potential for enhancing LLM agents’ performance in multi-player games using information-theoretic approaches of deception and communication with complex environments.
Introduction
Large language models (LLMs) have capabilities that surpass pure text generation, such as in-context learning [1], instruction following [2], and step-by-step reasoning [3].These capabilities have proven valuable in decision-making processes, enabling them to make informed decisions and take corresponding actions [4].Utilizing these capabilities in game theory has attracted widespread attention, especially in games where agents interact via natural language communication.Here, an agent must gather information and draw conclusions from various ambiguous statements [5].
LLM-based agents, specifically generative agents, have showcased remarkable performance across various tasks [6] and have proven their ability to replicate human-like behaviors [7].These behaviors include tackling complex tasks across various system settings, encompassing multi-step reasoning, instruction following, and multi-round dialogue [7,8].Generative agents have exhibited promising potential in solving intricate tasks by leveraging the power of natural language communication [9].Moreover, inter-agent communication can be established in either a cooperative [7,10] or competitive setup [11].
In a cooperative setup, agents achieve the greatest gains when collaborating towards a shared set of objectives.This approach often leads to a synergistic effect like that observed in collective intelligence [12].In a competitive setup, agents prioritize maximizing their own gains, often at the expense of other agents.Consequently, the actions of one agent can influence the opportunities and outcomes available to others.Nevertheless, depending on the system setting, agents may opt to cooperate initially to achieve a common goal, only to later deviate from the cooperative strategy to maximize their gains during the game.This concept is commonly referred to as non-cooperative game theory [13,14], wherein each agent is modeled with individual motives, preferences, and actions.Such agents are commonly referred to as self-interested agents, as they prioritize their interests without necessarily considering the interests of others.However, it is worth noting that despite being self-interested, these agents may not always employ selfish actions if cooperation promises more significant gains [15].
The dynamics of such scenarios often involve negotiation, wherein the motives of the involved partners and their practical reasoning come into play.This introduces significant challenges for automated systems [16].The need to effectively model the decision-making processes of self-interested agents and to facilitate effective negotiations becomes crucial in designing robust multi-agent systems capable of handling complex real-world scenarios.
However, several challenges must be addressed for a generative agent to engage in a repeated game relying solely on natural language for communication.Agents need the capability to recall information from the last few rounds and process data from their opponents, presenting a challenge due to context length limitations.Furthermore, understanding the opponent's intentions and planning future actions necessitate a level of reasoning that is inherently challenging for LLMs [17].Lastly, the agent must dynamically adapt its behavior to achieve the best outcome, without additional fine-tuning.This ability is recognized as in-context learning (ICL), where LLMs make decisions based on a few examples written in natural language as an input prompt.These examples comprise a query question and a demonstrative context, forming a prompt fed into the LLM [18].
In this paper, our objective was to enhance the capabilities of an LLM agent by enabling it to engage in private deliberation concerning future and past actions.Our contributions are outlined as follows: • We formalize LLM-agent-based games using the partially observable stochastic game (POSG) framework.
•
We validate the elements of partially observed stochastic games (POSG) for finding optimal solutions.We also identify weaknesses in the underlying LLM when sampling from probability distributions and making conclusions based on samples from identified probability distributions.Those weaknesses reveal an inability to perform basic Bayesian reasoning, which is crucial in POSG.
•
We introduce the concept of a private LLM agent, implemented using in-context learning (ICL) and chain-of-yhought (CoT), which is equipped to deliberate on future and past actions privately.We compare the private agent with a baseline and examine its deception strategy.
•
We conduct an extensive performance evaluation of the private agent within various normal-form games with different inherent characteristics, to examine behavior coverage through games featuring different equilibrium types.
•
We perform a sensitivity analysis of LLM agents within an experiment design that varied the input parameters of the normal-form games such that the reward matrix shifted from competitive to cooperative.Additionally, as part of the sensitivity analysis design of the experiments, we examined the impact of different underlying LLMs, agent types, and the number of game steps.
Section 2 examines the relevant literature on multi-agent systems and generative agents and their capacity to replicate social dynamics and decision making.Section 3 presents the two agent types: a private agent engaged in private deliberation and a public agent.We modeled the interaction between agents using a partially observable stochastic game (POSG).Additionally, we provide an overview of the repeated games employed in our experiments: the prisoner's dilemma, stag hunt, chicken game, head-tail game, and battle of the sexes.Section 4 details the experimental setup and analyzes the generated outputs.Subsequently, we conducted experiments, investigating the games' outcomes under various settings.In Section 5, we summarize and elaborate on our findings and discuss open research directions and potential implications.Finally, in Section 6, we draw conclusions based on our findings and lay out plans for future work.
Generative Agents
Generative agents operating in a cooperative setting were explored in the work of Park et al. [7].The authors defined generative agents as agents capable of simulating human behavior, thereby producing believable individual and group behaviors.In the context of cooperative problem-solving, a novel framework called CAMEL was introduced by Li et al. [10].This framework exhibits sophisticated human-like interaction abilities, enabling agents to engage in complex cooperative tasks.
In contrast to the earlier works [7,10], the authors in [19] took a different approach, focusing on explicit objectives for the model, emphasizing cooperation and competition dynamics.Their research involved two agents assuming the roles of a buyer and seller engaged in price negotiation.By concentrating on specific social interactions, the authors aimed to shed light on the intricacies of cooperation and competition within generative agent systems.
Decision Making Using LLMs
The emergent capabilities of LLMs, exclusive to large-scale models [8], such as incontext learning [1], instruction following [2], and step-by-step reasoning [3], have paved the way for decomposing high-level tasks into subtasks, facilitating the generation of further plans based on these subtasks [1,20].Leveraging this robust capacity, LLMs have found applications in decision making, effectively combining environmental feedback with reasoning abilities and the capacity to take action.However, in the absence of proper decision retraction mechanisms, there remains a potential risk of initial errors propagating throughout the decision chain [21].With a proper decision retraction mechanism, models can reflect on their past failures and devise new approaches to tackle complex tasks.Using a few-shot approach, models iteratively learn to optimize their behavior and become proficient in solving tasks like decision making and reasoning [22].
In the quest to address the challenge of reasoning over feedback conveyed through natural language, an insightful investigation was presented in [23].The authors introduced the concept of inner monologue as private deliberation, where LLMs engage in more comprehensive processing and planning of subsequent actions.Their study concluded that incorporating close-loop language feedback, achieved through the implementation of inner dialogue, significantly enhances the completion of high-level instructions, particularly in demanding scenarios.This finding highlights the potential of inner dialogue as a valuable mechanism for the reasoning capabilities and decision-making processes of LLMs in complex real-world applications.However, their experimental setup involved a robot arm equipped with a wrist-mounted camera, exploring a similar concept by performing a series of tasks like picking up objects and pressing buttons within a static environment.
Enabling private deliberation in LLMs has also shown success in solving complex vision language problems [24] and in improving communication skills [25].Our work, on the other hand, concentrates on applying a similar concept in multi-agent scenarios.We study the effects of interactions among multiple agents in an unpredictable and dynamic environment.Moreover, we investigate the significance of enabling private deliberation in LLMs, empowering agents with the ability to think privately about their actions, concealed from their opponent, before making decisions, thus mitigating the impact of potential errors and enhancing the decision-making process.
Modeling Social Dynamics
Modeling social dynamics has remained a research challenge primarily because it requires an adequate and often substantial human pool for experimentation and observation [26].In the pursuit of informed decision-making and the design of detailed social systems, designers often rely on the methodology of prototyping.This approach enables the observation of potential outcomes, facilitating iterative improvements guided by comprehensive analysis [27,28].The complexity inherent in designing such systems necessitates a sufficiently extensive human pool, coupled with the capacity for iterative design improvements.
To overcome these challenges, Park et al. [7] introduced the concept of social simulacra-a prototyping technique that leverages input parameters from designers to depict system behaviors.This innovative approach yields a diverse array of realistic social interactions, encompassing those that manifest exclusively within populated systems.By adopting this method, designers can explore the intricacies of complex social systems and iteratively refine their designs, even in the absence of an immediately accessible population.
Within the realm of such complex systems, the presence of disagreements among groups regarding the ground truth can hinder the formulation of quality decisions that accurately represent the collective opinions of the group, especially when employing majority vote mechanisms [29].To address this, Gordon et al. introduced the jury learning method [30], a novel approach that employs supervised machine learning to resolve disagreements.This method determines individuals and their proportional influence in shaping the classifier's prediction, a strategy reminiscent of jury selection techniques.By introducing the jury learning method, the authors provided a valuable avenue for mitigating disagreements and enhancing the decision-making process within complex social contexts.
The integration of LLMs into the modeling of intricate social interactions has garnered significant research attention.In particular, the human-LM (Language Model) interaction model plays a pivotal role in encapsulating the interactive process leading to a conclusion (i.e., "thinking").This model not only encompasses the cognitive deliberation that occurs during decision making but also encapsulates the nuanced quality preferences associated with the output, similarly to the human emotional response elicited by a specific decision [31].With LLMs' capability to emulate these aspects of human interaction, researchers have embarked on a promising avenue for refining the modeling of complex social scenarios and advancing the understanding of decision making within a socio-cognitive context.
Problem Setting
This section defines the environment and agents implemented through the large language model (LLM).We introduce new agent types (private and public) and describe their interactions within a gameplay framework using a partially observable stochastic game (POSG).Additionally, we formalize the in-context learning (ICL) and chain-of-thought (CoT) abilities of the LLM, considering the output alignment with policy.Finally, we conducted experiments to compare the two LLM agent types and assessed the LLM's general computational ability in executing gameplay tasks using ICL and CoT.
Agent Types in POSG
In this study, we introduce two types of agents with different decision-making processes: a private thought process agent (referred to as a private agent) and a public thought process agent (referred to as a public agent).The private agent considers future actions while keeping its strategic thought processes hidden from other agents.This strategic thinking, i.e., private deliberation, is implemented using CoT and ICL techniques, and has three main stages.Since agents are involved in a two-option game, the first stage involves thinking about the first option, and the second stage involves thinking about the second option.The third stage involves developing a deception strategy that will be presented in public thoughts, although deception may not always occur if it is not optimal (e.g., in cooperative games).In other words, the private agent strategizes privately and communicates through public thoughts, deciding which information to reveal and which to keep hidden.A private agent's thought process and final output are illustrated in Listing 1.In contrast, the public agent communicates solely through public thoughts, openly sharing all thought processes with other agents.In addition, the public agent does not employ any other techniques for enhancing its reasoning capabilities, such as CoT.Since they only have access to public thoughts, the only method for a public agent to conceal their decision-making processes is by utilizing encryption, encoding some information that they may wish to keep private.In such cases, communication with other agents using public thoughts can still be secured.However, we did not employ this approach, leaving it open for future research.
Agents are implemented as OOP classes, with each agent represented by a separate instance of an LLM, complete with conversation history memory and input/output interfaces to interact with the environment.The environment is an OOP class that serves multiple purposes.First, it acts as a broker by delivering messages and actions between agents.Second, while acting as a broker, the environment removes a private agent's private thoughts and sends only the public parts to other agents.Third, the environment synchronously assigns rewards to agents according to the rules of the instantiated game and the joint actions of the agents.After assigning rewards, the actions and rewards of each agent are broadcast to all other agents connected to the environment for observation.
Agents communicate through a game model that includes the environment and agents.Agents are separate instances of the LLM and they communicate via the environment, which specifies the possible actions, observations, and rewards for agents.Additionally, the environment manages the relationships between actions and states.An agent is an entity capable of making decisions based on observations and beliefs, and has a specific role in the game [32].Figure 1 depicts the communication scheme between agents communicating via the environment in a game.We formalize a game using a partially observable stochastic game (POSG), where decisions are made based on possibly incomplete and noisy observations of the environment.We define the POSG as a tuple N, S, {b 0 } i∈N , {A i } i∈N , {O i } i∈N , Z, P, {R i } i∈N , where • N represents the finite set of all agents.We experimented on two-player games, i.e., |N| = 2.If i ∈ N represents an agent i, his opponent is denoted as −i.A play in POSG is defined as follows: At the start, there is a joint state s 0 = (s 0 prv , s 0 pub ), where s 0 contains initial prompts (e.g., Listing 2) and no dialogue history between two agents.The indexes prv and pub represent the private and public agents, respectively.This initial belief distribution for an agent i, i ∈ N is based on the belief about possible states Listing 2. Initial prompt to the PD game.In the current round j, j ∈ J, where J represents the set of all played rounds, player i receives an observation o j i of their state s j i and full/partial observation of the opponent's state, as well as the opponents action a In multiplayer games, the reward function R depends on the joint actions (i.e., action profile) and states of all players.The environment returns the reward.Therefore, the agent's game value function V : S → R, denoting the long-term expected reward, is defined as where a and s represent the joint actions and states, respectively.The notation π i and π −i is used to distinguish the policy between agent i and other agents [35].
In fully cooperative games aiming to maximize the joint return, the returns for each agent are the same In fully competitive fixed-sum games, the rewards are ∑ i∈N R i = µ; in zero-sum games, µ = 0.In a two-player setup, where |N| = 2 and two agents have an opposite goal, the rewards are R 1 = −R 2 .Mixed games are neither fully competitive nor fully cooperative, i.e., no restrictions are imposed on the rewards of players [36,37].
Language Generation through In-Context Learning
When presented with tasks not included in their training data, the LLM can learn them with a few examples through ICL [1].Having this ability, an LLM agent can adapt to a policy π and generate actions aligned with that policy.
Let Λ * represent a pretrained LLM we want to teach to conduct a new gameplay task through an initial prompt.The initial prompt contains the instruction text defining the game rules and policy instructions, denoted as x x x.A game's rules and its corresponding outcomes are presented as input-output pairs In addition, each agent in the game has a policy π corresponding to his assigned role (e.g., private or public) that also maximizes the value function V π i ,π −i .
Since LLMs are non-deterministic and can sometimes hallucinate [38], given an input i i i, the probability of a pretrained LLM generating the output o o o aligned with the policy π is denoted as P Λ * (o o o k |i i i k , π), for all k = 1, 2, . . ., n.The private agent has a private and public policy π prv = (π prv , π pub ), and the public agent only has a public policy π pub = (π pub ).Policy alignment in spoken language understanding (SLU) systems involves matching an agent's input with the correct output based on their perceived intended meaning [39,40].Let X i = {x x x 0 , x x x 1 , . . ., x x x n } denote the conversation and action history of the last n rounds from agent i, where |X| is finite and message x x x = (x prv , x pub ) consists of the private and public parts.For the public agent, the private part of the message is empty, i.e., x x x = (∅, x pub ).
The probability of agent i inferring the opponent's policy π −i from the public part of message history X pub i is denoted as P(π ′ −i |X pub i ), where π ′ −i denotes the perceived policy.Agent i has a policy mapping function ρ : X → Φ(Π) that takes a message history X and matches potential interpretations of policies as a probability function Φ : Π → [0, 1] over policies Π [41].Since the agent's policy mapping function ρ is concealed, an agent needs to learn its opponent's mapping function through ICL by matching input-output pairs With policy π i , an agent will maximize his value function depending on the belief about his opponent's policy V π i ,π ′ −i by producing an output Xpub in the public part of the conversation, thereby influencing the opponent's perception of policy π ′ −i .Since generated messages are mapped via ρ to a perceived policy and ρ remains stationary in each round, we can denote the agent's objective as
Chain of Thought Prompting
To further enhance the private agent's thought process, we utilized the chain-ofthought (CoT) prompting technique in private thoughts.CoT is a technique used in LLM prompting that utilizes a series of reasoning steps before yielding a conclusion, thus significantly improving the performance of complex reasoning [3].To facilitate the CoT technique, we added the "Think about Option A/B step by step given previous interactions" statement as denoted in Listing 1.
We can formalize the CoT as follows.Let X prv = {x Moreover, since messages X prv are a sequence of reasoning steps, the underlying policies are chained into intermediate reasoning steps, such that s 0 → s 1 • • • → s n .Having several intermediate steps, with those steps specified in the prompt, greatly improves the chances of the LLM generating correct conclusions aligned with a given policy [3].When prompting the LLM without using CoT, assuming x x x 0 represents the initial question and x x x n the final output, all intermediate reasoning steps s 1 , . . ., s n−1 generated under policy π prv are omitted, and the answer is x x x n .If argument X prv includes many chained intermediate steps s 0 → s 1 • • • → s n under policy π prv , the probability of LLM Λ * generating an answer aligned with policy π prv is greater, due to containing more information [42].
Action Selection Strategy in Agent Types
Through empirical studies, we aimed to compare the two different types of agents, private and public.Therefore, we wanted to explore two different hypotheses: H1.LLM agent can sample from a probability distribution.
H2. LLM agent can calculate (near) optimal action selections from the probability distribution and sample actions.
To prove these hypotheses, we present the following experiments.To prove H1, we examined the LLM's ability to sample from various probability distributions for action selection.To prove H2, we wanted to examine the distribution of action choices based on conversation history and the accuracy of recognizing the opponent's type (private or public).These experiments allowed us to assess the LLM's computational abilities and weaknesses in modeling agents in multiplayer games using ICL and CoT techniques.
To prove H1, we conducted a few experiments to explore whether the LLM could sample from distribution and use, for example, the Bayes estimator to select actions.The LLM GPT-4-0613 was prompted to generate a sample of n = 100 numbers from Gaussian, Poisson, and Uniform distributions.The prompt results are depicted in Figure 2 and indicate that the LLM was unequipped to sample from different distributions.Therefore, we disproved H1, as our findings suggested that the LLM could not sample from different distributions.Similar findings were concluded in [43], where the authors concluded that GPT-4 could not generate independent random numbers.To test H2, we defined the action choice probability based on the message history of the prisoner's dilemma game.Let π −i denote the opponent's policy.The probability of perceiving the opponent's policy π −i given the conversation history X i = {x x x 0 , x x x 1 , . . ., x x x n } is P(π ′ −i |X i ), where π ′ −i is the opponent's perceived policy.The probability of an agent i taking action a i given the opponent's estimated policy is denoted as P(a i |π ′ −i ).Therefore, the agent i takes action a i based on the conversation history X i , with the following probability: The distribution of action choices P(a i |X i ) depending on the number of messages |X i | in the dialogue history buffer between private and public agents is depicted in Figure 3.To examine the underlying behavior of the agents depending on their assigned type, we only considered the number of messages |X|, without considering the message content.The experiments showed the private and public agents' tendency to cooperate (Option A) more often in the initial rounds (|X| = 0).However, with a full message history buffer, the private agent would deviate (Option B) from cooperation in favor of defecting, and the public agent employed a mixed strategy, which was specific in the current round but non-deterministic over many iterations, averaging around a 50% chance of selecting each option.Cooperation leads to a higher payoff than mutual defection for two rational players in the prisoner's dilemma game.However, if an agent wants to maximize his payoff, he chooses to defect [44].The private agent chose to defect more often than cooperate.On the other hand, the public agent used a mixed strategy.
The second part of H2 explored the accuracy of private and public agents in recognizing their opponent's type.If one knows one's opponent's type, one may use a different strategy to secure a higher reward by influencing the action selection distribution [45].Figure 4 depicts the accuracy of predicting the opponent's type.Both private and public agents played the prisoner's dilemma game, and and after each iteration of the game they were additionally prompted to recognize whether their opponent was private or public.The agents played |J| = 150 iterations in total and their accuracy score was calculated as the number of correct classifications of opponent's type divided by the total number of rounds, i.e., accuracy prv|pub = N corr |J| .Judging from the results, the public agent was more proficient at recognizing opponents.However, by further analyzing results, we could see a high bias towards categorizing opponents as private, skewing the results more in the public agent's favor.We can conclude that both agent types were unequipped to adequately recognize the opponent's type.Based on the findings for H1 and H2, the LLM agents were unequipped to sample from a probability distribution for action selection or to find an (near) optimal action selection distribution, i.e., both hypotheses were disproved.This shows the potential for future research to improve on these glaring weaknesses.
Game Setting
In order to thoroughly evaluate the agents in diverse environmental settings, we chose to incorporate the following iterated games: prisoner's dilemma, stag hunt, chicken game, head-tail game, and the battle of sexes.By employing the iterated versions of these games, we aimed to investigate whether continuous feedback from the other agent, based on prior interactions, enhanced the decision-making process and to discern the contrasting effects of privacy and information-sharing on agent performance [14].A brief description of each game is listed in Table 1.
Table 1.List of games used in experiments and corresponding explanations.
Term Explanation
Prisoner's Dilemma In the prisoner's dilemma, two suspects are arrested, and each has to decide whether to cooperate with or betray their accomplice.The optimal outcome for both is to cooperate, but the risk is that if one cooperates and the other betrays, the betrayer goes free while the cooperator faces a harsh penalty.This game illustrates a situation where rational individuals may not cooperate even when it is in their best interest, leading to a sub-optimal outcome.
Stag Hunt
The stag hunt game involves two hunters who can choose to hunt either a stag (high reward) or a hare (low reward).To successfully hunt a stag, both hunters must cooperate.However, if one chooses to hunt a hare while the other hunts a stag, the stag hunter gets nothing.It exemplifies a scenario where cooperation can lead to a better outcome, but there is a risk of one player defecting for a smaller, more certain reward.
Chicken game
In the chicken game, two players drive toward each other, and they must decide whether to swerve (cooperate) or continue driving straight (defect).If both players swerve, they are both safe, but if both continue straight, they crash (a disastrous outcome).This game highlights the tension between personal incentives (not swerving) and the mutual interest in avoiding a collision (swerving).
Head-tail game
The head-tail game involves two players simultaneously choosing between showing either the head or tail on a coin.If both players choose the same side (both heads or both tails), one player wins.If they choose differently, the other player wins.This game illustrates a simple coordination problem, where players have to predict and match each other's choices to win.
The battle of sexes
In the battle of the sexes game, a couple has to decide where to go for an evening out, with one preferring a football game and the other preferring the opera.Each player ranks the options: the highest payoff is when both go to their preferred event, but they prefer being together over going alone.It demonstrates the challenge of coordinating when preferences differ and highlights the potential for multiple equilibria.
The initial prompts provided to the agents for prisoner's dilemma and stag hunt games are presented in Listings 2 and 3, respectively.A private agent was explicitly instructed to adopt a strategy aimed at outsmarting its opponent, while concealing its private thoughts within double curly brackets (e.g., {{ (...) agent's private thoughts to win over my opponent (...) }}).In each subsequent iteration of the game, both agents were provided with the amount of points scored in the previous round, total points scored, the choice made by the other agent, and the explanation provided by the other agent during the previous iteration.Furthermore, agents were capable of recollecting their own thoughts and their opponent's thoughts from the last two rounds of the game (restricted due to context length).However, the private agent's opponent only received the public thoughts of the private agent, while thoughts enclosed in double curly brackets (i.e., private thoughts) remained concealed from the opponent.
Rules Recap : − I f both p l a y e r s choose Option A: You win 10 p o i n t s , and~t h e o t h e r p l a y e r wins 10 p o i n t s . − I f you choose Option A, and~t h e o t h e r p l a y e r chooses Option B : You win 1 point , and~t h e o t h e r p l a y e r wins 8 p o i n t s . − I f you choose Option B , and~t h e o t h e r p l a y e r chooses Option A: You win 8 p o i n t s , and~t h e o t h e r p l a y e r wins 1 p o i n t . − I f both p l a y e r s choose Option B : You win 5 p o i n t s , and~t h e o t h e r p l a y e r wins
5~p o i n t s .
O b j e c t i v e : To a c h i e v e t h e h i g h e s t t o t a l p o i n t s a c r o s s a l l ~rounds .
Your Task : S t r a t e g i c a l l y c o n s i d e r t h e p o s s i b l e outcomes and respond with e i t h e r Option A or Option B f o r each~round .
I n s t r u c t i o n s : P l e a s e provide a b r i e f e x p l a n a t i o n f o r your c h o i c e i n each~round .
Experiments
In our study, we conducted experiments to investigate the decision-making processes of agents in two distinct game types.A comparative analysis was performed between the private agent and the public agent, examining differences in the points achieved over iterations and the amount of generated text.The large language models used in this study were GPT-3.5-turbo-0125 and GPT-4-0613.
Experiment Setup
The language models used in this research needed to function as chatbots and maintain context from previous interactions.We utilized the Langchain framework to model the agents and set up the game [46].We created the LLM agent by wrapping the OpenAI ChatGPT model using the Python Langchain framework and added functionalities such as in-context design that work with the environment API, which removed private thoughts before broadcasting them to the other agent.
We conducted experiments described as follows.Let I = {1, 2, . . ., i} denote the number of rounds where each round consists of J = {1, 2, . . ., j} iterations.We executed a total of |I| = 10 rounds, each comprising |J| = 15 iterations.For both public and private agents, the average outcome over iteration j, j ∈ J was calculated over all rounds I, denoted as the average utility u j = 1 |I| ∑ i∈I u i,j , i ∈ I, j ∈ J, where u i,j represents the expected utility of the iteration j, j ∈ J in round i, i ∈ I.In each round i, i ∈ I, agents were only able to recall context from the last two iterations.Therefore, current state is denoted as where S represents the set of states.An illustrative demonstration of a private agent's thought process is provided in Listing 4, while the corresponding thought process of a public agent is outlined in Listing 5.These examples offer a concrete depiction of how private and public agents respectively differ in their decision-making processes.Figue 5 showcases the reasoning capabilities of the private agent compared to public agent over two rounds.As depicted in Figure 6, it is evident that the private agent tended to produce longer responses, with a substantial portion of these responses comprising private thoughts.This observation suggests that the private agent engaged in extensive internal deliberation, resulting in elaborated and contextually enriched responses, and potentially leading to better informed actions.Based on t h e p re vio us rounds , my opponent has c o n s i s t e n t l y chosen Option A, i n d i c a t i n g a c o o p e r a t i v e approach .However , t h i s has r e s u l t e d i n h i g h e r p e n a l t i e s f o r them .To~e x p l o i t t h e i r c o o p e r a t i v e behavior , I w i l l choose Option B i n t h i s round .By~doing so , I can p o t e n t i a l l y gain a lower p e n a l t y while my opponent c o n t i n u e s t o i n c u r h i g h e r p e n a l t i e s .This s t r a t e g y may i n c e n t i v i z e them t o r e c o n s i d e r t h e i r approach and p o t e n t i a l l y s w it c h t o Option B i n f u t u r e ~rounds .I choose Option B .D e s p i t e r e c e i v i n g p e n a l t i e s i n t h e p r ev io us rounds , my opponent has c o n s i s t e n t l y chosen Option B , i n d i c a t i n g a p o t e n t i a l s h i f t i n t h e i r s t r a t e g y .By~choosing Option B , I aim t o e x p l o i t t h e i r d e v i a t i o n from t h e c o o p e r a t i v e approach and p o t e n t i a l l y minimize my t o t a l p e n a l t i e s .A d d i t i o n a l l y , I hope t o encourage my opponent t o r e c o n s i d e r t h e i r approach and p o t e n t i a l l y n e g o t i a t e f o r a more f a v o r a b l e outcome i n f u t u r e rounds .
Achieving Equilibrium
Equilibrium in game theory is an outcome in which the players will continue with their chosen strategy, having no incentive to deviate, despite knowing the opponent's strategy [47].Achieving equilibrium using LLM agents is an important step towards enhancing their reasoning, as it demonstrates the LLM's ability to develop an optimal strategy for a given scenario.In our experiments, we decided to test the following equilibria: Correlated equilibrium [48], Nash equilibrium [49], Pareto efficiency [50], Focal (Schelling) point [51].Table 2 presents a list of games and matching equilibria.The battle of sexes ✓
Results
First, we evaluated the performance of a private agent in a prisoner's dilemma (PD) game under various settings.Initially, we compared the GPT-3.5-turbo-0125model with the GPT-4-0613 LLM, and as expected, GPT-4 demonstrated superior performance.Subsequently, we conducted a comparison between the private agent and a heuristic agent.The heuristic agent employed a straightforward tit-for-tat strategy, which began with a cooperative move and, in each subsequent iteration, replicated the opponent's previous move.A comparison of these agents is depicted in Figure 7.
We then proceeded to compare the private and public agents across various game settings, including the stag hunt, head-tail, chicken game, and the battle of sexes.In the stag hunt game, where cooperation in hunting the stag is the dominant strategy, the agents occasionally deviated from the optimal strategy in pursuit of a competitive advantage over their opponents.The private agent, however, balanced the pursuit of victory with maintaining alignment with the cooperative nature of the game.
The head-tail game, on the other hand, is inherently cooperative, with no incentive to deviate from this strategy.Consequently, both agents adhered to the same strategy, with the exception of a single iteration where a strategy change resulted in undesirable outcomes.
In the chicken game, there is a significant benefit in deviating from the cooperative strategy, although cooperation remains the most favorable option.In this game, the private agent consistently outperformed the public agent in every iteration by strategically alternating between the "dare" and "chicken out" strategies.
In the battle of the sexes, unlike the previously mentioned games, changing one's strategy hinges on the ability to persuade one's opponent to also change their strategy.This becomes challenging when the opponent is deriving greater gains from the current strategy.When we compared the two agents, the private agent demonstrated a slight advantage, albeit not as pronounced.A comparative analysis of the various games is illustrated in Figure 8.
Parameterized Game
To experiment with the level of coordination depending on the game setting, we designed an iterated parameterized two-player game.This game setting was used for sensitivity analysis, i.e., how changing parameters of the game affected the outcomes.Two players A and B can chose between coordination and competition to maximize their total payoff.The game setup is denoted in Table 3, where parameter x takes the following values x = {1, 2.9, 3.1, 10} ranging from the most cooperative game to least cooperative game, respectively.In addition, since the values of cooperation is set to u(w = cooperation) = 3, where function u u(w) represents the payoff of strategy w = {cooperation, competition}, we took two neighboring values to study the effect of transitioning from a cooperative to competitive setup.
Furthermore, we also utilized three types of agents: a private agent, public agent, and heuristic agent.The heuristic agent played a tit-for-tat heuristics strategy.The resulting cooperation ratio with standard deviation is depicted in Figure 9.We can observe from the figure that as the incentive for deviating from the cooperative strategy increased, the average level of cooperation decreased.However, for a case where x = 3.1, we can observe greater decreases in cooperation ratio than for x = 10, which is not something that was expected.We believe the potential cause of this issue was that the LLMs are not proficient in numeracy, which refers to the capacity to understand and give significance to numbers.LLMs tend to prioritize sentences that are grammatically correct and seem plausible, treating numbers in a similar manner.Nonetheless, when faced with unfamiliar numerals, these are frequently overlooked [52,53].In general, we recognize the emerging ability of LLMs to adjust dynamically within competitive or cooperative game setups, as shown in the parametrized game.
Sensitivity Analysis
To demonstrate the reliability of our results, we conducted a sensitivity analysis.This analysis focused on the parameters of the normal form game, as presented in the parametrized game, where the rewards matrix, shown in Table 3, gradually shifted between competitive and cooperative games.Moreover, using different normal-form games with known characteristics, we tested the LLM agent's adaptability through the different equilibria presented in Table 2.We also examined aspects of the entire system, including agent types and the environment, by varying the underlying LLM (GPT-3.5-turboand GPT-4) and the number of game steps.
The performance of the underlying LLM and its effect on the private agent is presented in Figure 7, showing the clear advantage of the more advanced models when using techniques such as ICL and CoT.The number of game steps could also be considered part of the sensitivity analysis over discrete parameters, as the number of steps was unknown to the gameplaying agents beforehand.We exogenously stopped the game after a predetermined number of steps, and the agents did not memorize game-playing episodes, so there was no spillover effect between multiple runs.Once the message history buffer was complete, the round number had no strategic effects on the agents' behaviors, except random behavioral occurrences, which we can link to the stochastic nature of LLMs.
Due to our experimental setup, we did not have other parameters available to change.For example, the context length was fixed in the underlying LLM.The number of remembered historical iterations was maximized within the context window, so it was dependent on OpenAI's fixed parameters.
Limitations and Constraints
Ensuring consistent and explainable outputs from intelligent agents is crucial, because humans are fundamentally limited in understanding AI (artificial intelligence) behavior.Explainability is an essential aspect of AI safety, which we define as the ability of an AI system to stay within the boundaries of desired states, i.e., worst-case guarantees [54].
In non-adversarial scenarios, this issue is less concerning.However, the lack of explainability becomes a significant issue with adversarial agents capable of deceiving their opponents (e.g., humans or other agents) and exploiting them for their gain.In such cases, we must rely on AI explainability for safety [55].
In the context of LLM models, they deliver exceptional performance, due to their immense scale, with billions of parameters.However, their size poses a significant challenge to existing explainability methods.To ensure safety and explainability, constraints may need to be imposed on the training and functioning of LLMs.These constraints can be integrated directly into the automated optimization (learning) process or applied indirectly through a human-in-the-loop approach [56].
Agents compress information received from the complex environment to store it in finite memory (context).The loss of information during this process leads to various phenomena recognized in information theory, such as echo chambers, self-deception, and deception symbiosis [57].Moreover, since we studied the effect of deception as an emerging ability of LLM agents without formal information-theoretic models, developing formal models of deception, such as the Borden-Kopp model that relies on degradation, corruption, denial, and subversion, would be an interesting direction for future research [58].
Discussion
In this paper, we investigated the capabilities of large language model (LLM) agents in participating in a two-player repeated game.Furthermore, we introduce an augmentation to an LLM agent, referred to as the private agent, enabling it to engage in private contemplation (i.e., thoughts) regarding past and future interactions and to reason about future actions.Moreover, the private deliberation was concealed from its opponent in repeated games.
We utilized the partially observable stochastic game (POSG) framework to define the gameplay and formalized in-context learning (ICL) and chain-of-thought (CoT) prompting.In experiments, we examined the distribution of action choices based on conversation history.The results demonstrated that the private agent consistently identified a more favorable action, leading to a higher long-term payoff.When identifying their opponent type, both public and private agents performed subpar.LLM (GPT-4) encountered difficulties in generating random numbers from various diverse distributions when investigating the ability to sample from distributions.This suggests its limitations in effectively sampling from prior distributions and utilizing, for instance, a Bayesian estimator for action selection.Improving on the weakness of LLM agents in sampling from different probability distributions and finding (near) optimal action selection distributions in gameplay shows potential for future research.
Conducting simulations across various game settings, from competitive scenarios (e.g., prisoner's dilemma) to purely cooperative ones (e.g., Head-tail game), we found that augmenting an agent with the ability to privately deliberate on actions resulted in a superior overall performance and a clear advantage in competitive scenarios.Compared to the baseline agent, the private agent consistently outperformed or, at worst, matched its performance.In a direct comparison with the heuristic agent, which employed a tit-for-tat strategy in the prisoner's dilemma game, the private agent's performance was marginally lower.However, the heuristic agent's inability to communicate or deceive its opponent allowed the private agent to quickly discern its strategy, giving it a competitive edge.Moreover, the private agent's ability to deceive its opponent was noteworthy, securing a better overall score.
As part of our sensitivity analysis, we tested a gradual shift from a competitive to cooperative nature of the normal-form game, defined through a parameterized payoff matrix.The results suggested a high level of adaptability, except when close to the breaking point between the two dominant strategies.Additionally, we varied certain aspects of our system, including the underlying LLM, agent type, and the number of game steps.The more advanced LLM demonstrated greater differentiation of the proposed private agent than its counterparts, as it was implemented using ICL and CoT, which required a more capable model.Once the message history buffer was full, increasing the number of game steps did not yield any significant advantages in our case.However, if the context length limit baked into the underlying LLM was higher, this might produce different outcomes, which we leave open for future research.
For future work, our plan involves enhancing the private agent through additional fine-tuning.With this approach, we could further structure private thoughts and public output, and align them with policy, such as to facilitate a more direct deception mechanism.Moreover, enhancing LLM agents with tools that allow, e.g., sampling from a probability distribution, Bayesian estimator calculation, and algorithm selection would greatly enhance strategies in multi-player games.Additionally, given the LLM agent's private deliberation results in gaming scenarios, we plan to explore its potential applications outside of gaming, including interactive simulations and decision support systems.
Conclusions
In conclusion, this research explored the potential of large language model (LLM) agents, specifically GPT-4, in two-player repeated games and introduced a novel augmentation: the private agent.This augmentation implemented through in-context learning (ICL) and chain-of-thought (CoT) allowed concealed private contemplation about past and future interactions, enhancing the agent's decision-making process.Utilizing the partially observable stochastic game (POSG) framework, ICL, and CoT prompting, our experiments revealed that the private agent consistently achieved higher long-term payoffs and outperformed the public (baseline) and heuristic agents in various game scenarios.However, the public and private agents struggled with identifying opponent types and sampling from diverse probability distributions, highlighting areas for future improvement.
The private agent's superior performance in competitive settings and ability to deceive opponents highlight its strategic advantages.Future research will focus on fine-tuning the private agent to enhance its deceptive capabilities and on exploring its applications beyond gaming, such as interactive simulations and decision support systems.
Limitations
While we showed that augmenting the LLM agent with private deliberation produced superior results overall in repeated games, there were still some limitations.Increasing the number of recall iterations in an LLM agent aids decision-making by providing a more extensive record of interactions with other agents [59].However, when we increased the number of recall iterations, we concatenated the agent's generated output during each iteration, the length of which is depicted in Figure 6.Furthermore, with increased recall iterations, the context length became too large for GPT-4-0613, leading the model to either miss crucial information or engage in hallucination [60], negatively impacting its reasoning abilities.To address this issue, methods for computationally efficient extension of the context window, as proposed in [61,62], may need to be implemented.
Ethics Statement
This study entails the discussion and analysis of a simulated game setting, with any references to crime, animal torture, gender discrimination, or related actions strictly confined within the context of this game.The authors do not endorse violence or illegal activities in real-life scenarios.The game presented in this paper is designed for entertainment and research purposes, aiming to understand game mechanics, player behavior, and artificial intelligence.Moreover, it is important to emphasize that this study strictly adhered to all relevant ethical guidelines, maintaining the highest standards of research integrity.
Figure 1 .
Figure 1.A communication scheme between agents that interact via the environment that serves as a communication channel.
• S represents the finite, countable, non-empty set of all states.The state is represented as the accumulation of dialogue text between two agents, including public and private thoughts (if they exist), actions, and rewards.• b 0 i represents the initial distribution of beliefs agent i, i ∈ N has over the state of the other player −i, denoted by s −i , where b 0 i ∈ B i = ∆(S −i ).Each agent receives a unique initial prompt contained in its initial state.The initial belief distribution ∆(S −i ) of the LLM agents is biased towards fairness and cooperation, with a >60% cooperation rate[33,34].• A i represents the final countable non-empty action space of agent i.The action represents the text the agent produces.This has two parts for a public agent: (1) communicating with the other agent; (2) making a decision on which move to make from the available set of actions; and three parts for a private agent: (1) developing a communication strategy and decision strategy in private thoughts; (2) communicating with the other agents; (3) making a decision on which move to make in public thoughts.• O i represents an observation agent i receives in state s, s ∈ S, and the joint observations of all agents is denoted as o = {o 1 , . . ., o |N| }.The public agent has incomplete observation, due to unavailable private thoughts, while a private agent's observation is complete only if it is the only private agent in the game.However, it may be unaware of that, fact due to the agents' beliefs.• Z: S × A → O represents the probability of generating observation o i , i ∈ N depending on the player's i current state and action, and opposite player's −i current state s −i and action a −i denoted as Z(o i |s i , a i , s −i , a −i ).The observations are generated from the environment with which the agents interact.This prevents agents from influencing others' observations.• T: T(s, a, s ′ ) = T(s ′ |s, a) represents the state transition probability of moving from the current state s to a new state s ′ on joint action a = {a 1 , . . ., a |N| }.State transition represents the concatenation of states and rewards achieved in each round.State transitions are derived from the environment in which agents interact.In this problem setting, transitions are deterministic, as we use deterministic games.• R: S × A → R |N| represents the immediate reward for an agent N given a joint state s = {s 1 , . . ., s |N| } and an action profile a = {a 1 , . . ., a |N| } denoted as R(s, a).The language model environment assigns a reward in each round.LLM agents communicate, thus generating dialogue text and, in the end, providing their choices.After all agents have made their choices, the environment assigns a reward to each agent.
states s prv_prv and s prv_pub , denoting the private and public thoughts of the private agent, and the public states s j−1 pub and action a j−1 pub of the opponent from previous rounds 0, 1, . . ., j − 1.Meanwhile, the public agent's observation of the state o j pub contains the private agent's public thoughts s j−1 prv_pub and action a j−1 prv , i.e., what the opponent has revealed in previous rounds 0, 1, . . ., j − 1 and its own state s j pub .Regardless of the opponent's move in round j, each player independently chooses an action a j i .Then, each player receives a reward r j from the environment based on joint actions and states R (s Additionally, each player receives an observation o j i of states and actions given the function Z(an agent i is determined by the state transition function T(that takes the current state and joint actions of all players.The player i in round j has a policy π : S × B × A → [0, 1] where π a probability distribution over action space a j i ∈ A i given agent the i's current state and his belief b j i = ∆(s j −i ) about the opponent's current state.Using policy π j i , the agent can calculate the private expected reward, depending on his beliefs about the opponent's potential set of actions as:
Figure 2 .
Figure 2. Comparison of results achieved when prompting GPT-4-0613 to sample from different distributions.(a) Depicts output when generating a Gaussian distribution.(b) Depicts output when generating a Poisson distribution.(c) Depicts output when generating a uniform distribution.
Figure 3 .
Figure 3.The distribution of action choices P(a i |X i ) depends on the number of messages in the history buffer |X i |. |X i | = 0 represents no prior knowledge of the opponent with no message history.|X i | = 1 represents one message in message history, and |X i | >= 2 denotes a full history buffer written in current state s.
Listing 3 .
Initial prompt to the SH game.T i t l e : Stag Hunt game~r u l e s Context : You a r e engaged i n a r e p e a t e d Stag Hunt game with another player , where you have t o choose between Option A and Option B .Each round ' s outcome i s determined by your and t h e o t h e r player ' s c h o i c e s , r e s u l t i n g i n d i f f e r e n t p o i n t d i s t r i b u t i o n s .The~g o a l i s t o maximize your t o t a l p o i n t s over a l l ĩ t e r a t i o n s .
Listing 4 .
An example of private agent's thoughts.PRIVATE AGENT:P r i v a t e thoughts :
P u b lListing 5 .
i c thoughts : I choose Option B .By~d e v i a t i n g from t h e c o o p e r a t i v e approach and choosing Option B , I aim t o e x p l o i t my opponent ' s c o n s i s t e n t c h o i c e o f Option A and p o t e n t i a l l y gain a lower p e n a l t y .An example of public agent's thoughts.PUBLIC AGENT: P u b l i c thoughts :
Figure 5 .
Figure 5.An example of two iterations of the PD game between the public and private agent.After each iteration, the environment exchanged public messages and assigned rewards.
Figure 6 .
Figure 6.Comparison of context length in number of characters between private agent and public agent across games.Private agent's thought length is denoted as private thoughts and public thoughts, whilst public agent's thought length is denoted as public agent.
Figure 9 .
Figure 9. Coordination ratio depending on the value x in Parameterized game |J| = 15 iterations averaged over |I| = 10 rounds.
Listing 1 .
Example output of an private agent to environment and its own context-window memory.
hoping t o e i t h e r g e t a lower p e n a l t y or gain i n f o r m a t i o n about my opponent ' s s t r a t e g y .} } PUBLIC THOUGHTS: I w i l l choose a c t i o n A. I t h i n k i t i s i n our b e s t i n t e r e s t f o r both o f us t o chose t h e o p t i o n~A.
Table 2 .
An example of games and corresponding equilibria.
Table 3 .
Payoff matrix of the parameterized game. | 2024-06-20T15:24:10.172Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "286b88d695b7528355382c2557eee1b248949765",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/26/6/524/pdf?version=1718702936",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d85dd76bc8c39de40776bfff7526cebe01b8917c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
207891839 | pes2o/s2orc | v3-fos-license | In silico evaluation of the thermal stress induced by MRI switched gradient fields in patients with metallic hip implant
This work focuses on the in silico evaluation of the energy deposed by MRI switched gradient fields in bulk metallic implants and the consequent temperature increase in the surrounding tissues. An original computational strategy, based on the subdivision of the gradient coil switching sequences into sub-signals and on the time-harmonic electromagnetic field solution, allows to realistically simulate the evolution of the phenomena produced by the gradient coils fed according to any MRI sequence. Then, Pennes’ bioheat equation is solved through a Douglas–Gunn time split scheme to compute the time-dependent temperature increase. The procedure is validated by comparison with laboratory results, using a component of a realistic hip implant embedded within a phantom, obtaining an agreement on the temperature increase better than 5%, lower than the overall measurement uncertainty. The heating generated inside the body of a patient with a unilateral hip implant when undergoing an Echo-Planar Imaging (EPI) MRI sequence is evaluated and the role of the parameters affecting the thermal results (body position, coil performing the frequency encoding, effects of thermoregulation) is discussed. The results show that the gradient coils can generate local increases of temperature up to some kelvin when acting without radiofrequency excitation. Hence, their contribution in general should not be disregarded when evaluating patients’ safety.
put in evidence hot-spots close to the tip of the stem and the screw of the acetabular cup (that was not present in the model used by Mohsin) and, for a local SAR (averaged over 10 g) below 10 W/kg, found a maximum final temperature exceeding the limit prescribed by Standard IEC 60601-2-33 (2010), i.e. 39 °C.
Unlike RF effects, limited effort has been devoted to the thermal effects due to the switching gradient coils (GCs) fields, mainly analyzed for exertion of forces, induced voltages and imaging artefacts in the presence of implants (see for example Erhardt et al (2018) and Kalb et al (2018)). However, under realistic operating conditions, GCs may produce a significant amount of power inside a bulky metallic object, as proved by preliminary theoretical estimates on a model problem (a sphere in Zilberti et al (2017)). Experimental corfirmations are given by Graf et al (2007), which measured a temperature increase up to 2.2 K in an aluminum replica of a hip implant after 210 s of exposure to a 3D true FISP (fast imaging with steady precession) sequence, and by Bruehl et al (2017) on an acetabular cup, as will be discussed in section 3. The role of GCs fields in the increase of tissue temperature in patients with joint arthroplasties is verified also by simulations. In El Bannan et al (2013) the numerical and experimental analysis of twelve rods of different metals inserted in a 1 kHz driven solenoid shows a temperature rise up to 2.45 K. Zilberti et al (2014) consider an anatomical human model with bilateral hip implant placed in a MRI combined with a linear accelerator and compute maximum temperature elevations up to 2.7 K under 1 kHz supply for GCs. The calculation of a maximum temperature rise of about 4 K is described in Zilberti et al (2015), for the case of a patient with hip implant during 30 min of exposure to trapezoidal GC signals produced by a conventional system of coils working with a 20% duty-cycle. These previous analyses put in evidence a potential risk of gradient-induced heating of implants, but are difficult to compare, because of the different features (spatial distribution, waveform, frequency and magnitude of the applied field; shape and material of the test object; duration of the exposure) of each analyzed situation. In addition, they all present some important approximations (for instance, use of a test object with simplified shape instead of a real implant, or use of uniform and sinusoidal magnetic fields, instead of fields with realistic spatial distribution and time waveform) which make the results not completely suitable to describe a real scenario.
Differently from the RF heating, where a significant amount of energy deposition occurs directly in the biological tissues, almost the whole GCs thermal energy is deposed in the metallic implant and then diffuses towards body tissues surrounding it, giving rise to a local phenomenon. The effects of GCs are usually stronger when the implant is placed in the scanner periphery. There, as can be seen in figure 1, the concomitant transversal components B x and B y produced by the GCs, whose presence represents a minor source of distortion for standard MRI, can become stronger than the longitudinal component B z for some GCs (Liu et al 2003). Thus, the implant positioning within the scanner becomes an essential parameter.
Up to now, the computational papers devoted to this subject have approximated the supply conditions by adopting periodic sinusoidal or trapezoidal waveforms, which are quite far from the much more complex waveforms of GC fields nowadays applied in MRI. The rare papers which use actual sequences, e.g. 3D true FISP in El Bannan et al (2013) and Arduino et al (2017), apply a simplified approach, not suitable to take into account the details of many actual sequences (e.g. when the signal level is modified at each repetition). This paper proposes a novel strategy specifically developed to realistically account for the time behavior of GCs fields during any MRI sequence in the electromagnetic and thermal simulations required to estimate the temperature increase. The technique is based on the decomposition of the supply waveforms into sub-signals. It leads to the solution of a limited number of electromagnetic problems in the frequency domain. The power density generated by the GCs fields, increased by the one generated by the RF antenna, becomes the forcing term of a transient thermal problem, which provides the distribution of the temperature increase inside the patient body. In order to focus the discussion on the effects due to the exposure to the GCs switching fields, in the following the power generated by the RF field is discarded.
The computational approach used for the in silico evaluation is described in details in section 2, related to Methods. Then, the proposed procedure is applied to the analysis of the GCs induced heating of a hip implant due to a echo-planar imaging (EPI) sequence, as detailed in section 3. This sequence was chosen for two main reasons: (1) it is quite aggressive from the point of view of energy deposition; and (2) it has characteristics that make it representative of other types of sequences from the viewpoint of the approach here proposed. In addition, the adopted EPI sequence is similar to the sequence that was used by Bruehl et al (2017) to perform an experiment within a real scanner working in 'normal operating mode' (IEC 2010), hence considered safe from the viewpoint of cardiac and peripheral nerve stimulation.
In the analysis, the effects of the body positioning with respect to the scanner isocenter and the choice of slice selection, phase encoding and frequency encoding directions on the GCs induced implant heating are discussed.
It must be remarked that the present work investigates the heating due to the only GCs, disregarding RF. Thus, it cannot give a full response about the admissibilty of a patient to a MRI scan, for which it gives a condicio sine qua non, anyway. In addition, while RF SAR may involve a large portion of the body, producing not only a local heating, but, sometimes, a core temperature increase (van den Brink 2019), gradient-induced heating requires the presence of a metallic implant and keeps confined around it. For this reason, the following analysis focuses on local thermal effects.
Methods
Since the expected temperature increase, in the order of a few kelvin, is small enough to not alter sensibly the electric properties of the implants' materials and the biological tissues (according to Trujillo et al (2013), the relative variation is limited to few percents), the proposed numerical procedure can be divided into two successive separate steps. A set of EMF solutions provides the instantaneous power deposed in the orthopedic implant; then, a thermal problem, starting from the previous result, describes the consequent heat diffusion and temperature increase in the surrounding tissues.
Electromagnetic problem
Due to the extremely low electric conductivity of the human body with respect to metal, the electromagnetic problem is developed only inside the medical implant, under the reasonable assumption that, for the involved frequencies, the currents induced into the body tissues neither generate a significant thermal power, nor modify the magnetic field produced by GCs. The latter assumption would introduce a relative error on the induced electric field within the body tissues lower than 10 −4 (estimated using the analytical solution reported in Zilberti et al (2017) at the frequency of 100 kHz). As an original alternative to a step-by-step procedure, the proposed approach is based on time-harmonic EMF solutions and is structured in successive phases. The related operations are illustrated in the following, making reference to an EPI sequence, whose waveforms, reported in figure 2, are rather complex and challenging to be simulated. The EPI gradient waveforms have been recorded from nominal-current monitor of the gradient amplifier of a Siemens Verio 3 T scanner at PTB. The vendor supplied EPI sequence was used and adjusted to achieve maximum heating. Shim currents and noise have been removed.
The proposed procedure can be divided into three steps: signal pre-processing, electromagnetic simulations, power signal synthesis by post-processing.
The former step consists in subdividing the waveform supplied to each GC during a given time interval Δ (e.g. the time frame of the EPI sequence, or the repetition time TR for non-single-shot sequences) into sub-signals, which can be either periodic or aperiodic. Each sub-signal is then represented through a truncated Fourier Figure 1. Spatial distribution of the GC field components (mT) along the mid-plane (plane x-z) of a tubular scanner for a rated gradient of 20 mT/m. From left to right, the magnetic flux density components (B x , B y and B z ) generated by GC for gradient-X, gradient-Y and gradient-Z (from upper to lower, respectively). The B z distribution generated by the gradient-Y coil in the plane y -z is identical to the one generated by the gradient-X coil in the plane x-z and, in the same way, the B x and B y components are identical, but exchanged. Vice versa, for the gradient-X coil in plane y -z. expansion via fast Fourier transform. To determine the level of the truncation, an estimate of the relative error made with respect to the deposed energy is provided by the error index ε n defined as being Bñ the sub-signal waveform approximated by the Fourier series truncated at the n-th harmonic order, B the original sub-signal waveform, and Δ s the duration of the considered sub-signal. Precisely, the Fourier series is truncated when the error index ε n is less than 5%. The integrals that appear in (1) are proportional, with the same proportionality factor, to the energy that would be deposed by the EMF in the radiated object if the skin effect (i.e. the confinment of the induced currents at the periphery of the object) were negligible; thus, they can be used for a preliminary comparison of the thermal effects produced by the actual and the truncated waveforms. Once the sub-signals are identified, the second step of the proposed methodology requires, for each harmonic of each sub-signal, the solution inside the implant of an EMF problem with unitary source. It is important to underline that, when more sub-signals in the same coil have the same fundamental frequency, the field solution is performed just once for that frequency and the related harmonics. The eddy currents problem is formulated through an electric vector potential and a magnetic scalar potential (T − φ formulation), where the magnetic field distribution produced by the considered GC is assumed as the driving term . A hybrid Finite Element/Boundary Element method, solved by a generalized minimal residual (GMRES) algorithm running on graphics processing units (GPUs), provides the EMF distributions (Bottauscio et al 2015a). The adopted homemade implementation of the electromagnetic solver has already been applied and validated for the analysis of RF problems (Bottauscio et al 2015b). Details of the numerical implementation are reported in the appendix. The value of the computed electric field E in each implant voxel is stored.
Finally, in the last step of the procedure, the time interval Δ is sampled. For each voxel, the stored electric field harmonics are scaled by the complex Fourier coefficients of the related sub-signal. It must be noted that the change of one or more signal amplitudes in successive repetition times (e.g. in the phase encode of Gradient Echo sequences) involves the update of the scale factors only, without requiring additional field computations. The electric fields are then moved back in the time domain, located in the correct time interval and superimposed coil by coil to reconstruct the instantaneous distribution of the total electric field in all time samples. Finally, the instantaneous Joule power density and the energy density deposed in each voxel during each time step are computed. At the end of the process, in each implant voxel the power is averaged on the time step of the marching method to solve the thermal problem, inserting possible idle times between two successive intervals Δ. At this point, the power contribution due to the radiofrequency (RF) coil, provided by another numerical code, could be added in all body voxels in case of a complete simulation of the MRI session.
As an example, the process based on the harmonic decomposition, applied to the gradient-Z coil waveform, is presented in figure 3, where sub-signals 1, 2 and 3 are colored in red, blue and green, respectively. It is worth noting that 'idle' intervals must be inserted before and after the aperiodic sub-signals, in order to make the Fourier approximation reasonable for both magnetic and induced electric fields (i.e. to take into account possible transient evolutions when superposing the effects of different sub-signals). Moreover, by virtue of the arbitrariness in the choice of the time window covering the aperiodic signals, the fundamental frequency can be suitably 'tuned' to reduce the number of required electromagnetic simulations. Indeed, on the one hand, the same duration can be adopted for different sub-signals in the same coil, reducing the number of time-harmonic problems to be simulated numerically. On the other hand, in some cases idle times can be adjusted to obtain a waveform whose symmetry rules out the even harmonics (apart from the DC component, which is irrelevant in the process of electromagnetic induction, anyway). In figure 3, the Fourier expansion is truncated to the 63rd harmonic for the two aperiodic sub-signals (using the same fundamental frequency) and to the 15th harmonic (with only odd components) for the periodic sub-signal. Thus, only 71 simulations are required to evaluate the energetic effect of the gradient-Z coil effectively. The corresponding values of the error index ε n are reported in table 1.
The reliability of this approch for aperiodic signals has been tested on the model problem consisting of a sphere (diameter 40 mm, comparable with the femoral head of a hip implant) made of the alloys usually adopted for hip implants with the higher and the lower electric conductivity (stainless steel and Ti6Al4V, respectively, see table 2). A trapezoidal waveform for the magnetic field is imposed to simulate one of the aperiodic sub-signals (e.g. the last sub-signal in the gradient-Z coil of the EPI sequence). The waveform of the power density p (t) generated in an inner point close to the sphere surface is computed using the proposed time-harmonic approach with a decomposition truncated at the 31st component. The result is compared with the one given by a time-domain 2D homemade solver, suitable for general purpose electromagnetic simulations. The comparison, presented in figure 4, shows a good agreement between the two solutions. In particular, the decay time of the power density p (t), during the plateaus of the magnetic field waveform, are accurately reconstructed. About this aspect, it is worth noting that the reported power density is proportional to the square of the local electric field, which does not simply reflect the time derivative of the applied magnetic field, but exhibits the delay typical of a ohmic circuit with non-negligible self-inductance. This occurs because the eddy currents in the metallic objects are able to perturb the local distribution of the magnetic field itself.
Thermal problem
The time evolution of the temperature T inside the biological matter can be modelled through Pennes' equation (Pennes 1948): where ρc p is the volumetric heat capacity, λ is the thermal conductivity and h b is the blood perfusion coefficient in the human tissues, T b is the temperature of blood, P met is the volume power density associated with the metabolic process and P em is the volume power density deposed by the radiation and derived by the previous electromagnetic solution. The same equation can be used to describe heat transfer also within the implant materials simply by putting h b and P met equal to zero. On the body surface, the thermal field T satisfies the Robin condition: being h amb the heat exchange coefficient, n the outward normal direction and T air the unperturbed air temperature of the external environment. The thermal problem can be solved starting from the knowledge of the initial spatial distribution of temperature within the body before the exposure to the EMF (T 0 ). The thermoregulation affects both the blood perfusion coefficient h b and metabolic heating P met . According to the model proposed in (Laakso et al 2011), coefficient h b is modified by a local temperature-dependent multiplier L B = 2 (T -T0)/ΔB , assuming ΔB = 1.6 K. Thus, the perfusion term in equation (2) becomes: where h b0 is the blood perfusion at T 0 . Factor L B is saturated to 32 for skin and to 15 for all other tissues. Similarly, the metabolic heat production is assumed to be dependent on the local tissue temperature through a multiplier L M = 1.1 (T−T0) , as in Bernardi et al (2003), so that the metabolic heat term modifies as: where P met,0 is the metabolic power density at T 0 . Other non-linear contributions due to thermoregulatory processes, like sweating, that would affect the heat exchange coefficient h amb (Bernardi et al 2003), are neglected.
Since the temperature elevation produced by the EMFs is the quantity of interest, the bioheat equation can be conveniently simplified by introducing the temperature elevation ϑ with respect to the local temperature T 0 before the exposure (ϑ = T − T 0 ), following the approach proposed in Arduino et al (2017). The thermal equation before the exposure (when P em = 0): Figure 3. Sub-signals (from 1 to 3, starting from the top) deriving from the G z signal (reported on top) and related harmonic decomposition. Fourier series expansion truncated to the 63rd harmonic for the aperiodic sub-signals and to the 15th harmonic for the periodic one. In the plots of the harmonic decomposition for the aperiodic signals (1 and 3), only the first 31 harmonics are shown (but 63 harmonics are needed to obtain a satisfactory error index).
is subtracted from equation (2), leading to: In the inner tissues, the temperature at rest T 0 can be approximated with the blood temperature T b . By using the same approximation in the whole body, equation (7) can be rewritten in terms of the temperature elevation θ, as: with Robin boundary conditions λ ∂ϑ/∂n| ∂V = −h amb ϑ. The adopted approximation is quite large for the skin, which is colder than the rest of the body and plays a significant role in the whole body thermoregulation. However, in the considered simulations the heat source is the metallic implant located deeply within the body, and the obtained temperature maps are completely controlled by local parameters (mainly the blood perfusion, that dissipates the heat before it reaches the body surface). This fact makes the presented results trustworthy, despite the approximations in the mathematical modelling and sweating has been neglected. The form of Density kg/m 3 8445 4420 7900 Figure 4. Example of the reconstruction of the induced power density in a point in close proximity to surface of the metallic sphere when an aperiodic behaviour of magnetic field is assumed. The upper plot shows the aperiodic trapezoidal waveform of the applied magnetic field. The middle and lower plots report the waveforms of p (t) for a sphere made of Stainless steel or Ti6Al4V alloy, respectively. In both diagrams the waveform of p (t) computed with the harmonic decomposition is compared with the reference one given by a time-domain electromagnetic solver. The power density values are normalized to the maximum peak value (in the Stainless steel sphere) reached during the considered time interval.
Pennes' equation (8) avoids computing the initial temperature distribution within the body, determined by the equilibrium between metabolic heat, diffusion and perfusion phenomena, and heat exchange with the air. Homemade code is employed for the thermal computations. To solve equation (8), the thermal problem is developed in a domain involving the implant and the surrounding biological tissues. To reduce the computational burden, instead of simulating the whole body it is possible to select a portion of it, around the implant (see figures 7 and 9). The suitability of the adopted domain can be verified a posteriori, checking that the tempearture elevation does not extend on the artificial cuts that have been introduced to bound the domain itself. The adoption of a structured Cartesian mesh to discretize the domain (as in voxelized human models) allows the application of the finite difference method (FDM) with the Douglas-Gunn (DG) time split implemented to work on GPUs, whose efficiency has been previously verified .
In the time marching algorithm, the correcting factor L B is evaluated according to the temperature increase value at the previous time instant, in order to deal with a linear system at each step. By keeping sufficiently small time-steps, this linearization introduces a negligible error in the solution. It is worth noting that thermal phenomena evolve much more slowly than the electromagnetic ones, acting like a low-band filter for the rapidly varying electromagnetic power density. Without thermoregulation, problem (8) is solved assuming all thermal parameters to be invariant. This last assumption provides the upper limit for the temperature elevation computed through Pennes' equation (i.e. the worst-case scenario), which corresponds, for instance, to a completely impaired thermoregulation.
Experimental validation
The homemade numerical codes which implement the procedure described in the previous section, have been validated by comparison with the experimental results published in Bruehl et al (2017). In that paper, the heating of an acetabular cup (outer diameter equal to 54 mm) of a real hip implant made of Ti6Al4V alloy was studied. The cup (see picture in figure 5(a)) was placed in a 3 T clinical scanner, programming an EPI-like sequence with continuous, trapezoidal gradient-Z. The experiment was performed with the implant thermally 'insulated' (using a polystyrene cover which surrounds the cup leaving a layer of air around it) or 'embedded' (in gelatin gel). Details of the experiment can be found in Bruehl et al (2017).
Both testing conditions have been here simulated and the comparison has been carried on making reference to the temperature increase of the sensor measuring the highest temperature increase.
In the 'embedded' implant, the simulated results are obtained assuming for the gel the following thermal properties: λ = 0.5 W/(m K), c p = 3700 J/(kg K) and ρ = 1100 kg/m 3 . For the thermally 'insulated' case, simulations have been performed assuming Robin boundary conditions at the implant surface, with heat exchange coefficient h amb between 2.5 W/(m 2 K) and 3 W/(m 2 K).
The relative uncertainty of the measured temperature increase is estimated to be around 5%, which includes the instrumental relative uncertainty and the imperfect coupling of sensor and cup. When comparing the experimental data with computations, we have to include additional uncertainties caused by the incomplete knowledge of the overall experimental parameters, such as the position of the cup within the gradient field of the MRI scanner, the physical properties of the cup (electrical and thermal conductivity, specific heat capacity) and its thickness. A global budget of all these uncertainties leads to an estimate of a standard relative uncertainty of 20%.
The maximum discrepancy observed with respect to the computed temperature increase occurs for the implant embedded in gel and it amounts to about 3.2% of the measured value; hence, significantly below the uncertainty level. The obtained good agreement between measured and computed temperature increases is supported by the trends depicted in figure 5(b).
It is worth noting that the comparison reported here validates the proposed method only partially, since the phantom that surrounds the acetabular cup cannot account for the blood perfusion and thermoregulation that affect the thermal process in a living body. However, the main novelty of the presented approach is given by the treatment of the electromagnetic problem, which is completely supported by the described experimental validation, whereas the thermal computations rely on a standard approach widely adopted in the field of electromagnetic dosimetry.
In silico simulations
The proposed procedure is applied to the evaluation of the GC thermal effects in a patient with a unilateral right hip implant, during a MRI session in a tubular scanner. The implant has a total length of 23 cm (including the screw), that is ~20 cm from the top of the femoral head to the lower tip of the stem, and includes an acetabular shell and a liner. Three different materials for the metallic parts of the implant are analyzed: CoCrMo alloy, Ti6Al4V alloy and austenitic stainless steel. They are representative of the materials usually adopted for this type of implant. The electrical and thermal properties of the metallic parts are detailed in table 2. The liner is made of polyethylene and has a negligible electric conductivity.
The hip implant has been inserted in the 'Duke' anatomical model, belonging to the virtual population (ViP) (Gosselin et al 2014), segmented into 77 biological tissues, whose electric and thermal properties are deduced from the database developed by the IT'IS Foundation (IT'IS Database 2016). The heat exchange coefficient h amb has been always assumed equal to 7 W/(m 2 K). Due to the essentially local heating, as evident in the results shown in the following, this coefficient has a weak effect on the global results.
In the simulations, the scan performed following the EPI sequence presented in figure 2, continuously applied for about 12 min in order to simulate the repeated acquisition of multiple body slices. According to the division into sub-signals shown in figure 3, eight elementary waveforms are identified: six of them are aperiodic (two for each coil) and two are periodic (one in the gradient-Y coil, driven by a fundamental frequency of ~1922 Hz and reconstructed by 63 harmonics, and one in the gradient-Z coil at ~961 Hz with 8 non-null odd harmonics). By choosing time windows with the same duration (4 ms) for all the aperiodic sub-signals, the same 250 Hz fundamental frequency allows the reconstruction of all of them with the stated error index using 63 harmonic components. In this way, the total number of EMF solutions required for an accurate simulation is limited to 260.
The same result could be obtained if the whole EPI sequence were expanded in truncated Fourier series that is assuming the time interval Δ as a period. However, in this case, the waveform is more complicate and a satisfactory reconstruction requires much more harmonic components (at least 1023 to ensure an index error lower than 5%). Consequently, the total number of EMF problems would increase up to 3069, which puts in evidence the efficiency of the proposed strategy based on sub-signals.
The computations are performed on a server with Intel Xeon CPU E5-2680 v2, 128 GB RAM, and a NVIDIA K80 GPU card. The computational time required for each EMF solution (i.e. each harmonic) is ~30 s, when the hip implant is discretized with 2 mm voxels (1250 7 voxels, ~16 000 nodal unknowns, ~32 000 edge unknowns) and a limit error of 10 −3 is set for the GMRES residual (reached after about 200 GMRES iterations). Using the same hardware, the computational time for the thermal problem (on a portion of the body involving 840 742 cubic voxels with 2 mm side and ~872 000 nodal unknowns) is ~300 s having used a 0.5 s time-step.
In the analysis we have considered the three possible imaging planes (sagittal, coronal and transversal), associating the role of slice selection to the corresponding GC and the role of phase and frequency encoding alternatively to the other two GC (the identification of the six cases is summarized in table 3).
The computations have been performed considering different realistic positions of the hip implant, where the body axis is aligned with the scanner axis. Making reference to the femoral head, the implant has been located in the 'natural' position assumed by the body on the tomograph table, that is x-coordinate ≈85 mm and y -coordinate ≈0, with respect to the MR isocenter. The z-coordinate is varied in the range from −450 mm to +300 mm.
Due to the spatial variation of the GC magnetic field (see figure 1), the heating significantly varies with position. As an example, for the simulated case #1 (sagittal imaging with frequency encoding associated to the gradient-Z coil), the maximum heating is found for a longitudinal shift equal to −300 mm, which approximately corresponds to a thoracic MRI scan ( figure 6). Predictably, this is the position where the GC magnetic field shows the highest intensity, including a significant concomitant component along the x-axis produced by the gradient-X coil. For the same GC configuration, the minimum heating is found when the femoral head is located at the level of the isocenter. The results reported in figure 6 refer to an implant made of CoCrMo alloy. A qualitatively similar behavior can be observed for the other materials. Figure 7 shows the volume power density within the CoCrMo implant (computed as the ratio between the energy density induced by the GC and the sequence duration) and the produced increase of temperature in the surrounding tissues for case #1 with the femoral head at −300 mm. For both quantities, the highest values are localized in the right side of the acetabular shell, with a peak temperature increase of about 3.2 K (without thermoregulation). The related temperature increase trend and the spatial temperature evolution are reported in figure 8. This diagram underlines that the maximum value in the whole domain is reached inside the metallic implant. Therefore, in the following, the maximum temperature increase always refers to a point in the metal. Figure 8 also puts in evidence that the heating of the implant and the surrounding tissues develops on a relatively long time scale, which, on the whole, makes the heating curve smooth (masking minor oscillations due to the waveform and the switching of the gradient signals that drive the process).
When the implant is moved to a position with zero shift (i.e. the target of the MRI exam is the pelvis, which is located at the isocenter) the peak of temperature elevation reduces to 0.08 K (figure 9), as a result of the reduction in the field amplitude and the different distribution of the power density within the implant.
The change of the imaging plane, and the consequent role of the three GC, affects the thermal stress sensibly. By comparing the different possible configurations detailed in table 3, the maximum temperature elevation ranges from 1.9 K to 3.2 K with the femoral head shifted longitudinally of −300 mm. The values corresponding to each simulated case are summarized in table 4. It is worth noting that the six cases could be grouped in three couples having the same frequency encoding direction and the same predicted maximum temperature increase. These results highlight that the choice of the direction of the frequency encoding plays a main role on the temperature increase induced by the considered EPI sequence.
As a final example, in the most severe conditions (i.e. cases #4 and #6, with frequency encoding along the x-direction and implant in Stainless steel), the maximum temperature increase reduces to around 12% moving from the position of maximum heating (thoracic MRI) to the position of minimum heating (abdominal/ pelvis MRI). The minimum heating (occuring when scanning the pelvis) for cases #4 and #6 is significantly higher than the one found for cases #1 and #5. This happens because, for small values of the longitudinal shift, all Cartesian components of the magnetic flux density produced by the gradient-Z coil (performing the frequency encoding in cases #1 and #5) are relatively weak, whereas the z-component of the field produced by the gradient-X coil (playing the role of the most 'energetic' signal in cases #4 and #6) is non-negligible at the place of the implant (which is naturally displaced laterally). Figure 10 compares the distribution of power density and z-component of the GC field, on the implant surface, for cases #1 and #4, making reference to the implant made of CoCrMo alloy. A higher value of the B-field in the acetabular region, which determines higher induced currents and deposed power, is evident for case #4.
Discussion
The results of the research highlight a potential risk in the application of the gradient fields of an EPI sequence on a bulky metallic implant, due to a possible temperature elevation which compares to that occurring, under some circumstances, owing to the RF fields and considered as a danger. About this point, it is interesting to draw a parallel between the two fields. The maximum volume power density reported in figure 7 inside the implant, due to GC fields, amounts to about 1.5 · 10 5 W/m 3 , which corresponds to about 17.8 w/kg (the density of the CoCrMo alloy is 8445 kg/m 3 ). If we imagine that such an 'equivalent specific absorption rate' drives an adiabatic heating (as happens at the first instants of the process, when diffusion has not taken place yet), given the specific heat capacity of the alloy (450 J/(kg K)), we obtain a rate of heating equal to 0.04 K/s. The SAR required to obtain the same rate in bones and muscle (where the specific heat capacity is about 1300 J/(kg K) and 3500 J/(kg K), respectively), would be 52 W/kg and 140 W/kg, which would exceed the limits on local SAR for both normal and first level controlled operating mode recommended by IEC 60601-2-33 (2010). Of course, this estimate applies in the first stage of the heating process, when diffusion has not taken off yet, and cannot provide a complete term of comparison with the SAR generated at RF because in the metallic parts of the implant, where thermal conduction is much larger than that of biological tissues, the heat diffuses faster. The computational results reported in the previous section have also put in evidence the importance of simulating the thermal stress in metallic implants by taking into account both the actual spatial distribution of the magnetic field gradient (also outside the imaging region) and the time waveforms of realistic clinical sequences. The combined effects of implant position and signal shape determines the thermal stress experienced by the patient.
The dependence of the heating on the position within the MR bore due to the GC field is more complex with respect to the largely studied effect of positioning in case of RF heating.
In particular, while RF heating is maximized when the metallic implant is located within the RF coil (imaging region), for the GC heating the most severe conditions are always found when the implant is relatively far from the MR isocenter. Such a result is justified by the intrinsic features of the GC fields, which inside the imaging region are requested to linearly increase moving away from the isocenter (along the axial z-direction for the field of gradient-Z coil or toward the radial directions for the other coils, as well illustrated by figure 1). Outside the imaging region, the spatial distribution of the field is less regular for gradient-X and -Y coils, that also show significant spurious components along x and y directions. In particular, the B x distribution generated by the gradient-X coil in figure 1 exhibits the absolute flux density peak in proximity of the side wall of the x-z plane (the same for B y generated by the gradient-Y coil in the y -z plane). Such a field concentration implies a strong power deposition by the gradient-X coil when the implant is placed in this region. The same energetic effect could be provided by the gradient-Y coil (in the y -z plane), but the limited extent of the human body in the y -direction avoids that the metallic implant and the biological tissues are subjected to the B y highest values.
Ultimately, two basic considerations can be deduced from this analysis. The first one, as already mentioned, is that the thermal stress in the implant and the surrounding tissues are higher when the implant is far from the isocenter of the MR scanner. This effect is found for all the simulated situations, independently from the imaging plane that is considered, even if the entity of the variation depends on the supply conditions. The other conclusion is that, with this kind of GC, the gradient-X coil (because of the B x spurious component) and gradient-Z coil (because of the B z component) are responsible of the strongest thermal stress, while the contribution of the gradient-Y coil is less relevant, although non-negligible.
The other essential factor affecting the thermal behaviour is the waveform of the sequence signals flowing in the GC. The analysis of the EPI sequence signals evidences that the prevailing contribution to the thermal stress is due to the frequency encoding waveform and, in particular, to its periodic sub-signal. A simulation performed under the same conditions as in case #1 with the CoCrMo alloy, but considering only this sub-signal (all other sub-signals in the three coils are suppressed), leads to a temperature elevation of 3.1 K, to be compared with 3.2 K when the actual EPI sequence is considered. It follows that, on the basis of the previous considerations, the maximum heating is reached when the frequency encoding is imposed on the gradient-X coil or on the gradient-Z coil. On the contrary, the temperature elevation significantly reduces if this signal is carried by the gradient-Y coil (see table 4).
More in general, the waveform of the signals, and in particular the presence of high-order harmonics in the Fourier expansion, is crucial for the thermal stress. If case #1 with the CoCrMo implant is simulated assuming a pure sinusoidal waveform for the periodic sub-signal of the frequency encoding signal (having the same amplitude as the actual signal), the temperature elevation decreases from 3.2 K to 2 K. In this context, the slope of the trapezoidal waveform of the periodic sub-signal of the frequency encoding plays an important role. If, with the CoCrMo implant, the slope is increased from 167 (T/m)/s (value adopted in case #1) to 218 (T/m)/s and to 311 (T/m)/s, keeping constant both the plateau of field gradient and the signal period, the maximum temperature elevation increases from 3.2 K to 3.85 K and to 4.80 K, respectively.
The physical properties of the metallic components, and in particular the electrical conductivity, affect the heating of the implant. At the increase of the electrical conductivity, the power density dissipated in the metal increases, as well as the heating. However, the skin effect in the metallic component (which increases with the harmonic frequencies) makes this behavior non-linear with the conductivity value. By changing the electrical conductivity from 0.58 × 10 6 S/m (Ti6Al4V alloy) to 1.25 × 10 6 S/m (Stainless steel) the maximum temper ature increase computed in case #1 varies from 2.26 K to 3.30 K, while a linear behavior would lead to 4.96 K. Finally, we point out that the effect of the thermoregulation is found to be almost negligible (the variation of the maximum temperature is less than 0.15 K for the considered cases). This relatively low impact of thermoregulation complies with results reported by other authors (Bernardi et al 2003, Kodera et al 2018 that put in evidence a notable effect only at temperature elevations higher than 2 K. In the specific case here analyzed, such a weak effect can be further explained noting that the heating produced by GC is local (around the implant) and the temperature elevation involves tissues with relatively low basal blood perfusion coefficient.
Conclusion
The paper presents a procedure for the evaluation of the thermal effects produced by the gradient coils in the body of patients with orthopedic implants during the execution of an EPI sequence without RF excitation. A specific strategy, based on the subdivision of the actual gradient waveforms into sub-signals and the EMF solution in the frequency domain, has been developed. This approach allows investigating the exposure scenario in a very realistic way, and, at the same time, limiting the computational burden with respect to approaches directly applied to the periodic gradient pattern (or the whole sequence). The large variability in the dosimetric results obtained for different body positions and operating sequences confirms the importance of taking into account the specific features of the exposure situation.
This aspect, together with the fact that, in some of the proposed examples, the maximum temperature elevation exceeds 3 K, indicates that the GC heating effects must be carefully estimated, to check the existence of conditions that could represent a safety concern. If needed, safety measures can be taken, for instance, by adopting 'less aggressive' sequences when scanning patients with implants, by changing the imaging plane, or by introducing waiting times between the application of a sequence and the successive, to allow thermal diffusion within tissues and consequent redistribution of the thermal energy. All these aspects need to be studied in details in a future work. | 2019-11-07T14:09:33.475Z | 2019-11-04T00:00:00.000 | {
"year": 2019,
"sha1": "1090af56f26cf2c7335b288582fd248e8ab3a470",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1361-6560/ab5428",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "7d985b316df48d148dbdba5ffbf629bbdb78a2f6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
241035368 | pes2o/s2orc | v3-fos-license | On a Quantum Weyl Curvature Hypothesis
Roger Penrose's Weyl curvature hypothesis states that the Weyl curvature is small at past singularities, but not at future singularities. We review the motivations for this conjecture and present estimates for the entropy of our Universe. We then extend this hypothesis to the quantum regime by demanding that the initial state of primordial quantum fluctuations be the adiabatic vacuum in a (quasi-) de~Sitter space. We finally attempt a justification of this quantum version from a fundamental theory of quantum gravity and speculate on its consequences in the case of a classically recollapsing universe.
1 How special is our Universe?
Our Universe seems to be very special. On large scales, it is approximately homogeneous and isotropic; as indicated by the cosmic microwave background (CMB), initial anisotropies are limited by a number of order 10 −5 . The Universe is also very young. The observed age of about 13.8 billion years may not seem small on everyday standards, but it is surprisingly small when compared to other scales; Poincaré cycles, for example, are much bigger even for very small systems. In fact, 13.8 billion years is more or less the minimal time needed for main sequence stars, habitable planets, and life to develop.
Can one estimate quantitatively how special the Universe is? An answer can be provided by calculating its entropy in an appropriate way and comparing it with the maximum possible entropy. For ordinary matter, states of maximum entropy are homogeneous, so one might wonder whether the Universe started in a state of high entropy. That this argument is misleading comes to light when one takes into account the contribution of gravitational degrees of freedom to entropy. Gravity is universally attractive; consequently, inhomogeneous ('condensed') states are entropically preferred. Unfortunately, no general expression for the entropy of the gravitational field is known. But what we know is an exact formula for the entropy of a black hole, which is arguably the most condensed system in nature. This formula is the expression for the Bekenstein-Hawking entropy and is given by where A denotes the area of the black hole's event horizon, and l P = G /c 3 is the Planck length. If we set Boltzmann' constant k B equal to one (as we shall do in most of the coming equations), the Bekenstein-Hawking entropy gives the area in terms of (twice) the Planck length squared. In the special case of a sphericallysymmetric (Schwarzschild) black hole with mass M, Eq. (1) assumes the form where M ⊙ is the solar mass. Equation (1) is found using the laws of black-hole mechanics. Its statistical origin is, so far, unknown, despite many attempts (and preliminary results) using current approaches to quantum gravity; see, for example, Kiefer (2012a) and the references therein. Following John Wheeler's old idea of "it from bit", one can divide the area A into cells of size Planck-mass squared and calculate the number of ways one can attach the bits 0 and 1 to these cells. This gives a simple model to understand the possible origin of (1), and it also provides the means to calculate statistical correction terms of the form ∝ ln(A/l 2 P ), which arise from Stirling's formula for factorials (Kiefer and Kolland 2008). Except for Planck-size black holes, these corrections terms are negligible and will not be taken into account below.
Some time ago, Roger Penrose made use of (2) to estimate the maximum possible entropy of our Universe; see Penrose (1977Penrose ( , 1979Penrose ( , 1981Penrose ( , 1986. For this purpose, he assumed that all matter in the observable Universe were assembled into one gigantic black hole. The size of the observable Universe is defined by the present size of the particle horizon. Using (2), Penrose obtained a value of the order 10 123 (Penrose 1981). (From now on, we set k B = 1.) Taking into account the fact that our Universe is presently accelerating and that we thus have to use a Schwarzschild-de Sitter solution (with a value for the cosmological constant Λ inferred from the PLANCK data) instead of a Schwarzschild solution, this value is reduced to (Kiefer 2012b) But for an accelerating Universe this is, in fact, not the maximum possible entropy. For the observed value of Λ, there is a contribution from the event horizon of the de Sitter space, which will be the late-time geometry of our Universe (under the assumption of a constant Λ and not a time-dependent dark energy). The general expression for this entropy was derived by Gibbons and Hawking (1977) and reads Taking into account the present uncertainties in the cosmological data, Egan and Lineweaver (2010) found from this the following numerical value: Comparing (5) with (3), it is clear that a future de Sitter space is entropically preferred by about one order of magnitude over a state with all matter being assembled into one gigantic black hole. In order to calculate the probability for our Universe, the maximum value (5) must be compared with the present value for the entropy within the same region. Egan and Lineweaver (2010) present a detailed estimate of all relevant contributions to the entropy, both for the observable Universe (their Table 1) and for the matter within the event horizon (their Table 2). The dominating contributions from the gravitational side are supermassive black holes (SMBHs), followed by stellar black holes. For the region inside the event horizon, the authors present the value for the entropy from supermassive black holes, while the biggest contribution to non-gravitational entropy comes from the CMB photons, with the value and a slightly smaller value for the entropy of relic neutrinos. We see that the non-gravitational entropy is completely negligible. (Already the entropy of the black hole in the centre of the Milky Way is about hundred times the entropy of the CMB photons.) With these numbers, following Penrose (1981), we can estimate the probability of our Universe as follows: In the ratio of these two 'multillions' [a term used by Eddington for double exponentials such as 10 10 10 , see Eddington (1931, p. 450)], the huge number in the numerator is completely negligible compared to the even huger number in the denominator. A similar argument applies to the case with one black hole using (3). As we see from (8), our Universe is very special indeed. From a pure entropic point of view, one would have expected that the Universe started from a very inhomogeneous state with black holes or already from a de Sitter-type space with large event horizon. A smooth initial state without cosmic event horizon is extremely special. Since this would correspond to vanishing Weyl tensor, Penrose came up with the hypothesis that a fundamental theory should predict vanishing Weyl tensor at past singularities. In Penrose (1986, p. 138, italics in the original), he uses the following words:
HYPOTHESIS (CLASSICAL): The Weyl curvature vanishes at all past singularities, as the singularity is approached from future directions.
[This condition can be weakened by only demanding that the Weyl tensor be finite, rather than diverging, see Penrose (2011, p. 134).] He continues by writing: 'This has the advantage that white holes, with their unpleasant anti-thermodynamic behaviour, are excluded. . . . This hypothesis is time-asymmetric, as indeed could have been anticipated, since it yields the timeasymmetric Second Law.' The Weyl curvature hypothesis (WCH) thus excludes the presence of white holes. For various aspects of the WCH, see the recent essay by Hu (2021) and the references therein.
(1981). The 'stalactites' there symbolize black holes (before evaporation); a more probable universe would have 'stalactites' as well as 'stalagmites', the latter representing white holes. (These must not be confused with primordial black holes which would correspond to 'very long' stalactites almost touching the big bang line.) Vanishing Weyl tensor entails, in particular, the absence of gravitational radiation. From the WCH it then follows that all gravitational waves must be retarded. This is analogous to the Sommerfeld condition stating the absence of advanced electromagnetic radiation; see Zeh (2007). Such a condition is crucial for understanding the origin of the arrow of time.
Gravitational waves can be described by certain Weyl scalars constructed from the Weyl tensor (Newman and Penrose 1962). One of them is where h + and h × denote the two polarization states of weak gravitational waves. The Newman-Penrose quantity Ψ 4 describes the helicity state s = −2, while its complex conjugate describes s = +2. Weyl scalars will play a role in the quantum version of the Weyl curvature hypothesis below where we will demand that Ψ 4 be small. The problem connected with the WCH is thus to understand initial conditions in cosmology. This was already emphasized by Eddington (1931). He envisaged the possibility that a low-entropy state is generated by an extremely improbable fluctuation, which is an idea dating back to Boltzmann. He called such a process anti-chance, but was unwilling to accept this possibility in reactions between atoms or other physical systems. He saw the only possibility for such a process in the boundary conditions: "Accordingly, we sweep anti-chance out of the laws of physics-out of the differential equations. Naturally, therefore, it reappears in the boundary conditions . . . " Among his arguments to reject the idea of an unlike fluctuation he used a concept that today is known as 'Boltzmann brain': if we just emerged from a gigantic fluctuation, it would be more likely that 'we' would just emerge as brains seeing a disorganized world rather than an ordered world such as ours. This is because observing a disorganized world (and even a partly organized world full of inconsistent documents) is immensely more probable than observing an organized world. Eddington speaks of mathematical physicists instead of Boltzmann brains: ". . . it is practically certain that a universe containing mathematical physicists will at any assigned date be in the state of maximum disorganisation which is not inconsistent with the existence of such creatures." After rejecting this idea, he concluded: "We are thus driven to admit anti-chance; and apparently the best thing we can do with it is to sweep it up into a heap at the beginning of time, as I have already described." But can we understand this occurrence of anti-chance at the beginning of time?
Low entropy for early quantum perturbations
It is not surprising that we know relatively little about the early phases of our Universe. A generic state would look very different from our present approximately homogeneous and isotropic world. But given the fact that symmetries play a fundamental role in physics, one might speculate that the Universe started with a highly symmetric state. Alexei Starobinsky came up with the idea that "the universe was in a maximum symmetrical state before the beginning of the classical Friedmann expansion" (Starobinsky 1979). For this state, he chose de Sitter space, which is (as Minkowski space) a state with maximal symmetry. Classical de Sitter space is homogeneous and isotropic and thus cannot lead to structure formation. The situation is different if quantum fluctuations are taken into account. Starobinsky suggested that the quantum fluctuations for gravitational waves (the gravitons) are initially in their ground state (the adiabatic vacuum). During the expansion, vacuum modes with large enough wavelength become excited and are no longer in their ground state. In the spirit of the inflationary universe, which was developed in the years after Starobinsky's suggestion, one can extend this idea also to quantum scalar modes (scalar components of the metric together with the inflaton).
In cosmic perturbation theory, one can combine the scalar fluctuations of the metric and a scalar field into the gauge-invariant 'Mukhanov-Sasaki variable' v(η, x), where η denotes the conformal time defined by dη/dt = a −1 , and a is the scale factor of a Friedmann-Lemaître (F-L) universe; see, for example, Brizuela et al. (2016) and the references therein. We denote the Fourier transform of v(η, x) by v k ; we also introduce the Fourier-transformed perturbation variable of the gauge-invariant tensor perturbations h ij with polarization λ ∈ {+, ×} by v (λ) We note that an important feature in the definition of these variables is the rescaling with respect to a. This becomes especially relevant in quantum theory. By expression (10) we can relate the variable v (λ) k to the Weyl scalar (9) (similar features hold for the relation between v k and other Weyl scalars). For a perturbed inflationary universe, one obtains an action containing a background part with scale factor a and homogeneous field φ plus a sum over all k with η-dependent oscillators described by v k and v (λ) k ; these oscillators have the 'frequencies' S,T ω 2 k (η) given by for the scalar (metric and scalar-field) perturbations and by for the tensor perturbations, respectively; moreover, we have z := φ ′ /H, where H is the Hubble parameter, and primes denote derivatives with respect to conformal time. (We restrict here to minimally coupled fields.) Since v k and v (λ) k are quantum variables, they obey Schrödinger equations with respect to η (or t). Assuming now the initial condition that these states be in their adiabatic vacuum state, one has for them the wave functions with Ω (0) k = k, and a similar expression for the tensorial modes. With this initial condition, the solution of the Schrödinger equations for the modes are Gaussians of the form (13) with an η-dependent factor Ω the expression for slow-roll inflation is more complicated (Bizuela et al. 2016). The important feature is the occurrence of an imaginary term in this expression. Quantum mechanically, the ensuing states correspond to two-mode squeezed states.
From the solutions of the Schrödinger equations one can derive the power spectra for the density perturbations and for the primordial gravitational waves. The corresponding expressions contain explicitly the Planck length and could thus be interpreted as quantum-gravitational effects (Krauss and Wilczek 2014).
Since the above wave functions are pure states, they have vanishing entropy (no missing information). A positive entropy comes into play when considering interactions of the modes with other degrees of freedom (such as fields from an effective quantum field theory) or with higher-order perturbations. This is connected with the process of decoherence -the emergence of classical behaviour (Joos et al. 2003). These interactions only play a role for the excited states, not the ground states. Detailed calculations of decoherence and entanglement for the primordial fluctuations can be found in Kiefer et al. (2007). There the von Neumann entropy was calculated. Alternatively, one can consider the 'linear entropy' S lin = Tr(ρ − ρ 2 ), which is bounded between zero (pure state) and one (maximally mixed state) and can be used to quantify the degree of purity in a simpler way than (15). The reduced density matrix ρ occurring in (15) is obtained by integrating out irrelevant degrees of freedom from a totally entangled quantum state containing a, φ, the v k (resp. the tensorial modes), and the irrelevant degrees of freedom. To obtain (15), one has to perform the trace over v k .
The resulting entropy increases with increase in the scale factor a. It does not yet lead to the large entropies presented in the first section. For this, further entropy-producing processes play a role (e.g. reheating after inflation). But the important point is that starting with a low-entropy initial state, one has enough entropy-generating capacity to generate an arrow of time. Instead of the Weyl curvature hypothesis presented in the first section, one can thus present here the following quantum version:
HYPOTHESIS (QUANTUM): The quantum states for the Weyl scalars describing scalar and tensor modes assume the form of adiabatic vacuum states in a (quasi-) de Sitter space, as the region of small-enough scale factors is approached from future directions.
We have here replaced 'past singularities' with 'region of small-enough scale factors' because it is generally assumed that singularities are absent in a quantum theory of gravity.
As in the classical case, this is a conjecture only. Can it be justified at a more fundamental level?
3 Justification from quantum gravity?
So far, we have discussed quantum states for primordial fluctuations in the background of a Friedmann-Lemaître universe. Since these include metric perturbations, quantum effects of gravity are already included. We would, however, expect that a truly fundamental explanation for the Quantum Weyl Hypothesis comes from an underlying (not yet known) full quantum theory of gravity, where no background exists. A very conservative approach and one especially suited for discussing conceptual issues is quantum geometrodynamics, with the Wheeler-DeWitt equation as its central equation; see, for example, Kiefer (2012a) for a detailed introduction.
For the case of a quantized F-L universe plus the above discussed primordial fluctuations described by the gauge-invariant variables v k , the Wheeler-DeWitt equation reads see, for example, Brizuela et al. (2016) and the references therein. Here, α := ln(a/a 0 ), where a 0 is a reference scale, and the 'frequencies' S,T ω 2 k (η) are given by (11) and (12). We choose units where = c = 1 and where the Planck mass reads We emphasize that the potential terms in (16) are asymmetric with respect to α → −α. In contrast to almost all the other fundamental equations in physics, the Wheeler-DeWitt equation thereby distinguishes a direction in (intrinsic) time α. Inspecting the frequencies (11) and (12), one recognizes that they do not depend on a and φ for large k, that is, for small-wavelength modes. This is also true in the limit of small a. Since the (a, φ)-part ('minisuperspace part') of (16) then decouples from the perturbation part, one can naturally impose the following initial condition on the total quantum state, with v k (and their tensorial partners) being in the adiabatic vacuum state (13), compare Zeh (2007), black holes
Radius zero
Hawking radiation Hawking radiation maximal extension Figure 2: The quantum situation for a "recollapsing universe": big crunch and big bang correspond to the same region in configuration space. The Weyl tensor is small at both ends.
This is a product state, which means that tracing out some of the degrees of freedom will remain ineffective, that is, it will not lead to a mixed state; thus, the entropy for the (a, φ)-variables remains zero after coarse-graining. While the state in the Wheeler-DeWitt equation (16) is timeless, a semiclassical or 'WKB' time comes into play after a Born-Oppenheimer type of approximation is being employed; see, for example, the detailed discussion in Kiefer (2012a). In this limit, the Schrödinger equations for the modes of the last section arise as approximate equations with respect to the WKB times η or t. For bigger values of a, entanglement will emerge, and the state (18) is replaced by where the conformal time η is to be understood as a function of α and φ. Here, ψ k (v k , η) are the squeezed states of the last section, which are states of Gaussian form with the parameter in the exponent given by (14) or its slow-roll generalization.
There is thus an increase in entanglement entropy from small to large scale factor and thus from small to large semiclassical time η (or t). Within each semiclassical and decohered branch of the full quantum state, one can express entanglement in terms of thermodynamic entropy; see, for example, Peres (1995, Chap. 9). The increase in entanglement entropy could thus be seen as providing the arrow of time in our Universe.
An interesting consequence of this arises for the case of a classically recollapsing universe (Kiefer and Zeh 1995). Instead of the classical picture shown in Fig. 1 one arrives at the quantum picture sketched in Fig. 2. Since the quan-tum theory does not distinguish between the regions with a classical big bang and a classical big crunch (they both correspond to the same region in configuration space with small a), imposing low entropy for the 'big bang' directly leads to low entropy for the 'big crunch'. Imposing the quantum version of the Weyl curvature hypothesis for the region that would classically be a big-bang singularity would then automatically entail the same version for the big-crunch region. Consequently, the arrow of time would formally reverse near the classical turning point. But since semiclassical components of the universal wave function would destructively interfere there, classical systems are not expected to survive it. Every observer in this quantum universe would thus only be able to see an expanding universe (Kiefer and Zeh 1995).
This has also consequences for black holes. A time-reversed black hole is a white hole. Thus, from the point of view of the symmetric picture shown in Fig. 2, a black hole turns into a white hole after the turning point. But for real observers, who are subject to the arrow of time and experience an expanding universe, there are only black holes.
That the arrow of time may point in the direction of an expanding universe, was envisaged long ago. John Wheeler, for example, wrote (Wheeler 1962, p. 72): The universe is not a system with respect to which ordinary statistical considerations apply. There is no better evidence on this point than the correlation between (a) the direction of time in which entropy increases and (b) the direction of time in which the expansion of the universe is proceeding.
These considerations are, of course, speculative. But they are concrete in the sense that they arise naturally from a straightforward combination of general relativity with quantum theory, together with a particular boundary condition. One could investigate similar conceptual issues in other theories of gravity, for example when a term proportional to Weyl-tensor squared is added to the Einstein-Hilbert action; see, for example, Kiefer and Nikolić (2017) and the references therein. We leave this for future work.
Data Availability Statement: Data sharing not applicable-no new data generated.
The author has no conflicts to disclose. | 2021-11-04T01:15:33.252Z | 2021-11-03T00:00:00.000 | {
"year": 2021,
"sha1": "0dfc66fad1134cbb4577e4e4a96f44b288c969cf",
"oa_license": null,
"oa_url": "https://avs.scitation.org/doi/pdf/10.1116/5.0076811",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0dfc66fad1134cbb4577e4e4a96f44b288c969cf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
272883 | pes2o/s2orc | v3-fos-license | Replication strategies and the evolution of cooperation by exploitation
Introducing the concept of replication strategies this paper studies the evolution of cooperation in populations of agents whose offspring follow a social strategy that is determined by a parent’s replication strategy. Importantly, social and replication strategies may differ, thus allowing parents to construct their own social niche, defined by the behaviour of their offspring. We analyse the co-evolution of social and replication strategies in well-mixed and spatial populations. In well-mixed populations, cooperation-supporting equilibria can only exist if the transmission processes of social strategies and replication strategies are completely separate. In space, cooperation can evolve without complete separation of the timescales at which both strategy traits are propagated. Cooperation then evolves through the presence of offspringexploiting defectors whose presence and spatial arrangement can shield clusters of pure cooperators.
Introduction
Actions that are in the interest of the group but not necessarily to the immediate benefit of the individual are widely observed in the social and biological sciences.Understanding the emergence and sustainability of such altruism or cooperation still poses major challenges to evolutionary game theory and the recent decades have seen very active research in this field.For instance, a recent review article classified five different mechanisms that support altruism (Nowak, 2006).Here, we are mainly interested in one of them: network reciprocity, cf.(Szabó and Fath, 2007;Perc and Szolnoki, 2010) for recent reviews.
Network reciprocity summarizes effects that result from constrained interactions in structured populations in which agents interact with fixed and typically rather small sets of permanent neighbours.In this way interactions between parents and offspring are favoured, i.e. cooperation is supported through positive assortment of strategies.The literature about evolutionary games in structured populations goes back to the seminal paper of Nowak (Nowak and M., 1992) in which spatial games were introduced, observing and describing chaotic patterns of strategies in space.The work was extended in several ways to, e.g., include effects of noise (Szabó and Toke, 1998) and asynchronicity (Huberman and Glance, 1993) in strategy propagation.Recent research has mainly focused on the evolution of cooperation in population structures modelled by complex networks, finding, e.g. that heterogeneous networks give a strong boost to cooperation (Santos et al., 2006).The latter findings have been extended to evolutionary models on regular graphs in which there is some heterogeneity in agent's abilities to generate payoff.Examples of studies in this direction are (Szolnoki and Szabó, 2007;Perc and Szolnoki, 2008;Brede, 2011a), but also the recent work on teaching and learning (Szolnoki and Perc, 2008;Tanimoto and Yamauchi, 2012).In the latter line of research agents are classified into two groups: (i) teacher agents with an enhanced ability to pass on strategies and (ii) learner agents with reduced abilities to pass on strategies.The co-evolutionary dynamics of fast and slow strategy spread can then generate phases in which cooperation can survive much beyond parameter regimes in which cooperation is supported by network reciprocity alone (Brede, 2013a).
Common to this large bulk of work on cooperation and network reciprocity is the assumption that offspring (in a biological context) or followers (in a social context) adopt exactly the same strategy as parents (or leaders).In fact, one might surmise that this assumption is crucial to allow for positive assortment which enables support for cooperation through network reciprocity.In this paper we introduce a more general framework that aims to challenge this hypothesis and explore its boundaries.We distinguish two traits that describe agent behaviour.The first is the typical social strategy that describes an agent's behaviour in the social dilemma game under consideration.The second is a replication strategy, i.e. a strategy that an agent will pass on as a social strategy to its offspring.In this way every agent is characterised by a tuple (s, e): a social strategy s and a replication strategy e through which it can determine its offsprings' social behaviour.Notably, the social strategy and the replication strategy of an agent can be different: It might be in the interest of an agent to surround itself with offspring (or followers) that are of a different type than itself.Hence,
ECAL 2013
agents may surround themselves by un-like types, questioning the role of positive assortment by network reciprocity.
One might also interpret our framework as a very simple model of social niche construction (Powers et al., 2011).The term social niche construction was recently introduced to describe a situation in which agents can evolve preferences for the social group they interact with.Using the example of preferences for group size it was then demonstrated that co-evolution of such preferences and social strategies can naturally support cooperation.In our context here, by their replication strategy, agents can influence the environment in which they live and thus improve their chances to generate payoff in competitive games.Using the often-studied framework of the prisoner's dilemma game, we will explore under which circumstances such a simple co-evolutionary model can allow for additional support for cooperative strategies.
Real-world inspiration for the above assumption of differences between social and replication strategies is not hard to come by.For instance, in models of teaching and learning the above framework allows for situations in which teachers can teach strategies different from their own.Arguably, this is a more realistic and general framework than the one considered in previous work.In a biological context one might interpret the model as a simple model of cell differentiation.
The present work thus follows in a line of recent advances in the understanding of the co-evolution of individual-level traits and cooperation (Szolnoki et al., 2009;Powers et al., 2011;Perc and Wang, 2010;Brede, 2011bBrede, , 2013b)).
The organization of the paper will be as follows.We start with a detailed description of the model framework and then describe and explain results in the section thereafter.The paper concludes by a summary and discussion section that puts our main results into context and discusses implications and future work.
Model
In more detail, we consider the following model of an evolutionary one-off prisoner's dilemma in space.A set of N agents are associated with the sites of a graph whose links define interactions and directions of strategy propagation.In case of experiments in well-mixed populations this social networks is a complete graph, otherwise we perform experiments on an L × L square lattice with von Neumann neighbourhoods.Agents are characterized by two strategy traits, a social strategy trait s ∈ {0, 1} and a replication strategy trait e ∈ {0, 1}.We use the convention that state "0" corresponds to a strategy of pure defect and state "1" corresponds to pure cooperate.Agents play a prisoner's dilemma with payoff matrix parametrized in the conventional form such that the parameter r characterizes the toughness of the game setting.In Eq. (1) R stands for the reward for mutual cooperation, S for the "sucker's" payoff, T for the temptation to defect, and P for the punishment for mutual defection.A small r 1 corresponds to very mild dilemma settings, whereas r → 1 characterizes very tough dilemmas.Hence, we distinguish four strategies: (i) cooperators who want their offspring to cooperate (s = 1, e = 1), (ii) cooperators who want their offspring to defect (s = 1, e = 0), (iii) defectors who want their offspring to cooperate (s = 0, e = 1) and (iv) defectors who pass on defection to their neighbours (s = 0, e = 0).This model may easily be extended by including context-dependent inheritance, i.e. the offspring determining trait would then depend on the social strategy currently played, but we reserve a thourough investigation of this case for future work and concentrate on the simplest setup in this paper.
In the following we will also consider the impact of various timescales in the evolution of social and replication strategies.The spread of both strategies might occur on seperate or similar timescales.In the case of joint strategy pass, an agent will adopt the desired social strategy of a parent as well as its replication strategy.In case of disparate pass, an agent might either adopt the parents' desired social strategy or its replication strategy.To distinguish these cases and to investigate the effects of disjoint strategy pass we introduce a probabilistic framework for the spread of strategies: With probability p s only the social strategy is imposed, otherwise, with probabilty p a only the replication strategy is passed on, and in the remaining cases (i.e. with probability p p = 1 − p s − (1 − p s )p a ) the social strategy is imposed and the replication strategy passed on.The timescales of the spread of social and replication strategies are then given by T s = 1/(p s + p p ) and T a = 1/(p a + p p ).
Hence, our evolutionary simulations consist of an asynchronous process iterating the following steps: • Seed all agents with randomly chosen initial social and replication strategies.
• Randomly pick a focus agent, say i, and choose a reference agent j from one of its four von Neumann neighbours at random.
• Evaluate game interactions of the focus agent with its neighbours to determine its accumulated payoff π (game) i and follow the same procedure to calculate the accumulated payoff π (game) j of the reference agent from interactions with its neighbours.
• After evaluating game payoffs, a cost c is deducted from payoffs of agents who attempt to spread a strategy different from their social strategy, i.e.
where δ ij = 1 if i = j and 0 otherwise.A cost c > 0 accounts for the fact that imposing social strategies different
ECAL -General Track
from your own might involve a costly effort to 'convince' the opponent.Unless otherwise stated experiments are carried out with c = 0 and the influence of a non-zero cost is only evaluated at the end of the paper.
• In a next step, a focus i agent will adopt the strategy of the reference agent j with a likelihood that depends on the difference in payoffs, i.e. (3) In the above equation the parameter κ introduces noise in the replication process, the larger κ, the larger the chance for inferior strategies to spread.In all following simulations we set the noise level to a relatively large value of κ = 1.This choice is motivated by reasons of computational feasibility, because the evolutionary dynamics becomes very slow for low levels of noise when the timescales of cluster expansion are dominated by the timescales of change of local configurations of s = 0, e = 1 defectors surrounded by cooperators at the boundaries of clusters of pure cooperators/defectors which can become entrenched for a very long time (see also the results section).
• Strategy spread (with the probability P (j → i) defined above) occurs in the following way.With probability p p the reference agent will impose his desired social strategy and will also transfer his replication strategy (s i = e j and e i = e j ).Otherwise, with probability p s only the social strategy is imposed (s i = e j ) and in the remainder of cases, i.e. with probability p a = 1−pp−ps 1−ps only the replication strategy is passed on from j to i (e i = e j ).The timescales for joint or disjoint spread of the traits (parametrized via p p ) and distinction of timescales for the spread of social and replication strategies (parametrized via p s ) prove crucial parameters to understand the dynamics of social evolution in this context.
• The payoff evaluation and strategy updating steps are then repeated for a sufficiently large number of steps until a quasistationary state has been reached.Then, average concentrations of all strategy concentrations are sampled from another T N iterations (note that in this paper time is always measured in units of full lattice sweeps).
In the following we employ computer simulations of systems composed of in between 10 4 and 1.6 × 10 5 agents to construct phase diagrams of parameter regions in which the evolutionary dynamics can allow cooperation to survive.
Well mixed populations
Before discussing spatial simulations it is worthwhile analysing the case without network reciprocity, i.e. a wellmixed population in which individuals meet at random and For p p = 1 cooperation can always survive, but for p p < 1 defection wins out for r > 0 (and since they all overlap circles at n = 0 are omitted for r > 0.01).
strategies spread according to Eq. ( 3) on the basis of payoff gathered from interactions with the entire population.For simplicity, we will not distinguish timescales given by p s and p a and assume p s /p a = 1 in the following.It is then straightforward to describe the evolutionary dynamics of strategy concentrations n i by a set of rate equations in the form: where the indices label the four possible strategies 00, 10, 01, and 11 and the matrices a (i) contain information about conversions between strategies according to the rules set out in the previous section.It is worth noting that n 00 + n 10 + n 01 + n 11 = 1, i.e. there are only three relevant degrees of freedom.
ECAL 2013
For instance, if an agent with strategy 00 meets an agent with strategy 01, the agent following 00 will adapt its strategy with likelihood 1/2 (since both strategies achieved the same payoff).If strategy 00 learns from 01, there are three cases that need to be distinguished.(i) with probability 1 − p p 00 learns the social strategy that 01 wishes to impose (i.e. 1) and 01's replication strategy (i.e. 1) and hence converts to strategy 11. (ii) with probability p p /2 00 only learns the social strategy 01 wishes to impose, i.e. 00 converts to 10 and (iii) 00 may only learn 01's replication strategy, i.e. 00 converts to 01.In all three cases 00 converts to a strategy different from 00, hence the entry a (00) 13 = −1/2.Similarly, if 10 encounters 00 the chance that 10 will learn from 00 is given by P .Either learning only the social strategy 00 wishes to impose (probability p p /2) or learning both the social strategy 00 wishes to impose and 00's replication strategy (probability 1 − p p ) will convert 10 to 00.Hence the entry a (00) 21 = β/2.Analogous equations for the remaining three matrices a (10) , a (01) , and a (11) can be derived, i.e.
The systems of equations (4) is a system of three nonlinear equations.Even though an analytical analysis of stationary states might be possible, numerical integration of (4) provides enough insight for the present purposes.Figure 1 gives the dependence of stationary strategy concentrations obtained by numerical integration of (4) on the dilemma strength for two scenarios of strategy pass for κ = 0.01 (note that noise levels should be measured per interaction, i.e. a very small value in the well-mixed case with all-toall interactions corresponds to larger noise values on sparse grids).
The first, illustrated by square symbols in Fig. 1 corresponds to completely asynchronous strategy pass, i.e. p p = 1.In this case for all r > 0 the population is split into roughly two thirds defectors (equal halves of which carry both replication strategies) and one third cooperators (with again equal halfs carrying both replication strategies).In contrast, for any p p = 1 (round symbols) the social strategy cooperate is found to die out, i.e. n 10 + n 11 = 0, and the two social defect strategies share the population in equal proportions.
It is easy to understand why this is the case.Strategy s = 0 and e = 1 can earn the same payoff as pure defectors with s = 0 and e = 0.However, in a well-mixed population it cannot profit from generating offspring who cooperate, because cooperation can be exploited by the entire population of defectors.Hence, agents with s = 0, e = 1 can generate the same number of 'offspring' as s = 0, e = 0; but their descendants die out without conferring an advantage on the parents.The situation is different if the spread of social strategy and replication strategy are completely separated: In this case the population of social cooperators is always reinforced by an inflow from the pool of social defectors with replication strategy defect (who earn equal payoff as pure defectors) and can also not be suppressed through interactions with pure defectors, because there is always a onehalf chance that the cooperative trait survives due to separate strategy pass.
Spatially distributed populations
As we have seen in the previous section on well-mixed populations, replication strategies that differ from social strategies can only support cooperation if the spread of social and replication strategies is completely separated.The reason for this is that social niche construction cannot operate in well-mixed populations: offspring that plays the social strategy cooperate can be exploited by the entire population and does not bestow any specific benefit to the parent who gave birth to it.Rather, the effect for strategies with replication strategy e = 1 is negative: Their offspring will replicate less well than the parent because it can be exploited by the entire population of defectors.One would anticipate that this situation can be different in viscous populations.In the latter case, parents can accrue specific individual fitness benefits by surrounding themselves by cooperators.It appears reasonable to surmise that the consequential increase in reproductive fitness for parents might compensate for the loss in fitness of offspring, thus enabling cooperative strategy traits to survive.We will explore this scenario for spatial games in some detail below.
Figure 2 illustrates simulation results for the evolution of the four strategies in two typical settings in which replication of the two components of a strategy, social strategy s and replication strategy e, are to some extent disjoint (p p = 0.6).The figures also give the frequency f c of mutually cooperative interactions.In the first setting with lower dilemma toughness (top panel), cooperation can grow to dominance.In the second with somewhat larger dilemma toughness (bottom panel), an equilibrium state in which all four strategies coexist is reached.Spatial arrangements of the strategies that correspond to such a mixed state are illustrated in Fig.
3.
These first experiments which we show in Fig. 2 illustrate two important points: (i) As hypothesised above, when including opportunities for social niche construction via replication strategies, cooperation can survive in spatial arrangements, even if strategy pass is not completely disjoint.(ii) Disjoint transmission of social and replication strategies can allow for the dominance of cooperation in regimes of dilemma games far beyond regimes normally supported by network reciprocity (i.e. for a typical spatial game with von Neumann neighbourhoods with κ = 0.1 the extinction threshold for cooperation is around r c = 0.021 (Hauert and Szabo, 2004) and even somewhat smaller with r c ≈ 0.017 for κ = 1).
The typical spatial arrangements in Fig. 3 presence of large homogeneous clusters of pure defectors (s = 0, e = 0, dark red) and pure cooperators (s = 1, e = 1, green).Strategies with s = e only occur at the boundaries of these clusters.A cursory glance at Fig. 3 which is confirmed by the results shown in the bottom panel of Fig. 2 also suggests that the strategy s = 0, e = 1 (blue) is far more prominent than strategy s = 1, e = 0.The reason is that a social defect strategy can earn larger payoffs than a social cooperate strategy.
Let us now consider the effect of s = 1, e = 0 and s = 0, e = 1 on the clusters of pure cooperators and defectors.When replicating in the direction of pure cooperators s = 1, e = 0 either reproduces itself or (assuming disjoint strategy pass) it produces a defector with s = 0, e = 1.However, since s = 1, e = 0 earns the same payoff as pure cooperation s = 1, e = 1 can only invade clusters of cooperators via neutral drift.By the same token it only rarely gets a chance to replicate when competing against defection, and if so, it cannot reproduce itself (since any pure defector n would either only be influenced in its social strategy via the replication strategy, i.e. s n = e = 0 or would additionally imitate the replication strategy e n = e = 0 which correspond to its own strategy anyway).Hence, s = 1, e = 0 impedes the spread of pure cooperation into clusters of pure defectors, but also, by transitioning into s = 0, e = 1, delays invasions of defection into clusters of pure cooperators.
What about the spread of the strategy s = 0, e = 1?The propagation of s = 0, e = 1 is more relevant at the boundary of clusters, since, following social defect, it will typi- cally harvest a larger payoff than s = 1, e = 0. On the one hand, when interacting with pure cooperation, it will always surround itself by pure cooperation.On the other hand, when interacting with a pure defector, it will either generate a s = 1, e = 0 defector or cause a transition of the neighbour to pure cooperation.Hence, even though s = 0, e = 1 exploits cooperators, it also shields clusters of cooperators from the invasion of defection and promotes the spread of the pure cooperate strategy.
ECAL -General
When considering the role of all four strategies at the boundaries of compact clusters of pure cooperators and pure defectors it is also important to recognize that the strategy s = 0, e = 1 will typically generate the largest payoff (because being on average surrounded by more cooperators than pure defect) and thus replicate most often.Even though being thus most successful in terms of replication, it can only recreate itself indirectly -its offspring will never follow the same strategy.
The mechanism which supports cooperation in the simulations shown above principally works as follows.
Offspring-exploiting defectors s = 0, e = 1 are the most successful strategy, but cannot recreate directly, and, as a result, serve as support for cooperation.It is evident that in case of joint strategy propagation a 'checkerboard pattern' of s = 0, e = 1 interspersed with s = 1, e = 1 would be evolutionarily stable.However, since s = 0, e = 1 cannot recreate itself and is only generated at boundaries of defectors and cooperators when disjoint strategy pass is allowed, without the presence of random strategy invasions or mutations such a pattern cannot evolve from random initial conditions (cf.Fig. 4 right panel).Moreover, this checkerboard pattern is not stable in the face of even small degrees of disjoint strategy spread (measured by p p ).If p p is sufficiently large, offspring-exploiting defectors support pure cooperation in two ways: (i) by shielding clusters of pure cooperators from the invasion of defection, and by (ii) serving as a source of pure cooperators, as a consequence of their own success in replication.
Figure 5 extends our earlier simulation experiments by giving the dependence of the frequency of cooperative interactions and strategy concentrations on the dilemma toughness.A clear dependence of the support for cooperation on the frequency of joint strategy pass p p is evident and is further supported by the full phase diagram that illustrates the dependence of coexistence regimes and regimes in which single strategies dominate on p p .As already indicated by the dependencies in Fig. 4 the coexistence regimes are typically rather small and regimes in which either pure cooperation or pure defection take over the entire population dominate the diagram of borderline region between the regimes of pure strategy dominance.
It is also of interest to investigate the dependence of the support for cooperation on the relative timescales for strategy propagation.To explore this question, we set up experiments with a fixed frequency of disjoint strategy pass and vary the relative frequencies with which reference agents only impose their replication strategy as the desired social strategy of neighbours (i.e. with probability p s ) and the frequency with which neighbours only learn the replication strategy of a reference (i.e. with probability p a ).A typical phase diagram that summarizes our simulation experiments is given in Fig. 6.
Two observations stand out.First, the support for cooperation grows the more prominent the imposition of social strategies relative to the learning of replication strategies.This observation is consistent with our argument about the role of offspring-exploiting defectors.The more frequent the spread of social strategies, the more often they will generate pure cooperators.Second, the regime in which all four strategies can coexist becomes the larger the more frequently agents pick up the replication strategies of their references.This second finding is also intuitively clear from the same argument.The more often replication strategies are learnt, the more often s = 0, e = 1 transitions into s = 1, e = 0, thus boosting the concentrations of other strategies.
A last point worth investigating is the role of a cost for strategy imposition.To investigate this point we presume that the behaviour in the standard game, i.e. imposing agents' own social strategy on neighbours, is free.In contrast, imposing a strategy different from an agent's social strategy needs "convincing", i.e. it comes at some cost c, cf.Eq. ( 2).The experiments carried out in this way allow us to test the stability of the standard framework and answer the question "Would differences between social and replication strategies evolve if teaching is costy?".To explore this question we fix the frequency p p of disjoint trait transmission and assume that both traits spread at equal rates (i.e.p s = p a ). Figure 7 summarizes these experiments by presenting a phase diagram for the dependence of extinction thresholds on cost assumptions.Clearly, imposing a cost for producing offspring with social strategies different from an agent's social strategy reduces support for cooperation.Such behaviour would naturally be expected, since imposing costs penalizes the "mixed" strategies s = 0, e = 1 and s = 1, e = 0 and our previous argument relied on the presence of the first of those to support cooperation via the exploitation of offspring.Nevertheless, even imposing costs that are very substantial compared to game payoffs, cooperation can exist in regimes far beyond the support it would find from the network reciprocity of the spatial grid (with an extinction threshold of r c ≈ 0.017, see Fig. 4).
From the data presented in Fig. 7 it is also noteworthy that costs reduce support for the coexistence regime.Costs penalize strategies with s = e and hence mixed phases in which all four strategies can co-exist are increasingly suppressed the larger the costs imposed.
Conclusions
In this paper we have introduced a simple way to explore social niche construction (Powers et al., 2011) on spatial networks.In our framework, on top of a social strategy every ECAL -General Track ECAL 2013 agent is endowed with a second trait, a replication strategy, which allows the agent to determine the social strategy of its offspring.We then explored the co-evolution of social and replication strategies, subject to various assumptions about the timescales of spread of both strategy components.
Analyzing the dynamics of the co-evolution in the prisoner's dilemma, we have established that cooperation can only be supported in well-mixed populations if social and replication strategies are never both passed on from parent to offspring.In a social context this corresponds to the rather unrealistic assumption that the timescales of learning the respective traits are completely separated.In a biological context, this assumption translates into assumptions about the traits being located on uncoupled separate genes.As demonstrated by our exploration of the spatial prisoner's dilemma, the presence of a structured population can mitigate this strict condition.We have shown that in spatial settings cooperation can find very strong support, even if the simultaneous passing on of social and replication strategies is rather likely.The main driver of the support for cooperation is the prevalence of offspring-exploiting defectors which can generate the largest payoffs in the game.Offspring-exploiting defectors are found to be in a similar role as payoff-distinguished agents in (Perc and Szolnoki, 2008;Brede, 2011a): by virtue of their enhanced ability to pass on strategies they assume a "leadership" role (Zimmermann and Eguíluz, 2005).Different from previous models like (Zimmermann and Eguíluz, 2005;Perc and Szolnoki, 2008;Brede, 2011a), however, such agents with s = e never replicate identically and thus offspring-exploiting cooperators facilitate the spread of cooperation by surrounding themselves with cooperators.
We have also presented a number of further experiments that corroborate the robustness of the above finding.Support for cooperation is robust to changes of the timescales of strategy spread over several orders of magnitude and also the inclusion of substantial costs for imposing social strategies different from an agent's own social strategy do not alter outcomes in a qualitative way.
One may wonder if the framework in which we introduced replication strategies in this paper is too restrictive.In other words: Would our main findings be robust if replication strategies were context dependent, i.e. influenced by the social strategy of agents which replicate such that an agent in the role of a social cooperator may wish to impose a different strategy on its neighbours than when being in the role of a social defector?We reserve a more comprehensive analysis of this more general setting for future work.
Figure 1 :
Figure 1: Dependence of the concentrations of defect and cooperate strategies on dilemma toughness for a well-mixed population of size N = 40000 and noise level κ = 0.01.For p p = 1 cooperation can always survive, but for p p < 1 defection wins out for r > 0 (and since they all overlap circles at n = 0 are omitted for r > 0.01).
Figure 2 :
Figure 2: (Average) evolution of social and reproduction strategies for a prisoner's dilemma and average fraction of mutually cooperative interactions f c .(a) With r = 0.18 and p p = 0.6 when cooperation grows to dominance and (b) with r = 0.22 when an equilibrium between the strategies is reached (on a 200 × 200 torus with κ = 1).
Figure 3: Example configuration of an equilibrium arrangement of the four strategies (for r = 0.22, p p = 0.6, κ = 1).Colors are (s, e): red (D,D), light red (D,C), green (C,C) and blue (C,D).(70*70) C,D and D,C only occur at the boundaries of larger C,C and D,D clusters.
Figure 4 :Figure 5 :
Figure 4: (a) Dependence of the frequency of mutually cooperative interactions f c on the dilemma strength for various values of p p on a 200 × 200 torus.It becomes apparent that cooperation finds more and more support, the more frequent uncorrelated replication events become.(b)Dependence of the concentrations of the various strategies on the dilemma strength for p p = 0.4.There are two phases dominated by pure cooperators or pure defectors (small and large r and an in-between phase in which all strategies can coexist.(c) In contrast, for p p = 0 only pure cooperators and pure defectors survive and the phase diagram from the standard PD without replication strategies is reproduced.
FigFigure 6 :
Figure6: Dependence of critical thresholds for the extinction of strategies on timescales for the spread of the social strategy (p s ) and for the replication strategy (p a ) for fixed p p = 0.6.For r > r 0 only pure defection survives, for r < r 1 only pure cooperation survives and for r 0 > r > r 1 all four strategies can coexist.The faster social strategies spread relative to replication strategies, the more support for cooperation.Also the coexistence region becomes larger the slower the spread of the social strategy.
Figure 7 :
Figure 7: Dependence of critical thresholds for the extinction of strategies on costs for the propagation of unequal strategy traits s = e for p p = 0.5. | 2015-07-06T21:03:06.000Z | 2013-09-02T00:00:00.000 | {
"year": 2013,
"sha1": "a405ecb29c98fbbcbc863531301a692a4b04f0f2",
"oa_license": "CCBY",
"oa_url": "http://www.mitpressjournals.org/doi/pdf/10.1162/978-0-262-31709-2-ch045",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "924af5b16ca588ace7b1a0fe34ebeb088d0790a1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Business"
]
} |
227243856 | pes2o/s2orc | v3-fos-license | Co-Circulation of Two Independent Clades and Persistence of CHIKV-ECSA Genotype during Epidemic Waves in Rio de Janeiro, Southeast Brazil
The Chikungunya virus infection in Brazil has raised several concerns due to the rapid dissemination of the virus and its association with several clinical complications. Nevertheless, there is limited information about the genomic epidemiology of CHIKV circulating in Brazil from surveillance studies. Thus, to better understand its dispersion dynamics in Rio de Janeiro (RJ), one of the most affected states during the 2016–2019 epidemic waves, we generated 23 near-complete genomes of CHIKV isolates from two main cities located in the metropolitan mesoregion, obtained directly from clinical samples. Our phylogenetic reconstructions suggest the 2019-CHIKV-ECSA epidemic in RJ state was characterized by the co-circulation of multiple clade (clade A and B), highlighting that two independent introduction events of CHIKV-ECSA into RJ state have occurred between 2016–2019, both mediated from the northeastern region. Interestingly, we identified that the two-clade displaying eighteen characteristic amino acids changes among structural and non-structural proteins. Our findings reinforce that genomic data can provide information about virus genetic diversity and transmission dynamics, which might assist in the arbovirus epidemics establishing of an effective surveillance framework.
Introduction
Chikungunya virus (CHIKV) is an RNA alphavirus belonging to the Togaviridae family, transmitted by Aedes aegypti and by A. albopictus mosquitoes [1,2]. Infection with CHIKV typically causes a self-limiting febrile illness, the chikungunya fever, and common clinical manifestations of the disease include fever, muscle pain, rash, and severe joint pain, which may last for months to years [1]. Chikungunya fever appears also to be linked with long-lasting rheumatic disorders presenting as acute and chronic polyarthralgia/polyarthritis, which lead to functional impairment affecting daily living activities up to several years after infection [1,2].
The first identified outbreak of chikungunya was reported in 1952 in Tanzania, East Africa, and since then it has been responsible for important emerging and re-emerging epidemics in several tropical and temperate regions [3,4].
Four distinct CHIKV genotypes (or lineages), have been already identified and named based on their geographical distribution: (i) the West African; (ii) the East/Central/South African (ECSA); (iii) the Asian; (iv) the Indian Ocean Lineage (IOL), which emerged from the ECSA lineage between 2005 and 2006 [5,6]. In the Americas, the first autochthonous CHIKV transmission was reported in 2013, being characterized by the circulation of the Asian genotype [7]. A year later, in 2014, in Brazil, the co-circulation of the Asian and ECSA genotypes were reported in Northern and Northeastern, respectively [8,9]. Since then, the ECSA genotype has been detected in several other Brazilian states in the northeastern, southeastern, and northern regions, exerting a serious threat to public health [10][11][12][13][14][15][16][17].
CHIKV infections in Brazil accounted for 132,205 probable cases in 2019, with the southeastern region responsible for approximately more than 92,000 cases [18]. In 2019, Rio de Janeiro (RJ) state reported 86,264 CHIKV suspected cases until epidemiological week 52, which is approximately 65% of all the probable cases notified in the country [18].
The RJ, located in southeast Brazil, is the second-most populous state in the country and has a territorial area of 43,750,427 km 2 , with an estimated population of 17,264,943 (demographic density: 365.23 hab/km 2 ) [19]. It is considered an important economic and an important tourist destination, geographically is divided into six mesoregions: Northwest Fluminense, North Fluminense, Center Fluminense, Baixadas, South Fluminense, and the metropolitan region [20]. Historically, it has also been described as an important gateway of several mosquito-borne viruses, including dengue, yellow fever, and zika viruses.
The first cases of autochthonous transmissions of CHIKV in Rio de Janeiro were reported in 2015, indicating the establishment of the circulation of the ECSA [10,13]. Nevertheless, the shortage of complete genome sequences available impairs our understanding regarding its dynamic dispersion into the state. Thus, in this study, we generate 23 new CHIKV near-complete genome sequences from the 2019 epidemic sampled in two different cities located in the Metropolitan mesoregion of the Rio de Janeiro state (Rio de Janeiro and Duca de Caixia cities), with the aim of providing an overview of the circulation and dispersion events of the virus in that state.
Results
To better understand the 2019 CHIKV epidemic in some of the most affected municipalities in Rio de Janeiro, we generated 23 CHIKV near-complete genomes (coverage range 77.3-93.8%, mean = 91.2%) from serum samples using a nanopore sequencing approach. PCR cycle threshold (Ct) values were on average 1307 (range: 5.75 to 20.89) ( Table 1). Most of the isolates (n = 17) belonged to patients that reside in the municipality of Rio de Janeiro, the capital of the RJ state, located in the metropolitan region. The remaining six samples were from a different neighboring municipality, the Duque de Caxias municipality, also belonging to the same metropolitan region of the state. Of the 23 samples, 19 were from adult patients (>18 years), 1 from an infant (1 year), and 3 from newborns (1 and 7 days; 1 month). These samples were from 11 female (9 adults and 2 newborns) and 12 male (10 adults, 1 infant and 1 newborn). None of the patients have reported travels to other previous epidemic areas, as indicated by epidemiological data obtained from the local surveillance service. Sequencing statistics and epidemiological details of the sequences generated here are available in Table 1.
Phylogenetic Analyses
To investigate the phylogenetic relationships of the 2019-CHIKV strains circulating in Rio de Janeiro state, we estimated a preliminary Maximum Likelihood (ML) phylogeny from a dataset containing 767 reference sequences from the four genotypes plus the newly 23 sequences generated in this study (n = 790 sequences) ( Figure S1). Our ML phylogeny revealed that the newly generated CHIKV sequences belong to the ECSA genotype and cluster together with other Brazilian strains belonging to the same genotype ( Figure S1). These results were also confirmed by using the phylogenetic arbovirus genotyping tool (https://www.genomedetective.com/).
Further, in order to investigate the Brazilian ECSA clade in more detail, we built a second dataset including all ECSA taxa from Brazil (ECSA-BR dataset, n = 96), and we performed a Bayesian molecular clock reconstruction. A regression of genetic divergence from root to tip against sampling dates confirmed sufficient temporal signal (R 2 = 0.75). Time-measured phylogenetic analysis reveals that the novel isolates were organized into two distinct clades, named hereafter as clades A and B.
Of the 23 samples, 21 were grouped in a well-supported monophyletic clade (posterior probability = 1) containing other sequences from the RJ state collected between 2016 and 2017 (clade A), suggesting that since its first introduction in the state, the virus persisted during at least during a 3-year period. The time to the most recent common ancestor (tMRCA) of this clade was estimated to be April 2017 (95% HPD: January to October 2017).
Interestingly, the other two samples grouped within a distinct clade (clade B), which also includes another sample from the RJ state isolated in 2018 (posterior probability = 1) (Figure 1) which indicate a more recent re-introduction of the CHIKV-ECSA genotype into RJ state dated back to July 2018 (95% HPD: January to October 2018).
Together, our results suggest that since 2015, Rio de Janeiro has experienced two independent events of CHIKV-ECSA introduction, both mediated by northeastern region where this lineage was first detected in late 2014.
Molecular Characterization of Newly CHIKV Sequences from Rio de Janeiro State
The presence of amino acid substitutions was investigated in the 23 newly near-complete genomes obtained in this study in comparison with the reference genome strain isolated in Feira de Santana, Bahia state in Mid-2014 (Accession Number: KP164568). We found evidence of synonymous and non-synonymous amino acid (aa) changes among the two clades (clade A and B), in the non-structural and structural proteins.
In more depth, we identified a total of 8 amino acid substitutions among the NSP2, NSP3, and NSP4 genes, and 10 among the E2, 6k, and E1 structural genes ( Table 2).
Among clade A we identified three conserved aa positions in the NSP2 (P352A; A545S) and E1 (K211T) proteins that have also been identified in all the other strains that are clustering together in the same clade, sampled in Rio de Janeiro between 2016 and 2017.
Moreover, we also found that our newly isolates presented aa changes in the E1 (V269M; A305T) protein that appears to be characteristic of the 2019-CHIKV-ECSA epidemic ( Table 2).
For the Clade B, we identified aa change signature in our 2019 strains located in the NSP2 (A57V) and E2 (R178H) proteins, respectively (Table 2).
Beside those aa changes among the two clades we also identified a total of six non-synonymous aa changes: one in NSP2 (A545S, non-polar to neutral polar), one in E2 (T74M, polar to non-polar), and four in E1 (A98T, non-polar to neutral polar neutral; D151V, acid polar to non-polar; K211T, basic polar to neutral polar; A305T, non-polar to neutral polar) proteins.
Molecular Characterization of Newly CHIKV Sequences from Rio de Janeiro State
The presence of amino acid substitutions was investigated in the 23 newly near-complete genomes obtained in this study in comparison with the reference genome strain isolated in Feira de Santana, Bahia state in Mid-2014 (Accession Number: KP164568). We found evidence of synonymous and non-synonymous amino acid (aa) changes among the two clades (clade A and B), in the nonstructural and structural proteins.
In more depth, we identified a total of 8 amino acid substitutions among the NSP2, NSP3, and NSP4 genes, and 10 among the E2, 6k, and E1 structural genes ( Table 2).
Discussion
Since CHIKV was first detected in Brazil in 2014 [8,9] more than 780,000 cases have been notified [18,[21][22][23][24]. Despite its synchronicity, there is still limited information about the genomic epidemiology of CHIKV during the Brazilian epidemics. To provide more information about the dynamics of CHIKV epidemics in Brazil, we generated 23 new near-complete genome sequences from Rio de Janeiro that had the first detection of CHIKV in 2016 and experienced an explosive outbreak in 2019 [10,[12][13][14].
Our results revealed that the 2019-CHIKV-ECSA epidemic in RJ state was characterized by the co-circulation of two clades (clade A and B), in line with previous findings [13] highlighting that these two different clades are persistent and responsible for the latest epidemics registered in the state. Time-scale phylogenetic analysis estimated the tMRCA of the Clade A and B to be April 2017 (95% HPD: January to October 2017), and July 2018 (95% HPD: January to October 2018), respectively.
Together, our results reinforce that two independent introduction events of CHIKV-ECSA have occurred between 2016-2019 into Rio de Janeiro, both mediated from the Northeastern region, which has played an important role in the introduction and establishment of the ECSA genotype into the country. Both strains' introduction and their persistence along time depict a complex dynamic transmission between the epidemic seasons and sampled locations. Several factors, including the high population density, low-income level, and precarious sanitary conditions, high air connectivity, and land transport between different regions of the country, might be contributing to the modeling of this complex viral spread scenario between the different Brazilian regions.
Analysis of the newly 23 isolates allowed us to display characteristic nucleotide (nn) signatures responsible of conservative and semi-conservative amino acid (aa) changes among the structural and non-structural proteins between the two clades, some of which appear to be characteristic of the 2019-CHIKV-ECSA epidemic.
Although most of the substitutions identified appear to be synonymous, it is important to note that among the non-synonymous ones, five have been identified in the Envelope protein. This protein presents the high antigenic variability and it is considered an important viral structure for host-cells infection, since, through it, the virus attaches to the cells. Additionally, it has also been suggested that this protein plays an important role during the viral replication [25,26]. These changes in the amino acid composition in of this region may be relevant and require further investigation and potential surveillance during epidemics seasons.
Furthermore, we did not detect the A226V nor K211E aa substitutions (residues located in the E1 protein) among the samples under study. These mutations were previously described as responsible to increase the CHIKV transmission in Ae. albopictus and Ae. aegypti mosquitoes, respectively [27,28].
Our study shows that genomic surveillance strategies, as previously suggested [29][30][31] might play an essential role in monitoring the spread as well as the diversity of emerging and re-emerging mosquito-borne viruses, which is fundamental to assist public health policies.
Ethical Statement
The strains analyzed in this study belong to a previously gathered collection from the Flavivirus Laboratory, IOC/FIOCRUZ, Rio de Janeiro, Brazil, obtained from human serum from an ongoing Project reviewed and approved by local Ethics Committee CAAE: 90249218.6.1001.5248 from the Oswaldo Cruz Foundation. The informed consent was obtained from all subjects. Samples were chosen anonymously, based on the laboratorial results and clinical manifestations available on the Laboratory database. All methods were performed in accordance with relevant guidelines and regulations.
Sample Collection and RT-qPCR Diagnosis
Serum samples (n = 23) from Chikunungya suspected patients were screened for CHIKV RNA detection in the Regional Reference Laboratory of Flavivirus (LABFLA) at the Oswaldo Cruz Foundation. Samples were obtained from 0 to 11 days after the onset of symptoms. Viral nucleic acid extraction was performed using the Magmax Pathogen RNA/DNA kit (Thermo Fisher Scientific, Waltham, MA, USA) and the KingFisher Flex Purification System (Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. The Molecular diagnostic assay was performed by Real-Time RT-PCR using a molecular ZDC kit (Bio-Manguinhos, Rio de Janeiro, Brazil) produced by Bio-Manguinhos, FIOCRUZ, on an Applied Biosystems 7500 Real-Time PCR System machine (Thermo Fisher Scientific, Waltham, MA, USA). All procedures were conducted in biological safety cabinets located in physically separated areas. Negative controls were used in all reactions.
Synthesis of cDNA and Multiplex Tiling PCR
DNA amplification and sequencing were attempted on the 23 selected RT-PCR positive samples that exhibited Ct-values <38, in order to increase the viral genome coverage by nanopore sequencing (selection based on DNA concentration after clean-up being >4 ng/µL) [32].
Extracted RNA was converted to cDNA using the Protoscript II First Strand cDNA synthesis Kit (New England Biolabs, Hitchin, UK) and random hexamer priming [32]. Then, a multiplex tiling PCR was conducted using Q5 High Fidelity Hot-Start DNA Polymerase (New England Biolabs, Hitchin, UK) and CHIKV sequencing primers scheme (primers are divided into two separated pools, A and B) designed by Quick and collaborators using Primal Scheme (http://primal.zibraproject.org) [32]. The thermocycling and reaction conditions were previously reported in [32].
Library Preparation and Nanopore Sequencing
Amplicons were purified using 1× AMPure XP Beads (Beckman Coulter, Brea, CA, USA) and cleaned-up PCR products concentrations were measured using Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA). on a Qubit 3.0 fluorimeter (Thermo Fisher Scientific, Waltham, MA, USA). DNA library preparation was performed using the Ligation Sequencing Kit (Oxford Nanopore Technologies, Oxford, UK) and Native Barcoding Expansion 1-24 kit (Oxford Nanopore Technologies, Oxford, UK), whose reactions conditions have already been described [32], with the following modifications: The same sample was added to both sequencing primers pools (A and B, separated tubes) during multiplex tiling PCR. After PCR, each pool was purified, and its DNA concentration was quantified using Qubit. Then, both pools (A and B) were mixed in a single tube (taking in consideration the DNA concentrations of each pool), and one barcode was used per sample in order to maximize the number of samples per flow cell. Sequencing library was generated from the barcoded products using the Genomic DNA Sequencing Kit SQK-MAP007/SQK-LSK208 (Oxford Nanopore Technologies, Oxford, UK). Sequencing library was loaded onto a R9.4 flow cell (Oxford Nanopore Technologies, Oxford, UK). Sequencing was performed for 3 h on MinION device (Oxford Nanopore Technologies, Oxford, UK). Reads were basecalled using Guppy and barcode demultiplexing was performed using qcat. Consensus sequences were generated by de novo assembling using Genome Detective (Available at: https://www.genomedetective.com) [33].
New genome sequences obtained in this study have been deposited in GenBank under accession numbers MT933029 to MT933051.
Phylogenetic and Bayesian Analysis
The 23 new genomic sequences reported in this study were initially submitted to a genotyping analysis performed by Genome Detective virus tool online (https://www.genomedetective.com/). New sequences were aligned to 767 complete or almost complete CHIKV genomic sequences (>10,000 bp), retrieved from VIPR Virus Pathogen Resource (https://www.viprbrc.org) in February 2020 and covering all four existing genotypes. Alignment was performed using MAFFT online program [34]. The sequences' analysis, edition, and molecular characterization were performed using the Bioedit (http://www.mbio.ncsu.edu/bioedit/bioedit.html). The complete dataset was assessed for presence of phylogenetic signal by applying the likelihood mapping analysis implemented in the IQ-TREE 1.6.8 software (http://www.iqtree.org) [35]. A maximum likelihood (ML) phylogeny was reconstructed from the dataset (n = 790) using IQ-TREE 1.6.8 software under the GTR+G+I nucleotide substitution model with four gamma categories, which was inferred in jModelTest (https://github.com/ddarriba/jmodeltest2) as the best fitting model [36]. GenBank accession numbers, countries of origin and year of isolation of all included sequences are shown in Table S1.
From the generated maximum likelihood (ML) phylogeny using the concatenated dataset we selected all ECSA taxa from Brazil (n = 96). In order to investigate the temporal signal in our CHIKV-ECSA dataset, we regressed root-to-tip genetic distances from this ML tree against sample collection dates using TempEst v 1.5.1 (http://tree.bio.ed.ac.uk) [37].
The ML phylogeny was used as a starting tree for Bayesian time-scaled phylogenetic analysis using BEAST 1.10.4 (http://beast.community/index.html) [38]. We employed a stringent model selection analysis using both path-sampling (PS) and stepping stone (SS) procedures to estimate the most appropriate molecular clock model for the Bayesian phylogenetic analysis [39]. Were tested: (a) the strict molecular clock model, which assumes a single rate across all phylogeny branches, and (b) the more flexible uncorrelated relaxed molecular clock model with a lognormal rate distribution (UCLN) [40]. Both SS and PS estimators indicated the uncorrelated relaxed molecular clock as the best fitted model to the dataset under analysis. Besides, we have used the HKY+G4 codon partitioned (CP)1+2,3 substitution model and the Bayesian SkyGrid coalescent model of population size and growth [40,41]. We computed MCMC (Markov chain Monte Carlo) duplicate runs of 100 million states each, sampling every 10,000 steps for the ECSA-BR dataset. Convergence of MCMC chains was checked using Tracer v.1.7.1 [42]. Maximum clade trees were summarized from the MCMC samples using TreeAnnotator (http://beast.community/index.html) after discarding 10% as burn-in.
Epidemiological Data Assembly
Data of monthly notified CHIKV cases were supplied by the Health Surveillance System of the Rio de Janeiro state and were plotted using the R software version 3.5.1.
Acknowledgments:
The authors thank all personnel from Health Surveillance Systems of Rio de Janeiro who coordinated surveillance and helped with data collection and assembly.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | 2020-12-02T14:11:18.674Z | 2020-11-26T00:00:00.000 | {
"year": 2020,
"sha1": "00c17ee5f1faf941a8c9a9f657dd606281624dc9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/pathogens9120984",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "875c5fab991308942b6b7039227b08f4170b6128",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
263651761 | pes2o/s2orc | v3-fos-license | Amphetamines in child medicine: a review of ClinicalTrials.gov
Background: Globally, the use of amphetamines as therapeutic agents in pediatric medicine is a crucial area of concern, especially given the population’s vulnerability. Methods: On 6 August 2023, a search was conducted on ClinicalTrials.gov using “amphetamine” as the keyword. Two independent examiners screened trials against set criteria, including a focus on amphetamine, completion status, an interventional approach, and included children. Ongoing or observational studies were excluded. Data extracted from the qualified trials encompassed primary objectives, participant counts, study duration, and outcomes, with the aim of analyzing children disorders treated by amphetamine. Results: On 6 August 2023, a search of the ClinicalTrials.gov database with the term “amphetamines” identified 179 clinical trials. After extensive exclusion criteria, 19 trials were ultimately selected for analysis. The predominant condition under investigation was attention deficit hyperactivity disorder (ADHD), present in 84.2% of studies. Key study characteristics included: phase 4 trials (36.8%), randomized allocation (63.2%), and the parallel intervention model (42.1%). Masking techniques varied, with no masking in 42.1% of studies, and double and quadruple masking both accounting for 21.1%. Geographically, 78.9% of the studies’ participants were from the United States. Conclusion: This study highlights the notable therapeutic potential of amphetamines in pediatric ADHD populations and emphasizes the importance of recognizing potential side effects and addiction risks. As pharmacogenomics offers the prospect of personalized treatments, there is potential to increase therapeutic efficacy and decrease adverse reactions. It is vital to balance these benefits against the inherent risks, understanding the need for continued research to optimize the use of amphetamines in medicine.
Introduction
Globally, amphetamines play a significant role in pediatric medicine, both as therapeutic agents and subjects of concern.Their dual role necessitates an in-depth analysis, especially when their application targets vulnerable populations such as children (Meyers et al., 2020).As central nervous system (CNS) stimulants, the effects of amphetamines on a developing brain are both beneficial in specific therapeutic scenarios and potentially harmful if misused (Mental Health Services Administration, 1999).The global prevalence of amphetamine use in 2019 was estimated to be 0.5% in children as an antipsychotic medicine and 12% in pregnant mothers (Ochoa et al., 2023).This extensive utilization, predominantly in the medical treatment of conditions like attention deficit hyperactivity disorder (ADHD), underscores the importance of a thorough review of clinical trials focusing on children and amphetamines (Frölich et al., 2012).
Historically, the recognition of amphetamines dates back to the early 20th century.The medical community began to acknowledge its therapeutic potential for ADHD, a prevalent neurodevelopmental disorder in children (Strohl, 2011).However, the propensity for misuse and consequent dependency, especially in this demographic, raised flags.While the most common amphetamines, amphetamine and methamphetamine, dominate discussions, numerous derivatives and formulations cater specifically for children.Dextroamphetamine is frequently prescribed for pediatric ADHD (Clevel Clin, 2023a), while Adderall, which combines AMPH and dextroamphetamine, stands as a staple in ADHD treatment (Faraone and Biederman, 2002).Lisdexamfetamine metabolizes into dextroamphetamine and remains another crucial ADHD management tool for children (Najib et al., 2020).Although some studies have explored the potential therapeutic applications of MDMA for psychiatric conditions in adults, its effects on children remain largely uncharted and controversial.
The spectrum of amphetamines' influence on children extends beyond ADHD.For instance, their use in pediatric obsessivecompulsive disorder (OCD) patients has yielded mixed results (OCD Kids, 2023).Similarly, interventions using amphetamines for mood dysregulation disorders in children are treated with caution due to the potential side effects (Parsley et al., 2020).One of the primary concerns is the risk of addiction.A developing brain is susceptible, and the introduction of substances that alter its neurochemistry, especially influencing the dopamine system, requires vigilant monitoring (Berman et al., 2009).As children and adolescents engage with these drugs, even for therapeutic reasons, there is controversial evidence regarding the risk of developing a dependency (Clevel Clin, 2023b).
Another emerging area of research revolves around the longterm effects of amphetamines on the developing brain.Preliminary findings indicate potential structural and functional alterations in specific brain regions with prolonged amphetamine use.These changes, although subtle, might have implications for cognitive functions, emotional regulation, and even social behavior in the long term.With an increasing number of children undergoing amphetamine-based treatments, understanding these long-term implications becomes paramount.Continuous longitudinal studies tracking these children over several years can provide crucial insights into these effects (Reynolds et al., 2015a).
Despite the plethora of individual studies on amphetamines, a consolidated, in-depth examination of clinical trials aimed at children remains a conspicuous gap in the literature.ClinicalTrials.gov,a leading clinical trials database, is a treasure trove of information (Hegazi et al., 2023).However, the data it on amphetamines awaits a rigorous and comprehensive analysis.This not only hinders therapeutic advancements but also affect or reduce understanding of amphetamine's role in modern medicine.To address this gap, this review offers a comprehensive review of clinical trials from ClinicalTrials.gov on amphetamines.
Search strategy
On 6 August 2023, a comprehensive review of ClinicalTrials.gov was conducted using the keyword "amphetamine".To ensure objectivity, two independent Examiner evaluated the trials returned in the search, based on pre-established eligibility criteria.Trials qualified for inclusion if they predominantly focused on amphetamine, were completed, adopted an interventional approach, and enrolled children.Studies that were still in progress or of an observational nature were excluded.Relevant data were methodically obtained from ClinicalTrials.gov,highlighting vital components that aided the comprehensive analysis of human disorders treated by amphetamine and related substances.This study detailed the primary objectives of each trial and contained essential details such as the number of participants, the length of the study, and the outcomes.
Results
The detailed search of the ClinicalTrials.govdatabase with the term "amphetamine" yielded 179 clinical trials.After applying strict exclusion criteria-incomplete (n = 62), non-interventional (n = 4), and trials that did not enroll children (n = 92)-21 trials were initially suitable.However, two of these did not fully align with our review's focus, primarily due to their limited emphasis on amphetamine treatments or a focus on unrelated conditions.Thus, 19 trials were finalized for analysis, offering insights into current amphetamineassociated disorder treatments.The selection methodology is illustrated in Figure 1.
Characteristics of included studies
Table 1 provides a detailed breakdown of the various characteristics observed across the 19 trials.The most frequently occurring condition was ADHD, accounting for a significant 84.2%.Three specific sub-groups, each representing 5.3%, were identified: individuals with ADHD coupled with Deficient Emotional Self-Regulation, those with ADHD and Reading Disabilities, and participants with ADHD alongside Conduct Disorder and Oppositional Defiant Disorder.
The included clinical trials
In terms of the trial phases, Phase four stood out with the highest representation, accounting for 36.8%.Regarding allocation methods, a majority of 63.2% of the trials were randomized.When assessing the intervention model, it was observed that the parallel model was predominant with 42.1% of all studies.In the masking category, 42.1% of the studies had no masking, while both double and quadruple masking were equally represented at 21.1% each.As for geographical distribution, a substantial 78.9% of the participants were based in the United States (Table 2).
General overview, history, and classification
Amphetamines are classified as potent sympathomimetic agents with valuable therapeutic uses.Chemically, they share a great similarity with the body's innate catecholamines, specifically dopamine and norepinephrine (Heal et al., 2013).First synthesized in 1887 by Lazăr Edeleanu in Germany, their significance in the field of pharmacology was not acknowledged until the 1950s, when they emerged as a potential treatment for certain behavioral disorders (Edeleanu, 1887).The term "amphetamine" designates a particular compound, predominantly present in two enantiomers: levoamphetamine and dextroamphetamine.These mirror-image molecules, while structurally related, vary in terms of their potency and pharmacological effects (Heal et al., 2013).Noteworthy derivatives such as methamphetamine and para-methoxyamphetamine (PMA) also belong to the amphetamine family, and each has a unique set of characteristics (Gough et al., 2002).
Mechanism of action and medical uses
Amphetamines act by facilitate the release of catecholamines, especially dopamine, from presynaptic neurons while simultaneously blocking their reuptake.This dual action increases their concentration within the synaptic cleft, thereby extending neurotransmission durations (Heal et al., 2013).Elevated dopamine concentrations in the brain's mesolimbic dopamine system not only lead to feelings of euphoria but also heighten the risk of addiction, further underscoring the potential for amphetamine dependence (Adinoff, 2004;National Institute on Drug Abuse, 2007).On a related note, increased norepinephrine levels within the CNS are correlated with heightened alertness and arousal (Berridge, 2008).When prescribed and monitored by medical professionals, amphetamines serve as effective treatments for conditions such as ADHD, narcolepsy, and, in some cases, treatment-resistant depression.Their therapeutic applications include enhanced wakefulness, improved cognitive control, reduced fatigue, and mood elevation (Stotz et al., 1999;Berman et al., 2009;Ng and O'Brien, 2009).
Side effects, concerns, and legal implications
Extended and consistent misuse of amphetamines can result in a range of adverse side effects, including inhibited growth, heightened jitteriness, feelings of nausea, and diminished visual clarity (Berman et al., 2009;Craig et al., 2015;Richardson et al., 2017).Long-term abuse escalates the risk of complications such as pronounced dental deterioration commonly referred to as "meth mouth", significant weight loss, persistent skin lesions, an increased dependency, and a powerful addiction (Craig et al., 2015;Nida, 2019).When an addiction takes hold, users often find themselves developing a tolerance.This can lead to overwhelming cravings, pronounced withdrawal symptoms, and a relentless cycle of consumption, even when faced with detrimental repercussions (Leith and Kuczenski, 1981).In recognition of their substantial potential for abuse, amphetamines are stringently regulated on a global scale.Within the United States, these drugs are categorized as Schedule II controlled substances (Deadiversion, 2023).Due to their elevated risk of abuse in the course of their medical use, their prescription and distribution are heavily restricted.Comparable regulations are implemented internationally, and those found accountable of unauthorized distribution or trafficking can expect severe legal consequences (Marandure et al., 2023).
Pharmacokinetics
It is crucial to comprehend the pharmacokinetics of amphetamines, not only in order to optimize their therapeutic usage but also to detect their potential for misuse.Once taken orally, these compounds are rapidly absorbed from the gastrointestinal system, reaching their peak concentration in the bloodstream approximately 2, 3 h after ingestion.It is worth noting that the liver plays a central role in their metabolism (Berman et al., 2009; National Institute of Diabetes and Digestive and Kidney Diseases, 2012).
Amphetamine
Amphetamine (C 9 H 13 N), a derivative of the phenethylamine family, manifests in two distinct enantiomeric forms: levoamphetamine and dextroamphetamine.Notably, dextroamphetamine demonstrates a heightened potency in stimulating the CNS (Berman et al., 2009;Zanda et al., 2017;Losacker et al., 2022).At the neurochemical level, the primary mechanism of action for amphetamine involves the elevation of neurotransmitter levels within the synaptic junctions.This elevation arises from the compound's disruption of vesicular monoamine transporters, subsequently triggering the release of key neurotransmitters-namely, dopamine, norepinephrine, and serotonin-into the synaptic space (Heal et al., 2013).In addition to this, amphetamine has the capability to delay the reuptake of these neurotransmitters, thereby further amplifying their presence and concentration within the synapse (dela Peña et al., 2015) (Figure 2).
Dextroamphetamine
Dextroamphetamine, chemically represented as C 9 H 13 N, stands out as one of the two active enantiomers of amphetamine.This prescription medication plays a pivotal role in treating conditions such as ADHD and narcolepsy (Sharbaf Shoar et al., 2023).A notable mention is Adderall, a widely recognized medication for ADHD, which comprises four distinct amphetamine salts.However, dextroamphetamine predominates in its composition (Kolar et al., 2008).Distinguished Frontiers in Pharmacology frontiersin.orgas (S)-amphetamine, dextroamphetamine is an optically active isomer of the parent compound, amphetamine (Heal et al., 2013).With a chemical structure denoted by C9H13N, its chiral center permits the molecule to manifest in two distinct enantiomeric forms.The "dextro" prefix not only signifies its specific structural configuration but also underscores its capability to rotate plane-polarized light toward the right (Figure 3).
Levoamphetamine
Levoamphetamine, often overshadowed by its more renowned counterpart, dextroamphetamine, is another enantiomer of amphetamine.Unlike dextroamphetamine which primarily has central stimulant effects, levoamphetamine exhibits a stronger peripheral stimulant action.This means that it might lead to more pronounced cardiovascular effects, such as an increased heart rate (Heal et al., 2013;Chen et al., 2023).Levoamphetamine is present in medications such as Adderall, but in reduced quantities compared to dextroamphetamine (Sontheimer and Sontheimer, 2021).Structurally, while both dextroamphetamine and levoamphetamine share the same chemical formula (C9H13N), they differ in the spatial orientation of their atoms (Gough et al., 2002;Heal et al., 2013).This distinction, known as chirality, is reflected in levoamphetamine's name; the prefix "levo" denotes that it rotates plane-polarized light to the left.Although both compounds share many characteristics, this chiral difference gives levoamphetamine a unique pharmacological profile.Specifically, its interactions with neurotransmitter systems deviate from those of dextroamphetamine due to this structural variation (Goodwin et al., 2009;Heal et al., 2013) (Figure 4).
Methamphetamine
Methamphetamine, a potent derivative of amphetamine (Hall et al., 2008), is known for its heightened effects on the CNS.Although it has legitimate medical uses-available in prescription form as Desoxyn to treat certain cases of ADHD and obesity-it is more notoriously associated with the illicit drug "crystal meth" (Miller et al., 2021).A distinguishing feature of methamphetamine is its elevated lipid solubility, enabling it to penetrate the blood-brain barrier with greater concentration than other amphetamines (Jan et al., 2012;Kirkpatrick et al., 2012).This characteristic significantly escalates its potential for abuse and addiction, particularly when in its crystalline form.
From a chemical perspective, methamphetamine, also known as N-methylamphetamine, stands apart from amphetamine due to the inclusion of a methyl group on its nitrogen atom (Kirkpatrick et al., 2012).Its molecular formula reads as C 10 H 15 N.While this modification might seem subtle, it has a profound influence on the compound's pharmacokinetics.The added methyl group enhances the compound's lipid solubility, facilitating its rapid movement across the blood-brain barrier (Kirkpatrick et al., 2012).Moreover, methamphetamine exists as two enantiomers: dextromethamphetamine and levomethamphetamine (West et al., 2012) (Figure 5).
Lisdexamfetamine
Lisdexamfetamine is a prodrug of dextroamphetamine.It therefore remains inactive upon ingestion, and needs to be activated by metabolic conversion to manifest its active form (Cho and Yoon, 2018).What sets Lisdexamfetamine apart is its intrinsic extended-release mechanism due to its prodrug nature.This not only ensures a more prolonged therapeutic effect but also mitigates its potential for misuse.The reason behind this is its more gradual onset of action, compared to other immediate-release amphetamine formulations.Chemically, Lisdexamfetamine is an amide conjugate, formulated by combining dextroamphetamine with the essential amino acid lysine (Mrazek et al., 2009;Levine and Swanson, 2023).Its chemical formula is C 15 H 25 N 3 O (Figure 6).
Para-methoxyamphetamine
PMA, or para-Methoxyamphetamine, is a synthetic compound that shares a structural resemblance to amphetamines but stands apart due to its unique pharmacological profile (Richter et al., 2019).Although occasionally mistaken for MDMA (commonly known as "Ecstasy"), PMA's effects can be considerably more toxic.This compound specifically interacts with the brain's serotonin receptors to produce psychedelic experiences.However, one of its alarming side effects is a potentially hazardous surge in body temperature, making it significantly more perilous than many of its amphetamine counterparts (Freezer et al., 2005).Chemically, while PMA retains the foundational structure of amphetamines, it differentiates itself with an additional methoxy group situated at the para position of the phenyl ring.Its molecular formula is represented as C 10 H 15 NO (Figure 7).
Attention deficit hyperactivity disorder
Amphetamines are recognized for their wide-ranging applications, but are particularly notable for their role in treating ADHD (Frölich et al., 2012).ADHD is a complex neurodevelopmental disorder, typified by enduring patterns of inattention, heightened hyperactivity, and pronounced impulsivity (Magnus et al., 2023).The precise origins and causes of ADHD have yet to be definitively ascertained; however, the prevailing scientific consensus shows that imbalances in neurochemicals, especially within dopamine pathways, significantly influence its manifestation (Blum et al., 2008;Wilens and Spencer, 2010a).Medications formulated with amphetamines such as Adderall, which combines amphetamine and dextroamphetamine, and Evekeo, aim to modulate these neurotransmitter concentrations.As a result, they effectively mitigate the predominant symptoms of the disorder (Frölich et al., 2012).
Narcolepsy
Narcolepsy is a long-term sleep disorder marked by an intense and propensity for daytime sleepiness involuntary sleep episodes.People with this condition may find themselves inadvertently dozing off during daily activities, which presents not only an inconvenience but also poses potential safety risks.While the precise mechanisms underpinning narcolepsy remain a topic of ongoing research, many cases have been linked to a deficiency in the neuropeptide hypocretin (Akintomide and Rickards, 2011).Due to their stimulatory effects, amphetamines have demonstrated efficacy in mitigating the debilitating daytime drowsiness symptomatic of narcolepsy.Consequently, medications such as Adderall are frequently incorporated into treatment regimen for the disorder (Turner, 2019;Barker et al., 2020).
Obesity and weight management
Historically, Phentermine, an amphetamine derivative, was prescribed as an anorectic or appetite suppressant to assist with weight reduction (Coulter et al., 2018).Its stimulant properties have the potential to accelerate metabolism and reduce appetite.In the mid-20th century, drugs such as Benzedrine gained prominence for their weight management benefits.However, increasing concerns regarding the potential for misuse, adverse cardiovascular implications, and other side effects precipitated a decrease in their utilization for this objective.It is crucial to underscore that in contemporary medical practice, the prescription of amphetamines solely for weight loss is not commonly endorsed due to these aforementioned concerns (Abenhaim et al., 1996).
Treatment-resistant depression (TRD)
TRD is characterized by major depressive episodes that fail to show sufficient improvement, even after the administration of at least two distinct antidepressant regimens.Recognizing the moodelevating and energy-boosting properties of amphetamines, researchers have explored their potential as adjunctive treatments for TRD.While certain studies have yielded promising outcomes, the application of amphetamines in this specific scenario is still offlabel.It is essential for more comprehensive research to be conducted to firmly determine both their safety and efficacy in treating TRD (Stotz et al., 1999).
Cognitive enhancement and fatigue management
In specific situations, amphetamines have been employed offlabel as tools for cognitive augmentation and combating fatigue.The underlying intent is to bolster alertness, sharpen concentration, and enhance overall performance during extended durations of wakefulness or in instances of sleep deprivation.Nonetheless, the repercussions of prolonged use remain inadequately researched, accompanied by legitimate concerns surrounding the potential for misuse and subsequent dependency (Ricci, 2020).With cognitive deterioration being a significant issue in the senior demographic, there has been a growing interest in evaluating the efficacy of stimulants, including amphetamines, in amplifying cognitive abilities in older adults who do not suffer from dementia.Initial research hints at possible advantages in tasks demanding attention and memory, yet the long-term safety and effectiveness for this demographic are still to be conclusively determined (Bagot and Kaminer, 2014).
Traumatic brain injury (TBI)
After experiencing TBI, many patients may face several challenges, such as cognitive impairments, diminished alertness, and delayed processing speeds.Emerging studies have proposed that amphetamines could potentially accelerate recovery and improve cognitive outcomes for these individuals.The postulated mechanism behind this effect is the drug's capacity to boost synaptic transmission and amplify neural plasticity, which might bolster the brain's innate healing mechanisms.Nevertheless, the role of amphetamines in the rehabilitation of TBI is a topic under active investigation, and concrete conclusions regarding their effectiveness have yet to be firmly established (Hornstein et al., 1996;Coris et al., 2021).
Discussion
It is important to have a comprehensive grasp of the underlying mechanisms, applications, potential adverse effects, and the historical trajectories of drugs is pivotal to ascertaining their suitability and safety for diverse medical conditions.Our meticulous examination of clinical trials centered on amphetamines, strengthened by a profound study of their historical trajectory and present-day status, shows their multifaceted roles, advantages, and associated risks.
From our extensive analysis, it is apparent that the body of clinical trials on ADHD is coherent, reflecting amphetamines' historical significance and contemporary relevance in managing the disorder.Patients with ADHD, a disorder delineated by its hallmark symptoms of inattention, hyperactivity, and impulsivity, have experienced transformative treatment outcomes with amphetamine-based therapies (Singh et al., 2015).These medications, through their influential role in altering neurotransmitter concentrations, predominantly dopamine, present a compelling strategy for targeting the fundamental symptoms of the disorder (Gough et al., 2002;Heal et al., 2013).Recognized as a neurodevelopmental anomaly that predominantly surfaces during formative years (Wilens and Spencer, 2010b), studies have underscored the pivotal nature of amphetaminecentric interventions for this patient cohort.Notwithstanding their pronounced effectiveness in symptom alleviation, concerns related to side effects like impeded growth demand vigilant scrutiny and periodic oversight, especially among children (Richardson et al., 2017).Moreover, the prospective repercussions on growing neural trajectories and the consequent implications for long-term cognitive capacities call for thorough, sustained investigations (Berman et al., 2009;Reynolds et al., 2015b).
Turning to narcolepsy, a persistent sleep affliction, the therapeutic approach with amphetamines support the fact of determining drug pharmacodynamics in depth.Controlling the stimulative effects of amphetamines, lethargy during the day can be reduced for patients.However, it remains imperative to consistently balance the medicinal gains against any potential adverse outcomes or addiction susceptibility.The assessment of amphetamines across diverse conditions such as obesity, TRD, cognitive augmentation, and TBI unveils the expansive therapeutic potential of these molecules.Yet, as evidenced in the context of obesity management, the evolving landscape of medical protocols and burgeoning knowledge can recalibrate drug adoption trends.The diminished preference for amphetamines in weight management due to concerns over potential misuse and adverse reactions accentuates the necessity for constant evaluation, supervision, and recalibration of clinical directives.
Furthermore, while the therapeutic promise of amphetamines for addressing TRD and enhancing cognition, particularly among the geriatric populace or those with post-traumatic brain injuries, is captivating, it necessitates prudence.The off-label deployment of medications frequently navigates the unclear of clinical practice, making it vital to understand risks against the prospective benefits.A detailed dissection of specific amphetamine derivatives shows the intricate distinctions between them, highlighting that even minor chemical alterations can exert significant effects on pharmacokinetics and pharmacodynamics.This layered understanding can adeptly steer drug choices in specific clinical contexts.
Future directions
Future investigations into amphetamines should place a high emphasis on pediatric-focused trials, particularly in light of the rising prescriptions for children with ADHD.Alongside this, it is crucial to delve deeper into the intricate mechanisms underpinning addiction, while simultaneously developing robust prevention strategies.Additionally, considering the pharmacogenomics may offer tailored dosing and more predictable patient outcomes.Moreover, there exists a significant opportunity to innovate in the domain of extended-release formulations, aiming to affect an optimal balance between maximizing therapeutic advantages and minimizing the potential for misuse.
Conclusion
This study has underscored both the significant therapeutic potential of amphetamines, especially evident in pediatric ADHD populations, and the crucial need for awareness of their potential side effects and addiction risks.As the landscape of medicine expands, innovative formulations and broader therapeutic applications are emerging, with the exciting prospect of pharmacogenomics potentially redefining individualized treatments.This promises to reduce adverse reactions and bolster therapeutic efficacy.Balancing these benefits with the inherent risks remains paramount.Therefore, ongoing research is crucial in order to further understand amphetamines' broad applications and to navigate the balance between their benefits and risks.
TABLE 1
Clinical trials characteristics.
TABLE 2
Summary of the included trials.
TABLE 2 (
Continued) Summary of the included trials.
TABLE 2 (
Continued) Summary of the included trials. | 2023-10-05T15:16:22.965Z | 2023-10-03T00:00:00.000 | {
"year": 2023,
"sha1": "7931c4a49541d73c97ad0413d491fd22736a4a6d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3389/fphar.2023.1280562",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f49b9d5757aa781b6a7af33acd991eea1641d109",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
118771294 | pes2o/s2orc | v3-fos-license | A Low-Cost Natural Gas/Freshwater Aerial Pipeline
Offered is a new type of low-cost aerial pipeline for delivery of natural gas, an important industrial and residential fuel, and freshwater as well as other payloads over long distances. The offered pipeline dramatically decreases the construction and operation costs and the time necessary for pipeline construction. A dual-use type of freight pipeline can improve an arid rural environment landscape and provide a reliable energy supply for cities. Our aerial pipeline is a large, self-lofting flexible tube disposed at high altitude. Presently, the term"natural gas"lacks a precise technical definition, but the main components of natural gas are methane, which has a specific weight less than air. A lift force of one cubic meter of methane equals approximately 0.5 kg. The lightweight film flexible pipeline can be located in the Earth-atmosphere at high altitude and poses no threat to airplanes or the local environment. The authors also suggest using lift force of this pipeline in tandem with wing devices for cheap shipment of a various payloads (oil, coal and water) over long distances. The article contains a computed macroproject in northwest China for delivery of 24 billion cubic meter of gas and 23 millions tonnes of water annually.
Introduction
Natural gas is a mixture of flammable gases, mainly the hydrocarbons methane (CH 4 ) and ethane that found in bulk beneath the Earth's surface. Helium is also found in relatively high concentrations in natural gas. Natural gas usually occurs in close association with petroleum. Although many natural gases can be used directly from the well without treatment, some must be processed to remove such undesirable constituents as carbon dioxide, poisonous hydrogen sulfide, and other sulfur components.
Methods of pipeline transportation that were developed in the 1920s marked a significant stage in the use of gas. After World War II there occurred a period of tremendous expansion that has continued into the 21 st Century. Increasingly, this expansion relies on the use of pipeline transportation of gas. Among the largest accumulations of natural gas are those of Urengoy in Siberia, the Texas Panhandle in the United States, Slochteren-Groningen area in The Netherlands, and Hassi RMel in Algeria. Gas accumulations are mostly encountered in the deeper parts of sedimentary basins. Natural gas fields are often located far from the major centers of consumption. Consequently, the gas must be transported.
Transportation of natural gas depends upon its form. In a gaseous form it is transported by pipeline under high pressure, and in a liquid form it is transported by tanker ship [1].
Large gas pipelines enable gas to be transported over great distances. Examples are the North American pipelines, which extend from Texas and Louisiana to the Northeast coast, and from the Alberta fields to the Atlantic seaboard. Transportation pressure is generally 70 kilograms per square centimeter (up to 200 atm) because transportation costs are lowest for pressures in this range. Natural gas pipeline diameters for such long-distance transportation have tended to increase from an average of about 60 to 70 centimeters in 1960 to about 1.20 meters nowadays. Some macroprojects involve diameters of more than 2 meters. Because of pressure losses, the pressure is boosted every 80 or 100 kilometers to keep a constant rate of flow.
Petroleum prospecting has revealed the presence of large gas fields in Africa, the Middle East, Alaska, and China. Gas is transported from developed regions by special LNG ships. The gas is liquefied to -160 C and transported in tankers with insulated containers. Since 1965 the capacity of tankers has risen to as much as 120,000 cubic meters, which enables some tankers to convey as much as 70 million cubic meters of gas per voyage. Land or sea-based storage of low-temperature liquefied gas requires double-walled tanks with special insulation. Such tanks may hold as much as 50,000 cubic meters. Even larger storage facilities have been created by using depleted subsurface oil or gas geological reservoirs near consumption centers or by the creation of artificial gas fields in aquifer layers. The latter technique developed rapidly, and the number of storage facilities of this type in the USA increased tremendously during the late 20 th Century. There are also such underground storage areas in France and Germany.
Residential and commercial use consumes the largest proportion of natural gas in North America and Western Europe, while industry consumes the next largest amount and electric-power generation is third in worldwide natural-gas consumption. By far the major use of natural gas is as fuel, though increasing amounts are used by the chemical industry for raw material. Among the industries that consume large volumes are food, paper, chemicals, petroleum refining, and primary metals. In the USA, a large amount fuels household heaters; in Russia a considerable volume goes for electric-power generation and to generate export revenue. Exportation and importation of natural gas involves some aspects of geopolitical assessment [2] Most materials that can be moved in large quantities in the form of liquids, gases, or slurries (fine particles suspended in liquid) are generally moved through freight pipelines [3]. Pipelines are lines of pipe equipped with pumps, valves, and other control devices for transporting materials from their remote sources to storage tanks or refineries and in turn to distribution facilities; pipelines may also convey industrial waste and sewage to processing plants for treatment before disposal.
Pipelines vary in diameter from tiny pipes up to lines 9 meters across used in high-volume freshwater distribution and sewage collection networks. Pipelines usually consist of sections of pipe made of steel, cast iron, or aluminum, though some are constructed of concrete, fired clay products, and occasionally plastics. The sections are joined together and, in most cases, installed underground. Because great quantities of often expensive and sometimes environmentally harmful materials are carried by pipelines, it is essential that the systems be well constructed and monitored in order to ensure that they will operate smoothly, efficiently, and safely. Pipes are often covered with a protective coating of coaltar enamel, asphalt, or plastic; sometimes these coatings may be reinforced or supplemented by an additional sheath of asbestos felt, fiberglass or polyurethane. The materials used depend on the substance to be carried and its chemical activity and possible corrosive action on the pipe. Pipeline designers must also consider such factors as the capacity of the pipeline, internal and external pressures affecting the pipeline, water-and air-tightness, and construction and operating costs. Generally the first step in construction is to clear the ground and dig a trench deep enough to allow for approximately 51 centimeters of soil overburden to cover the pipe. Sections of pipe are then held over the trench, where they are joined together by welding, riveting or mechanical coupling, covered with a protective coating, and lowered into position. Pipelines of some water-supply systems may follow the slope of the land, winding through irregular landscapes like low-gradient railroads and highways do, and rely on gravity to keep the water flowing through them. If necessary, the gravity flow is supplemented by pumping. Most pipelines, however, are operated under pressure to overcome friction within the pipe and differences in elevation. Such systems have a series of pumping stations that are located at intervals of from 80 to 320 kilometers. Many pipelines are equipped with a system of valves that may be shut in the event of a breach in the line. Nevertheless, a short-period breach could still result in a spill of oil or escapes of gas. Vigilant ground and air inspection crews help to avert such damaging and costly accidents by checking periodically the pipeline for obvious weaknesses and stresses. Various methods are used to control corrosion in metallic pipelines. It is worth noting that metallic pipelines, especially those located on the Earth's surface, are subject to Space Weather, just like electric power grids [4]. In cathodic protection, a negative electrical charge is maintained throughout the pipe to inhibit the electrochemical process of corrosion. In other cases the interior is lined with paints and coatings of plastic and rubber or wrappings of fiberglass, asphalt, or felt. Sometimes corrosioninhibiting chemicals are injected into the cargo. Pipelines are also cleaned by passing devices called pigs; a pig may be a ball of the same diameter as the pipe; this kind of pig works by scraping clean the pipe's interior as it is propelled along by the flowing cargo. It may also be a complex scrubbing machine that is inserted into the pipe through a special opening. One of the longest metallic gas pipelines in the world is the Northern Lights pipeline, which is 5,470 kilometers long and links the West Siberian gas fields on the Arctic Circle with locations in Eastern Europe; in China, the recently completed "West Gas Supplying To East Project", yearly conveys 12 trillion cubic meters of natural gas from Xijiang Province gas fields to Beijing, the capital, in a 4,000 kilometer-long metallic pipeline.
The main differences of suggested Gas Transportation Method and Installation from modern metallic pipelines are: 1. The tubes are made from a lightweight flexible thin film (no steel or solid rigid hard material). 2. The gas pressure in tube equals an atmospheric pressure of 1-2 atm. (Some current gas pipelines have pressure of 70 atm.). 3. Most of the filmic pipeline [except compressor (pumping) and driver stations] is located in the Earth-atmosphere at a high altitude (0.1-6 km) and does not have a rigid support (pillar, pylon, tower). All operating pipelines are located on the ground surface, underground or underwater. 4. The transported natural gas supports the air pipeline in the air above the route selected. 5. Additional aerial support may be made by employment of attached winged devices. 6. The natural gas pipeline can be used as an air transport system for oil and solid payloads with a maximum speed up of 250 m/sec. 7. The natural gas pipeline can be used as a transfer of a mechanical energy. The suggested Method and Installation have remarkable cost-benefit advantages in comparison with all existing natural gas pipelines.
The installation works in the following manner: The compressor station pumps natural gas from storage into the tube (pipeline). The tube is made from light strong flexible gas-impermeable fireproof material (film), for example, from composed material (fibers, whiskers, nanotubes, etc.). A gas pressure is a less than an atmospheric pressure (up 1-2 atm). A natural (fuel) gas has methane as its main component with a specific density about 0.72 kg/m 3 . Air has a specific density about 1.225 kg/cubic meter. That means that every cubic meter of gas (methane) or a gas mixture has a lift force approximately 0.5 kg. The linear (one meter) weight of a tube is less than a linear lift force of gas into the tube and the pipeline, therefore, has a lift force. The pipeline rises up and steadies at a given altitude (0.1 -6 km), held fast by the tensile elements 3. The altitude of the aerial pipeline can be changed by the use of common winches 7. The compressor station is located on the ground surface and moves natural gas to the next compressor station that is ordinarily located at the distance 70 -250 km from previous compressor station. Inside of the aerial pipeline there are valves ( fig.4) dedicated to lock the tube tightly in case of it is punctured, ruptured or otherwise damaged. The pipeline has also the warning light indicator 5 for aircraft. The route selected for our example-see below-is well north of IATA-1, the new International Air Transport Association-approved flight-path for airliners coming from, or going to Europe via Hong Kong or Shanghai. Only international flights arriving or departing Beijing might come close to the selected example. Even if hit by an airliner, if the aircraft speed is greater than about 3% of the stress wave velocity, or greater than about 150 m/sec, the airplane's speed causes an immediate fracture that is independent of cable diameter, although the force on the vehicle's wing certainly is not! Fig.2 shows the gross-section of the gas pipeline and support ring. The light rigid tube ring keeps the lift force from gas tube, wing support devices, from monorail and load container. The winged device 4 is a special automatic wing feature. When there is windy weather and the side wind produces a strong side force, the winged devices produce a strong lift force and support pipeline in fixed vertical position. The winged device works in the following way: when there is a side wind the tube has the wing drag and the winged device creates needed additional lift force. All forces (lifts, drags, weights) are in equilibrium. The distance between the tensile elements 3 is such that the tube can resist the maximum storm wind speed. The system can have a compensation ring. The compensation ring includes ring, elastic element, and cover. The ring compensates the temperature's change of the tube and decreases a stress from a wind.
The suggested gas pipeline has big advantages over the conventional steel gas pipeline: 1. The suggested natural gas pipeline is to be made from a thin film that is hundreds of times less expensive than current gas pipelines currently made from steel tubes. 2. Construction time might be decreased from years to a few months. 3. There is no need to compress a gas, a huge saving of energy and expenditures for high-maintenance pumps. 4. No need for expensive ground surface and environmental damage during either the building or exploitation phases of the macroproject. 5. No environmental damage in case of pipeline's damage during use. 6. Easy to repair. 7. Decreasing energy for delivery. 8. Additional possibility of payload delivery in both directions. 9. If the aerial natural gas pipeline is situated at high altitude, it is more difficult for successful terrorist attacks and for gas thefts. 10. The suggested transportation system may be also used for a transfer of mechanical energy from one driver station to another.
More detail description of innovation the reader finds in publication. See [5]- [8].
Below, the authors have computed a macroproject suitable for Beijing region and the desert located in China's northwest territory. In addition, the authors have also solved additional problems, which appear in this and other macroprojects and which can appear as difficult as the proposed pipeline and transportation system itself. (The authors are prepared to discuss the technical problems with serious organizations that are interested in researching and developing these ideas and related macroprojects.)
Methods of the estimation of the altitude gas pipeline
1. Gas delivery capability is is measure of the absolute roughness of tube wall. 3. Lift force F of the one meter length of pipeline is is safety stress. That is equals 100 -200 kg/mm 2 for matter from current artificial fiber. 5. Weight of one meter pipeline is W = D , (6) where: is specific weight of tube matter (film, cover). That equals about 0.9 -2.2 kg/m 3 for matter from artificial fiber. 6. Air drag D of pipeline from side wind is where: C d is drag coefficient; S is logistical pipeline area between tensile elements; is air density. 7. Needed power for delivery is where: 0.9 is a efficiency coefficient of a compressor station.
Load transportation system under pipeline 1. Load delivery capability by wingless container is G p = kFV p , (9) where: k is load coefficient (k 0.5 < 1); V p is speed of container (load).
Friction force of wingless containers (rollers) is
where: f 0.03 -0.05 is coefficient of roller friction; W c is weight of container between driver stations. 3. Air drug of container is where: C c is drug friction coefficient related to S c ; S c is cross-section area of container. 4. The lift force of a wing container is where: is air dynamic pressure, N/m 2 ; S cw is wing area of container, m 2 . 5. The drug of wing container may be computed by equation where K10 20 is coefficient of aerodynamic efficiency; C D is air drag coefficient of wing container. If lift force of wing container equals the container weight, the friction force F is absent and not necessary in monorail. 6. The delivery (load) capacity of the wing container is where W 1 is weight of one container, kg; V c = 30 200 m/s is container speed, m/s; T is time, s; d is distance between two containers, m. 7. The lift and drag of the wing device may be computed by Equations (12)-(13). The power needs for transportation system of wing container is where W is total weight of containers, kg; g = 9.81 m/s 2 if Earth gravity; K c 10 20 is aerodynamic efficiency coefficient of container and thrust cable; 8. The stability of the pipeline against a side storm wind may be estimated by inequality where L T is lift force of given part of pipe line (conventionally it is distance between the tensile element, N; L d is lift force of wing device, N; W T is a weight of pipeline of given part, kg; W s is weight of the given part suspending system (containers, monorail, thrust cable, tensile element, rigid ring, etc.), kg; D T is drag of the given part of pipeline, N; L d is the lift force of the wing device, N; K d is an aerodynamic efficiency coefficient of wing device; is angle between tensile element and ground surface.
China Gas/Water Aerial Pipeline Macroproject
(Tube diameter equals D = 10 m, gas pipeline has the suspension load transport system, the project is suitable for Beijing region -Gobi Desert) Let us take the distance between the compressor-driver stations 100 km and a gas speed V = 10 m/sec. Gas delivery capacity is (Eq. (1)) G = D 2 V/4 = 800 m 3 /s = 24 billions m 3 per year. For the Reynolds number R =10 7 value is 0.015, P = 0.18 atm (Eq. (2)-(3)). We can take V = 20 m/s and decrease delivery capacity by two (or more) times.
Lift force (Eq.4) of one meter pipeline's length equals F = 39 kg. We take the thickness of wall as 0.15 mm for = 200 kg/sq. mm.
The cover weight of one-meter pipeline's length is 7 kg. The needed power of the compressor station (located at distance 100 km) equals N = 10,890 kW for = 0.9.
Load transportation System Let us take the speed of delivery equals V = 30 m/sec, payload capability is 20 -25 kg per one meter of pipeline in one direction. Then the delivery capability for non-wing containers is 750 kg/s or 23 millions tons per year.
That is more than gas delivery (18 million tons per year). The total load weight suspended under the pipeline of length L = 100 km equals 2500 tons. If a friction coefficient is f = 0.03, the needed trust is 75 tons and needed power from only a friction roller drug is N 1 = 22,500 kW (Eq. (10)).
If the air drag coefficient C d = 0.1, cross section container area S c = 0.2 m 2 , the air drag of a one container equals D c1 = 2.2 kg, and total drag 20,000 container of length 100 km is D c = 44 tons. The need driver power is N 2 = 13,200 kW. The total power of transportation system is N = 22500 + 13200 = 35,700 kW. The total trust force is 77 + 44 = 121 tons. If = 200 kg/sq. mm, the cable diameter equals 30 mm. The suggested delivery system can delivery a weight units (non-wing container) up to 100 kg if a selected length of container is 5 -7.5 m.
The pipeline and container delivery capability may be increased at tens of times if winged containers are utilized. In this case we are not limited in load capability. Winged container needs a very lightweight monorail (or without it) and only in closed-loop thrust cable. That can be used for delivery water, oil or payload in containers. For example, if our system deliveries 4 m 3 /second that is equivalent of a normal river (or a water irrigation canal) having a cross-section area equal to 202 m and water flowing speed 0.1 m/s. In other words, northwest China's planted desert dust suppression macroproject-the Great Green Wall [9]-can be fostered by delivery of irrigation water to the vegetation that may become available in AD 2008, just as the Olympic Games are played in Beijing, from the East Route of the "South-North Water Transfer Scheme" [10]. This particular macroproject system can transfer mechanical energy, we can transfer 35,700 kW for the cable speed at 30 m/sec, and 8 times more by the same cable having a speed 250 m/sec.
If the < 60 o and wing of winged device has a width of 6 m, the system is stable against a sidethrusting storm wind of 30 -40 m/second. | 2019-04-14T03:15:55.411Z | 2007-01-05T00:00:00.000 | {
"year": 2007,
"sha1": "509c3847d2d8be790a6713b52c9a9cc767bc7cc1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2b20a3f83bedd9bc720e9a49f0117447282d8179",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
55853996 | pes2o/s2orc | v3-fos-license | Mycoplasma pneumoniae-associated mucositis syndrome: A rare and clinically challenging disease in a Saudi child
Mycoplasma pneumoniae-associated mucositis (MPAM) is an extra-pulmonary manifestation of M. pneumoniae infection and may present as isolated mucosal lesions (e.g., ocular, oral, and urogenital) or as a combination of mucosal and minimal cutaneous lesions. MPAM is a rare entity that lies on the spectrum of erythema multiform (EM) major and Stevens–Johnson syndrome (SJS). We present a 12-year-old boy who presented with classical clinical manifestations of MPAM and strongly positive M. pneumoniae PCR results. The patient was treated with antimicrobial therapy and had an uneventful recovery. Physicians should be aware of this rare entity and manage patients accordingly.
is an extra-pulmonary manifestation of M. pneumoniae infection and may present as isolated mucosal lesions (e.g., ocular, oral, and urogenital) or as a combination of mucosal and minimal cutaneous lesions. MPAM is a rare entity that lies on the spectrum of erythema multiform (EM) major and StevenseJohnson syndrome (SJS). We present a 12-year-old boy who presented with classical clinical manifestations of MPAM and strongly positive M. pneumoniae PCR results. The patient was treated with antimicrobial therapy and had an uneventful recovery. Physicians should be aware of this rare entity and manage patients accordingly.
Introduction
Mycoplasma pneumoniae is an important infectious cause of community-acquired pneumonia in children that is often associated with extrapulmonary complications such as mucocutaneous eruptions, including StevenseJohnson syndrome (SJS) and toxic epidermal necrolysis (TEN) which are, in most cases, almost exclusively attributed to drugs. 1 However, in the case of M. pneumoniae infections, these severe cutaneous reactions may differ from drug-induced SJS or viral-associated erythema multiforme. 2 M. pneumoniae-associated mucositis exhibits prominent mucositis and sparse cutaneous involvement, although cutaneous involvement varies. 3 Here, we present a rare case of SJS associated with M. pneumoniae in a Saudi child and review the literature for similar cases.
Case report
A 12-year-old boy with no significant past medical history presented initially with a history of fever, productive cough and shortness of breath for 10 days. A chest radiograph showed infiltration of the left lung (Figure 1). He was diagnosed with atypical pneumonia and given oral Clarithromycin (500 mg, every 8 h) which improved his symptoms. On day 4 of antibiotic treatment, the patient complained of mild eye itching. The next morning, he developed lid swelling with marked erythema in both eyes and whitish eye discharge. He also developed bullae inside the mouth and small pruritic lesions on both palms. Upon examination, he appeared healthy with no respiratory distress or fever (36.5 C, axillary). His respiratory rate was 24 breaths/min, his blood pressure was 115/70 mmHg, and his SpO2 was 97%. A mouth examination showed severe oral mucositis with haemorrhagic vesiculobullous eruptions over the buccal mucosa, the soft and hard palate and the tonsillar pillars but not the gingiva (Figure 2). An eye examination revealed bilateral conjunctival ingestion and pseudomembrane formation (Figure 3). A skin examination showed only one red papule over the trunk. The palms of both hands exhibited target skin lesions (Figure 4). Systemic and chest examinations were normal with neither hepatosplenomegaly nor significant lymphadenopathy. No evidence of any lesion in the genital, anal or perianal area was found upon examination. Laboratory tests, including serology, revealed a white blood cell count of 12.6 K/mL and negative findings for mononucleosis, Herpes Simplex Virus (HSV) 1 and 2, and influenza viruses. Serology for M. pneumoniae IgM and M. pneumoniae PCR were both positive. An enzyme-linked immunosorbent assay (ELISA), also known as an enzyme immunoassay (EIA), is a biochemical technique used mainly in immunology to detect the presence of an antibody or an antigen in a sample. In an ELISA, an unknown amount of antigen is affixed to a surface, and then a specific antibody is applied over the surface so that it can bind to the antigen. This antibody is linked to an enzyme, and in the final step, a substance that the enzyme can convert to some detectable signal, most commonly a color change in a chemical substrate, is added. PCR was used for detection of the 16S rRNA gene.
The patient was treated with supportive management including intravenous fluids for hydration and analgesics including oral paracetamol (10 ml) for pain if needed, and he was given lubricant ofloxacin eye drops and mouth wash for the buccal lesions. He was discharged within four days of admission in stable condition. Ophthalmological follow-up disclosed healing of his conjunctivitis, and subsequent clinical follow up after two weeks showed complete resolution.
Discussion
M. pneumoniae is a significant cause of communityacquired pneumonia in children. In rare cases, patients may present with extra-pulmonary manifestations of M. pneumoniae, such as SJS, and may require hospitalization and, occasionally, intensive care for respiratory failure. 3 Most reported cases of SJS and TEN, which are lifethreatening conditions, are almost exclusively attributed to drugs. 1 The incidence in children has been reported to be lower than that in adults and has a better outcome. 1 In developed countries, the most common precipitating cause of SJS in children has been reported to be infections, particularly M. pneumoniae and HSV. However, in India, drugs have been reported to be the most common trigger. 4 The literature consists of many reports of mycoplasma induced mucocutaneous skin lesions/rash that were diagnosed based on clinical and/or radiographic evidence of pneumonia, with positive serology tests consisting of either positive cold agglutination or elevated IgM antibodies against M. pneumoniae. 2,5,6 In a systematic review, patients were often young (mean age: 11.9 years) and male (66%). Cutaneous involvement ranged from absent (34%), sparse (47%), or moderate (19%). Oral, ocular, and urogenital mucositis were reported in 94%, 82%, and 63% of cases, respectively. 2 Four cases were described by Latsch et al. and Ravin et al. 5,6 of three male and one female patient with genital involvement. Latsch reported two adolescents who presented with severe exudative and ulcerative stomatitis accompanied by conjunctivitis and genital erosions. 5 Ravin also reported two patients with genital involvement. 6 All cases were diagnosed by positive Mycoplasma PCR (throat/sputum) and microparticle agglutination assays (IgM, IgA, IgG), and all received Clarithromycin. Patients in other reports were diagnosed only by positive serology. 7,8 Bressan et al. 7 reported a case of MPAM in a 9-year-old girl who presented with genital involvement. The patient was diagnosed based on IgM agglutination assays and received intravenous immunoglobulin. Another similar case was reported by Trapp et al., 8 who described a 13-year-old boy who presented with mucositis and genital involvement and was diagnosed serologically by the detection of Mycoplasma-specific IgM antibodies. Incomplete StevenseJohnson syndrome secondary to atypical pneumonia has been reported. 9 Our patient showed similar presentations to other reported cases, exhibited prominent mucositis and sparse cutaneous involvement, and had positive sputum PCR results, and his follow up showed complete recovery. Treatment is usually supportive, and specific treatment with immunosuppressive drugs or immunoglobulins did not show a better outcome in most studies and remains controversial. 10 We report this case because paediatricians should be aware of the clinical entity of atypical or incomplete M. pneumoniae-associated mucositis, particularly when the patient has a clinical presentation suggestive of prior M. pneumoniae infection along with a mild disease course and positive serology and/ or PCR results. | 2018-12-14T19:26:26.817Z | 2017-01-13T00:00:00.000 | {
"year": 2017,
"sha1": "4b7adb3de8099cb3b83a72e8de286fe794312dd5",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jtumed.2016.12.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b7adb3de8099cb3b83a72e8de286fe794312dd5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258616903 | pes2o/s2orc | v3-fos-license | The prognostic role of lymphocyte to monocyte ratio (LMR) in patients with Myelodysplastic Neoplasms
ABSTRACT Background: Previous studies validated the prognostic significance of lymphocyte to monocyte ratio (LMR) in patients with solid tumors and some hematologic malignancies. However, the correlation between LMR and Myelodysplastic Neoplasms (MDS) was unclear. The study intends to investigate the prognostic impact of LMR on MDS patients. Methods: 91 newly diagnosed MDS patients were included in this retrospective study. The cut-off of LMR was 3.2 by X-Tile. All patients were divided into the low LMR group (<3.2) and the high LMR group (≥3.2). Clinical characteristics were compared between the two groups. Results: Patients in the high LMR group (n = 67) had better OS (P = 0.007) from the Kaplan-Meier survival curves. The results of the univariate analysis demonstrated that LMR was a prognostic factor for OS [hazard ratio (HR) = 2.070, 95%CI 1.201-3.571, P = 0.009]. After multivariate cox analysis, low LMR was confirmed to be an independent predictor of poor OS in MDS patients (HR = 1.872, 95%CI 1.084-3.230, P = 0.024). Conclusions: LMR, a representative marker of systematic inflammation and immune response, has potential prognostic significance in MDS patients.
Introduction
Myelodysplastic Neoplasms (MDS) is a heterogeneous group of myeloid neoplastic diseases derived from hematopoietic stem cells, characterized by peripheral blood cytopenia, pathological hematopoiesis and risk of transformation to acute myeloid leukemia (AML) [1].In the past, MDS has been an abbreviation for Myelodysplastic Syndromes.With the increasing understanding of the disease, the fifth edition of the World Health Organization (WHO) classification of myeloid neoplasms changed the name of the disease to Myelodysplastic Neoplasms.Compared to the 2016 classification, the new version emphasizes more on the neoplastic characteristics of MDS and harmonizes terminology with myeloproliferative neoplasms (MPN) [2,3].
The National Cancer Institute reported that the incidence rate of MDS was 4.2 cases per 100,000 people per year in the United States, and the 5-year relative survival rate was only 37.5%.The diagnosis of MDS mainly depends on peripheral blood counts and blood smear, marrow morphology, cytogenetics, immunology and molecular genetics [4].Nowadays, the Revised International Prognostic Scoring System (IPSS-R) is the most accepted risk stratification system.It consists of five factors: bone marrow cytogenetics, percent of bone marrow blasts, platelet count, hemoglobin level and absolute neutrophil count (ANC) [5,6].MDS patients received individualized and risk-adapted therapy based on the International Prognostic Scoring System (IPSS) and IPSS-R.However, IPSS-R has some limitations.It is only suitable for predicting the clinical outcomes of untreated MDS patients.Besides, several clinical factors with independent prognostic significance are not integrated, such as red blood cell infusion dependence and gene mutations [7,8].With this in mind, recently, Bernard et al. [9] developed a clinical molecular prognostic model including somatic gene mutations, named Molecular International Prognostic Scoring System (IPSS-M).It is composed of hematological parameters, cytogenetic abnormalities and genetic mutations that classifies MDS patients into six risk categories.Compared to the IPSS-R, IPSS-M improves prognostic discrimination and is also suitable for prognostic stratification of both primary and treatment-related MDS patients.However, the IPSS-M did not include monocyte and lymphocyte counts, both of which had been demonstrated to have an independent prognostic role in cancer patients [10,11].Thus, more potential biomarkers should be investigated to precisely predict the prognosis of MDS patients.
Lymphocyte to monocyte ratio (LMR), one of the representative markers of inflammation and immune response, is defined as the absolute lymphocyte count (ALC) divided by absolute monocyte count (AMC).Lymphopenia has been observed in advanced cancer patients and demonstrated to be associated with poor outcomes in patients with various types of cancer.On the other hand, monocytes have been found to be recruited into the tumor microenvironment and promote tumor progression through local immune suppression as well as angiogenesis.Monocytosis has been verified to be a poor prognostic marker in solid tumors [12].The lower LMR may represent an active inflammatory state.It has been confirmed to be an independent prognostic factor in solid tumors and hematologic malignancies in the past few years, such as colorectal cancer, pancreatic cancer, multiple myeloma (MM) and diffuse large B cell lymphoma (DLBCL) [13][14][15][16].However, there is no consensus on the prognostic role of LMR in patients with MDS.This study aims to investigate the prognostic significance of LMR in MDS.
Patients and treatments
A total of 91 newly diagnosed patients with MDS between March 2010 and January 2021 in the Huaian No.1 People's Hospital were recruited retrospectively.Another 28 MDS patients were collected for external validation.The fifth edition of the WHO classification has revised the categorization of MDS.However, considering that the study was retrospective, the diagnosis of MDS referred to 2016 WHO classification of myeloid neoplasms and acute leukemia [3,17].The follow-up continued up to April 2021.All patients received individualized treatments based on chemotherapy, hypomethylating agents (HMAs), immunosuppressive drugs, immunomodulatory therapy, allogeneic stem cell transplantation and supportive care according to the NCCN guideline of MDS.This study was approved by the Institutional Review Committee of Huai'an No.1 People's Hospital and was conducted following the Helsinki Declaration.
The inclusion and exclusion criteria of this study were as follows:1) age > 18 years old; 2) diagnosed with MDS according to WHO Definition; 3) available laboratory data for calculating LMR; 4) patients with acquired MDS were excluded.
LMR and grouping
LMR was defined as the ratio of absolute count of lymphocyte to monocyte.The relevant parameters were acquired from the patient's laboratory test results at the time of initial diagnosis.The median LMR of all 91 patients was 6.57 (range:0.08-213.50).In the study, the cut-off of LMR was derived from X-Tile.All included patients both in the training and validation cohorts were divided into the high and low LMR groups.The clinical characteristics between the two groups were compared, including age, gender, peripheral blood counts, bone marrow blasts, IPSS-R scores, MDS subtypes and treatments.
Statistical analysis
The cut-off of LMR was determined by X-Tile (version 3.6.1,Yale University, New Haven, CT, United States) [18].Statistical Package for social sciences (SPSS, version 23, IBM SPSS Statistics 23 software, IBM Corp., Armonk, NY, USA) was used to perform statistical analysis.Mann-Whitney U-test and chi-square test were utilized to evaluate the difference between the two groups and p-value < 0.05 (2-tailed) demonstrated a statistical significance.Kaplan-Meier method has been applied to assess the correlation between LMR and OS.Furthermore, the corresponding p-value was achieved through the log-rank test.Survival curves were graphed by GraphPad Prism (version 8.0.1, GraphPad Software, San Diego, California, USA).Univariate and multivariate Cox regression analyses were conducted to investigate the prognostic factors affecting OS.In univariate analysis, p-value < 0.05 was considered statistically significant, and the corresponding prognostic factors were included in multivariate Cox regression.P-value < 0.05 were considered statistically significant in multivariate analysis.
OS was defined as the period between the first diagnosis and the death of any causes or the last follow-up.The flow chart of the study was displayed in Figure 1.
Clinical and laboratory characteristics
A total of 91 patients were included in this retrospective study.The clinical and laboratory characteristics of patients were summarized in Table 1.The cut-off of LMR was identified as 3.2.The median age of 91 patients was 65 years (range:26-88 years).There were 60 males in the study, 43 of whom were in the high LMR group and the remaining 17 in the low LMR group.There was no statistical difference between the low and high LMR groups (P = 0.555) regarding gender.The median white blood cell (WBC), ALC and AMC were 2.16*10 9 /L, 0.91*10 9 /L and 0.14*10 9 /L, respectively.The median platelet count of all patients was 46*10 9 /L (range:3-765*10 9 /L).Patients in the high LMR group had no significantly higher platelet counts than those in the low LMR group (47*10 9 /L versus 38*10 9 /L, P = 0.829).The median hemoglobin of all patients was 67 g/L (range:37-144 g/L).
Kaplan-Meier survival curves of LMR
The Kaplan-Meier survival curves were graphed and used to compare OS between the low and high LMR groups by the log-rank test.The survival curves were shown in Figure 2. Patients with higher LMR experienced better OS than those with lower LMR (P = 0.007).Low LMR was associated with poor OS of MDS patients in the validation set (P = 0.014).
Univariate and multivariate cox regression analysis
The results of the univariate analysis were displayed in Table 2.As shown in the table, patients in the low LMR group had worse OS (HR = 2.070, 95%CI 1.201-3.571,P = 0.009) than those in the high LMR group.Besides, IPSS-R (HR = 3.056, 95%CI 1.639-5.699,P < 0.001) have a prognostic impact on the patients' OS.
Multivariate Cox regression analysis was performed to explore the potential clinical factors that influence OS in patients with MDS.The results of the multivariate analysis were summarized in Table 2. LMR was an independent prognostic factor for OS (HR = 1.872, 95%CI 1.084-3.230,P = 0.024) of patients with MDS.Likewise, IPSS-R was proven to be an independent prognostic factor of MDS patients' OS (HR = 2.892, 95%CI 1.549-5.402,P = 0.001).The univariate and multivariate analysis for MDS patients' OS in the validation set were displayed in Supplementary Table 1.
Discussion
MDS is a heterogeneous group of clonal hematopoietic stem-cell disorders characterized by ineffective and dysplastic hematopoietic differentiation and variable risk that progressed to acute myeloid leukemia.The fifth edition of the WHO Classification replaced myelodysplastic syndromes with myelodysplastic neoplasms and emphasized MDS as neoplastic disease.According to the National Cancer Institute, MDS patients did not have a significantly higher five-year survival rate than AML patients (36.9% versus 30.5%).The most widely used prognostic scoring systems for MDS today are IPSS-R [19].However, it has a few limitations.For example, the 5th edition of the WHO Classification emphasized that MDS is a genetically-defined disease type [2].However, factors associated with gene mutations were not included in the IPSS-R.Recently, a new clinical molecular prognostic model incorporating somatic gene mutations has been developed under the name IPSS-M.However, neither IPSS-R nor IPSS-M included lymphocyte and monocyte counts, which were demonstrated to have predictive prognostic value in cancer patients.LMR has been demonstrated prognostic role in patients with solid tumors in the past few years.Besides, existing research focuses on the predicting value of LMR for hematologic malignancies such as MM and DLBCL [15,16].However, the prognostic significance of LMR in MDS was unclear.Thus, the retrospective study was conducted to investigate the correlation between LMR and the prognosis of MDS patients.
This study retrospectively collected clinical Information of 91 newly diagnosed MDS patients in our institution.There were 24 patients in the low LMR group and 67 in the high LMR group.Compared to patients with high LMR, those in the low LMR group had an unfavorable influence on OS of MDS patients from the survival curves (P = 0.007).Moreover, this research validated that LMR was an independent prognostic factor in MDS cases after multivariate analysis (HR = 1.872, 95%CI 1.084-3.230,P = 0.024).Additionally, IPSS-R can also independently predict MDS patients' prognosis (HR = 2.892, 95%CI 1.549-5.402,P = 0.001).
The underlying mechanisms by which LMR correlates with prognosis have not been completely elucidated.Former researchers suggested that systematic inflammation contributes to the development of malignancies and has a vital role in the survival of cancer patients [20].Specifically, systemic inflammatory and immune response causes tumor-related symptoms, including fever, sweating and weight loss (so-called B symptoms).Additionally, it can mitigate the effectiveness of treatment, increase toxicity and even lead to treatment failure [20,21].Previous studies have also demonstrated that biochemical indicators and peripheral blood counts or ratios can be used as markers of immunological and inflammatory responses.They include albumin, C-reactive protein (CRP), lactate dehydrogenase (LDH), neutrophils, lymphocytes, monocytes, platelets, neutrophil to lymphocyte ratio (NLR), platelet to lymphocyte ratio (PLR) and the systemic immune-inflammation index (SII) [22].Furthermore, a retrospective study of 503 patients with non-del(5q) MDS verified that lymphocytopenia at diagnosis has an unfavorable influence on the prognosis of MDS patients, as ALC could reflect the host's immune status [23].Monocytes differentiate into macrophages that participate in tumor infiltration and metastasis in the tumor microenvironment [11].In solid tumors, increased numbers of monocytes have been shown to be associated with worse prognosis [24,25].Thus, it can be postulated that lower LMR correlates with the worse prognosis of cancer patients.
As shown in Table 1, the number of patients with SF3B1 mutation was 8, of which 2 were in the low LMR group, and 6 were in the high LMR group.Patients with TP53 multihit , FLT3 or KMT2A mutations in the low and high LMR groups were 5 and 5, respectively.Previous studies have demonstrated that SF3B1 was the most common mutant gene in MDS patients and patients with SF3B1 mutation have a relatively good prognosis [26].MDS patients with TP53 mutation have a high risk of transformation to AML, resistance to conventional therapies and a relatively poor prognosis [27].Bernard et al. identified TP53 multihit , FLT3 and KMT2A mutations as the three predictors most associated with adverse outcomes in MDS.The low LMR group had fewer patients with SF3B1 mutation and the same number of patients with poor prognosis mutations.
Although our study confirmed no statistical difference between the two groups of patients in terms of SF3B1 mutations and poor prognosis mutations (P = 0.433), we believe that the sample size was too small to conclude whether there was a difference the two groups.Other frequent gene mutations in MDS patients were displayed in the heatmap (Figure 3).This requires physicians to comprehensively evaluate the prognosis of patients by combining various indicators such as LMR and patients' genetic mutations, select treatment options, and guide clinical practice.
The study has a few limitations.It was a retrospective, single-center study with small sample size.In the future, prospective cohort studies with larger samples are required further to explore the prognostic significance of LMR in MDS.
Conclusion
In conclusion, LMR, a representative marker of systematic inflammation and immune response, has the potential predicting value for the prognosis of MDS, which deserve to be further studied on a large scale.
Ethics
This study was approved by the Institutional Review Committee of Huai'an No.1 People's Hospital and was conducted following the Helsinki Declaration.All the patients were anonymous.Informed consent was waived because of the retrospective design of the data collection.
Fig 1 .
Fig 1.The flow chart of the study.
Fig 3 .
Fig 3.The heatmap of frequent gene mutations in MDS patients (Green represents the occurrence of gene mutation in this patient, gray represents no gene mutation.Patients with missing relevant information have been removed).
Table 1 .
The clinical characteristics of the enrolled 91 MDS patients.
Table 2 .
The Univariate and Multivariate analysis for OS in MDS patients. | 2023-05-12T06:16:28.474Z | 2023-05-11T00:00:00.000 | {
"year": 2023,
"sha1": "d763250e64b03d5f2fae6acfd670d6d980b5d0a7",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16078454.2023.2210929?needAccess=true&role=button",
"oa_status": "CLOSED",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "5536fbca8a8a1f6e01babea099a4987cbd4052a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
143512757 | pes2o/s2orc | v3-fos-license | THEORIES OF SUBSTANCE ABUSE PREVENTION IN THE WORKPLACE
Workplaces have been identified as important structures for the implementation of alcohol abuse prevention programmes (Ames, 1993; Cook & Youngblood, 1990; Gill, 1994; Heirich & Sieck, 2000; Parry & Bennets, 1998; Roman, 1990; Roman & Blum, 2002; Snow, Swan & Wilton, 2002). The need for substance abuse prevention programmes in the workplace has also been recognised by South African authors (Albertyn & McCann, 1993; Parry & Bennetts, 1998; Strydom, 1997). The National Drug Master Plan: 2006-2011 (Department of Social Development, 1997), which directs all substance abuse services in South Africa, has prevention amongst workers as one of its priority areas.
Evaluations of existing workplace-based programmes, however, have not reported much success (Ames, 1993;Holder, 1998).Moskowitz (1989:75) remarked: "Whereas many have argued for the conceptual and practical advantages of workplace prevention programs, there is an absence of both viable program models and research data to support the efficacy of this approach for preventing alcohol problems".
It is the opinion of the author that one of the contributing factors to the poor performance of workplace-based prevention programmes can be found in the conceptualisation and design of the programmes.All social programmes are based on a theory of how the intended programme will address an identified need.This means that when a programme is implemented, programme planners have an idea of how and why the efforts will address a need or lead to a change that is in the interests of the programme beneficiaries.The theory may be implicit or may be well defined.Chen (1990:39) remarks: "The question of how to structure the organized efforts appropriately and why the organized efforts lead to the desired outcomes imply that the program operates under some theory.Although this theory is frequently implicit or unsystematic, it provides general guidance for the formation of the program and explains how the program is supposed to work".In an educational alcohol abuse prevention programme, for example, the theory is probably that information on the consequences of alcohol abuse improves the knowledge of the target group; improved knowledge leads to attitude change and the change in attitude leads to behaviour change.Programme theory is constructed from the view of role players and from social science theories.However, the programme theory may be weak or wrong, especially if it is not based on scientific evidence.The experience of the author is that in South Africa many prevention programmes are implemented that are not based on scientific evidence, as described in social science theory.The reason why a programme is implemented can more often be found in customary practice than in clear thinking about theories of change and evidence-based practice.
The aim of this article is thus to discuss theoretical approaches to the prevention of alcohol abuse.Programme planners can use the information to design prevention programmes that are based on a sound programme theory, constructed from the social science theory described here.Ultimately the aim is to contribute to more viable and successful programmes and evidencebased practice.The traditional approach, the ecological approach, the health promotion approach and the coping skills approach will be discussed.
THE TRADITIONAL APPROACH
Traditionally alcohol problems in the workplace have been managed by employee assistance programmes, the formulation of a substance abuse policy, training of supervisors and managers, information sessions for employees and chemical testing for substances (Cook & Youngblood, 1990;Cook, Back & Trudeau, 1996a;Gill, 1994;Trice & Sonnenstuhl, 1990).According to the South African literature (Albertyn & McCann, 1993;Parry & Bennetts, 1998;Strydom, 1997), this approach was the one followed almost exclusively in South Africa.
DESCRIPTION OF THE TRADITIONAL APPROACH
Employee assistance programmes (EAPs) focus on the identification of employees with alcohol problems, assessment of the problem and referral to inpatient or outpatient treatment centres.Employee assistance programmes can be considered as tertiary prevention or treatment.Some authors (Roman & Blum, 1996;Strydom, 1997) see this type of services as secondary prevention, as the emphasis is on early identification and treatment.Intervention is offered before the employee loses his job and before the stage of chronic dependence is reached.Early diagnosis also means that the prognosis of treatment is better.
Employees with alcohol problems are often identified by their supervisors and referred to the EAP.Employees are identified on the grounds of their under-achievement in the workplace.This typically includes absenteeism (especially after a weekend or after pay day), increase in the use of sick leave, increase in workplace accidents, conflict with colleagues and reduced productivity.Supervisors are trained to identify the problem worker, confront the worker constructively and refer them to the EAP.
Employees can also approach the EAP on their own initiative or can be referred by family, colleagues or unions.Often disciplinary committees refer employees if it becomes evident, during the hearing, that alcohol problems contribute to the problem.
Often treatment consists of inpatient treatment.Alcohol dependence is seen as a complex problem and a multidisciplinary approach is followed.The aim of the treatment is abstinence and an alcohol-free lifestyle.Controlled alcohol use or responsible use is normally not an aim of this type of programme.
As part of the traditional approach, a substance abuse policy is drawn up.This is done in collaboration with stakeholders, e.g.management and the unions.The substance abuse policy normally specifies: the organisation"s views on the use and abuse of substances; rules regarding the use of substances at the workplace; the disciplinary procedure that will be followed where the rules are transgressed; provision for treatment and procedures for referral for treatment; the preventative and educational programmes that the workplace provides.In some organisations the substance abuse policy makes provision for testing of employees for the presence of substances in the system.Testing might be done prior to employment, or occasionally, e.g. after an accident or at random, according to the decision of the employer.Testing can contribute to the reduction of substance abuse at the workplace, but is not effective for the treatment or prevention of substance abuse.
Educational methods are used for primary prevention.The aim is to educate the workforce about the negative consequences of alcohol abuse.Employees are educated about the fact that: alcohol is addictive; alcohol abuse has personal, physiological, social and psychological consequences; addiction is a process that often starts with "innocent" social drinking; alcohol abuse is a common and extensive problem; help is available.
In addition to the education of the workforce, the management, supervisors and representatives of the unions are also trained.Training of management includes the raising of awareness about the influence of alcohol abuse on productivity, and the social responsibility of management towards employees.The programme is marketed to management in this way.Supervisor training is an important element of the programme and focuses on skills related to the identification of alcohol abuse problems, the confrontation of the employee and referral to the EAP.
The underlying theory in the traditional approach is that more knowledge will lead to attitude change and a change in attitude will lead to a change in behaviour.However, Pentz (1999) maintains that programmes that are didactic in nature and focus only on the acquisition of knowledge about substances and the consequences of abuse have no effect.The traditional approach also focuses predominantly on treatment and hence prevention plays a minor role in them.
EXAMPLES OF PROGRAMMES THAT FOLLOW A TRADITIONAL APPROACH
There is some support in the literature for programmes that implement at least three of the elements of the traditional approach, namely the formulation of the substance abuse policy, the EAP programme and education (Trice, 1990).Roman and Blum (1996) did a literature study on the efficacy of alcohol abuse treatment at the workplace.Twenty-four studies matched the inclusion criteria.Twenty-one of these studies were based on an EAP model of treatment.Three studies were educational in nature and focused on the training of employees, supervisors and management.The effect of the intervention was determined by the following factors: Changes in knowledge and attitudes regarding drinking practices; Changes in willingness to refer employees for intervention; Decrease in amount of alcohol consumed or in unhealthy drinking practices; Changes in work behaviour, e.g.better productivity and less absenteeism.The authors came to the conclusion that most studies demonstrated positive results, although there were methodological problems: "The literature therefore demonstrates the generalized efficacy of interventions that are fashioned after the EAP model in dealing with employee alcohol problems and the value of training and education in changing attitudes, behaviour, and EAP utilization" (Roman & Blum, 1996:146).However, as has already been said, most of these studies focus on treatment and not on prevention.
Of the three studies in the Roman and Blum (1996) analysis that can be described as preventative, two follow a life skills approach and a healthy lifestyle approach respectively.The study by McLatchie, Grey, Johns and Lomp (1981) can be described as an educational prevention programme.In the latter study the programme was presented to two groups of employees in a manufacturing enterprise in Ontario.A group of 61 employees consisted of supervisors, section managers and union representatives.The other group consisted of 142 hourly-paid employees.A 30-minute session was presented to the hourly-paid employees.The group with the supervisors attended a session of one and a half hours.Both groups received information on alcohol abuse, the policy of the organisation and treatment facilities.The supervisor group received additional information on the role of supervisors, managers and union representatives.Audiovisual material, group discussions and short lectures were used in presenting the material.A questionnaire was administered before and after the intervention to test knowledge about alcohol, knowledge about the policy of the organisation and willingness to accept treatment.There was no control group.The results were that both groups increased their knowledge about alcohol and about the policy of the organisation, and that the hourly-paid workers were more willing to accept treatment.The study did not test the effect on the consumption of alcohol.Cyster and McEwen (1987) also described an educational programme that was presented in the British Post Office.The aim of the project was to: improve knowledge on the nature and effect of alcohol abuse; introduce the policy of the Post Office on alcohol abuse; encourage positive attitudes towards responsible alcohol use.The information was presented by means of a video-tape.The person presenting the video was also available to answer questions.In addition, employees could play a computer game that was aimed at improving knowledge about alcohol.The video was presented to groups of 25 to 30 employees.The presentation took 30 minutes.Training of supervisors was another aspect of the programme.Questionnaires were again distributed before and after the intervention.There were no control groups.The results were that there was a small increase in knowledge, but no change in attitude towards drinking, social pressure to use alcohol, or alcohol use.
The programmes discussed in this section confirm the general conclusion that educational prevention programmes can lead to an increase in knowledge, but do little to change drinking behaviour.There is also an over emphasis on treatment in the traditional approach, to the detriment of prevention.
DESCRIPTION OF THE ECOLOGICAL APPROACH
Ames questions the general tendency to ascribe the causes of alcohol problems to individual characteristics only.She focuses instead on social and cultural factors in the workplace that contribute to alcohol abuse.Factors that have been identified are: control (policy, rules, visibility of work and efficient supervision); physical and social availability of alcohol (can alcohol be obtained at the workplace and is alcohol use accepted by co-workers); quality of work (stress, physically demanding work, exclusion from decision making, unrealistic expectations and job insecurity).Several studies were done to determine the relationship between these factors and alcohol problems in the workplace.Ames and Janes (1990) found that the workplace can sustain heavy drinking.A study amongst 6 000 employees whose services were terminated revealed that most workers drank less after the termination, although their salaries remained constant because of insurance.Delaney and Ames (1995) found that positive attitudes towards work groups led to more positive norms regarding drinking.Positive norms resulted in less alcohol use.Ames and Grube (1997) found that employees" conceptions about the drinking practices of colleagues were the most important factor in determining their own drinking patterns.Seeman, Seeman and Budros (1988) did a study on alienation and alcohol abuse.They found that a feeling of helplessness was directly related to alcohol abuse.Trice and Sonnenstuhl (1990) developed a classification of factors that contribute to workplace drinking.
The cultural perspective refers to norms and attitudes about alcohol use that develop in a particular work setting.In some professions, e.g. the military, alcohol use plays an important role in the workplace culture.The social control perspective refers to those factors that impede the worker"s integration into the workplace, e.g. a lack of supervision, low visibility of the worker (as in travelling jobs), poor management and lack of disciplinary action.The alienation perspective indicates that an absence of creativity, diversity and independent decision making in work roles can lead to feelings of helplessness and alienation, which in turn may lead to alcohol abuse.The work stress perspective focuses on the relationship between stress at work and alcohol use.Stressors can be the work environment, contents of work, role conflict, boredom, inadequate remuneration and the complexity of the work.According to Trice and Sonnenstuhl (1990), these factors are contributing factors and not necessarily causes of alcohol abuse.There is an interaction between workplace, family, personal, genetic and community factors.Albertyn and McCann (1993) emphasise the important role of cultural factors in the workplace: "The drinking population seems to move in unison up and down the consumption continuum when changes in culture occur.Individual drinking habits are closely related to drinking habits among friends in the social network.Individual drinking habits and heavy drinking in particular are products of a company"s culture.Problem drinking is a learned behavioural disorder and education on its own is useless; it has to be linked to a change in culture" (Albertyn & McCann, 1993:41).Similarly MacDonald, Wells and Wild (1999) found that a subculture of alcohol abuse was the strongest factor to predict alcohol use by workers.Ames (1993) maintains that an effective prevention programme should be based on research.Knowledge about risk factors and drinking patterns are indispensable.The following steps are proposed for the development of a prevention programme: Initiate research to determine the cost of alcohol abuse to the company (absenteeism, injuries, and disciplinary actions); Do research to identify risk factors in the workplace that contribute to alcohol abuse; Share the results of the research with management, employees and the human resources section; Develop partnerships with other stakeholders, e.g. the EAP programme personnel and health clinics; Introduce changes that eliminate or reduce risk factors.Ames (1993) warns that it is often difficult to introduce the necessary changes in the workplace and that resistance is common.Strydom (1997) described a similar design for the primary prevention of alcohol abuse in the workplace in a South African industry.Holder (1990 and1998) advocated a systems approach to the prevention of alcohol problems in the workplace.As there are many similarities between the systems approach and the ecological approach the systems approach is incorporated with the ecological approach in this study.Holder is interested in the total system to which the worker belongs, including cultural and social groups within and outside the workplace, the values and norms of these groups regarding alcohol use, patterns of alcohol consumption, and the role of the family as well as the physical and social availability of alcohol in the workplace.According to Holder, the workplace is an important subsystem of the community and influences the community, as well as being influenced by the community.
EXAMPLES OF PROGRAMMES THAT FOLLOW AN ECOLOGICAL APPROACH
The Minnesota Mining and Manufacturing company developed a programme for the primary prevention of alcohol abuse in the workplace that can serve as an example of a programme with an ecological approach (Stoltzfus & Benson, 1994).The programme originated from the traditional approach, but introduced new concepts such as the changing of the culture of the organisation, a consideration of values, attitudes and skills, and a peer-group helping programme.Methods used were still mostly educational in nature e.g. a 10-hour supervisor training programme, a 2.5-hour employee training session and a peer-group training programme.However, the programme also encouraged discussions and joint responsibility for an alcohol-free workplace.
The programme was implemented at one of the branches of the company, while another branch served as a control.Pre-and post-test questionnaires were administered.In the experimental group there was a decrease in alcohol use, an increase in prevention skills, an increase in the acceptance of responsibility for prevention, and a decrease in the effect of alcohol abuse on productivity.A five percent change in the desired direction was accepted as representing meaningful change.Lehman, Reynolds and Bennett (2002) described a prevention programme that focused on work groups.The aim of the programme was to create awareness that substance abuse was a problem of the group and not only of the individual.Additional aims were to decrease the tolerance of alcohol abuse, to decrease enabling behaviour, to increase the groups" actions towards abuse, improve attitudes towards the substance abuse policy and increase referrals to EAPs.
The work group training consisted of two four-hour sessions.Between nine and fifteen employees took part in each group.Contents of the sessions covered the importance of understanding the impact of alcohol abuse for the group, a discussion of the substance abuse policy, stress management, the risks of allowing substance abuse in the work group, a drinking culture, referral to the EAP, support of the employee with problems, and confidentiality.There was no effect on group perceptions on alcohol abuse or the experience of stress in any of the groups.In both the work groups and the traditional training groups there was an increase of knowledge of the policy and the EAP.There were mixed results with regard to drinking norms.According to the authors, the findings showed support for both work-group training and traditional training.It can be concluded that there is support in the literature for an ecological approach, although more studies need to be done to develop this approach.
THE HEALTH PROMOTION APPROACH
The health promotion approach and the lifeskills approach assume different drinking patterns amongst employees.The drinking patterns may vary from total abstinence to social drinking to problem drinking and ultimately to dependence.As different drinking patterns manifest themselves, there are different aims for treatment outcomes, e.g.abstinence or responsible drinking.
DESCRIPTION OF THE HEALTH PROMOTION APPROACH
In America there is an increasing emphasis on programmes in the workplace that promote health.However, alcohol abuse prevention does not feature regularly amongst these programmes (Cook & Youngblood, 1990).Cook and Youngblood provide the following reasons for the incorporation of alcohol abuse prevention programmes in health promotion programmes: Alcohol abuse is a health risk; The use of alcohol and drugs has an impact on all the issues that are normally addressed by a health promotion programme, e.g.stress management, weight management, physical exercise, healthy eating and spiritual wellness; Health promotion programmes usually adopt a positive, approach which can be advantageous for conveying alcohol abuse prevention messages.Alcohol abuse prevention programmes are still stigmatised and incorporating them into lifestyle programmes can serve to overcome the stigmatisation problem; Health promotion programmes can reach more employees who are heavy drinkers, but who are not dependent.These employees are normally not reached through traditional programmes.
Health promotion programmes and alcohol prevention programmes can be reciprocally reinforcing.
There is also resistance to the incorporation of alcohol prevention programmes into health promotion programmes.Presenters of health promotion programmes may choose to focus on the positive and pleasurable aspects of a healthy lifestyle and not on the more complex problems of preventing abuse.Management may be unwilling to address substance abuse.The emphasis on a healthy lifestyle and the prevention of cardiovascular problems may lead to an under-exposure of the alcohol problems.Cook and Youngblood (1990) quote two studies by Shain in which alcohol abuse prevention and the promotion of a healthy lifestyle were successfully integrated.The "Take Charge" programme extended over six hours and encouraged participants to evaluate their lifestyle with reference to cardiovascular problems, stress and alcohol abuse.Messages on alcohol abuse and its effect on health were integrated in sessions on fitness and stress.Questionnaires were administered before and after the intervention.Heavy drinkers reduced their alcohol intake by 12.3 units (men) and 9 units (women) of alcohol.Moderate drinkers also reduced their intake.There was no control group in this study."Beyond Stress" was another programme that focused on the acquisition of social skills and techniques of relaxation.In a quasi-experimental study the author found that male subjects who drank moderately reduced their intake significantly.There was no change in the control group.
EXAMPLES OF PROGRAMMES THAT FOLLOW A HEALTH PROMOTION APPROACH
Cook and his co-workers developed the health promotion approach further.Cook, Back and Trudeau (1996b) implemented a programme called "SAY YES! Healthy Choices for Feeling Good" at a manufacturing facility in the north-eastern United States.The programme was delivered into three sessions: Session 1: Introduction (45 minutes).Concepts of a healthy lifestyle, personal choices and the impact of alcohol and drug use on health and well-being were discussed; Session 2: Drugs, alcohol and a healthy lifestyle.(1½ hour).Investigate the rewards and costs attached to the use of alcohol and drugs in comparison with healthy choices, e.g.relaxation exercises, physical exercise and recreational activities.Guidelines for responsible alcohol use and skills for the refusal of drinks were presented; Session 3: Healthy choices into action (45 minutes).Guidelines were given on the process of change, e.g.setting realistic goals and acquiring social support.The programme was evaluated with an experimental research design.There were positive results for attitude towards healthy behaviour, self-efficacy and the desire to cut down on drinking.There was no impact on alcohol consumption.The authors argued that attitude change precedes behaviour change.
The next programme that the authors developed, "Working People: Decisions about Drinking" (Cook et al., 1996b) focused more strongly on alcohol use and targeted mainly the blue-collar workforce.The programme was implemented at a printing company in Atlanta.Four 30-minute sessions were presented.The sessions are described briefly below: Session 1: A Closer look at Drinking -The negative health and lifestyle effects of alcohol abuse were discussed and the idea of "cutting down" was introduced; Session 2: Some Important Facts about Alcohol -Information about the properties of alcohol, health risks associated with heavy use, definitions of alcohol use, alcohol dependence and alcohol abuse, and the signs and symptoms of dependence were discussed.Session 3: One More Pitcher? -This session focused on decision-making skills and setting personal limits for alcohol use.Refusal skills were taught.Session 4: It"s about Choices: Building Personal Power -The session focused on healthful alternatives to drinking, e.g.exercise.Additionally, parenting and providing a role model for children were also discussed.
A quasi-experimental pre-test post-test design was used to evaluate the programme.The experimental group showed decreases on two of the three alcohol consumption measures, relative to the comparison group.Positive results were found mostly for the number of drinking days, and not for the amount of alcohol consumed.
In the ongoing development of the programme Cook, Back, Trudeau and McPherson (2002) developed a new model.The authors noticed that where alcohol use played a less prominent role in the health promotion programme, as in the SAY YES! Programme, the interest in the programme was high, but the effect on alcohol use was insignificant.The situation reversed where alcohol abuse played a more significant role in the programme content.A programme was thus developed where alcohol use was integrated with the healthy lifestyle material that was presented.The programme was called "Make the Connection" (Cook et al., 2002) and there were three components, namely "The Stress Management Connection", "The Healthy Eating Connection" and "The Active Lifestyle Connection".The aim of the study was to test the effect of the alcohol abuse prevention material, presented during the health promotion programme, on attitudes and behaviour regarding alcohol use.A secondary aim was to determine whether the inclusion of alcohol abuse prevention material had a negative effect on the impact of the health promotion material.An experimental design was followed in the study.Participants were randomly allocated to the healthy lifestyle (HL) group or a group that incorporated alcohol abuse prevention with the healthy lifestyle material (HL+A).Two studies were done, one with stress management and the other with healthy eating habits.The stress management group convened for three sessions of 45 minutes each.In the experimental group (HL+A) healthy stress management practices were contrasted with the use of alcohol to relieve stress.The control group (HL) received information on stress management only.The same was done in the study on healthy eating habits.
The authors found little difference between the experimental group (HL+A) and control group (HL) in alcohol use in the first study (stress).Both groups improved their stress management skills and decreased their use of alcohol.In the second study (healthy eating) the experimental group had significantly higher measures on the connection between health and alcohol use construct and were more aware of the dangers of alcohol abuse.There was no effect on alcohol use for the experimental or control group.Both groups improved significantly in their eating habits and weight control.
The study demonstrated that alcohol abuse prevention material can be incorporated into a health promotion programme without any negative consequences for the original programme.It was also evident that alcohol abuse decreased in the stress management groups, even where the material on alcohol abuse was not included.Cook et al. (2002) referred to the work of Snow andKline (1994, 1995) and concluded that this study was an additional indication that programmes on stress management can have an impact on drinking behaviour.Heirich and Sieck (2000) found that cardiovascular intervention could be an effective means of addressing alcohol abuse.Two thousand employees were randomly allocated to receive intervention individually or in a group format.Counsellors visited the employees at their work stations and did a health assessment.Employees were very interested in the medical examination, e.g.blood tests and cholesterol tests.Half of the group was followed up after three years.Significant results were found for improved cardiovascular health.Also 43% of employees who were initially identified as high risk drinkers decreased their alcohol intake to safe limits or stopped drinking.Individual follow-up was more successful than a group approach.Heirich and Sieck (2002) did a similar study with university employees.Faculties were randomly allocated to an experimental group and control group.The experimental group received health screening and individual follow-up.The control group received no intervention.Fifty percent of employees who were identified as heavy drinkers decreased their intake at follow-up and 25% of these decreased their intake to safe levels.Of those that were described as potential problem drinkers, 42% were also drinking at safe levels after the intervention.
Information about the control groups was not available.Heirich and Sieck (2002) demonstrated that proactive assessment of health risks and individual follow-up showed positive results for the prevention of alcohol abuse.
Richmond, Kehoe, Heather and Wodak (2000) combined health screening and brief intervention for employees who demonstrated heavy alcohol use in a study at the Post Offices in Sydney, Australia.An experimental design was followed.All employees in the experimental group received general health promotion material.Those who were identified as heavy drinkers in the health screening part of the programme received brief intervention for alcohol abuse.Similarly programmes were presented to those employees who were identified with eating problems, smoking or stress.There was no reduction for alcohol use in the organization as a whole.However, women in the experimental group showed a decrease in the number of drinks they take.
The studies described in the health promotion approach demonstrated that health promotion programmes can provide a successful avenue for the presentation of alcohol abuse material.
THE LIFE SKILLS APPROACH
The life skills approach is championed by Snow and Kline and their co-workers (Kline & Snow, 1994;Snow & Kline, 1995;Snow et al., 2002;Snow, Swan, Raghavan, Connell & Kline, 2003) in particular.
DESCRIPTION OF THE LIFE SKILLS APPROACH
Important concepts in the life skills approach are those of risk factors and protective factors.Risk factors are individual characteristics and characteristics in the environment that contribute towards psychological problems (e.g.depression) and substance abuse.Exposure to multiple risk factors increases the risk of serious psychological problems.Protective factors enable a person to decrease, change or adapt his/her reaction to risk factors in such a way that the risk factors do not have negative consequences (Snow et al., 2002).Protective factors are particularly important in situations in which it is not possible to alter risk factors directly.In the workplace the research focused on risk factors such as stressors (at work and in the family), individual coping skills and social support.This research is summarised by Snow et al. (2002).Snow et al. (2003) also found that employees who reported higher demands, pressures and role conflicts were significantly more likely to experience symptoms of depression, anxiety and somatic complaints.Active coping styles correlated negatively with symptoms of psychological stress, whereas avoidance coping had a positive correlation.The life skills approach thus postulates that stressors at work and in the family and an avoidance style of problem solving are risk factors for the development of psychological problems and substance abuse.On the other hand, active problem solving and social support are protective factors that can prevent psychological problems and substance abuse.Snow and Kline (1995) presented the "Yale Work and Family Stress Project" to 239 female secretarial employees in Connecticut.Participants were randomly assigned to an experimental group (136 employees) and control group (103 employees).All participants completed questionnaires before, after, and at 6 months and 22 months after the intervention.Participants met in small groups of 10 to 12 employees for weekly sessions of 1½ hours over 15 weeks.There were three components to the programme.In the first component (10 sessions) the focus was on problem solving, the second component concentrated on re-appraisal techniques (2 sessions) and in the third component active stress management techniques were taught (3 sessions).
EXAMPLES OF PROGRAMMES THAT FOLLOW A LIFE SKILLS APPROACH
Immediately after the intervention participants in the experimental group reported significantly lower employee role stress, higher social support from work sources, lower psychological symptomatology, fewer depressive symptoms, fewer somatic complaints, less tobacco use, less anxiety and greater use of behavioural coping strategies.At the 6-month follow-up, participants also reported, amongst other benefits, lower alcohol use.At 22 months the only significant programme effect was that participants reported fewer somatic complaints.
In a second study certain refinements were made to the original intervention.Direct attention was paid to changing employees" drinking behaviour and discouraging the use of alcohol for stress management purposes.A second control group was added to exclude the confounding effect of extra attention, time off or information.In this control condition participants met for 8 sessions over 16 weeks and information was given on stress, substance use and resources in the community.The experimental group followed a similar programme as described in the first study, except that an extra session was added to the programme.The participants were 468 employees who worked at three organisations in Connecticut.Several instruments were used to determine programme effects.The post-test sample consisted of 72.6% of the original sample.Participants in the experimental group reported a greater decrease in stress in their roles as spouses and parents compared to the control groups.The experimental group also made more use of social support as a coping mechanism and less of social withdrawal.Intervention participants reported less alcohol use at post-test, particularly drinkers who were heavier alcohol users.
The results of the two studies provide support for the life skills approach to alcohol prevention in the workplace.In these studies the emphasis was on changing the behaviour of individual employees to increase their resilience through individual coping skills and social support.The authors point out that the approach has limitations and indicate the need for a comprehensive model where organisational factors, community factors and individual factors are integrated.A multi-system approach could improve the effectiveness of programmes, but the barriers to the implementation of multi-level programmes in the workplace have to be taken into account.
Increase in knowledge.
No effect on alcohol use.
Decrease in alcohol use.Decrease in negative effect of alcohol abuse.
Positive results with regard to attitudes towards alcohol use, health and intention to drink less.Decrease in alcohol use.
DISCUSSION
From this summary of approaches to the prevention of alcohol abuse in the workplace, it is evident that there are promising developments in the field of substance abuse prevention.Positive results were obtained in studies based on the ecological approach, the health promotion approach and the life skills approach.It is evident that the literature supports the ecological and cognitive behavioural approaches (both the health promotion approach and the life skills approach utilise cognitive behavioural methods).According to Cook (2002), the cognitive behavioural approach has also been found to be the most successful in the treatment of substance dependence (e.g.minimal intervention therapies, motivational interviewing and selfregulation).It is thus not surprising that cognitive behavioural methods also show promise in the prevention of alcohol abuse.
However, alcohol abuse is a multi-factorial problem and several authors (Cook et al., 1996a;Snow & Kline, 1995) point out that substance abuse should be addressed on different levels, e.g.individual, organisational, family and community levels.The question of which approach to follow thus remains.While a multi-level approach might seem more beneficial, it is not easy to gain access to a workplace and a multi-level approach might place heavy demands on management in terms of money and time.Management support and workplace politics play a crucial role in the success of a prevention programme.It is the view of the author that the approach should be determined by evidence of success from the literature, the characteristics of the target group for whom the programme is intended, and the practicalities and demands of the workplace situation.
CONCLUSION
This article summarises the most important approaches to the prevention of alcohol abuse in the workplace.It is evident that programmes that are solely focused on the increase of knowledge have not demonstrated much success.On the other hand, there is support for programmes based on an ecological and a cognitive behavioural approach.Programme planners can increase the validity of their programmes by ensuring that prevention programmes have a programme theory based on an approach to prevention for which there is evidence in the literature.
The programme was presented to municipal workers in two cities in the south-western United States; 957 employees took part.Work groups were allocated randomly to an experimental group (work group training) and a control group (traditional training), or a second control group (no training).Questionnaires were completed before the training, two to four weeks after the training and six months after the training. | 2019-05-04T13:08:13.824Z | 2014-06-13T00:00:00.000 | {
"year": 2014,
"sha1": "efd5fc29c34273f568a66edcfe160cfadb1aa052",
"oa_license": "CCBY",
"oa_url": "https://socialwork.journals.ac.za/pub/article/download/140/127",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "efd5fc29c34273f568a66edcfe160cfadb1aa052",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Sociology"
]
} |
44519539 | pes2o/s2orc | v3-fos-license | Anomalous phonons in CaFe2As2 explored by inelastic neutron scattering
Extensive inelastic neutron scattering measurements of phonons on a single crystal of CaFe2As2 allowed us to establish a fairly complete picture of phonon dispersions in the main symmetry directions. The phonon spectra were also calculated by density functional theory (DFT) in the local density approximation (LDA). There are serious discrepancies between calculations done for the optimized structure and experiment, because the optimised structure is not the ambient pressure structure but is very close to the "collapsed" structure reached at p = 3.5 kbar. However, if the experimental crystal structure is used the calculation gives correct frequencies of most phonons. The most important new result is that linewidths/frequencies of certain modes are larger/softer than predicted by DFT-LDA. We also observed strong temperature dependence of some phonons near the structural phase transition near 172 K. This behavior may indicate anomalously strong electron phonon coupling and/or anharmonicity, which may be important to the mechanism of superconductivity.
Introduction
The discovery of superconductivity at temperatures exceeding 50 K in iron arsenide compounds with general compositions RFeAsO (R = rare earth) and MFe 2 As 2 (M = alkaline earth metal) and MFeAsF has attracted great interest [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15] in these materials. At present, it is hotly debated whether these compounds are unconventional metals similar to the cuprate superconductors or can be understood within the same theoretical framework as conventional intermetallic compounds like the borocarbides or MgB 2 . Superconductivity in these compounds appears either at a critical doping level of the parent compound, or by application of pressure above a critical value. The role of the phonons for the mechanism of superconductivity is not known at present. DFT calculations predict weak electron- [9] with a negligible contribution to the superconductivity mechanism. Inelastic xray scattering investigation [10] of the phonon density of states in LaFeAsO 1-x F x and NdFeAsO, as well as measurements of a few phonon branches [10(a),11] on single crystals of BaFe 2 As 2 and PrFeAsO 1-y showed that DFT is only moderately [10(a),12] successful in predicting phonon frequencies in these compounds. The phonon density of states in BaFe 2 As 2 , Sr 0.6 K 0.4 Fe 2 As 2 and Ca 0.6 Na 0.4 Fe 2 As 2 was investigated on polycrystalline samples using inelastic neutron scattering [13,14]. Empirical models used to analyze the data again had limited success.
Experiment and phonon calculations
Single crystals of CaFe 2 As 2 (15 mm × 10 mm × 0.4 mm) were grown from a high temperature solution using Sn as flux [15]. The details of crystal characterization are given in Ref. [15]. The neutron measurements were performed on the 1T1 triple-axis spectrometer at the Laboratoire Léon Brillouin, Saclay. Measurements were done with pyrolytic graphite (PG002) as a monochromator and analyzer.
Most measurements were carried out at 300K with open collimations and double focusing on both the analyzer and the monochromator. Selected phonons were studied as a function of temperature down to T = 100 K, which is well below the magnetic/structural phase transition at 172 K.
The calculations were carried out within the framework of the LDA and GGA using a mixed basis pseudopotential method [16]. A density functional perturbation approach was used for calculating the phonon frequencies and phonon eigenvectors [17]. We employed norm-conserving pseudopotentials and a plane-wave cutoff of 22 Ryd, augmented by local functions at the Ca and Fe sites. Brillouin zone (BZ) summations were done with a Gaussian broadening technique using a broadening of 0.2 eV and 40 wavevector points in the irreducible part of the BZ.
Fig. 1
Comparison of experimentally determined [21] phonon frequencies (solid circles) in the (100), (001) and (110) directions at T = 300 K with results of density functional theory (solid lines). The calculations were based on the experimental crystal structure. The 15 phonon modes along the ∆(100), Λ(001) and Σ(110) directions can be classified as ∆ : 5∆ 1 + 2∆ 2 + 5∆ 3 + 3∆ 4 ; Λ : 4Λ 1 + Λ 2 + 5Λ 3 ; Σ : The optimized structure was initially used for the phonon dispersion calculation. However, as reported previously [18,19], we soon realized that the optimized structure is relatively far away from the experimental one. Figure 1 shows that DFT in the LDA is quite successful in predicting the phonon frequencies in CaFe 2 As 2 if, instead of the relaxed structure, the experimental one is used. It is important to emphasize here that one must impose a nearly perfect tetrahedral environment of the Fe atoms as observed in experiment in order to obtain best agreement between calculated and experimental phonon frequencies. However, even in this case it is worse than in many other [20]. This finding is similar to the previous observations on the Ba122 compounds by inelastic x-ray scattering [10(a)].
Although the calculations based on the experimental structure appear to be more accurate, some important differences with experiment remain. The main one is for the phonons of ∆ 3 symmetry between q=(0.5 0 0) and (1 0 0). Some appear around 19 meV but predicted at 22 meV and others are observed around 16 meV but predicted 2 meV lower. One must also keep in mind that the good agreement for other phonons may be somewhat misleading in the sense that the predicted eigenvectors may differ from the experimental ones even where the phonon frequencies agree. In experiment, phonon eigenvectors determine observed phonon intensities. When phonons are nearly degenerate as in CaFe 2 As 2 near 20 meV, different phonon branches may hybridize. In this case small differences between calculated and experimental frequencies result in large differences in the eigenvectors and the comparison between predicted and calculated phonon intensities is not very meaningful. However, in the case of the Σ 3 -frequencies observed at the zone boundary, the situation is clear-cut: because of the high symmetry of the zone boundary point, the Σ 3 -phonons decompose into sub-groups. The mode observed at E = 18 meV with very high intensity at (2.5,1.5,0) is single and hence, its eigenvector is completely determined by symmetry. It therefore can be unambiguously assigned to a mode calculated at 23 meV by theory. On these grounds, the disagreement between calculated and observed frequencies of Σ 3 -symmetry at the q=(0.5, 0.5, 0) zone boundary is stronger than one might guess from inspection of Fig. 1 when considering the eigenvectors, the data points shown at 18 meV and 23 meV correspond to calculated frequencies at 23 meV and 20 meV, respectively, i.e. the phonon frequencies are "flipped". We also found substantial line broadenings for a number of phonons. For instance, an energy scan at a wavevector Q = (2.5,1.5,0) and T = 300 K shows a pronounced broadening for a mode at 18 meV (Fig. 2, left), which, based on its intensity, can be unambiguously assigned to Fe vibrations. The line broadening of this branch is maximum at the zone boundary, which becomes a reciprocal lattice point in the low temperature orthorhombic phase. As already mentioned above, its frequency is considerably lower than 23 meV calculated by DFT. There is very little change of this mode on cooling from 300 K to 190 K but its linewidth shrinks considerably below the tetragonal-to-orthorhombic phase transition at 172 K (Fig. 2, right). These observations indicate a close relationship between the line broadening and the structural instability. However, there is no direct relationship between the elongation patterns of the 18 meV mode and the displacements during the phase transition. We have also carried out spin-polarized DFT-GGA calculations in the orthorhombic phase of CaFe 2 As 2 . The calculated phonon spectra for non-magnetic/spin-polarized structures are shown as dashed lines Fig. 2 left/right respectively. It appears that the calculated line widths of phonon modes are larger in the orthorhombic phase because the orthorhombic distortion leads to a splitting of modes. The agreement between our experimental results and the calculation is poor, which further suggest anomalous phonons in CaFe 2 As 2 . Unlike the case of BaFe 2 As 2 [11], including magnetism does not improve agreement with experiment ( Fig. 2 right).
Simple anharmonicity is unlikely to account for the observed line widths. Strong coupling of phonons to electron-hole excitations is another possibility. However, we calculated the electron-phonon coupling induced phonon line widths using density functional perturbation theory and found that the calculated line widths are much smaller than observed. Also, no strong broadenings appear in BaFe 2 As 2 [3], which has similar electronic and crystal structure, and also orders magnetically below 170 K. Since the main difference between BaFe 2 As 2 and CaFe 2 As 2 that the former is not close to the "collapsed" high pressure phase, this points to the proximity to the "collapsed" high pressure phase as the most probable explanation of the phonon anomalies in CaFe 2 As 2 . A mechanism for this behavior, which accounts for the difference between CaFe 2 As 2 and BaFe 2 As 2 , has been proposed in [18]. On the other hand, our observation that the broadening of the 18 meV mode becomes much smaller in the magnetic phase leads to the conclusion that the proximity to the magnetic phase is important.
Conclusions
More work is necessary to understand the effects we report here. In any case, our findings indicate that the interplay between magnetism and the lattice vibrations is in some way responsible for the anomalous phonons in CaFe 2 As 2 . That is to say, the coupling of the vibrational and the electronic degrees of freedom is stronger than calculated by DFT, and hence phonons might play an important role in superconductivity in the doped compounds. | 2017-09-30T12:27:04.051Z | 2010-11-01T00:00:00.000 | {
"year": 2010,
"sha1": "e12c463162a718a8749371b1033327dde3cbb10f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/251/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cea03ebdca8a4427290df8619c4282c5977a8513",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
34555750 | pes2o/s2orc | v3-fos-license | Expression of Ki-67 in normal oral epithelium, leukoplakic oral epithelium and oral squamous cell carcinoma
Aims and Objective: To demonstrate the presence, location and pattern of cell proliferation in different histological grades of oral epithelial dysplasia (OED), oral squamous cell carcinoma (OSCC) and normal oral epithelium (NOE) using an antibody directed against the Ki-67 antigen and its intensity of staining evaluated respectively. Materials and Methods: A total number of 100 archival paraffin embedded blocks obtained from Department of Oral and Maxillofacial Pathology were studied. The case details were retrieved which consisted of histopathologically diagnosed cases of OSCC (n = 20), low risk OED ( n = 30), high risk OED ( n = 30) and normal appearing mucosa ( n = 20) were taken as standard for comparison. Ki‑67 immunostaining was detected. Ki‑67 positive cells were counted in the five random high power fields in each case. Results: Ki‑67 labeling Index (LI) was restricted to the basal and parabasal layers of the normal oral epithelium irrespective of age, sex and site whereas it was seen in the basal, suprabasal and spinous layers in OED. Ki‑67 LI is increased in high risk cases than the low risk cases of OED. Ki‑67 positive cells in OSCC were located in the periphery of the tumor nests than the center, where frequent mitoses were observed. Conclusion: The architectural alteration evaluated by Ki-67 antibody in proliferating cell distribution in the layers of epithelial dysplasias may provide useful information to evaluate the grading of OED. Ki‑67 LI increased in high risk cases than low risk cases of OED. This study showed that over expression of Ki‑67 antigen between well‑differentiated and poorly differentiated OSCC was in accordance with histologic grade of malignancy but not in accordance with moderately differentiated OSCC.
INTRODUCTION
The cell is the basic, living, structural and functional unit of the body. [1] Cell proliferation is a biological process of vital importance to all living organisms and is fundamental to both embryonic and post embryonic existence. [2] The control on this important biological process is thought to be lost in cancer [3] and many studies have reported that abnormal cell proliferation appears to be a precursor and may be a predictor of tumorigenesis. [4] The development of cancer is a complex succession of events and multistep process in which the genomes of cancer cells acquire mutant alleles of proto-oncogenes, tumor-suppressor genes and other genes that control, directly or indirectly, cell proliferation. [5] Genetic aberrations are necessary for the affected tumor cell to express malignant phenotype.
Epidemiological studies of oral cancer showed that Southern Asia had the highest incidence of oral cancer, accounting for 18% of all cancers. [6] Oral squamous cell carcinoma (OSCC) being most prevalent type of cancer represents about 91% of the diagnosed cases of malignant tumors of the mouth. [7] The risk factors for oral cancers is closely related to lifestyle, such as tobacco use, alcohol use, poor oral hygiene and betel quid chewing habit. [8] Clinicopathologically, malignant transformation of oral precancerous lesion is observed in up to 17.5% of the cases. [9] Theories of carcinogenesis suggest that premalignant change may occur in any area of mucous membrane exposed to carcinogens with the risk of developing a second or multiple primary carcinomas. [11,12] The proliferative activity of any tissue or neoplasm can be determined by its growth rate using antibodies directed against specific antigens allowing the simultaneous analysis of cell proliferation and histology.
The two most common immunohistochemical markers used to study cell proliferation are proliferating cell nuclear antigen (PCNA) and Ki-67 antigen. [3] Of these two markers Ki-67 has been shown to be excellent for the estimation of the growth fraction in both normal and malignant human tissue and this antibody is now used as the usual standard for the assessment of cell proliferation than PCNA as it does not suffer much from the influence of internal and external factors. Its nuclear expression during a defined period of the cell cycle represents an advantage in its use as a biological marker of mitotic activity. [13] Also it has a much shorter half life, thus producing less residual staining after cells have gone through proliferative stage. [14] Its demonstration therefore indicates the proliferative stage of the cell rather than being just residual evidence of the cell that has passed through the stage. Ki-67 is not involved in DNA repair. [15] PCNA is not proliferation specific. [16] Many studies revealed a poor correlation between this antigen and other proliferation markers, in addition to clinical parameters. [17] Consequently, PCNA staining is no longer recommended for use in surgical pathology as it lacks all above mentioned advantages of Ki-67. [18] The fraction of Ki-67 positive cells is often correlated with the clinical course of the disease. Ki-67 marker has been extensively examined in oral epithelial dysplasia (OED) and OSCC. [19] Recently, it was demonstrated that Ki-67 gene suffers "over expression" in epithelial cells of premalignant and malignant oral lesions. [13] Following the above information, the present study aimed to evaluate the potential association between histologic grades of OED and OSCC by the proliferative marker Ki-67 and compare it with normal appearing mucosa.
MATERIALS AND METHODS
The retrospective study was conducted on sections obtained from the 100 archival paraffin embedded blocks of patients diagnosed histologically as OED and OSCC from the Department of Oral and Maxillofacial Pathology and Microbiology.
Immunohistochemical procedure
Immunohistochemical (IHC) detection of Ki-67 was performed using prediluted rabbit monoclonal antibody (6.0 ml), provided by Biocare Medical bearing control number: 901-325-091911, certified by ISO 9001 and 13485 bearing catalog number: PRM 325 AA, which was ready-to-use and has been standardized with Biocare's MACH 2 detection system. This antibody was stored at 2-8ºC.
For IHC procedure two paraffin embedded tissue sections for each case of the above groups were obtained using semiautomatic microtome approximately of 4 µm thickness. Out of two sections, one section was stained by hematoxylin and eosin [ Figure 1a-f] while other serial section of the same was stained with Ki-67. For IHC sections were placed on precoated slides.
Positive control consisted of paraffin embedded sections of human tonsil tissue with known antigenic reactivity to Ki-67 in the lymphoid follicles and a negative control was performed in all cases by omitting the step of primary antibody during the staining, which resulted in lack of staining in all cases.
All glassware was gently cleaned with running distilled water prior to the usage to avoid background staining and nonspecific deposits on tissue sections. The slides were fixed on a slide warming table at 60 ºC for 15 mins. The sections were cleared by passing through two changes of xylene for 10 mins each and rehydrated by passing through two changes of absolute alcohol. Then rinsed thoroughly with distilled water and kept in the distilled water koplin jars until antigen retrieval.
Preparation of antigen retrieval solution: Diva Decloaker solution which was concentrated 10 × was diluted with distilled water in the ratio of 1:10. The slides were placed in above prepared buffer solution in a slide racks and kept in Decloaking chamber. Distilled water of 500 ml was poured in Decloaking chamber before keeping the racks. The chamber was closed with the lid and switched on by pressing the start button on the front panel. The temperature was allowed to rise till 125ºC and then allowed to gradually taper down till 90ºC followed by gradual cooling back to the room temperature. Slides were rinsed thoroughly with distilled water.
IHC staining procedure: All reagents were brought to room temperature prior to immunostaining. Incubations were performed at room temperature in a humidifying chamber and sections were not allowed to dry out during the staining procedure.
• The sections were blocked for any endogenous peroxide activity for 5 mins and then washed • Sniper Protein Block was used for 10 mins • primary antibody was applied for 30 mins • MACH 1 polymer wasapplied for 35 mins • Betazoid DAB chromogen was applied for 10 mins • The sections were counterstained with CAT hematoxylin and then rinsed firstly with buffer and later by distilled water. Then slides were mounted in DPX.
The slides were viewed under bright field microscope and compared with their respective H and E sections. Cells were considered to be positive for Ki-67 antigen if there was any staining of nucleoplasm or nucleoli as this is a component of the nuclear matrix and a cell cycle-associated nuclear antigen according to the study by Verheuen et al., [20] All sections were evaluated for distribution and intensity of the immunohistochemical reaction product. The staining pattern was observed and made a note of. Diffuse staining of the nucleoplasm and a granular pattern, which stained nucleoli or granules of different size were dispersed throughout the nucleoplasm. Some nuclei showed a mixed pattern; that is strongly stained granules against a diffusely positive background. We classified these nuclei as granular.
The staining intensity was classified as strong, moderate or weak.
In this study nuclear distribution was found to be granular, diffuse or a combination of both which is in agreement with previous reports. [18] Two observers performed the counts twice independent of each other, but from the same areas of the epithelium regardless of staining quality to overcome inter as well as intra observer bias.
Ki-67 LI was determined by the number of positive nuclear profiles/mm 2 of epithelial cells. Preferably 5 nonoverlapping areas of high power field (40X = 0.1 mm 2 of epithelium) of a IHC slide were captured by Olympus DSLR camera and these areas were viewed for Ki-67 expression. These photographs were analyzed by image analyzer Biowizard Dewinter software 4.1 version with grid system. Positive cells were counted as recommended by Iamaroon et al., (2004). [21] The Ki-67 protein was extensively examined in OED where the number of proliferating cells increased according to the grade of dysplasia and OSCC. The histological examinations mainly focused on the total number of positive cells within the epithelium as an index of malignancy rather than for architectural distribution within the altered epithelium. The nuclear expression of Ki-67 antibody was counted according to epithelial layers/strata as the basal layer, with positive nuclei present just above the basement membrane; parabasal layer, positive nuclei within two layers above the basement membrane and next to the basal layer; and suprabasal layer, with nuclear positivity in layers/ strata above the parabasal layer.
Statistics
A descriptive statistical analysis was carried out to tabulate our results. Significance was assessed at 5% level of significance. The values were noted and subjected to Analysis of variance (ANOVA) and Post-hoc Tukey test with P < 0.001 as statistically significant.
RESULTS
The Ki-67 expression was detected in all cases of normal oral epithelium (NOE) and was restricted to the basal and parabasal layers of the epithelium with parabasal layer showing intense staining which is in accordance with a previous report by Takeda et al. [22] Overall comparison of Ki-67 positivity among study groups showed variable results Ki-67 positive cells comparison between Group I [Figures 2a and b] and Group II, showed 'P' value of 0.0005 which is strongly significant statistically whereas between Group I and low risk Group II was not statistically significant as the sample size was more for low risk OED than NOE. This was in accordance with the study by Alfredo Maurıcio Batista de Paula et al., [23] indicating that inflammation induces an increase in the number of epithelial cells in proliferative/cell cycle stage. Comparison between Group I and III was not statistically significant; whereas comparison between Group II and III was moderately significant statistically [ Table 1 and Graph 1]. Strong significant differences were observed between Group I and high risk Group II (P < 0.001). Comparison between low risk [ Figure 3a and b] and high risk Group II OED [ Figure 4 a and b] showed significant P value (<0.001) [ Table 2 and Graph 2].
The nuclear Ki-67 positivity was found to be increased and positivity was observed reaching up to the superficial layers of the epithelium, according to the grades of dysplasia.
Comparison of Ki-67 positivity among WDSCC, MDSCC and PDSCC
Statistical analysis showed no significant results. The PDSCC showed the highest mean of Ki-67 labeling index LI followed by WDSCC and MDSCC. Ki-67 positivity in OSCC located at the periphery of the tumor nests than the center of WDSCC [ Figure 5a and b] which appeared granular but whereas it was diffuse and patchy in most of the PDSCC [ Figure 6a and b; Table 3 and Graph 3].
DISCUSSION
Oral mucosa is made up of stratified squamous epithelium (SSE); the stratification is the result of cell proliferation and sequential differentiation. [24] Proliferation is a property of stem cells of the basal layers of the SSE and their immediate progeny, the transit-amplifying cells. [25] Differentiation starts when recently divided cells detach from the underlying extracellular matrix. [26] As the differentiating cells mature, they are pushed toward the epithelial surface by the pressure generated in the underlying proliferation compartment. [24] Proliferation and differentiation are controlled by autocrine and paracrine factors generated by the keratinocytes; the cytokines and growth factors originating in the underlying connective tissue and the circulating systemic factors. [27] Cell proliferation, a vital biological process, is an important adjunct to histologically based tumor classification and has potential relevance as an indicator of treatment response and relapse. Many studies have reported that abnormal cell proliferation appears to be a precursor and may be a predictor of tumorigenesis. [4] Various immunohistochemical markers are used to detect cellular proliferation of which Ki-67 is used as a more reliable marker of proliferation in our study.
The monoclonal antibody Ki-67 was first described in 1983 by Johannes Gerdes et al., who suggested that it might be used as a marker for proliferating cells. [28] Immunostaining with antibodies to Ki-67 antigen is well established as a quick and efficient method for evaluating growth fractions of various tumor types because of its distinctive reaction patterns that exclusively involves the proliferating cells. [29] The Ki-67 antibody was first isolated during attempts to raise monoclonal antibodies to antigens specific for Hodgkin and Reed-Sternberg cells. [28] Ki-67 stood out from other antibodies because it only reacted with cells which were proliferating, for example cortical thymocytes and cells in the crypts of the small intestine, whereas it would show no reaction with cells which were known to be in a resting or terminally differentiated state, such as liver cells and neurones. [29] The Ki-67 antigen was named after its place of characterization in Kiel, Germany and because the clone producing the antibody was grown in the 67 th well of tissue culture plate. [30] It is a large basic protein found as peptides with molecular weights of 345 kD and 395 kD [31] which have been detected within the nucleus and its gene is located on chromosome 10q25-ter. [32] Ki-67 is not expressed in cells showing an arrest in cell cycle and starts to be expressed in the S-phase, progressively increasing through S and G2 phases which reaches a plateau at mitosis as appropriate stimulation occurs in G1 phase where there is a subsequent increase in the level of Ki-67 protein. If no proper stimulation to proliferate is received then the cell enters Go and production of the Ki-67 protein drops to an undetectable level. [29] Our aim was to study and interpret the relationship of Ki-67 LI with different histological grades of OED and histologically diagnosed different grades of OSCC. Again these were compared with NOE for the proliferative index. In our study the Ki-67 expression in all the cases of NOE was found to be restricted to the basal and parabasal layers of the epithelium and mainly presented in the parabasal layer where the numbers of proliferating cells were limited when compared to the basal layer of NOE. There were no significant differences of LIs between groups by age, sex and region.
Several lines of evidence, including clinical, experimental and morphological data, support the concept that squamous cell carcinoma of the upper aerodigestive tract arises from noninvasive lesions of the squamous mucosa. These lesions encompass a histological continuum between the normal mucosa at one end and high grade dysplasia/carcinoma in situ, at the other end, establishing a model of neoplastic progression. [33] Cancer being a genetic disorder, involves multiple alterations of the genome progressively accumulated during a protracted period, the overall effects of which surpass the inherent reparative ability of the cell. [33] Histologically, the majority of oral cancers are OSCC. In the oral cavity, OSCC is thought to develop from precancerous dysplastic lesions by multistep carcinogenesis. In fact, OSCC frequently co-exists with or is surrounded by epithelial dysplasia or leukoplakia. Clinicopathologically, malignant transformation of oral precancerous lesions is observed at a frequency of up to 17.5%, [6] although malignant transformation may rarely also develop directly from normal epithelium. [34] In the course of its progression, visible physical changes take place at the cellular level (atypia) and at the resultant tissue level (dysplasia). These alterations include genetic changes, epigenetic changes, surface alterations and alterations in intercellular interactions. The sum total of these physical and morphological alterations are of diagnostic and prognostic relevance and are designated as precancerous changes. [33] Maerker and Bukradt found a correlation between the development of carcinomas and the grade of dysplasia of the primary lesions. Oral leukoplakia is a precancerous lesion that can exhibit the histopathologic features of OED. [27] The percentage of leukoplakia that progress to invasive OSCC is accepted to be directly related to the severity of dysplastic changes. They range from 5% for leukoplakia with mild to moderate dysplasia to up to 43% for leukoplakia showing with severe dysplasia or carcinoma in situ (CIS). Patient can be presented with multiple lesions at the same site or different site. This phenomenon is usually referred to as field cancerization suggesting that these patients exhibit susceptibility to malignant transformation through the epithelia exposed to exogenous carcinogens (usually tobacco related) that result in higher probability of developing multiple precursor lesions and malignancies at other sites. [35] OED presents as an alteration of the cellular maturation in the epithelium and as increase in the proliferative activity in suprabasal layers, that is spinous layer, which helps to establish a more objective diagnosis. Studies have revealed that Ki-67 positivity increased according to the proliferative activity and degree of epithelial dysplasia. Thus implicating as a marker of the proliferation and exhibiting the degree of severity of OED.
In the present study, OED group is subdivided as suggested by Kujan et al., [24] into low risk group and high risk group. In this study proliferation was seen in the basal, parabasal and lower spinous layer in the low risk lesions, whereas it has extended to the superficial part of the spinous layer in high risk lesions. The number of proliferating cells which had stained positive had increased till the superficial layers of the epithelium according to the grade of dysplasia as it is increased in high risk than low risk Group II cases and up to CIS. This increased proliferation in parabasal layers of premalignant oral epithelium is likely related to loss of heterozygosity in 3p, 9p and 17p which behaves as a marker of precancerous fields and increases the risk of developing multiple tumors as stated by Tabor and Brakenhoff et al. [33] Increase in Ki-67 LI was seen in the basal, parabasal and spinous layers of OED as proliferative activity increased due to cellular alteration. The Ki-67 positivity was constant through every grade in the parabasal layer. The number of Ki-67 positive cells when compared between Group I and low risk Group II, was not significant statistically as the P value obtained was 0.421. In low risk OED maximum expression of Ki-67 was in the basal layer followed by parabasal and then spinous layer which showed the least expression. Whereas, it was highly significant statistically when Group I and high risk Group II were compared (P < 0.001). When Group II low risk OED and high risk OED were compared the P value obtained was < 0.001 was highly statistically significant.
The increased proliferating cell population in both basal and suprabasal layers of OED in this study suggest that proliferating cells might increase not only in a superficial direction but also downward to the basal layer in OED.
Ki-67 positive cells in WDOSCC were located in the periphery of the tumor nests where frequent mitoses were observed than the central areas of squamous maturation which suggest that less differentiated cells are located in peripheral layer and the central cells are highly differentiated with an ability to keratinize, thus no expression of Ki-67 was observed in the central cells of the tumor island.
In MDSCC, Ki-67 expression observed in both peripheral and part of central layer as cells were less differentiated than WDSCC and had shown a lesser proliferation rate when compared to WDSCC which was not in accordance with the previous studies but correlates with the study done by Roland et al., (1994) [36] and Piffko et al., (1996) [37] on OSCC.
PDSCC Ki-67 expression was diffuse and more intense as the cells were less differentiated than WDSCC as well as MDSCC. More number of cells were in proliferative phase and hence showed an increase Ki-67 LI than WDSCC and MDSCC. These findings correlate with previously mentioned studies. The staining of Ki-67 positive cells was patchy in most of the PDSCC whereas it was granular and localized to the nuclei in cases of MDSCC and WDSCC.
In our study when Group I and Group III were compared, no statistical significance was observed, but when Group II and Group III were compared the P value obtained was 0.032 which was moderately significant statistically which signify the fact that dysplastic epithelium holds a high potential for malignant transformation.
In conclusion, we propose that Ki-67 is a reliable proliferative marker which can be used for the diagnosis of OEDs which have tendency to undergo malignant transformation. Information on the growth fraction of the tumors may be used in the assessment of tumor grade and in all tumors which have been studied by Ki-67 staining, a highly significant correlation between Ki-67 staining and the degree of malignancy has been reported. Furthermore, a marked variation in the amount of Ki-67 expression within different tumor grades is observed, indicating that Ki-67 staining may be of use in individual tumor diagnosis and prognosis. | 2018-04-03T02:41:45.677Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "aa350ae762f0a211cf08ad2a4b35028e40953381",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4196282",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "10eaa74fe2f6219cd356660fbf1e9b9c8d1e49fb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10776868 | pes2o/s2orc | v3-fos-license | A variant of sparse partial least squares for variable selection and data exploration
When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed “all-possible” SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a “large” number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
INTRODUCTION
In fields such as neuroscience, chemometrics, and genetics, data is often collected on a large number of variables but with a relatively small sample size, and predictors may also be highly collinear. Statistical methods used in this setting include regression models, cluster analysis and/or tree-based methods, ridge regression and dimension-reduction techniques such as partial least squares (PLS). However, when variable selection is the goal, these may prove inadequate or difficult to interpret.
In the realm of ordinary least squares (OLS), multicollinearity affects both the stability of the estimated coefficients (Wold et al., 1984) and inference on these estimates (Farrar and Glauber, 1967). Essentially, model prediction ability is poor when estimates are unstable (Wold et al., 1984), and one cannot trust conclusions drawn from test statistics, p-values or confidence intervals due to artificially inflated standard errors (Farrar and Glauber, 1967). As an alternative to OLS, ridge regression (Hoerl and Kennard, 2000;McDonald, 2009) and PLS account for multicollinearity and/or over-fitting. However, they are not intended for variable selection without additional computation such as bootstrapping (Abdi, 2010).
In PLS, latent variables (linear combinations of the predictors) are formed using both the outcome(s) and predictors such that all pairs of latent variables are orthogonal and have a sample correlation of zero (Garthwaite, 1994). Regression models are then fit using these latent variables rather than the original predictors and multicollinearity is no longer a concern. In addition, the number of latent variables is often smaller than the number of predictors, so that PLS reduces the dimensionality of the data and the likelihood of over-fitting. However, all predictors are assigned a non-zero weight and inference is not provided, so that variable selection is not readily achieved (Tobias, 1997;Chun and Keleş, 2010). Further detail on the theory underlying PLS regression is available elsewhere (Garthwaite, 1994;Wold et al., 2001;Krishnan et al., 2011).
Given standard PLS is not intended for variable selection but rather prediction, sparse methods such as sparse partial least squares (SPLS) were developed. Variable selection is accomplished by using tuning parameters in the modeling process, which drive both the latent variable selection and computation of predictors' weights (Chun and Keleş, 2010). Here, estimates may be set to zero, indicating a predictor is not significantly associated with the outcome.
Although some weights are zero so as to provide variable selection, this can also be viewed as a weakness of SPLS. In data exploration and hypothesis generation, effect size and p-values, despite insignificance, are often of interest. During exploratory analyses, one may wish to increase the type-I error rate and allow variables that would otherwise be borderline significant or insignificant into the set of selected predictors. Also, one may wish to compare standardized estimates of various predictors despite insignificance. None of this information is provided by executing SPLS in its traditional manner.
To address these shortcomings, an alternative approach, referred to here as "all-possible" SPLS, is proposed. Briefly, a SPLS model is fit for "all possible" values of the model's tuning parameters, as opposed to fitting only one model based on the "optimal" parameters (this latter approach will be referred to as "traditional" SPLS). Predictors are ranked by the percentage of time they are chosen across all models, and the average of non-zero standardized parameter estimates is given for all predictors, even those not chosen by traditional SPLS. Although not formal inference such as a p-value, the former gives the relative ranking of predictors, allowing one to identify potentially borderline significant variables, as well as those least likely to be predictive of the outcome. Simulation confirms predictors most strongly associated with the outcome are robust to changes in the tuning parameters and continue to be selected as sparsity increases, while those with the weakest association are less likely to be chosen under high levels of sparsity. This approach yields supplementary information lost in the traditional application of SPLS, providing increased insight into one's data.
TRADITIONAL SPLS
The spls package (version 2.1-0) in R (version 2.13.2) based on the theory presented by Chun and Keleş (2010) is considered here. The algorithm requires the specification of two tuning parameters, K and η. K (an integer between 1 and min{p, (v -1)n/v}, where v is the number of folds for the cross-validation (CV), p is the number of predictors and n is the sample size (Chung et al., 2009)) is the number of latent variables and η (a continuous value on the interval [0, 1)) determines the amount of sparsity in the algorithm. In general, lower values of η represent less sparsity (and thus more variables tend to be selected), whereas higher values imply more sparsity. However, the choice of K also affects variable selection in conjunction with η (lower values of K tend to result in fewer chosen variables).
To facilitate the choice of K and η, the package includes a CV function, where the "optimal" K and η are those with the lowest mean squared prediction error. For the purposes of this paper, "traditional" SPLS refers to the use of this CV to choose one pair of "optimal" tuning parameters. Once determined, the SPLS model is fit and selected predictors are noted.
While using traditional SPLS, it was discovered the selection of optimal tuning parameters was affected by the seed if CV other than leave-one-out (LOO) was used. For example, for 1000 randomly-chosen seeds, the optimal values of the tuning parameters chosen most often by a 10-fold CV in the real data used in Section Data Application: Volumetric MRI Regions as Predictors of Cognitive Test Results were K = 2, η = 0.7. However, they were only chosen for 171 seeds out of 1000-about 17% of the time. The next pair chosen most often was K = 3, η = 0.6, at 106 times. All of the remaining pairings were chosen less than 10% of the time, so that no one pair was selected notably more than the others. Note that if K and/or η differ only by one unit, this can mean the addition or exclusion of one or more variables from the results. Here, eight predictors were chosen by the first set of tuning parameters, whereas 17 were chosen by the second, indicating instability in the tuning parameter values can cause instability in the variable selection process, affecting conclusions. Because of the unreliability of the 10-fold CV with these data, LOO CV is recommended for traditional SPLS.
Another consideration with the CV is how fine of a grid to use when searching for the optimal value of η, since, again, it is continuous. In the examples provided by the authors of the spls package, η may be one of 0.1, 0.2, 0.3, . . . , 0.9 (Chung et al., 2009;Chun and Keleş, 2010). Given this, and also the fact that considering more η-values results in significantly more computational time, η-values of 0.1, 0.2, 0.3, . . . , 0.9 were used in this paper as well.
"ALL-POSSIBLE" SPLS "All-possible" is quoted because, given η is continuous, one cannot actually achieve every possible combination of tuning parameters. Given a discrete subset of η (here, {0.1, 0.2, . . . , 0.9}), however, one considers "all possible" combinations of the parameters. Specifically, there will be K × η total models fit, one for each combination of K and η, with standardized estimates recorded in each instance. The results are the percentage of time chosen (i.e., the parameter estimate was non-zero), as well as the average non-zero standardized parameter estimate.
It should be noted that with this method it is expected all predictors will be chosen a reasonable number of times (usually in at least 70% of the models). This is because once a large enough K-and/or small enough η-value is used, the method no longer induces enough sparsity to allow for variable selection-it essentially acts like PLS and chooses all variables. Since all pairings of K and η were considered here, many of them resulted in all variables being selected.
There are two advantages to all-possible SPLS. First, by ranking the variables based on how often they are chosen across all models, one has a relative way to compare them, as opposed to "chosen" or "not chosen." Specifically, one can see those variables selected most and least frequently, as well as those that were somewhere in between. In this way, one obtains a continuum of information instead of a dichotomy. Second, an effect size for all predictors-not just those chosen by traditional SPLS-is provided. Thus, even if a predictor was only selected 75% of the time, one still has information on its estimate whenever it was selected.
SIMULATION SIMULATION STRUCTURE
A design analogous to that in Chun and Keleş (2010) was used to create collinear predictors of varying association with the outcome-one set of predictors was strongly associated, another weakly and a third not at all. For j = 1, 2, 3 and c j − 1 + 1 ≤ i ≤ c j , where (c 0 , c 1 , c 2 , c 3 ) = (0, 7, 17, 27), predictors were of the form x i = m j + ε i . Given a sample size of n = 100, m j were each vectors of length 100 from N(0, 20I 100 ) and ε i ∼N(0, I 100 ). Lastly, y = 2m 1 − 0.2m 2 + τ, where τ ∼N(0, I 100 ). All variables were standardized while other settings for the SPLS function were kept at default.
PREDICTORS WITH WEAKER ASSOCIATION ARE LESS LIKELY TO BE CHOSEN WITH INCREASED SPARSITY
This simulation demonstrated how predictors with varying levels of association with y are affected by changes in the tuning parameter pair, (K, η). The general pattern is that for lower values of K and higher values of η, sparsity increases and fewer variables are selected. Here, K = {1, . . . , 27} and again η = {0.1, . . . , 0.9}.
Consider three sets of predictors: S 1 = {x 1 , . . . , x 7 } (strongly associated with y), S 2 = {x 8 , . . . , x 17 } (weakly associated) and S 3 = {x 18 , . . . , x 27 } (not associated). For each d = 1, . . . , D = 1000 samples drawn randomly from the distribution as outlined in Section Simulation Structure, a SPLS model was run for all pairs of K and η. The percentage of predictors chosen from each set was noted for each pair and the average across all 1000 data sets is shown in Figures 1A,B for S 2 and S 3 . Note that K only ranges from 1 to 15, as after K = 15, the average was 100% for all pairs of tuning parameters. For S 1 , all seven predictors were always chosen (i.e., the average was always 100%).
These results confirm variables in set S 3 (not associated with y) were less likely to be chosen as K decreased and η increased (i.e., sparsity increased). Variables in S 2 showed a similar pattern due to their weak association, although their rate of selection was notably higher than those in S 3 . The fact that all variables in S 1 were chosen for 100% of the (K, η) pairs across all D data sets shows strongly associated variables are robust to changes in the tuning parameters. Subsequently, calculating the percentage of time a variable is selected over all pairs of tuning parameters (i.e., conducting all-possible SPLS) will result in those with the strongest association having the highest percentage of time chosen, while the opposite will be true for those with the weakest. This is shown via simulation in the next section.
PERCENTAGE OF TIME CHOSEN AND AVERAGE NON-ZERO STANDARDIZED ESTIMATES
For each of d = 1, . . . , D = 1000 samples from the distribution as described in Section Simulation Structure, all-possible SPLS was conducted: For a given data set, a SPLS model was run for all pairs of K = {1, . . . , 27} and η = {0.1, . . . , 0.9}. Recorded was the percentage of time each variable was chosen, as well as the mean non-zero standardized parameter estimates. Table 1 reports the average of these percentages and mean estimates across all 1000 samples, in order to assess the method's behavior in the long run.
The average percentage of time chosen for all predictors in S 1 was 100, while those in S 2 and S 3 were all chosen around 96% and 90% of the time on average, respectively, resulting in three distinct groups. The average mean non-zero standardized estimates for those in S 1 were all around 0.15, while those in S 2 were FIGURE 1 | (A) shows the average percentage of variables in S 2 selected for each pair of tuning parameters across D = 1000 simulated data sets, while (B) shows this for S 3 .
Frontiers in Neuroinformatics
www.frontiersin.org about −0.01, and those in S 3 were always smaller than those in S 2 (and S 1 ). Both the magnitudes and directions of the estimates for S 1 and S 2 were as expected given the structure of the data outlined in Section Simulation Structure and the fact that estimates were standardized. The small magnitudes and varying directions of predictors in S 3 were reasonable, as they should have estimates that hover around zero.
DATA APPLICATION: VOLUMETRIC MRI REGIONS AS PREDICTORS OF COGNITIVE TEST RESULTS
In neuroimaging, brain regions tend to be numerous and highly correlated, so that over-fitting and multicollinearity are of concern. Here, a well-established predictor-outcome relationship is used to illustrate the proposed SPLS method.
Participants
Data were obtained from the Cardiovascular Health Study (CHS), which is an ongoing, population-based, longitudinal study, and the Healthy Brain Project (HBP), a sub-study of the Health, Aging and Body Composition (Health ABC) Study, which is also longitudinal and population-based. The CHS is a study of coronary heart disease and stroke risk in older adults. Briefly, 5888 community-dwelling older adults were identified between 1987 and 1993 from Medicare eligibility lists in four clinical centers (Forsyth County, NC; Sacramento County, CA; Washington County, MD and Pittsburgh, PA) (Fried and Borhani, 1991). Participants were recruited if they were age 65 or older at time of recruitment, non-institutionalized, not wheelchair-bound or undergoing active cancer treatment, able to give informed consent and expected to remain in the area for at least 3 years. The participants had annual clinic examinations through 1998-1999.
Brain MRIs were acquired for 523 participants in Pittsburgh in -1999(Lopez et al., 2003. Compared to the participants who did not have a brain MRI, these participants were younger, more likely to have more years of education and had a lower prevalence of cardiovascular diseases and cerebrovascular findings (Rosano et al., 2006(Rosano et al., , 2007a. In 2003-2004, a random sample of 327 brain MRIs from the 523 were re-read (Rosano et al., 2005(Rosano et al., , 2007a(Rosano et al., ,b, 2008. No significant differences were observed with regard to demographics or health-related factors between these 327 participants and the 523 total subjects. The Health ABC study began in 1997-1998 as a longitudinal, observational cohort study of 3075 well-functioning older adults from Pittsburgh, PA and Memphis, TN (Simonsick et al., 2001). Participants were enrolled if they were 70-79 years old and reported no difficulty walking a quarter of a mile (400 meters), climbing 10 steps or performing activities of daily living; were free of life-threatening cancers with no active treatment within the prior 3 years and had planned to remain within the study area for at least 3 years. In [2006][2007]314 Health ABC participants from the Pittsburgh site who were interested in and eligible for a brain 3T MRI received a MRI in addition to in-person Health ABC assessments. This ancillary study of the Health ABC is referred to as the HBP.
Both studies have been approved by the institutional review boards of the University of Pittsburgh.
Magnetic Resonance Imaging (MRI) Measures
In both the CHS and HBP, brain MRI assessments included volumetric measures of gray matter for both individual regions and the whole brain. The brain MRI protocol for the CHS carried out in 1997-1999 has been described elsewhere (Yue et al., 1997). Briefly, sagittal T1-weighted localizer sequences and axial spin-echo spin-density-weighted, spin-echo T2-weighted and T1-weighted images were acquired using a 1.5T scanner. A volumetric Spoiled Gradient Recalled Acquisition (SPGR) sequence with parameters optimized for maximal contrast among gray matter, white matter and cerebrospinal fluid (CSF) was acquired in the coronal plane (echo time/repetition time (TE/TR) = 5/25, flip angle = 40 deg., NEX = 1, slice thickness = 1.5/0 mm interslice). All MRI data were interpreted at a central MRI Reading Center using a standardized protocol (Bryan et al., 1997;Yue et al., 1997).
The protocol for the HBP study was performed with a Siemens 12-channel head coil and 3T Siemens Tim Trio MR Voxel counts of the gray matter were obtained for individual regions of interest and for the whole brain using a procedure previously described (Zhang et al., 2001;Tzourio-Mazoyer et al., 2002;Rosano et al., 2005;Wu et al., 2006). After skull and scalp stripping (Smith, 2002), and after segmentation of gray matter, white matter and CSF, the brain atlas and the individual subject brain were aligned and intensity normalization was done on each subject's structural image (SPGR for the CHS and MPRAGE for the HBP images), as well as on the template colin27, to give each subject the same orientation and image intensity distribution as the template and to improve the registration accuracy. For both the CHS and HBP, FMRIB-FAST was applied to segment the image into gray matter, white matter and CSF, while also correcting for spatial intensity variations such as bias field or radio-frequency inhomogeneities (Rosano et al., 2005;Wu et al., 2006). The registration procedure used a fully-deformable automatic algorithm (Thirion, 1998) that does not warp or stretch the individual brain, and thus minimizes measurement inaccuracies (Wu et al., 2006). Volumes were converted from number of voxels to cubic millimeters.
Dependent variable
Scores from the Modified Mini-Mental State Examination (3MS) were used as the dependent variable, as it is a highly studied outcome with regard to memory. The 3MS is a brief, general cognitive battery with components for orientation, concentration, language, praxis and immediate and delayed memory (Teng and Chui, 1987). Because scores tend to be clustered at the high end of the scale, a transformation for left-skewed data was used:-ln(101 -3MS), where 3MS represents the test score for a given individual (Shackman et al., 2006).
Regions of interest and confounding variables
A tiered hypothesis was formed based on the strength of current findings, with the expectation that primary regions would have the strongest association with 3MS, followed by secondary regions. A third set of regions referred to as "non-hypothesized" were not expected to be associated with the outcome.
The primary hypothesized regions were the hippocampus, parahippocampus and entorhinal cortex (Zola-Morgan and Squire, 1993;Dickerson et al., 2001). The secondary hypothesis included additional memory-related regions: amygdala, caudate and medial parietal, lateral parietal and posterior cingulate cortices (Packard and Knowlton, 2002;Koivunen et al., 2011;Squire and Wixted, 2011). Lastly, non-hypothesized regions were those traditionally related to motor tasks and performance (not memory): putamen, pallidum, thalamus, supplementary motor cortex, cerebellum, and post-central and pre-central gyri (Rosano et al., 2007a). Because the pallidum measurements were highly skewed right, the natural logarithm of these values was used. Regions were not normalized, as total gray matter parenchyma was included as a covariate.
The following variables were included as predictors in all models because of prior work indicating an association with 3MS and/or brain structure (Brickman et al., 2008;Raji et al., 2010): race (coded as white and all other races), sex, age, obesity (indicated by a BMI greater than 30) and total brain parenchyma volume (here, represented by total gray matter volume). The treatment of confounding variables here is analogous to that in the OLS regression framework: They were included in all models and never removed, even if they were ultimately not significant. Thus, the interpretation of a set of selected variables is that they are significantly related to the outcome, controlling for confounding variables and all other brain regions.
Influential points
Before the analysis commenced, potentially influential data points were determined by modeling each predictor against each outcome individually and calculating externally studentized residuals in each case (SAS Institute Inc, 2008). Any observation with a residual greater than 2.5 in absolute value was removed from the analysis (this value is slightly less conservative than the cut-off of 2 suggested by the SAS documentation).
Three observations were removed from the HBP data based on the above criterion, while 11 were removed from the CHS. In both data sets, influential points were those with a notably small/large 3MS value paired with a large/small regional volume. The only exception was one observation in the HBP data, which had a very large total brain volume relative to the other subjects. For each data set, there were some subjects with invalid MRIs and/or missing covariate values, so that after removing these subjects and also the influential observations, the final sample size for the CHS was n = 286, while n = 302 for the HBP. In Table 2, p-values for differences in demographic measures between the CHS and HBP cohorts were obtained either by a chi-square test, two-sample t-test or the Kruskal-Wallis Test when normality was suspect.
Analyses were conducted using R version 2.13.2 (spls package 2.1-0) and SAS version 9.2 (SAS Institute Inc, 2008). Both the dependent and continuous independent variables were standardized, and, unless otherwise mentioned, all other settings were kept at default for all functions/procedures used. Run-time for the Frontiers in Neuroinformatics www.frontiersin.org SPLS analyses of interest was less than 5 minutes on a machine with the Windows 7 operating system (64 bit) and a 2.16 GHz Intel Core i7 processor.
MULTICOLLINEARITY AND OVER-FITTING
Multicollinearity was assessed using the condition number by fitting an OLS regression model that included all regions of interest and a priori confounders, where a value greater than 100 indicated significant multicollinearity (Belsley et al., 1980). The CHS cohort had a condition number of 190, while the HBP group had a value of 227. Since both are notably larger than 100, multicollinearity is likely present in these data when all MRI regions are considered simultaneously in the same model (Belsley et al., 1980). While the number of predictors (23) was not larger than the sample sizes (297 and 302), various rules of thumb indicate there should be 10-20 observations for each predictor in a model (Harrell, 2001). This suggests one should have at least 230 observations, and potentially as many as 460, which could indicate potential over-fitting with these data.
SPARSE PARTIAL LEAST SQUARES ANALYSIS
The spls package based on the theory presented by Chun and Keleş (2010) was used for both traditional and all-possible SPLS (Tables 3, 4). Horizontal lines show potential empirically-driven cut-points that indicate varying levels of association between the predictors and outcome. Within the HBP data set (Table 3), all-possible SPLS largely confirmed the proposed hypotheses by choosing two of the primary regions (hippocampus and parahippocampus) 100% of the time and the third (entorhinal cortex) in 96.1% of the models. Additionally, the three largest average non-zero parameter estimates from all-possible (second column) were for the three primary regions: entorhinal cortex (−0.279), hippocampus (0.276) and parahippocampus (0.258). This contrasts traditional SPLS in that the region with the largest estimated magnitude (third column) was the supplementary motor cortex (−0.310), yet this was not a hypothesized region. Although chosen a relatively large percentage of the time by all-possible (96.1%), this region was ranked below/tied with all three primary regions and two secondary (amygdala, medial parietal cortex). It also had a smaller average estimate (−0.187) than all three primary regions. Thus, this region was deemed most significant by traditional, but ranked below multiple hypothesized regions by all-possible. Traditional SPLS also chose post-central gyrus and cerebellum, so that one might conclude these regions are significantly predictive of 3MS, yet cerebellum was the third lowest-ranked region by all-possible (84.1%), and post-central the sixth lowest (88.4%).
Lastly, the additional information gained by all-possible SPLS (ranking according to percent) indicates the lateral parietal inferior cortex is a potentially borderline significant predictor (89.4%), which could not have been known based on the traditional results, as its parameter estimate was set to zero.
Despite being secondary regions, neither the caudate nor posterior cingulate cortex were chosen by either method, so that the results were consistent in this way and may indicate a different relationship in a multivariable setting than has been seen in previous studies involving individual predictors.
The CHS results (Table 4) are notably consistent with those from the HBP data. Specifically, two primary regions (parahippocampus, hippocampus) were again chosen in 100% of the models, although the third primary region (entorhinal cortex) was selected less often, at 89.4%. However, this region had a larger average parameter estimate (−0.132) than all other regions selected less than 90% of the time, and some regions selected in greater than 90% of the models (pallidum, dorsolateral prefrontal and supplementary motor cortices, all non-hypothesized). This again shows the utility of all-possible SPLS in that it highlighted a potentially important, borderline predictor that was missed by traditional.
The regions with the largest average magnitudes according to all-possible were the lateral parietal superior (−0.290), medial parietal (0.228) and lateral parietal inferior (0.220) cortices (all secondary), and the parahippocampus (0.196), a primary region, so that the top four largest estimates were associated with hypothesized regions. Alternatively, traditional SPLS assigned the largest parameter estimate to the pallidum (0.126), followed by the parahippocampus (0.111) and the supplementary motor cortex (−0.110), so that two of the three regions with the largest estimates according to traditional SPLS were nonhypothesized. In contrast, all-possible ranked both the pallidum (96.6%) and supplementary motor (90.8%) lower than two primary (parahippocampus, hippocampus) and two secondary (medial parietal, lateral parietal inferior cortices) regions (and also lower than lateral parietal superior in the case of the supplementary motor cortex).
Lastly, the posterior cingulate cortex and caudate, despite being secondary regions, were not chosen by either method. This finding for the caudate is consistent with that from the HBP.
DISCUSSION
The purpose of this study was to illustrate that all-possible SPLS provides additional, useful information not attainable by traditional SPLS: relative rankings and parameter estimates for non-selected predictors. Simulation verified that predictors not associated with the outcome are selected less often as sparsity increases, while strong, and in most cases weak, associations remain robust. Additionally, conducting all-possible SPLS a large number of times showed that, on average, the percentage of time chosen and mean non-zero standardized estimates were consistent with the structure of the simulated data. A real data example indicated all-possible SPLS was more successful at highlighting hypothesized relationships than traditional SPLS, and also gave useful information about borderline variables that could not otherwise have been known.
Given the CHS and HBP data sets differed with respect to neuroimaging protocols and demographics, it is notable that allpossible SPLS detected hypothesized associations across these cohorts, suggesting robustness in the method. Specifically, MR scanners had different field strengths: the CHS MRIs were obtained with a 1.5 Tesla, the HBP with a 3.0. Additionally, protocols with different spatial resolutions were used across groups: the CHS applied a 5.0 mm slice, whereas the HBP applied a 1.5. Lastly, the cohorts were significantly different with regard to race, obesity and age (although these factors were controlled for in all models). Despite these differences between data sets, the method yielded consistent results overall, indicating its utility as variable selection technique.
A weakness of all-possible SPLS is its relative nature (i.e., ranking by percentage) with no strict cut-off value due to a lack of distributional properties. For example, in the simulation in Section Percentage of Time Chosen and Average Non-Zero Standardized Estimates (Table 1), the average percentage defined three distinct groups, but with no insight into significance (or lack thereof). However, viewing the predictors in this way allows one to see more detail than the dichotomous results of traditional SPLS, and to apply a cut-off if desired, where the value would be based on empirical experience, rather than guided by theory.
By utilizing simulation and a well-studied predictor-outcome relationship across two independent studies, the current findings validate this variation of SPLS as a useful technique for selecting variables in situations where other approaches (namely, OLS) fail. The results of this study suggest all-possible SPLS could be used for hypothesis generation without having to restrict the set of predictors due to multicollinearity or a comparatively small sample size, which geneticists, neuroscientists, economists and social scientists often encounter. The additional information given by all-possible SPLS is especially useful in exploratory analyses, as it allows for a more thorough understanding of the data than can be provided by the binary results of traditional SPLS. | 2015-07-17T22:55:48.000Z | 2014-03-03T00:00:00.000 | {
"year": 2014,
"sha1": "1cd0e4f4d2e6628cd9dba6fda15f59e4f9b873fa",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fninf.2014.00018/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cd0e4f4d2e6628cd9dba6fda15f59e4f9b873fa",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
249200903 | pes2o/s2orc | v3-fos-license | Mathematical modelling of echinococcosis in human, dogs and sheep with intervention
In this study, a model for the spread of cyst echinococcosis with interventions is formulated. The disease-free and endemic equilibrium points of the model are calculated. The control reproduction number for the model is derived, and the global dynamics are established by the values of . The disease-free equilibrium is globally asymptotically stable if and only if . For , using Volterra–Lyapunov stable matrices, it is proven that the endemic equilibrium is globally asymptotically stable. Sensitivity analysis to identify the most influential parameters in the dynamics of CE is carried out. To establish the long-term behaviour of the disease, numerical simulations are performed. The impact of control strategies is investigated. It is shown that, whenever vaccination of sheep is carried out solely or in combination with cleaning or disinfecting of the environment, cyst echinococcosis can be wiped out.
Introduction
Cystic echinococcosis (CE) is caused by the tapeworm belonging to the family Taeniidae, called Echinococcus granulosus (for more detail, see [3,9,11,26]). Echinococcus granulosus organisms take part in both asexual reproduction and sexual reproduction. Asexual reproduction takes place via budding in the intermediate host, while sexual reproduction takes place by gamete fusion in the definitive host. Echinococcus granulosus is hermaphroditic, containing both male and female sex organs. The transmission dynamics of the disease depends on a number of factors. These include the parasite's biotic potential, stimulation of immunity in life cycle hosts, life expectancy and development time of the parasite. Social and ecological factors such as meat inspection practices, disposal of offal and casualty animals and populations of stray, feral or sylvatic hosts can all affect transmission of this parasite [26]. Consuming offal containing E. granulosus by definite hosts can lead to infection. The frequency of offal feedings and the prevalence of the parasites within the offal are factors that affect infection pressure within the definitive host. The immunity of both the definitive and intermediate host plays a large role in the transmission of the parasite, as well as the contact rate between the intermediate and the definitive host (such as herding dogs and pasture animals being kept in close proximity where dogs can contaminate grazing areas with fecal matter) [21]. The environment plays a powerful role in the transmission of infectious diseases. As that of many environmental disease, cystic echinococcosis is affected by various biological and environmental factors [12,23].
Infection with E. granulosus remains a major public health issue in several countries and regions, even in places where it was previously at low levels, as a result of a reduction of control programmes due to economic problems and lack of resources [13]. Although control programmes against human cystic (CE), caused by E. granulosus, have been established in some countries and effective control strategies are available, the parasite has still affected many countries of all continents. Thus human CE is persisting in many parts of the world with high incidences [9,13]. The human incidence can exceed 50 per 100,000 person-years in areas of endemicity, and prevalence rates as high as 5-10% can be found in some countries [14]. The incidence of human Hydatid disease in any country is closely related to the prevalence of the disease in domestic animals and is highest where there is a large dog population and high sheep production [7]. The average annual death rate from echinococcosis is 0.007 per 10,000 population, which is very low. The main causes of death are either complications of hepatic and pulmonary echinococcosis or echinococcosis of the heart. The complications of liver echinococcosis may develop due to the changes occurring not only in the parasitic cyst but also in the affected organ or in the patient's body [15].
According to WHO, cystic echinococcosis is a preventable disease [22]. To employ preventive measures, understanding of the transmission dynamics, both between dogs and sheep where the parasite maintains itself and from dogs to human is important. It is from this knowledge that effective control measures can be devised to reduce the prevalence of the parasite in animals and hence reduce the incidence of human disease. Understanding of the epidemiology of echinococcosis has been greatly improved, new diagnostic techniques for both humans and animals have been developed, new prevention strategies have emerged with the development of a vaccine against E. granulosus in intermediate hosts [9]. Since sheep has a substantial potential to transmission of parasite, vaccination of sheep with an E. granulosus recombinant antigen (EG95) offers encouraging prospects for prevention and control [5]. The vaccine is currently being produced commercially and is registered in China and Argentina. Trials in Argentina demonstrated the added value of vaccinating sheep, and in China the vaccine is being used extensively [1,7]. Currently there are no human vaccines against any form of echinococcosis [1].
Echinococcus eggs can be inactivated by disinfectants such as formalin, chlorine gas, certain freshly-prepared iodine solutions (but not most iodides) or lime can inhibit hatching of the embryo and reduce the number of viable eggs. Food safety precautions such as thorough washing of fruits and vegetables, combined with good hygiene, can reduce exposure to eggs on food. The hands should always be washed after handling pets or farming, gardening or preparing food, and before eating. Water from unsafe sources such as lakes should be boiled or filtered. Meat, particularly the intestinal tract of carnivores, should be thoroughly cooked before eating. PPE reduces the risk of infection when working with animal tissues or fecal samples and periodic surveillance of high-risk populations [10].
In [5], a mathematical model of the transmission dynamics of cyst echinococcosis without intervention was presented and analysed. From the result of sensitivity analysis, it was found that the transmission rate of E. granulosus eggs from the environment to sheep (β es ) is the most influential parameter that control the dynamics of disease. Thus the prevention of the disease depends on the interruption of the life cycle of E. granulosus. To break the parasite's life cycle and control the disease transmission, vaccination of sheep and/or cleaning of environment are practical alternatives.
In this paper, the model developed in [5] is extended by incorporating vaccination of sheep and disinfection or cleaning of the environment as control strategies. The objective of this paper is to determine the better control method from vaccination of sheep and disinfection or cleaning of environment.
The rest of this paper is organized as follows. In Section 2, mathematical model of cyst echinococcosis with intervention is presented. In Section 3, the positivity and boundedness of solutions are discussed. In Section 4, both the disease free and endemic equilibrium points are determined, local and global stability of these equilibrium points with the calculation of the control reproduction number are presented. Moreover, local and global sensitivity analyses, numerical simulations and effects of control strategies are presented in Section 5, and finally conclusion is drawn in Section 6.
Model formulation
We formulate a compartmental model to describe the transmission dynamics of the disease by considering the dog, sheep and human populations. The total populations of dog, sheep and human are assumed constant, and denoted by N * d , N * s and N * h respectively, so that the birth rate and death rate of each of the populations are equal. The dog population has three classes: the Susceptible (S d ), Exposed (E d ) and Infectious (I d ) classes. The human population has four classes: the Susceptible (S h ), Exposed (E h ), Infectious (I h ) and Removed (R h ) classes. The sheep population has four classes: the Susceptible (S s ), Exposed (E s ), Infectious (I s ) and Vaccinated (V s ) classes. Dog, sheep and human populations are recruited to susceptible class by birth at rates μ d , μ s and μ h respectively.
The transmission cycle of E. granulosus involves two hosts (dogs and sheep) and free living stages. The relative times spent in the different stages of the life cycle of the parasite and the hosts are of considerable importance to the analysis of model behaviour. The sexual reproduction of the parasites in the dog population is an important factor for the load of the parasite. The concentration of the parasite in the environment is increased by shedding from infected dog at a rate δ(P) and decreased by the natural death rate of E. granulosus eggs at rate μ e and disinfection or cleaning of environment at rate μ, where P(t) denote the mean number of parasite per dog host. Although modelling infectious diseases that are caused by parasites is best formulated using the intensity of infection which measures the number of parasites rather than the prevalence of infection [18], in this work, we considered the latter. Here, the incidence function represents sufficient number of parasites that can cause infection as reported in [30]. Thus the time evolution of the parasite egg's is represented by the following differential equation: In the dynamics of the disease transmission, susceptible sheep are infected by ingesting parasite eggs in the feces of infected definitive hosts (dogs), while humans are infected by accidentally ingesting E. granulosus eggs from the environment. We introduced vaccination to the susceptible sheep population at a rate ν, so that the population of susceptible sheep is reduced through vaccination and moved to vaccinated V s class. We assume that vaccination is not lifelong. The sheep population may lose of vaccine-induced immunity and move back to susceptible class at a rate ρ. Rate of infection of susceptible sheep is β es B χ s +B , where β es denotes the rate of ingestion of Echinococcus egg from the environment by sheep and χ s is the half-saturation constant of parasite in the environment sufficient to infect sheep. Rate of infection of susceptible humans is β eh B χ h +B , where β eh denotes the rate of ingestion of Echinococcus egg from the environment by human, and χ h is the half-saturation constant of parasite in the environment sufficient to infect human. Susceptible dogs are infected by preying on the infected sheep. The disease transmission rate from sheep to dogs is denoted by β sd . The rates at which exposed dog, sheep and human progress to infectious classes are denoted by γ d , γ s and γ h respectively. Infected human population could recover from the disease naturally, at rate α h , whereas sheep and dogs cannot recover once they are infected. We assume that there is no echinococcus induced death. However, dogs, sheep and humans die naturally at rates μ d , μ s and μ h respectively. The density of E. granulosus eggs depends mainly on the number of infectious dogs. Its concentration in the environment is increased by shedding of a parasite from infected dog at a rate δ and decreased by the natural death rate of E. granulosus eggs at rate μ e and disinfection or cleaning of environment at rate μ.
The general structure of the model is captured by the flow chart displayed in Figure 1.
The transmission dynamics of the disease in the three populations can now be expressed by the following system of first-order differential equations: For convenience, we make the following substitutions: Thus we can rewrite the system (1)-(12) with initial conditions as
Well-posedness of the solutions
Before we proceed with the mathematical analysis, we need to show that the models (1)-(12) (alternatively model (13)) is well-posed epidemiologically and mathematically in a feasible domain.
Existence and stability of equilibria
The equilibrium point(s) of the system (1)- (12) are obtained by equating the right hand sides to zero as follows: From Equations (14c) and (14d), we respectively have From Equations (14a), (14b) and using (15a) we have Similarly, from Equations (14i)-(14l), we obtain By Equating (15c) and (15d), we then obtain a quadratic equation where The results presented in Sections 4.1 and 4.4 follow from these two roots.
Disease-free equilibrium (DFE)
From algebraic computation when B = 0, the system (13) has the DFE given by
The control reproduction number
A key quantity in epidemiological models is the reproduction number. It is a useful threshold in the study of a disease for predicting outbreak and for evaluating the control strategies.
Due to the presence of controls in the model (13), the term 'the control reproduction number' is used. The control reproduction number, denoted by R c represents the average number of secondary infections caused by an infectious individual over the course of infectious period in a totally susceptible population under specified controls. We derive the control reproduction number R c by using the Next Generation Matrix (NGM) approach [27] on the system (13). The detail computation is done in Appendix A. Thus the control reproduction number is given by is the basic reproduction number as derived in [5], which represents the average number of secondary infections caused by an infectious individual over the course of infectious period in a totally susceptible population without vaccination and disinfection or cleaning of the environment. Here, we can notice that R c < R 0 .
Stability of the DFE
then the DFE is unstable, the system is persistent and there is at least one equilibrium in the interior of D.
Proof: To prove the global stability of the disease free equilibrium X 0 , we use a matrixtheoretic method as explained in [25].
The disease compartments of model (13) can be written as and V are matrices given in (A1a) and (A1b), and [4], the condition of Theorem 2.2 in [25] fails. Instead, to establish the global stability of the DFE, we construct a Lyapunov function by using Theorem 2.1 of [25]. Let W T = (w 1 , w 2 , w 3 , w 4 , w 5 , w 6 , w 7 ) be the left eigenvector of V −1 F corresponding to the eigenvalue R c . Thus As a result, we found that W = (0, 0, 1, 0, 0, 0, corresponding to the eigenvalue R c . Thus by Theorem 2.1 of [25], is a Lyapunov function for model (1)- (12). Then differentiating along the solutions of the system (1)- (12) gives If Q = 0, then B = 0 and I s = 0. Hence, the largest invariant set of the model where Q = 0 in int(D) is the singleton {X 0 }. Therefore, by Lasalle's invariance principle [16], the disease free equilibrium X 0 is globally asymptotically stable if R c < 1. For R c > 1, the first term in (18) Consequently, Q > 0 in a neighbourhood of X 0 . Thus the disease free equilibrium X 0 is unstable, and using Theorem 2.2 of [25], the system (13) is uniformly persistent and hence imply there is at least one endemic equilibrium in the interior of D.
Existence and stability of the endemic equilibrium (EE)
From (17), it follows that the model (13) admits an endemic equilibrium. Thus the endemic equilibrium point of the system in terms of the control reproduction number R c is given by For R c = 1, it can be noted that the endemic equilibrium point reduces to disease free equilibrium point. Under the condition R c < 1, the quadratic equation (17) has no positive root. Hence, the model equation (13) has no positive endemic equilibrium whenever R c < 1. This consequently indicates that the backward bifurcation phenomenon does not occur whenever R c < 1. On the contrary, the disease will persist if R c exceeds unity, where a stable endemic equilibrium exists. The phenomenon, where the disease-free equilibrium loses its stability and a stable endemic equilibrium appears as R c increases through one, is known as forward bifurcation.
by Equations (19a)-(19l) is globally asymptotically stable, and the bifurcation of endemic equilibrium point is forward when R c > 1.
Proof: Detailed proof of this theorem is presented in Appendix 3.
Elasticity indices
In this section, we carried out sensitivity analysis to determine the model robustness to parameter values. This is a tool to identify the most influential parameters in determining model dynamics. Sensitivity analysis is used to obtain the sensitivity index that is a measure of the relative change in a state variable when a parameter changes. We compute the sensitivity indices of R c to the model parameters with the approach used by Chitnis et al. [6,20]. These indices could be computed numerically so as to figure out parameters that have high impact on control reproduction number R c , and the importance of each individual parameter in the disease transmission dynamics and prevalence. To perform the local sensitivity analysis, we use the normalized forward sensitivity index of a variable with respect to a parameter which is expressed as the ratio of the relative variation in the variable to the relative variation in the parameter. Thus the normalized forward sensitivity index (elasticity index) of a variable (R c ) with respect to a parameter p is the ratio of the relative change in the variable to the relative change in the parameter, given by We use data from the literature, and due to the great variation of some parameter values from region to region, without deviation from the range of parameter values obtained from literature, assumed (estimated) values were used for sensitivity analysis, as given in Table 1.
The parameter values were hypothetically chosen, because the intention for this work is not to validate the model results of the real situation obtained in a particular study, but for illustrative purposes only. With sensitivity analysis, we can get insight into the appropriate intervention strategies to prevent and control the spread of the disease. Sensitivity indices of the control reproduction number R c with respect to the model nine parameters are determined and presented in the table below. Ranges of these parameters are taken without deviation to the data obtained in the literature and assumed values. Natural birth rates and death rates of the populations are not considered for sensitivity analysis since these parameters are difficult to control. Table 2 gives the elasticity indices of R c with respect to key parameters of model (1)-(12) at the baseline values indicated in Table 1 and arranged in descending order of magnitudes. The sign of the elasticity index tells whether R c increases (positive sign) or decreases (negative sign) with the parameter, whereas the magnitude determines the relative importance of the parameter. From the magnitude of elasticity index, we can notice that four parameters (β es , β sd , δ, χ s ) have equal and the greatest influence for the transmission of the disease, followed by γ s , γ d and ν, and ρ has the least influence for the transmission of the disease.
Global sensitivity analysis
From the local sensitivity analysis, we observed that it is impossible to differentiate explicitly the most influential parameter(s) of the model. To determine which parameter(s) among the nine is (are) most influential in the dynamics of the disease, global sensitivity analysis is done. We employed the technique of Latin Hypercube Sampling (LHS) to test the sensitivity of the model to each input parameter, as described and implemented in 0.0001 0.00001−0.0002 0.032 Figure 2. Global sensitivity analysis displaying the partial rank correlation coefficients (PRCC) of control reproduction number R c . [19], and Partial Rank Correlation Coefficients (PRCCs) to assess the significance of each parameter with respect to each metric is used. Latin hypercube sampling is a stratified sampling technique that creates sets of parameters by sampling for each parameter according to a predefined probability distribution. To examine the dependence of R c on parameter variations, we determine the PRCC values by considering a range of parameters as given in Table 2, with sample size 1000. The result is depicted in Figure 2. The parameter with PRCC value far away from zero indicates the more strongly the parameter influence R c . The negative sign for PRCCs indicates inverse proportionality.
From Figure 2, it is observed that the transmission rate from sheep to dog (β sd ) and Echinococcos eggs contamination rate of the environment by infected dogs (δ) are the most influential parameters among the eight parameters in the disease dynamics. On the other hand, rate of losing vaccine induced immunity of sheep are the least sensitive parameter for the dynamics of the disease.
Numerical simulations
In this section, we carry out numerical simulations for mathematical model of cyst echinococcosis in the populations of sheep, dogs and humans. We use the total populations N * d = 590, N * h = 1014, N * s = 1760, and parameter values given in Table 1. This yields a control reproduction number R c = 0.72 < 1. Using different initial conditions, the time evolution of human, sheep and dog populations for model (13) is displayed in Figure 3. We can notice that all disease compartments E * d , h , E * s and I * s converge asymptotically to zero, while the noninfected compartments S * d , S * h and S * s + V * s converge to their respective total populations. This asserts the global stability of the disease free equilibrium as proved in Theorem 4.1. Figure 4 shows that the time evolution of human, sheep and dog populations for model (13) with parameter values given in Table 1 by increasing β sd = 0.00001 to β sd = 0.0001. In this case, the control reproductive number is R c = 2.28 > 1, and depict the global stability of the endemic equilibrium as proved in Theorem 4.2. It can be noticed that all the compartments of the dog, human and sheep populations converge asymptotically to their respective endemic equilibrium points irrespective of any initial conditions.
In the case of endemicity, the prevalence rate of human is Number of new cases of disease during specified period average population size × duration of follow up = 111 + 81 1014 × 1 × 100% = 18.9% Table 1, using different initial conditions which gives R c = 0.72. Table 1, using different initial conditions, except for β sd = 0.0001 which gives R c = 2.28, and with approximate equilibrium values The prevalence rate resulted in the numerical simulation is higher than the WHO published data, since the WHO report showed that the human prevalence rate is 5-10% (as indicated in [14]). This result has shown at least 8.9% discrepancy from WHO published data.
Effects of control strategies on R c
The numerical simulations are performed to illustrate the effect of vaccination of sheep and cleaning or disinfecting the environment in the dynamics of the disease transmission in the populations of sheep, dogs and humans while they are used alone or simultaneously. The effect of vaccination of sheep using baseline parameter values in Table 1 except for μ = 0, and when ν is varied from 0.005/10 to 0.005, is displayed in Figure 5. As a result the infectious sheep, dog and human populations are respectively reduced from 25 to 0, 13 to 0, and 80 to 0, where the control reproduction number is also reduced from R c = 1.53 to R c = 0.8. This result shows that increasing the rate of vaccination of sheep (ν) reduces the time evolution of infected human, sheep and dog populations. The effect of disinfection or cleaning the environment using baseline parameter values in Table 1 except for ν = 0, and when μ is varied from 0.001/10 2 to 0.001, is displayed in Figure 6. As a result the infectious sheep, dog and human populations are respectively reduced from 37 to 32, 18 to 16 and 93 to 79, where the control reproduction number is also reduced from R c = 1.8 to R c = 1.6. This result shows that increasing the rate of disinfection or cleaning the environment (μ) has very less effect to eradicate the disease transmission in human, sheep and dog populations.
Numerical simulation is also performed to illustrate the effect of vaccination of sheep and disinfection or cleaning the environment when the two control strategies are administered simultaneously, as displayed in Figure 7. The number of infectious sheep, dog and human populations are respectively reduced from 26 to 0, 13 to 0 and 80 to 0, where the control reproduction number is also reduced from R c = 1.5 to R c = 0.72. One can Table 1, with varying values of μ (ν = 0). observe that these combined effects allow to reduce the size of infected individuals. Thus increasing the vaccination rate of sheep alone or a simultaneous increase of the vaccination rate of sheep and rate of cleaning or disinfecting the environment is an effective control measure of cyst trichinosis.
Furthermore, we assess the impact of combined control strategies using contour plots of R c as function of the control strategies, and with varying rate of transmission from sheep to dog (β sd ), we estimate the least values of the two control parameters which will ensure the disease eradication in the populations. Figure 8(a) shows contour curves of R c as a function of ν and μ using baseline parameter values in Table 1. We can observe that, low control strategies are needed to ensure the eradication of the parasites, with range of R c ∈ [0.076, 0.96] and mean 0.518. Contour plots of R c as function of the control strategies, for rate of transmission from sheep to dog (β sd = 0.0001) is displayed in Figure 8(b). The least values of ν and μ that will ensure parasites eradication are estimated to be 0.01 and 0.01 so that R c = 1. In this case, the combined control strategies have effect in the disease transmission with range of R c ∈ [0.24, 3.03] and mean 1.635. Contour plots of R c as function of the control strategies, for rate of transmission from sheep to dog (β sd = 0.0004) is displayed in Figure 8(c).The least values of ν and μ that will ensure parasites eradication are estimated to be 0.045 and 0.01 so that R c = 0.99. In this case, the combined control strategies has effect in the disease transmission with range of R c ∈ [0.48, 6.06] and mean 3.27. These results have shown that a remarkable increase in the control reproduction number is observed with an increase rate of transmission from sheep to dog (β sd ). Hence, to ensure the eradication of parasites, we must introduce the least values of the controls that can bring the value R c < 1 with respect to the rate of transmission from sheep to dog (β sd ).
Conclusion
In this paper, we proposed and analysed a deterministic model for transmission dynamics of cyst trichinosis that incorporates two control strategies namely vaccination of sheep and disinfection or cleaning the environment. The model has a Disease Free Equilibrium point (DEF) which is both locally and globally asymptotically stable whenever the control reproduction number R c < 1. We also found the Endemic Equilibrium points (I) and proved that it is globally stable whenever the control reproduction number R c > 1. Moreover, we have performed sensitivity analysis on the control reproduction number with the two control strategies, from which we have noted that the most sensitive parameters are the transmission rate from sheep to dog (β sd ) and Echinococcus eggs contamination rate of the environment by infected dogs (δ). Numerical simulations of the model have shown that whenever the control strategies are carried out solely then vaccination of sheep is the better alternative to eradicate cyst echinococcosis, but when disinfection or cleaning the environment is carried out solely, its effect to eradicated the disease is very less. This indicates that to eradicate the disease from the three populations the model needs to incorporate other possible intervention strategies that can reduce β sd . Our finding has shown that the two control strategies are not enough to control the disease. We suggest that more controls, which focus more on dog population, should be incorporated to eradicate the spread of cyst echinococcosis. In our future, we will extend the model by incorporating additional control(s) and investigate the effectiveness and cost effectiveness of the control measures.
Funding
The author(s) reported there is no funding associated with the work featured in this article.
Disclosure statement
The authors declare that there is no conflict of interests regarding the publication of this paper.
Data availability
The numerical data used in our research are obtained from the published literature, which are cited therein. We also use reasonable estimate, for the data that are not available in the literature.
If e(t 1 ) = I d (t 1 ), then from (3) From continuity of the functions (the state variables), any of the variables can never be negative. Therefore, the solution of (1)- (12) is positive for all t ≥ 0.
Second, we prove the boundedness of the solutions as follows.
After some algebraic manipulation, the solution of this differential
Appendix 2. Calculation of the control reproduction number
According to the concepts of the next generation matrix and reproduction number presented in [8] and [27], we define The Jacobian matrix of the infection subsystem at X 0 can be decomposed as F−V, where F is a matrix of transmission rates given by and V is a matrix of transition rates given by Thus , and the next generation matrix is where φ = ρ+μ s ρ+μ s +ν . Thus the control reproduction number is the spectral radius given by
Appendix 3. Proof of Theorem 4.2
Proof: To prove the global asymptotic stability of the endemic equilibria, we use the method of Lyapunov functions combined with the theory of Volterra-Lyapunov stable matrices. To do this, we define a Lyapunov function: | 2022-06-01T06:26:10.405Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "171db21a46a985d10aea73091cdcdb617d4ec084",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17513758.2022.2081368?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "8d1c8e8b1b7f710d6fc8ced6c25fe4705ee198f4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
81092323 | pes2o/s2orc | v3-fos-license | Traumatic Brain Injury and Cerebral Vascular Accident : Application of Rasch Analysis to Examine Differences in Disability and Outcome in Post-Hospital Rehabilitation
The purpose of this study was to demonstrate an application of Rasch analysis to identify differences in disability profiles resulting from traumatic brain injury (TBI) and cerebral vascular accident (CVA) and to examine outcome differences between the two groups following post-hospital residential rehabilitation. Participant data were collected from 32 facilities in 16 states. From 2990 neurologically impaired individuals with consecutive admissions from 2011 through 2017, 874 met inclusion criteria: TBI (n = 687) or CVA (n = 187), 18 years or older, minimum length of stay of one month, and maximum chronicity of 1 year. Participants were evaluated at admission and discharge on the Mayo Portland Adaptability Inventory-Version 4 (MPAI-4). Rasch analysis was performed to establish item reliability, construct validity and item difficulty. A Repeated Measures Multivariate Analysis of Covariance (RM MANCOVA) determined group differences and improvement from admission and discharge. Rasch Analysis demonstrated satisfactory construct validity and internal consistency (Person reliability > 0.90, Item reliability > 0.98 for admission and discharge MPAI-4s). Both groups showed significant improvement on the MPAI-4 (p < 0.0005). The TBI group was more impaired on the adjustment scale at both admission and discharge (p < 0.001). Rasch analysis identified two distinct impairment patterns. CVA participants exhibited deficits characteristic of focal impairment while the TBI group presented with deficits reflective of diffuse impairment. Rehabilitation was shown to be beneficial in reducing disability following neurologic injury in both groups. Importantly, Rasch Analysis accurately produced unique disability profiles that differentiated the treatment groups. This unique statistical How to cite this paper: Lewis, F.D. and Horn, G.J. (2018) Traumatic Brain Injury and Cerebral Vascular Accident: Application of Rasch Analysis to Examine Differences in Disability and Outcome in Post-Hospital Rehabilitation. Open Journal of Statistics, 8, 670-683. https://doi.org/10.4236/ojs.2018.84044 Received: July 3, 2018 Accepted: July 31, 2018 Published: August 3, 2018 Copyright © 2018 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/ Open Access F. D. Lewis, G. J. Horn DOI: 10.4236/ojs.2018.84044 671 Open Journal of Statistics technique offers a promising prescriptive hierarchical model for guiding neurological rehabilitation treatment.
Introduction
The United States Center for Disease Control reports that approximately 4% of the American population is living with disability resulting from Cerebral Vascular Accidents (CVA) and Traumatic Brain Injury (TBI) [1], with TBI demonstrating a 3:1 higher incidence rate [2] [3].Survival rates continue to rise for both groups with improved medical management of hypertension mitigating the impact of stroke and advances in emergency medical technology saving lives following TBI [2] [4].As the survival rates have increased so to have the number of persons living with long-term disability, which is currently estimated to be 11.5 million people in the United States [1].Although the mechanism of injury differs for CVA and TBI, both types of injuries often result in impaired functioning in communication, mobility, vision, memory, information processing and behavioral control.
Concomitant with the increase in the number of persons living with disability has been the growth of post-hospital neurological rehabilitation programs [5].These programs may be either residential or outpatient and are designed to treat persons with acquired brain injuries such as TBI, CVA, brain tumors and anoxic/hypoxic injuries.Treatment typically involves 45 to 60 minutes each of physical, occupational, and speech therapies per day, up to five days per week [6].Medical management of medications and psychological counseling are included as indicated.Current clinical practice involves individual members from a multi-disciplinary rehabilitation treatment team conducting assessments, identifying deficits, and developing goals to improve performance in the identified problem areas [5].This approach, however, is accomplished without empirical knowledge of which deficits have the greatest impact on overall outcome.Rather, the approach considers deviations from non-impaired performance to targeted goals for remediation.A more focused approach would involve targeting those deficits that have the greatest effect on functional outcome and determining optimal treatment strategies by discipline to lessen their impact.A statistical approach utilizing Rasch modeling techniques offers an efficient means to accomplish the first step in this process: identifying the deficits most relevant to outcome.Rasch analysis, most commonly associated with Item Response Theory, is used to improve the accuracy and reliability of tests or questionnaires comprised of items with multiple response options.Rasch uses a logistical model of proba- The purpose of the present study was to extend this line of research by applying Rasch analysis of MPAI-4 data to examine differences in disability profiles for clinical groups.The current study identified TBI and CVA survivors treated in community-residential, post-hospital brain injury rehabilitation programs for analysis.Additionally, this study evaluated the effectiveness of these treatment programs in reducing disability from admission to discharge.
Subjects
The
Rehabilitation Treatment
The over-arching goal of the programs involved in the study was to maximize participants' functional independence for return to home and family.With this goal, each participant received physical therapy, occupational therapy, speech therapy, recreation and community integration, counseling (based on need) and medical management provided by nursing and physicians specializing in physical medicine and rehabilitation.Additionally, they received an average of 5 to 6 hours a day of life skills acquisition training including community integration.
Data Collection
Each participant was evaluated within approximately two weeks of admission using the MPAI-4 by treatment team consensus.Discharge MPAI-4s were completed in a similar fashion within the final week of the participant's stay.The results of all evaluations with demographic data were compiled into a national database for analysis.
Statistical Analysis
Rasch analysis was performed to determine reliability of MPAI-4 admission and discharge assessments and item difficulty profiles for the TBI and CVA samples.
A repeated measures multivariate analysis of co-variance (RM MANCOVA) was provided to evaluate change scores on Abilities, Adjustment, and Participation Indices from admission to discharge and to evaluate differences between groups at admission and discharge.Analyses were performed using SPSS version 25 for the RM MANCOVA and follow-up tests while Winsteps version 3.81 was used to conduct Rasch Analyses.
Rasch Item Difficulty Statistics
Rasch analysis orders items by identifying the probability of an item receiving a particular rating along the measurement scale (i.e.no limitation to severe limitation).For example, mean item difficulty is the point at which the highest and lowest categories have an equal probability of being observed [11].In the case of the MPAI-4, the more difficult items would be those that have a higher probability of a moderate to severe limitation being observed than no limitation.Difficulty measures are presented as logits with two decimal points.For a given population this measure is useful for ordering items from least to most likely to be impaired.
Construct Validity
Construct validity refers to the extent to which an evaluation tool measures the underlying construct that it is intended to measure.Rasch fit statistics accomplish this by evaluating expected values for an item to the actual value obtain from the data set.Fit statistics also provide an estimate of the distinct contribution for each item in describing the underlying construct and the extent to which they differentiate among people along the continuum of that construct [12].As applied to the MPAI-4, Rasch Infit and Outfit statistics illustrate the fit of each item representing unique contribution to a person's level of disability (latent construct).Fit values that are nearest to 1.0 indicate minimal distortion.Values falling below 1 indicate that persons are answering incorrectly when they are expected to answer correctly (Guttman error).Low fit values on the MPAI-4 suggest that high levels of limitation are observed when low levels would be expected for that person on those items.Values greater than 1 indicate that there is more random variation on an item than would be expected.Therefore, Fit values falling between 0.5 and 1.5 are considered productive for measurement use [7].
Items that fall outside those parameters may not reliably represent the latent construct being measured.
Reliability
Reliability refers to the consistency of a measure or the extent to which a meas- Separation values reveal how well items distinguish among people along a performance continuum (Person Separation) and the unique contribution of items to the construct being measured.Person Separation values indicate the number of performance levels detected by a measure.For example, a Person Separation index of 2.00 means that two levels of performance can be reliably identified.
Item Separation refers to the extent to which items on a test are consistently ranked from least difficult to most difficult.Low Item Separation (<3.00) implies that the item difficulty hierarchy is not reliable, whereas magnitudes exceeding 3.00 indicate greater consistency of item hierarchy.
Person and Item Reliability
Rasch person reliability coefficients for the MPAI-4 at admission were 0.91 for the TBI group and 0.88 for the CVA group.Admission MPAI-4 item reliability coefficients were 0.99 for both groups.At discharge person reliability was 0.95 and 0.93 respectively for TBI and CVA.Again, MPAI-4 item reliability was 0.99 for both groups at discharge.These findings indicate that MPAI-4 assessments effectively distinguished persons along the disability continuum (person reliability) and there was a consistent level of agreement within groups identifying easy through difficult items (item reliability).
Person and Item Separation
Rasch person separation values for admission MPAI-4 assessments were 3.With acceptable levels of reliability and validity established, further analyses were conducted to determine item difficulty profiles and performance differences admission to discharge.Figure 2 shows difficulty values for Adjustment Items.For both groups the impact of items on the Adjustment scale was less severe than those on the Abilities scale.The TBI group, however, experienced greater disability with emotional adjustment than the CVA group; TBI participants had higher disability ratings in 7 of 8 Adjustment items.Disability was most pronounced on Impaired Awareness (−0.43) for TBI and Fatigue (−0.18) for CVA.The CVA group received negative difficulty values on each of the 8 items on the scale.Difficulty values were negative on 7 of 8 items for the TBI group.
Item Difficulty
Transportation, Residence (home skills), money management presented the greatest difficulty (highest level of disability) for both groups.order of the first three items.For the TBI group leisure skills and memory were replaced by productivity (engagement in meaningful activity paid or unpaid) and impaired awareness in the top five.Transportation remained unchanged and presented the greatest magnitude of disability.
Change Admission to Discharge
With age entered as a covariate, a RM MANCOVA revealed a significant main effect for pre-post testing, F(1, 871) = 128.97p = 0.0005, Wilks Lambda = 0.87, partial eta 2 = 0.13, power to detect = 1.00.Follow-up paired sample t-tests found that MPAI-4 T-Scores were significantly lower (less disability) at discharge for both the TBI and CVA groups.
Discussion
After is not determined by the treatment team but by funding sources.These decisions are often based on short-term cost considerations rather than using an evidenced based model to determine appropriate length of stay to maximize disability reduction.Given the impact of potential funding limitation, treatment teams may be able to achieve greater disability reduction by using a prescriptive model in their rehabilitation treatments.Prescriptive modeling can target deficits in an established order thereby producing a greater impact on disability reduction.In addition this prescriptive modeling may also impact how and when remediation and compensatory strategies are used throughout the recovery process.
Rasch analysis assists in the meaningful targeting of treatment by identifying skills that have the highest probability of severe disability.The present study demonstrated that the CVA and TBI groups presented with different disability profiles at admission.The CVA group had a greater likelihood of experiencing disability in skills such as use of hands, mobility, visuospatial abilities, and novel problem solving.This pattern of disability is characteristic of focal lesions often seen in CVA.The TBI group was more likely to exhibit more diffuse disability including novel problem solving, memory, attention/concentration, impaired awareness and initiation.This constellation of cognitive and neurobehavioral symptoms is the hallmark of frontal and temporal lobe disruption associated with TBI.
Both groups experienced the greatest change with Abilities and Adjustment items, but the greatest challenge was within the applied skills of the Participation Index (e.g., instrumental activities of daily living).Rehabilitation within the first year of recovery tends to show the greatest gains with physical, cognitive, and communication skills along with moderate behavioral stability.However, application of skills into real-world settings and situations requires extensive learning and insight development that is often not evident until much later in recovery.
Limitations experienced in these skills for the current study were the result of different patterns of disability with regard to the physical, cognitive, and emo-tional/behavioral functions that were related to the neuropathology and mechanism of injury type.Application of skills tends to be the greatest limiting factor in recovery from neurological injury.
Although both groups saw improvement on participation skills at discharge, greater reduction in disability may have been achieved by targeting the high impact deficits identified at admission with longer and more frequent therapies.
Thus, this study provides an example of evidence-based hierarchical modeling with Rasch analysis to provide improved targeted treatment that is independent of time in recovery.The use of Rasch seems to be a promising application for the development of more hierarchical prescriptive treatment for persons recovering from TBI or CVA.
Figure 1 through
Figure 1 through Figure 3 illustrate item difficulty comparisons for the TBI and CVA groups on MPAI-4 Abilities, Adjustment, and Participation Indices.Again, difficulty values of zero indicate the there is an equal probability of observing high or low levels of disability.Items with values less than zero have a higher probability of receiving a severe disability rating.Positive values reflect a greater likelihood of receiving a rating of mild to no disability.The strength of the probability is reflected in the absolute value of the logit for a given item.See figures for the item analysis with difficulty values.
Figure 3
Figure 3 shows the Participation Items with the level of disability experienced in the home and community (e.g., application of skills outside of a facility).Figure 3 clearly illustrates that these items presented the greatest difficulty for both groups.
Figure 1 .
Figure 1.Item difficulty values for TBI and CVA on the Abilities Index.
Figure 2 .Figure 3 .
Figure 2. Item difficulty values for TBI and CVA on the Adjustment Index.
[7]is, G. J. Horn The model accounts for a response to a specific item in relationship to the probability of a specific response to other items on the measure[7].This enables the calculation of the metric distance between items and supports relia- DOI: 10.4236/ojs.2018.84044672 Open Journal of Statistics bility to identify a finite number of human traits that comprise a construct (e.g."disability").thors identified 29 functional areas that best illustrate the range of limitations experienced following neurologic injury.These 29 items were further organized into 3 subscales representing different domains of functioning.In a recent study, Lewis and Horn [9] extended that application of Rasch analysis to include identification of MPAI-4 items with the greatest probability of presenting with severe disability following a TBI.The results of this study demonstrated that persons in different stages of recovery presented with distinct and different disability profiles.
Table 1 .
Demographics and injury related variables for TBI and CVA samples.
[10] of 50 and a standard deviation of 10.Higher T-scores indicate greater disability.The MPAI-4 has undergone rigorous psychometric testing and has proven reliability and validity as determined through Rasch analysis, Item Cluster, Principle Component Analyses (PCA), and measures of concurrent and predictive validity[10].
Table 2 presents
Rasch Infit and Outfit statistics by diagnostic group that fell outside the 1.0 ± 0.5 parameter established for acceptable fit.
Table 2 .
MPAI-4 items infit and outfit values by program type outside acceptable parameters.
Each of the misfit items presented in the Table 2 exceeded 1.5, revealing significant unexplained variation in observations and a tendency for outlier responding.Not surprisingly, the CVA group, presented infit and outfit values greater than 1.5 for "paid work" at each assessment.For this group the combination of advanced age and disability resulted in return to work being rarely endorsed on the evaluations and contributing to the instability of the item.Paid work values were outside the criterion at admission (e.g., no one was working at the time of admission due to the impact of an acute injury), but not discharge for TBIs.For the TBI group, audition was the most unstable item, with infit and outfit values exceeding criteria at admission and discharge.Within this group, audition was rarely endorsed as a limitation and was thus more susceptible to outlier responding."Unpaid work" (e.g., home making, volunteering, school) at admission was the next most unstable item due to individuals not being involved in these activities.The remainder of misfit items were marginally above the 1.50 upper limit.Overall, for both groups, the majority of MPAI-4 items accurately contributed to measuring disability after brain injury supporting a high level of construct validity for this instrument.
10for TBI and 2.67 for CVA.At discharge, the values were 4.23 and 3.60 respectively for TBI and CVA groups.These values indicate the existence of at least three performance strata within each group at admission and discharge.Item separa-
Table 3
shows the top 5 most disabling items for both groups at admission and discharge.The data shows that at discharge the magnitude of difficulty was F. D. Lewis, G. J. Horn DOI: 10.4236/ojs.2018.84044678 Open Journal of Statistics
Table 4
presents the paired sample T-values, significance levels, and Cohen's d effect sizes for each pre-post comparison on the Abilities, Adjustment, and Participation measures.
Table 4 .
Mean MPAI-4 T-scores at Admission and Discharge by diagnostic group.
discharge from acute hospitalization, persons who have suffered a TBI or CVA often face a lifetime of significant disability.Post-hospital brain injury re-These areas are at greater risk from diffuse rotational injury associated with TBI than with focal injuries more common with CVA.Within the diagnosis of CVA, it is more common to impact a smaller region or isolated portion of the brain where the blood supply is dis- memory and problem solving), this finding is not surprising given that the frontal lobes (prefrontal cortex) and temporal lobes (limbic system) have a strong influence on both cognitive and behavioral control. | 2019-03-18T14:02:28.048Z | 2018-07-18T00:00:00.000 | {
"year": 2018,
"sha1": "f585f2a5049ddd99637298761d709a378804826d",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=86453",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f585f2a5049ddd99637298761d709a378804826d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26012307 | pes2o/s2orc | v3-fos-license | Hankel operators and the Dixmier trace on the Hardy space
We give criteria for the membership of Hankel operators on the Hardy space on the disc in the Dixmier class, and establish estimates for their Dixmier trace. In contrast to the situation in the Bergman space setting, it turns out that there exist Dixmier-class Hankel operators which are not measurable (i.e. their Dixmier trace depends on the choice of the underlying Banach limit), as well as Dixmier-class Hankel operators which do not belong to the $(1,\infty)$ Schatten-Lorentz ideal. A related question concerning logarithmic interpolation of Besov spaces is also discussed.
Introduction
Let T be the unit circle in the complex plane C and H 2 the standard Hardy space of all functions in L 2 (T) ≡ L 2 (with respect to the normalized arc-length measure) whose negative Fourier coefficients vanish. For φ ∈ L ∞ (T), the Hankel operator H φ with symbol φ is the operator from H 2 into its orthogonal complement L 2 ⊖ H 2 defined by H φ u = (I − P )(φu), where P : L 2 → H 2 is the orthogonal projection. Equivalently, H φ is an operator whose matrix with respect to the standard bases {e ikθ } ∞ k=0 of H 2 and {e −miθ } ∞ m=1 of L 2 ⊖ H 2 is constant on diagonals perpendicular to the main diagonal, the (k, m)-th entry being equal to the Fourier coefficientφ(−k − m − 1). One can define H φ even for φ ∈ L 2 as a densely defined operator, and one has H φ = 0 if φ ∈ H 2 , so that H φ effectively depends only on (I − P )φ, and thus it is enough to study H φ only for φ = f with f ∈ H 2 . Nehari's theorem then asserts that H f is bounded if and only if f ∈ P (L ∞ (T)) = BM OA(T); similarly, H f is compact if and only if f ∈ P (C(T)) = V M OA(T). The much finer question of the membership of H f in the Schatten classes S p , 1 ≤ p < ∞, was solved by Peller, who showed [15] that H f ∈ S p if and only if f belongs to the diagonal Besov space B p = B 1/p pp ; this was later shown to prevail also for 0 < p < 1 (see e.g. [17] and the references therein). Here B p can be characterized as the space of (the nontangential boundary values of) all holomorphic functions f on the unit disc D which satisfy (1) f (k),p := D |f (k) (z)| p (1 − |z| 2 ) kp−2 dz 1/p for some (equivalently, any) nonnegative integer k > 1/p; here dz stands for the Lebesgue area measure. Using real interpolation, it follows more generally that H f belongs to the Schatten-Lorentz ideal S p,q , 0 < p < ∞, 0 < q ≤ ∞, consisting of all operators T whose singular values s j (T ) satisfy (2) ∞ j=0 (j + 1) q/p−1 s j (T ) q < ∞, q < ∞, if and only if f belongs to the "Besov-Lorentz" space B pq consisting of (the nontangential boundary values of) all holomorphic functions f on D satisfying at least for 1 < p < ∞ (for 0 < p ≤ 1 one would again have to use higher derivatives of f as in (1)); see e.g. [11]. Here φ * denotes the nonincreasing rearrangement of a function φ on D with respect to the measure (1 − |z| 2 ) −2 dz. For p = q, the spaces B pp = B p agree with the Besov spaces above. There is also an equivalent "dyadic" description of the Besov and Besov-Lorentz spaces, which avoids the holomorphic extension into D and which runs as follows: for n ≥ 1, introduce the trigonometric polynomials W n on T by where a nk = 0 for k / ∈ (2 n−1 , 2 n+1 ), a nk = 1 for k = 2 n , and a nk depends linearly on k on the intervals [2 n−1 , 2 n ] and [2 n , 2 n+1 ]. Setting further W 0 (e iθ ) = 1 + e iθ , we thus have for any f = ∞ k=0 f k e kiθ on T Then f ∈ B s pq , 0 < p ≤ ∞, 0 < q ≤ ∞, s ∈ R, if and only if 1 and for 1 s = q = p this quantity is equivalent to (1). Similarly, f ∈ B pq , 0 < p < ∞, 0 < q ≤ ∞, if and only if the function φ f on T × N defined by 1 More precisely B s pq is the subspace of (the boundary values of) holomorphic functions in the full Besov space B s pq , i.e. of functions in B s pq (T) whose negative Fourier coefficients vanish; the full Besov norm in B s pq being defined upon adding to (5) also the terms n ≤ 0 (and replacing the factor 2 ns by 2 |n|s ), where W −n (e iθ ) := W n (e −iθ ) and W 0 must be changed to W 0 (e iθ ) = e −iθ +1 +e iθ . It is more customary to denote B s pq by B s pq , and our B s pq by A s pq or (B s pq ) + , cf. [10,17]; however, since the "full" Besov spaces B s pq will not be needed anywhere in this paper, we take the liberty to use the simpler notation B s pq just for the holomorphic Besov spaces. The same also applies to the "Besov-Lorentz" spaces B pq .
belongs to the Lorentz space L pq (T × N, dν) with respect to the measure dν given by 2 n dθ 2π on T × {n}, n ∈ N; that is, if and only if the nonincreasing rearrangement φ * f of φ f with respect to dν satisfies Furthermore, the quantities (7) and (3) are again equivalent. We refer to Peller [17], [16] and Krepkogorskii [11], [10] for further details on all these matters. In addition to the Hardy space H 2 , there are also (big) Hankel operators on weighted Bergman spaces A 2 α (D) on the disc, α > −1, consisting of all functions in makes sense as a densely defined operator even for any φ ∈ L 2 α , and one has H in fact depends only on (I − P (α) )φ; furthermore, for = 0; see Arazy, Fisher and Peetre [1]. Using real interpolation, one can deduce from this also that H (α) f ∈ S pq , 1 < p < ∞, 0 < q ≤ ∞, if and only if f ∈ B pq (though this seems not to be noted explicitly in the literature).
The Schatten-Lorentz ideals S pq satisfy S p 1 ,q 1 ⊂ S p 2 ,q 2 if p 1 < p 2 or if p 1 = p 2 , q 1 < q 2 . A notable operator ideal lying between S 1,∞ and all S p,q , p > 1, is the Dixmier ideal S Dixm , consisting of all operators T whose singular values satisfy (8) sup n n j=0 s j (T ) log(n + 2) =: T Dixm < ∞. for T positive, and extending to all T ∈ S Dixm by linearity. The operator is called measurable if tr ω T does not depend on the choice of the Banach limit ω. In view of the results mentioned in the last paragraph, it is natural to ask for which holomorphic f on D does H (α) f belong to S Dixm and what is its Dixmier trace. It was shown by Rochberg and the first author [8] for α = 0, and by Tytgat [19] for general α, that H (α) f ∈ S Dixm if and only if f ′ belongs to the Hardy 1-space H 1 , and in that case the modulus |H f ) 1/2 is measurable and (9) tr ω |H (α) The methods of [8], however, break down for A 2 α replaced by H 2 (which in a welldefined sense is the limit of A 2 α as α ց −1). The aim of the present paper is to characterize Hankel operators H f , f ∈ H 2 , on the Hardy space that belong to S Dixm , and to give estimates for the Dixmier trace of |H f |.
Our main results are as follows. For f ∈ H 2 , we denote by f also the holomorphic be the nonincreasing rearrangement of (1 − |z| 2 ) 2 f ′′ (z) with respect to the measure (1 − |z| 2 ) −2 dz on D, and similarly let be the nonincreasing rearrangement of the function φ f from (6) with respect to the measure dν on T × N. Theorem 1. The following assertions are equivalent: Moreover, the quantities on the left-hand sides of (i)-(iv) are equivalent, and are further equivalent to dist S Dixm (|H f |, S Dixm 0 ).
Note that the integral in (i) above is just f p (2),p , which by general theory is equal to F p L p (0,∞) ; similarly, the integral in (iii) is just f p dyadic, 1
Theorem 2.
Let ω be a dilation-and power-invariant Banach limit on R + , ω = ω • exp the corresponding translation-and dilation-invariant Banach limit on R, and tr ω the associated Dixmier trace on S Dixm . Then the following quantities are equivalent: Furthermore, the constants in the equivalences with can be chosen independent of ω.
Here and throughout the paper, two positive quantities X, Y are called equivalent (denoted "X ≍ Y ") if there exists 0 < c < 1, independent of the variables in question, such that cX ≤ Y ≤ 1 c X; and we refer to Section 2 below for the definitions and details concerning ω, ω and tr ω .
The first part of the next theorem is immediate from Theorem 1, which also implies equivalence of the corresponding quotient norms 2 of f with the quotient norm of H f in S Dixm /S Dixm 0 ; for the equivalence of the norm H f Dixm itself, some extra labour seems to be needed. 3 We remark that · (2),Dixm and · dyadic,Dixm are norms of f ′ and f , respectively, in certain Lorentz (or Marcinkiewicz) spaces; see [2, p. 69].
Theorem 4. There exist f ∈ H 2 and two dilation-and power-invariant Banach
In [8] it was also shown that in the setting of the weighted Bergman spaces (at least for α = 0, but the proof likely carries over to all α > −1), H f ∈ S Dixm already implies that H f even belongs to the smaller ideal S 1,∞ ⊂ S Dixm of operators The equivalence (i)⇔(v) in Theorem 1 is not new but goes back to Li and Russo [12], and was subsequently put into a more general picture in the works of Carey, Sukochev and coauthors [3], [4]. Combining the latter with Peller's results mentioned at the beginning and with standard facts from the theory of Besov spaces yields the other parts of Theorem 1 and Theorem 2; if ω and ω are replaced by ordinary limits, the ideas behind Theorem 2 go back at least to Connes [6, § IV.2, Proposition 4]. The proof of Theorem 3 relies on a result on logarithmic interpolation in the context of Besov spaces, which also provides an alternative proof of the equivalences (v)⇔(i)⇔(iii) of Theorem 1 and is of independent interest. 2 More specifically: the expressions in Theorem 1 of which limsup's are taken are functions belonging to L ∞ (1, 2) in parts (i) and (iii) (as functions of p), and to L ∞ (0, ∞) in parts (ii) and (iv) (as functions of t -and one has to replace log t by log(t + 2)), respectively. Theorem 1 then says that the norm of those expressions in the qoutient space L ∞ /L ∞ 0 (where L ∞ 0 denotes the subspace of functions essentially tending to zero as p → 1+ or t → +∞, respectively) is equivalent to the norm of H f in S Dixm /S Dixm 0 .
3 Adding f BM O = H f to the quotient norms from the previous footnote produces already norms equivalent to H f Dixm S + |f (0)|, by the Closed Graph Theorem; however, that they are equivalent to the other two norms mentioned in the theorem below seems not so straightforward.
The proofs of Theorem 1 and Theorem 2 are given in Section 3 and Section 4, respectively, after reviewing the necessary prerequisites on Banach limits and Dixmier traces in Section 2. Interpolation of Besov spaces and the proof of Theorem 3 are the subject of Section 5. The proof of Theorem 4 is furnished in Section 6, and some comments and concluding remarks, including Example 5, appear in the final Section 7.
For f a conformal map of the disc onto a Jordan domain Ω ⊂ C, the Hankel operator H f is essentially the "quantum differential" dZ from § IV.3 in Connes [6], where it is also shown that, up to a constant factor, the functional f → tr ω (f |dZ| p ), p > 1, is just the integration against the p-dimensional Hausdorff measure Λ p on ∂Ω. Similarly, [8] (see also [19]) shows that in the weighted Bergman space setting, f | equals the length of ∂Ω, i.e. Λ 1 (∂Ω). It would be interesting to know if there is some kind of connection with Hausdorff measures also for tr ω |H f |.
Banach limits and Dixmier traces
By a Banach limit on N, N = {0, 1, 2, . . . }, we will mean a positive (i.e. taking nonnegative values on sequences whose entries are all nonnegative) continuous linear functional on the sequence space l ∞ = l ∞ (N) which coincides with the ordinary limit on convergent sequences. Similarly, by a Banach limit on R + = (0, +∞), we will mean a positive continuous linear functional on L ∞ (R + ) which coincides with ess-lim t→+∞ whenever the latter exists. Such functionals (in both cases) are easily constructed using the Hahn-Banach theorem. Furthermore, one can get a Banach limit ω # on N from a Banach limit ω on R + by setting (10) ω and, in fact, any Banach limit on N arises in this way (again by the Hahn-Banach theorem). The dilation operator D n , n = 1, 2, 3, . . . , on l ∞ (N) is defined as then the ω # given by (10) will be D ninvariant on N. Given an arbitrary Banach limit ω on R + , its composition ω • M with the Hardy mean will automatically be D a -invariant for any a > 0.
By a Banach limit on R we will mean, by definition, a functional on L ∞ (R) of the form ω(f ) = ω(f • log), where ω is a Banach limit on R + . Thus ω is positive, continuous, and ω(f ) = ess-lim t→+∞ f (t) whenever the limit exists. Note the ω is The existence of (a lot of) Banach limits on R which are simultaneously dilation-, translation-and power-invariant (i.e. ω = ω • T c = ω • D a = ω • P α ∀a, α > 0 ∀c ∈ R) is a consequence of the Markov-Kakutani theorem; see [3]. The following proposition gives a simple recipe to produce translation-and dilation-invariant Banach limits ω on R (and, hence, dilation-and power-invariant Banach limits ω(f ) = ω(f • exp) on R + ).
is a translationand dilation-invariant Banach limit on R.
Proof. We already know that η • M • D a = η • M for any a > 0; since ρ + commutes with D a , it follows immediately that For translation invariance, consider first T c with c > 0. For t > 1, Since 1 y − 1 y−c is integrable over (1 + c, ∞) and f is bounded, we see that the difference of M ρ + T c f (t) and dy y tends to zero as t → +∞. Similarly, replacing the limits in the last integral by t 1 produces an error of order O( 1 log t ) → 0. Thus M ρ + T c f − M ρ + f → 0 as t → +∞, whence ω(T c f ) = ω(f ), proving the T c -invariance for c > 0. For c < 0 and assuming t > 1 + c, the argument is completely analogous.
For ease of notation, we will usually write ω-lim n→∞ f n and ω-lim t→+∞ f (t), instead of ω(f ), for a Banach limit ω on N or R + (or R), respectively, to make it clear which variable ω applies to.
Since the value of a Banach limit depends only on the behaviour of the sequence or function at infinity, we will frequently also take the liberty of applying it to sequences or functions which are undefined or take infinite values for small values of the argument (such as e.g. { 1 log n } n∈N ). For a positive operator T in S Dixm and a Banach limit ω on N, one sets (13) tr ω T = ω-lim n→∞ n j=0 s j (T ) log n .
If ω is D 2 -invariant, one can show that tr ω (A + B) = tr ω (A) + tr ω (B) for any A, B positive. This makes it meaningful to extend tr ω by linearity to all of S Dixm . We refer to [6, § IV.2], [7], [3], [4] and in general to the monograph by Lord, Sukochev and Zanin [13] for further details on the material in this section.
Throughout the rest of this paper, ω will be a Banach limit on R + which is D 2 -and P α -invariant for all α > 1; ω(f ) = ω(f • log) will be the corresponding Banach limit on R; ω # (f ) = ω(f # ) will be the Banach limit on N as in (10); and (abusing the notation slightly) tr ω will be the Dixmier trace given by (13) with ω # in the place of ω.
Proof of Theorem 1
The following proposition is proved in [4,Theorem 4.5] for the special case when H is the spectral counting function of an operator; however, the proof works without changes in general. We include the details here for the convenience of the reader.
In particular, H lim sup is finite if and only if H lim log is.
Proof. For any C > H lim sup , let q 0 > 0 be such that By Hölder's inequality, for any 0 < q < q 0 , If t > e 1/q 0 , we can take q = 1 log t , so that t q /q = e log t; thus 1 log t t 0 H(s) ds ≤ Ce for t > e 1/q 0 , so H lim log ≤ Ce. Hence H lim log ≤ C H lim sup . Conversely, assume that In other words, that is, G(s) ≺ C 1+s in the sense of majorization of Hardy-Littlewood; it therefore follows (see e.g. [2, p. 88]) that for any p > 1, and, by the Lebesgue Monotone Convergence Theorem, Since H(t) = G(t) for t ≥ t 0 , we thus obtain from (14) lim sup implying that H lim sup ≤ H lim log .
The proof below is likewise inspired by the proof of Theorem 4.5 in [4].
Proof of Theorem 1. (i)⇔(v) As recalled in the Introduction, it is known from
Peller [15,Theorem 4.4] that for each p > 1/2, there exists c p ∈ (0, 1) such that (15) c where · p stands for the norm in S p and · (2),p for the Besov seminorm (1) with k = 2. Now since both S p and B p , 0 < p < ∞, form an interpolation scale under complex interpolation, it follows by interpolation that one can even get (15) with c p = c independent of p for 1 ≤ p ≤ 2. (See [12, p. 24] for the details; cf. also [19].) Consequently, for some c ∈ (0, 1) independent of p.
On the other hand, it is well known that the limsup on the utmost left and right is equivalent to H f S Dixm . Indeed, first of all, if H f / ∈ S p 0 for some p 0 > 1, then, since S p increase with p and S Dixm ⊂ p>1 S p , both H f S Dixm and H f p ∀p ∈ (1, p 0 ) are infinite; thus we may assume that H f ∈ S p ∀p > 1. By the definition of the norm in S p , is obtained as in (11). Therefore by the last proposition, (i)⇔(ii) It is well known (see e.g. [2, Chapter 2, Proposition 1.8]) that for any function g on a measure space (X, µ), the norm of g in L p (X, µ) equals the norm of its nonincreasing rearrangement g * (with respect to µ) in L p (0, ∞). For g(z) = (1 − |z| 2 ) 2 f ′′ (z) on (X, µ) = (D, (1 − |z| 2 ) −2 dz), we thus get in particular An application of Proposition 7 (with H = F ) thus shows that (i)⇔(ii) and the corresponding quantities are equivalent.
(i)⇔(iii) Using one more time the equality of the L p -norms of a function and of its nonincreasing rearrangement, we see that (5)), which is known to be equivalent, for each p > 1/2, to the norm |f (0)| + |f ′ (0)| + f (2),p in B p ([17, Appendix 2, Section 6]). Appealing again to the fact that B p form an interpolation scale under complex interpolation, we can get (as in the proof of (i)⇔(v) above) the equivalence constants uniform in any compact subinterval of ( 1 2 , ∞), in particular, for 1 ≤ p ≤ 2. Multiplying by (p−1) and taking lim sup pց1 , the equivalence of the quantities in (i) and (iii) follows.
Proof of Theorem 2
We again closely parallel the proofs of Proposition 4.3 and Theorem 4.11 in [4], especially for parts (a) and (b) below. Proof. Observe first of all that by Hölder, for any p ∈ (1, p 0 ) and t > 0, so that H ∈ L 1 (0, t) ∀t > 0. Likewise, as H is nonincreasing, it follows from H ∈ L p (0, ∞) that lim t→+∞ H(t) = 0; thus µ H is finite on (0, ∞).
(a) Assume to the contrary that there exist t n ր +∞, t n ≥ 2, such that µ H (1/t n ) > ct n log t n . Then H(s) > 1/t n for 0 < s ≤ ct n log t n , and so (19) ct n log t n 0 H(s) ds ≥ ct n log t n t n = c log t n .
On the other hand, choosing δ > 0 such that c − δ > c H , we have for all n sufficiently large, as well as for all n sufficiently large. Thus for n large enough, Indeed, this is obvious for t ≤ µ H (1/t), while for s > µ H (1/t) one has H(s) ≤ 1/t so that
H(s) ds
where in the last term we used the P α -invariance of ω (and the equality log t α = α log t). Since α > 1 was arbitrary, (b) follows.
(c) Set for brevity T := µ H (1). Since Consequently, In view of part (b), the desired conclusion (c) follows.
Setting p = 1 + 1 r , dividing by r and applying ω, (22) gives the equivalence of the quantities in (i) and (iii) (with the same constant c), while (21) shows that the quantity in (i) is equivalent (still with the same constant c) to However, applying part (c) of Proposition 8 to the function H in (16), and arguing as in (17), shows that (23) equals proving the equivalence of (i) and (v), again still with the same constant c as in (21) above. Since neither (21) nor (22) involve ω in any way, this constant is thus independent of ω.
Logarithmic interpolation of Besov spaces
It is possible to give an alternative proof of the part (iii)⇔(v) of Theorem 1, i.e. First of all, if F is any interpolation functor and 1 < p < ∞, then it is known that where for a symmetric sequence space E on N, S E denotes the space of operators T whose singular value sequence {s j (T )} j∈N belongs to E (equipped with the norm T S E := {s j (T )} E ). For the special case when F is the real interpolation functor F(A 0 , A 1 ) = (A 0 , A 1 ) θ,q , this was proved already by Peller [16] (see also [17], Chapter 6, §4); the general case is conveniently summarized for our purposes in §2 of Krepkogorskii [10]. Likewise, one finds in §4 of [10] that, for the function Φ = (f * W · ) * from Theorems 1 and 2, (this is in fact stated there in (3) of §4 for the full Besov spaces B 1/p pp , but the result for the holomorphic subspaces B p follows by the standard theorem on interpolation of subspaces -see the penultimate displayed formula on p. 24 in [10]).
Next, if A 0 , A 1 are any (quasi-)Banach spaces that are both continuously contained in some topological vector space, recall that the K-functional of Peetre is defined on the algebraic sum A 0 + A 1 by Then by general theory, (A 0 , A 1 ) → (A 0 , A 1 ) log is an interpolation functor, and on any σ-finite measure space (an example of the Lorentz-Zygmund spaces, more precisely, the Marcinkiewicz (or Lorentz) space associated to the quasiconcave function t/log(2 + t), see [2], p. 69; the supremum gives the norm in L Dixm ), while where L stands for the space of all bounded linear operators; here the first equality is immediate from the well-known formula for the second see e.g. Cobos et al. [5]. Unfortunately, this is not directly applicable in our case, as one cannot take p = ∞ in (24) and (25). This can be circumvented by interpolating the pair (L 1 , L 2 ) instead.
Proof. Denote temporarily, for brevity, (L 1 , L 2 ) log =: Y. It is a result of Holmstedt [9,Theorem 4.1] that the K-functional for the pair (L 1 , L 2 ) satisfies where as previously f * denotes the nonincreasing rearrangement of f .
Conversely, let f ∈ L Dixm , so Then, first of all, Secondly, since f * is nonincreasing, (26) implies that On the other hand, since Thus from (28) and (29) Together with (27), this implies that f ∈ Y and L Dixm ⊂ Y continuously.
Proof of Theorem 3. Taking p = 2 in (24) and (25) yields with equivalence of norms, proving the claim.
Proof of Theorem 4
Consider the case of a lacunary series where c m is a nonincreasing sequence of positive numbers. Then f * W n (z) = c n z 2 n and the nonincreasing rearrangement is given by By Theorem 1, H f ∈ S Dixm if and only if t 0 Φ(s) ds = O(log t) as t → +∞, and by Theorem 2, for any dilation-and power-invariant Banach limit ω on R + , for some c ∈ (0, 1) independent of ω and f . Clearly, We prove Theorem 4 by constructing a nonincreasing sequence c k and two dilationand power-invariant Banach limits ω 1 , ω 2 on R + such that, firstly, implying that H f ∈ S Dixm and ω 1 -lim 1 log t t 0 Φ = 1 log 2 ω 1 -lim σ k k and similarly for ω 2 ; and secondly, Then by (30) tr ω 1 |H f | > tr ω 2 |H f |, establishing the nonmeasurability of |H f |.
Let us now give the details of the construction.
Proof of Theorem 4. Let A > B > 0, C > 0, a > 1 be constants to be specified later, and set Define c j by By the mean value theorem, 2 j c j = σ ′ (j + θ j ) for some θ j ∈ [0, 1], and Since by a short computation σ ′′ (x) = O(1/(x log x)), while Thus for all j large enough -say, j ≥ j 0 ≥ 3 -we will have c j+1 ≤ c j . Redefining c j to be equal to c j 0 for 0 ≤ j < j 0 and choosing we thus obtain a positive nonincreasing sequence c j , still given by (34) for j ≥ j 0 , and satisfying σ k ≡ k j=0 2 j c j = σ(k) ∀k ≥ j 0 .
Let us compute the Hardy mean (12) Clearly f → η(f • b j ) is a Banach limit on R + , thus by Proposition 6 ω 1 and ω 2 are dilation-and power-invariant Banach limits on R + . Since η reduces to the ordinary limit on a convergent sequence, we get from (35) where we have denoted for brevity q := log 2 a 1+log 2 a . Take now B = (1−δ)A, a = e 1/ √ δ ; then Bq = 1−δ 1+δ A and A + Bq A − Bq = 1 δ .
The limit as p ց 1 of (p−1) f p (1),p was studied by Tytgat [19], who showed that it equals the norm of f ′ in L 1 (T), i.e. the Sobolev W 1,1 norm; see also Triebel [18] and references therein for related results.
For the Besov seminorms f (k),p with k ≥ 3, on the other hand, Theorems 1 and 2 remain in force (with the same proof). The right analogue for k = 1 of the expresssions in Theorems 1(i), 2(i) might be (p − 1) 2 f p (1),p . 7.2 An example. Here is the promised Example 5 from the Introduction. Consider again the case of lacunary series as in Section 6, i.e. f (e iθ ) = ∞ m=0 c m e 2 m iθ , with c m a nonincreasing sequence of positive numbers, and with the nonincreasing rearrangement Φ of f * W · on T × N given by Φ(t) = c j for 2 j − 1 ≤ t < 2 j+1 − 1.
For the "Besov-Lorentz" spaces B pq from the Introduction, we thus get H f ∈ S pq ⇐⇒ f ∈ B pq ⇐⇒ {c k 2 k/p } k∈N ∈ l q , and, by Theorem 1(iii), as already noted in the preceding section, | 2016-01-24T20:21:59.000Z | 2016-01-24T00:00:00.000 | {
"year": 2016,
"sha1": "b61804edaed909173ced98fe875a618f04e6656b",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1601.06428",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b61804edaed909173ced98fe875a618f04e6656b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
244371230 | pes2o/s2orc | v3-fos-license | Digital Integration and Automated Assessment of Eye-Tracking and Emotional Response Data Using the BioSensory App to Maximize Packaging Label Analysis
New and emerging non-invasive digital tools, such as eye-tracking, facial expression and physiological biometrics, have been implemented to extract more objective sensory responses by panelists from packaging and, specifically, labels. However, integrating these technologies from different company providers and software for data acquisition and analysis makes their practical application difficult for research and the industry. This study proposed a prototype integration between eye tracking and emotional biometrics using the BioSensory computer application for three sample labels: Stevia, Potato chips, and Spaghetti. Multivariate data analyses are presented, showing the integrative analysis approach of the proposed prototype system. Further studies can be conducted with this system and integrating other biometrics available, such as physiological response with heart rate, blood, pressure, and temperature changes analyzed while focusing on different label components or packaging features. By maximizing data extraction from various components of packaging and labels, smart predictive systems can also be implemented, such as machine learning to assess liking and other parameters of interest from the whole package and specific components.
Introduction
Packaging and labels are the first points of contact between food and beverage products with consumers. Around 95% of food and beverage products that do not have consumer preference assessments for packaging will probably fail in the market [1]. The implementation of new and emerging digital technologies for sensory analysis of food, beverage, and packaging products, such as video acquisition for physiological [2][3][4][5][6], emotional [7][8][9], and eye-tracking data [10][11][12], requires multiple devices from different companies and respective software packages for data acquisition, handling, and analysis [13]. The latter makes the data analysis process more complicated since it requires specialized personnel to simultaneously manage multiple devices and software, making the whole process time-consuming and cost-prohibitive. Hence, many studies focus on only one or a couple of biometrics at most, which are usually recorded independently [6,13].
The integration of several technologies is frequently not straightforward due to proprietary rights from different companies concerning their analysis algorithms or even images (e.g., FLIR for infrared thermal data). One computer application that has already integrated self-reported sensory data with infrared thermal imagery and visible video acquisition is the BioSensory App [14] developed by the Digital Agriculture, Food and Wine Sciences group (DAFW), The University of Melbourne (UoM), Australia. The BioSensory App can obtain, besides the self-reported data, digital information to extract (i) physiological biometrics from video of panelists, such as heart rate, blood pressure, and temperature changes; and (ii) emotional response from videos. The latter is capable of analyzing three head orientation parameters, eight emotions, valence, engagement, 21 different facial movements and 12 emojis that resemble the participants' expressions.
Eye-tracking devices and software have been used as a tool to analyze the gaze of panelists when looking at imagery or video with multiple and varied applications, such as multimedia learning [15], aviation [16], tourism [17], and sports [18], among others. For food and beverages [19,20], eye tracking has been helpful in the research of warning labels on sugar levels [21], healthy labels and food choice [22], fixations in different areas of interest (AOI) [23], packaging design and type [24,25], and more complex situations, such as the influence of soundtracks on visual attention and food choice [26]. Other studies have combined eye tracking with contact sensors, such as electrodermal activity, to assess food perception [27]. However, contact sensors may introduce biases in the analysis due to participants' self-awareness [13,28,29].
Combining eye-tracking and other remote sensing biometrics, such as emotional response, has been used primarily in psychiatric research, with some research interpreting only eye-tracking data with negative emotions [30]. In food and beverage labels, eye-tracking data have been combined with self-reported data such as wine purchase intention [31]. However, combining eye-tracking data with emotional responses based on video analysis using computer vision is rarer and mainly focuses on the overall assessment of the whole label [32].
This study aimed to propose the integration of eye-tracking information and emotional response of sensory panelists to assess specific areas of interest (AOI) of labels, such as images, logos, and nutrition information, among others, and self-reported liking of the overall label. The integration system proposed and trialed relies on the timestamp synchronization between the eye tracker device and the BioSensory App to create digital time tags for automated processing using multivariate data analysis.
Sensory Session Description
A total of 55 participants (44% males, 56% females; 25-50 years old) were recruited from the pool of staff and students from UoM. Power analysis was conducted using the SAS Power and Sample Size 14.1 software (SAS Institute, Cary, NC, USA), the result (1 − β > 0.999; effect size: 0.59) was used to confirm that the number of participants was enough to find significant differences between samples.
The sensory session was conducted in the Faculty of Veterinary and Agricultural Sciences laboratory from UoM and approved by the Human Ethics Advisory Group (Ethics ID: 1545786.2). The sensory laboratory, which was designed according to the ISO 8589 Sensory analysis-General guidance for the design of test rooms, has 20 individual booths with uniform lighting, and each is equipped with a Samsung Galaxy View 18" tablet (Samsung Group, Seoul, Korea) and a Gazepoint GP3 eye tracker (accuracy: 0.5-1.0 degree of visual, frequency: 60 Hz; Gazepoint, Vancouver, BC, Canada). The BioSensory application (App; The University of Melbourne, Parkville, Australia) [14] was used to display the questionnaire and to record videos of participants while evaluating the samples.
Three food labels (Stevia, Potato chips and Spaghetti) with different AOIs (product's name, claims, nutrition facts, net content, nutrition squares, ingredients, image, manufacturer, suggested use, bar code, company logo and product's denomination) were selected randomly and used as samples to test the new system proposed through the integration of eye-tracking and emotional response techniques. The eye tracker was connected to a computer, and the Gazepoint software presenting the slideshow with the samples was displayed in the tablet using RemotePC™ (RemotePC™, Calabasas, CA, USA). Participants were required to do a nine-point calibration between samples and were instructed to see the label for 10 s using the RemotePC App, while the BioSensory App was recording videos in the background. Once the 10 s looking at the label passed, a screen with instructions to switch to the BioSensory App was displayed. To do this, participants were provided with a wireless keyboard to switch between Apps ( Figure 1). Once in the BioSensory App, participants had to rate the label for Overall liking (15 cm non-structured scale) and select the preferred AOI. displayed in the tablet using RemotePC™ (RemotePC™, Calabasas, CA, USA). Participants were required to do a nine-point calibration between samples and were instructed to see the label for 10 s using the RemotePC App, while the BioSensory App was recording videos in the background. Once the 10 s looking at the label passed, a screen with instructions to switch to the BioSensory App was displayed. To do this, participants were provided with a wireless keyboard to switch between Apps ( Figure 1). Once in the BioSensory App, participants had to rate the label for Overall liking (15 cm non-structured scale) and select the preferred AOI.
Biometrics
Videos from participants were acquired using the BioSensory App and analyzed through a computer application developed by the DAFW from UoM based on the Affectiva software development kit (SDK; Affectiva, Boston, MA, USA; Figure 2). The parameters obtained from this analysis were emotions such as (i) joy, (i) fear, (iii) disgust, (iv) Figure 1. A participant during the sensory session in an individual booth equipped with (1) a Samsung 18" Tablet containing the BioSensory App, (2) a GazePoint GP3 eye tracker, (3) a computer connecting the eye tracker, and (4) a keyboard to switch between applications in the tablet. The FLIR infrared camera is also visible on top of the tablet but was not used in this study.
Biometrics
Videos from participants were acquired using the BioSensory App and analyzed through a computer application developed by the DAFW from UoM based on the Affectiva software development kit (SDK; Affectiva, Boston, MA, USA; Figure 2). The parameters obtained from this analysis were emotions such as (i) joy, (i) fear, (iii) disgust, (iv) sadness, (v) anger, (vi) contempt, (vii) valence dimension, (viii) engagement, and (ix) smile facial expression.
sadness, (v) anger, (vi) contempt, (vii) valence dimension, (viii) engagement, and (ix) smile facial expression. Eye-tracking data was analyzed using the Gazepoint analysis software, and the parameters extracted per AOI for each participant were (i) time to first fixation, (ii) time viewed, (iii) fixations number, and (iv) revisits number. Eye-tracking data was analyzed using the Gazepoint analysis software, and the parameters extracted per AOI for each participant were (i) time to first fixation, (ii) time viewed, (iii) fixations number, and (iv) revisits number.
Using the timestamps from both analyses, the emotional responses and eye-tracking data, the values of emotions were matched for each AOI to assess the participant's reactions while viewing each area. Figure S1 in supplementary material shows an example of the emotions elicited per AOI.
Statistical Analysis
Data were analyzed for ANOVA to assess significant differences (p < 0.05) between samples using the Tukey honest significant difference (HSD) post hoc test (α = 0.05). Furthermore, a multivariate data analysis consisting of principal components analysis (PCA) and cluster analysis based on Euclidean distance was conducted using a customized code written in Matlab ® R2021a (Mathworks, Inc., Natick, MA, USA). A matrix was developed to assess significant (p < 0.05) correlations between emotional responses and the eye-tracking parameters using the latter software.
Results and Discussion
The analytical system proposed in this study allows the automated analysis of labels as a whole and to separate analysis from different label components. Below are presented the results from the new applications developed in the form of processed data for eye-tracking information and integrated analysis for eye tracking and emotional response based on videos from participants and computer vision algorithms.
The analyses presented in this paper are an example of how the data may be handled; however, each user of the proposed method would be free to analyze their own data according to their needs. ANOVAs may be conducted to assess differences per AOI as presented in this paper, but also per sample and the interaction of AOIs and samples; this will depend on the aim of the specific study. Figure 3 shows significant differences (p < 0.05) between samples for the overall liking. The chips label was the most liked, with the spaghetti and stevia labels being rated similarly. This may be due to the layout and colors of the labels and/or to the consumers preference for chips over spaghetti and stevia. Using the timestamps from both analyses, the emotional responses and eye-tracking data, the values of emotions were matched for each AOI to assess the participant's reactions while viewing each area. Figure S1 in supplementary material shows an example of the emotions elicited per AOI.
Statistical Analysis
Data were analyzed for ANOVA to assess significant differences (p < 0.05) between samples using the Tukey honest significant difference (HSD) post hoc test (α = 0.05). Furthermore, a multivariate data analysis consisting of principal components analysis (PCA) and cluster analysis based on Euclidean distance was conducted using a customized code written in Matlab ® R2021a (Mathworks, Inc., Natick, MA, USA). A matrix was developed to assess significant (p < 0.05) correlations between emotional responses and the eye-tracking parameters using the latter software.
Results and Discussion
The analytical system proposed in this study allows the automated analysis of labels as a whole and to separate analysis from different label components. Below are presented the results from the new applications developed in the form of processed data for eyetracking information and integrated analysis for eye tracking and emotional response based on videos from participants and computer vision algorithms.
The analyses presented in this paper are an example of how the data may be handled; however, each user of the proposed method would be free to analyze their own data according to their needs. ANOVAs may be conducted to assess differences per AOI as presented in this paper, but also per sample and the interaction of AOIs and samples; this will depend on the aim of the specific study. Figure 3 shows significant differences (p < 0.05) between samples for the overall liking. The chips label was the most liked, with the spaghetti and stevia labels being rated similarly. This may be due to the layout and colors of the labels and/or to the consumers preference for chips over spaghetti and stevia. Table 1 shows the mean and standard error values of the emotional responses for each AOI. There were non-significant differences (p > 0.05) between AOIs for different emotions. However, the variability in standard error (SE) shows some trends that can be used to predict liking among other parameters using machine learning modelling [6,33,34]. Figure 4 shows significant differences (p < 0.05) between samples for both the time to first view and time viewed. The AOI manufacturer was the one that took longer for participants to first view (4.53 s), which means it was the last AOI they see when evaluating the labels. On the contrary, the product's name took the least time to be first viewed (1.28 s), this being the first AOI that participants focus visual attention on the labels analyzed. On the other hand, participants spent the longest time (0.94 s) viewing the suggested use than the other AOIs, with net content being the element they spent the least time (0.06 s). The large SE values were expected due to differences in participants reactions; this is since subconscious responses are being evaluated and stimuli elicit different responses in each individual. Figure 4 shows significant differences (p < 0.05) between samples for both the time to first view and time viewed. The AOI manufacturer was the one that took longer for participants to first view (4.53 s), which means it was the last AOI they see when evaluating the labels. On the contrary, the product's name took the least time to be first viewed (1.28 s), this being the first AOI that participants focus visual attention on the labels analyzed. On the other hand, participants spent the longest time (0.94 s) viewing the suggested use than the other AOIs, with net content being the element they spent the least time (0.06 s). The large SE values were expected due to differences in participants reactions; this is since subconscious responses are being evaluated and stimuli elicit different responses in each individual. In Figure 5, it can be observed that there were significant differences (p < 0.05) between the AOIs for the number of fixations and revisits. Suggested use, nutrition facts, and image were the highest in the number of fixations (4.24, 3.85, and 3.75, respectively), while net content was the lowest (0.56). On the other hand, the image was the AOI with the most revisits (2.02), while net content had the least (0.13). In Figure 5, it can be observed that there were significant differences (p < 0.05) between the AOIs for the number of fixations and revisits. Suggested use, nutrition facts, and image were the highest in the number of fixations (4.24, 3.85, and 3.75, respectively), while net content was the lowest (0.56). On the other hand, the image was the AOI with the most revisits (2.02), while net content had the least (0.13). Figure 6 shows the combined data from eye trackers and emotional responses. Figure 6a shows that considering the first two principal components (PC), the PCA represented a total of 61. The preferred AOI was positively related to fear, disgust, and number of revisits and negatively related to time to first view. Revisits number, fixations number, and time viewed had a positive relationship among them and disgust. Associated with these were the AOIs nutrition facts, image, and product name. This association coincides with results reported in an eye-tracking study to evaluate olive oil dressing labels, in which higher fixations were found for product's name and image [25] and an eye-tracking study with organic food labels in which visual attention was higher when viewing the image [35]. On the other hand, time to first view was positively related to contempt, with AOIs manufacturer, bar code, company logo, and associated claims. Net content AOI was related to engagement, joy, smile, and valence. The other AOIs were more ambiguous as they are located more towards the center for the PCA. However, in Figure 6b, there are three main clusters, one of them with four subclusters. Product name, nutrition facts, and image conform one cluster; net content is independent of the other AOIs. The third cluster is composed of subgroups as (i) manufacturer, suggested use and bar code, (ii) product denomination, (iii) nutrition squares and ingredients, and (iv) company logo and claims. Figure 7 shows there were positive significant correlations (p < 0.05) between disgust and time viewed (r = 0.58), fixations number (r = 0.67), revisits number (r = 0.76), and preferred AOI (r = 0.74). Similar results were found by Schienle et al. [36]; in their study, participants had a higher number of fixations when evaluating disgust images. Furthermore, disgust was negatively correlated with time to first view (r = −0.63). Whilst contempt was positively correlated with time to first view (r = 0.62). The preferred AOI had a positive correlation with fixations number (r = 0.58) and revisits number (r = 0.70). Engagement was positively correlated with smile (r = 0.74) and joy (r = 0.83) as expected. The latter was also correlated with valence (r = 0.80) and smile (r = 0.93). The correlation between valence, smile, and joy, also found in the PCA (Figure 6a), was expected as a positive valence is a measure of happiness [37].
Integration and Analysis of Eye-Tracking and Emotional Response
The BioSensory App used in this study was further developed through specific software modules for the post-analysis of videos acquired from panelists. One of those mod-
Integration and Analysis of Eye-Tracking and Emotional Response
The BioSensory App used in this study was further developed through specific software modules for the post-analysis of videos acquired from panelists. One of those modules dealt with the integrated analysis of eye-tracking and emotional response output data by analyzing it based on timestamps and through a customized multivariate data analysis code for principal component (Figure 6a), cluster (Figure 6b), and correlation (Figure 7) analysis.
The use of multivariate data analysis such as PCA for the proposed system outputs to assess AOIs in labels may render critical information that may be picked up by the methods used separately. This may provide an overview of the specific AOIs from the labels that could require modifications in the design to satisfy consumers and, therefore, increase the overall acceptability of the labels. This is an advantage of the proposed system since the integrated method provides more precise information from consumers than traditional methods that use separate measures and focus on the overall emotional responses or other biometrics such as skin conductance elicited by the entire label [10,12,27]. This leads developers to fully redesign labels that may not be optimal to satisfy consumers and is more time-consuming and less cost-effective.
Not only self-reported data and emotional response can be integrated using the methodology proposed in this study, but also further digital data can be obtained with the BioSensory App system, such as physiological response based on heart rate, blood pressure, and temperature changes from panelists. The latter data were not presented in this study to avoid overcomplication of information presented. However, extra information can be used for more complex modelling strategies using artificial intelligence (AI).
The proposed system allows further analysis and the development of prediction models using machine learning techniques based on biometrics. The latter approach has been used in the case of consumer acceptability based on visual evaluation of beer pouring videos using eye-tracking, emotional and physiological responses [34] and for consumers acceptability towards beer tasting using biometrics such as emotions, heart rate, and body temperature [33]. Other authors have used machine learning modelling to predict food choice using eye-tracking gaze data when evaluating food images [38] and to predict participants age from their gaze patterns [39]. These digital and AI tools can be implemented in the design stage of packaging and labels rendering images or 3D representation of the same on screens for panelists or potential consumers. This could expedite the design and modification process since modifications can be readily assessed and applied digitally for immediate re-rendering. The latter will avoid the requirement of further sensory sessions and reduce costs. Previous research has shown that sensory analysis and liking of packaging and labels do not have statistical differences when packaging is rendered digitally on a screen compared to 3D physical prototypes for panelists to handle [40].
Conclusions
Further development of the BioSensory computer application has helped maximize the extraction of information from packaging and labels. The proposed system not only applies to the packaging and labels, but it can also give more specific information about the different components or areas of interest (AOI) and the overall acceptability of the products. A potential future application using artificial intelligence can be developed to assess which components are liked by consumers and which require modifications only from eye-tracking, facial expressions, and further biometrics. This AI system could expedite packaging design and secure the success of food and beverage products in the market.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/s21227641/s1, Figure S1. Example of a heatmap from a label showing the different emotions elicited in consumers by each area of interest. In the top left, the identified eye section of participant is shown. The label has been blurred to hide brands and participant's identity. | 2021-11-19T16:17:35.784Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "a07b3ad07dbf4b5ee9e0557fd4098b124c8f41de",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/22/7641/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "613a94e75e22858f24215f85c711e2f820f5dcc2",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
56188913 | pes2o/s2orc | v3-fos-license | Study of the gp-->etap reaction with the Crystal Ball detector at the Mainz Microtron(MAMI-C)
The gp-->etap reaction has been measured with the Crystal Ball and TAPS multiphoton spectrometers in the energy range from the production threshold of 707 MeV to 1.4 GeV (1.49 == 1.87 GeV). Bremsstrahlung photons produced by the 1.5-GeV electron beam of the Mainz Microtron MAMI-C and momentum analyzed by the Glasgow Tagging Spectrometer were used for the eta-meson production. Our accumulation of 3.8 x 10^6 gp-->etap-->3pi0p-->6gp events allows a detailed study of the reaction dynamics. The gp-->etap differential cross sections were determined for 120 energy bins and the full range of the production angles. Our data show a dip near W = 1680 MeV in the total cross section caused by a substantial dip in eta production at forward angles. The data are compared to predictions of previous SAID and MAID partial-wave analyses and to thelatest SAID and MAID fits that have included our data.
I. INTRODUCTION
The N * family of nucleon resonances has many well established members [1], several of which exhibit overlapping pole positions, very similar masses and widths, but different J P spin-parity values. Apart from the N (1535)1/2 − state, the known photo-decay amplitudes have been determined from analyses of single-pion photoproduction, so the ηN branching ratios are in general poorly known.
New, high quality data on γp → ηp are needed to shed light on these issues, and the tagged-photon hall at Mainz offers a state-of-the-art facility to obtain such data. Here we report on a new differential-cross-section measurement, covering incident photon energies from threshold (E γ = 707 MeV) up to E γ = 1400 MeV. The accumulation of 3.8 × 10 6 events for the process γp → ηp → 3π 0 p → 6γp has enabled the data to be binned finely in E γ (bin widths as small as ∼ 4 MeV) and in η production angle, which in the c.m. frame is fully covered. The present measurement is part of an extensive program at the Mainz Microtron to provide data of unrivaled quality on neutral meson photoproduction, which includes polarized beam and target observables in addition to cross sections.
Our energy range includes several well-established resonances and also some more questionable ones. Indeed, the excellent photon-energy resolution offers the potential to illuminate any narrow states, possibly of exotic structure. Most of the states presently covered appear to have very small coupling to the ηN channel, and this in itself can be puzzling. For example, it is unclear why the ηN branching ratio is so small for the second S 11 , N (1650)1/2 − , compared to the first N (1535)1/2 − . The data available for π − p → ηn are inadequate to study this question [2,3]. The reason for a small branching ratio of N (1520)3/2 − to ηN [1,4] has to be understood, too. The Particle-Data-Group (PDG) estimate for the A 3/2 decay amplitude of the N (1720)3/2 + state is consistent with zero, while the recent SAID determination gives a small but non-vanishing value [5]. The reason for the disagreement between the PDG estimate for the A 1/2 decay amplitude and the recent SAID determination [5] is also unclear. Other unresolved issues relate to the second P 11 and D 13 resonances [N (1710)1/2 + and N (1700)3/2 − ] that are not seen in the recent πN partialwave analysis (PWA) [2], contrary to other PWAs used by the Particle Data Group [1]. The ηN decay channel could be more favorable than πN for these states. The present data should have sufficient precision to allow reliable extraction of the ηN partial waves for these resonances, which will enhance our understanding of their internal dynamics. In addition, since the present data have good coverage of the ηp-threshold region, the Swave dominance of the threshold behavior can also be checked.
The paper is laid out in the following manner: the experimental setup is briefly described in Sec. II; the procedure to determine the differential cross sections is described in Sec. III; the estimation of our systematic uncertainties is given in Sec. IV; the experimental results are presented in Sec. V; analyses of the data in terms of SAID and MAID are described in Sec. VI; finally, the findings of our study are summarized in Sec. VII.
Since the present data on γp → ηp → 3π 0 p were also used in the determination of the slope parameter α for the η → 3π 0 decay [6], a more detailed description of the experiment and data handling can be found in Ref. [6].
The CB spectrometer is a sphere consisting of 672 optically insulated NaI(Tl) crystals, shaped as truncated triangular pyramids, which point toward the center of the sphere. Each NaI(Tl) crystal is 41 cm long, which corresponds to 15.7 radiation lengths. The crystals are arranged in two hemispheres that cover 93% of 4π sr, sitting outside a central spherical cavity with a radius of 25 cm, which is designed to hold the target and inner detectors. To allow passage of the beam, the regions of polar angle below 20 • and above 160 • are not populated. The energy resolution for electromagnetic showers in the CB can be described as ∆E/E = 0.020/(E[GeV]) 0.36 . Shower directions are determined with a resolution in θ, the polar angle with respect to the beam axis, of σ θ = 2 • -3 • , under the assumption that the photons are produced in the center of the CB. The resolution in the azimuthal angle φ is σ θ / sin θ.
To cover the forward aperture of the CB, the TAPS calorimeter [8,9] was installed 1.5 m downstream of the CB center. TAPS geometry is flexible and, for the present A2 experiment, it was configured as a "plug" for the forward-angle hole in CB acceptance. In this experiment, TAPS was arranged in a plane consisting of 384 BaF 2 counters of hexagonal cross section, with an inner diameter of 5.9 cm and a length of 25 cm, which corresponds to 12 radiation lengths. One counter was removed from the center of the array, which has an over-all, hexagonal geometry in the x-y plane, to allow the passage of the photon beam. TAPS subtends the full azimuthal range for polar angles from 1 • to 20 • . The energy resolution for electromagnetic showers in the TAPS calorimeter can be described as ∆E/E = 0.018 + 0.008/(E[GeV]) 0.5 . Because of the relatively long distance from the CB, the resolution of TAPS in the polar angle θ was better than 1 • . The resolution of TAPS in the azimuthal angle φ is better than 1/R radian, where R is the distance in centimeters from the TAPS center to the point on the TAPS surface that corresponds to the θ angle.
The upgraded Mainz Microtron, MAMI-C, is a four stage accelerator, and its latest addition (the fourth stage) is a harmonic double-sided electron accelerator [11]. An electron-beam energy of 1508 MeV was used for the present experiment. Bremsstrahlung photons, produced by electrons in a 10-µm Cu radiator and collimated by a 4-mm-diameter Pb collimator, were incident on a 5-cm-long liquid hydrogen (LH 2 ) target located in the center of the CB. The energies of the incident photons were analyzed up to 1402 MeV by detecting the post-bremsstrahlung electrons in the Glasgow Tagger [12][13][14]. The Tagger is a broad-momentumband, magnetic-dipole spectrometer that focuses postbremsstrahlung electrons onto a focal-plane detector, consisting of 353 half-overlapping plastic scintillators. The energy resolution of the tagged photons is mostly defined by the overlap region of two adjacent scintillation counters (a tagger channel) and the electron beam energy. For a beam energy of 1508 MeV, a tagger channel has a width about 2 MeV at 1402 MeV and about 4 MeV at 707 MeV (the η-production threshold). Tagged photons are selected in the analysis by examining the correlation in time between a tagger channel and the experimental trigger derived from CB signals.
The LH 2 target is surrounded by a particle identification (PID) detector [15] which is a cylinder of length 50 cm and diameter 12 cm, built from 24 identical plastic scintillator segments, of thickness 0.4 cm. In conjunction with the CB, this identifies charged particles by the ∆E/E technique, although this facility was not used in the present analysis.
The experimental trigger had two main requirements. First, the sum of the pulse amplitudes from the CB crystals had to exceed a hardware threshold that corresponded to an energy deposit of ∼ 320 MeV. Second, the number of "hardware" clusters in the CB had to be larger than 2. A "hardware" cluster is a group of 16 adjacent crystals in which at least one crystal has an energy deposit larger than 30 MeV.
III. DATA HANDLING
The photoproduction of η was measured by using the 3π 0 decay mode of this meson: The other main neutral decay mode, η → γγ, was not used in this measurement because a large number of η → γγ events did not satisfy the experimental-trigger requirements. The acceptance determination for the twophoton final state then becomes highly sensitive to the trigger efficiency, calculated using a Monte Carlo (MC) simulation. On the contrary, the trigger efficiency for η → 3π 0 events is close to 100%, so the systematic uncertainty in the acceptance caused by the trigger simulation is small. Additionally, the angular resolution for η → γγ events is worse than for η → 3π 0 , and there is a substantial background from the γp → π 0 p reaction, which requires a careful subtraction. Process (1) was investigated by analysis of events having six and seven "software" clusters reconstructed in both the CB and TAPS. The six-cluster sample was used to search for the events in which only six photons were detected, while for seven clusters the recoiling proton was also detected.
The kinematic-fitting technique was used to test the reaction hypotheses needed in our analysis and to select good candidates for the events of interest. The details of our parametrization of the detector information and resolutions are given in Ref. [6]. The events that satisfied the hypothesis of reaction (1) at the 2% confidence level, CL, (i.e., with a probability of misinterpretation less than 2%) were accepted as η → 3π 0 candidates. The kinematic-fit output was then used to reconstruct the kinematics of the reaction. Since each event in general included several tagger hits (due to the high rates in the tagger detector), the γp → ηp → 3π 0 p → 6γp hypothesis was tested for each tagger hit. Selection was based on the hit time which was required to be within a selected window (detailed below) and the equivalent photon energy, which was required to be above the reaction threshold of 707 MeV. The taggerhit time distribution for η → 3π 0 event candidates is shown in Fig. 1(a). If an η → 3π 0 event candidate from one trigger passed the 2% CL criterion for several tagger hits, they were analyzed as separate events. The width of the tagger-hit window was chosen to be substantially wider (80 ns for this analysis) than the peak caused by prompt coincidences between the tagger and trigger. The width of the prompt window, denoted by vertical lines in Fig. 1(a), was taken to be 10 ns in order to include all prompt events. Using a wider window for the random coincidences allowed the collection of a sufficient number of events to determine precisely the random-background distribution beneath the prompt peak. The experimental distributions analyzed with the pure random events were then used to subtract the random background from the prompt-plus-random event sample. For our experimental conditions and for the chosen tagger-hit window, 40% of all event candidates were selected for more than one tagger hit. Since, for an event, there can be only one prompt tagger hit with the proper E γ , there is no double counting of good events in the distributions with prompt candidates.
The Monte Carlo simulation of the γp → ηp → 3π 0 p reaction that was used for the determination of the differential cross sections assumed an isotropic distribution of the production angle and independence of the reaction yield on the incident-photon energy. The simulation of the η → 3π 0 decay was made according to phase space. The small deviation of the actual η → 3π 0 decays from phase space was not significant in our analysis. All MC events were propagated through a GEANT (version 3.21) simulation of the CB-TAPS detector, folded with resolutions of the detectors and conditions of the trigger. The resultant simulated data were analyzed in the same way as the experimental data. The resulting detector acceptance for the γp → ηp → 3π 0 p events selected by the kinematic fit at the 2% CL is shown in Fig. 1(b) as a function of the incident-photon energy, E γ . It varies from about 45% at the η threshold to about 25% at an E γ of 1.4 GeV. The agreement between the various experimental spectra and the spectra from the MC simulation has been illustrated in Ref. [6].
Besides the random-coincidence background, there are two more background sources. The first one comes from interactions of incident photons with the target walls, which was investigated by analyzing the data taken when the target was empty. It was determined that the fraction of the empty-target background that remained in our η → 3π 0 event candidates after applying the 2%-CL cut varied from 1% at reaction threshold to 2.7% at a beam energy of 1.4 GeV. The background spectra determined from the analysis of the empty-target samples were then subtracted from our full-target spectra.
Another source of background is the set of γp → 3π 0 p events that are not produced by η → 3π 0 decays. When the invariant mass of the three neutral pions in the final state is sufficiently close to the mass of the η meson, those events are selected as η → 3π 0 candidates. A phase-space simulation of γp → 3π 0 p was used for the subtraction of this background. The fraction of this direct 3π 0 background in our η → 3π 0 candidates was determined in each photon-energy bin via the normalization of the MC simulation for γp → 3π 0 p to the corresponding experimental spectra (Fig. 2). The 3π 0 invariant-mass distributions obtained from kinematic fitting to the γp → 3π 0 p hypothesis were used for this purpose. In Fig. 2, these distributions are shown for the experimental data and MC simulation at incidentphoton energies between 1150 MeV and 1200 MeV. The experimental spectrum after the subtraction of randomcoincidence and empty-target backgrounds is shown in Fig. 2(a). The experimental events that were then selected as η → 3π 0 candidates are shown in Fig. 2(b). The corresponding spectra obtained for the MC simulation of γp → 3π 0 p are shown in Figs. 2(c) and (d). The MC-simulation spectrum in Fig. 2(c) has a shape very similar to that of the experimental spectrum under the η peak in Fig. 2(a), and its normalization to the data provides a good estimate of the direct 3π 0 background in the η → 3π 0 candidates. The fraction of direct 3π 0 background that was subtracted from our experimental spectra was found to vary from 0.3% at the η threshold to 4.4% at 1.4 GeV.
The large number of events accumulated allowed the division of the data into 120 bins in E γ . From the reaction threshold to an E γ of 1008 MeV, the bin width was that of a single tagger channel (∼ 4 MeV). From 1008 to 1238 MeV, two tagger channels were combined to a single energy bin. Above 1238 MeV, an energy bin included from three to eight tagger channels. The γp → ηp differential cross sections were determined as a function of cos θ, where θ is the polar angle of the η direction in the c.m. frame. The cos θ spectra at all energies were divided into 20 bins.
The determination of the differential cross sections is illustrated in Figs. 3 and 4 for incident-photon energies of 760 and 1060 MeV, respectively. The experimental dis- . Since the simulated angular distribution is isotropic, the MC spectra reflect the experimental acceptance as a function of cos θ. The change in the acceptance with the incident-photon energy is caused by a more pronounced Lorentz boosting to forward angles, so that a larger fraction of particles impinge upon the overlap region between the CB and TAPS, where the metal framework of the CB aperture reduces detection efficiency. To obtain the γp → ηp differential cross sections, the experimental distributions were normalized for the acceptance, the η → 3π 0 branching ratio, the photon beam flux, and the number of target protons. These differential cross sections are shown in Figs. 3(c) and 4(c). The results are very close to the predictions of SAID [16] for the γp → ηp differential cross sections at the given energies. These predictions are shown by the solid lines in the same figures.
IV. SYSTEMATIC UNCERTAINTIES
The results presented for the total, σ t (γp → ηp), and differential cross sections include only statistical uncertainties. The largest contributions to the systematic uncertainty come from the calculation of the experimental acceptance by the MC simulation and from the determination of the photon-beam flux. A good test of the MC simulation is the determination of the γp → ηp differential cross sections using the two different modes of η decays: η → 3π 0 and η → γγ. As discussed above, the data used in the present analysis were taken with the trigger suppressing the events with low cluster multiplicity in the final state. To perform our test, we used a data sample that contained much fewer events, taken at a later stage with an almost open trigger. The results obtained for the γp → ηp total cross sections from the two different decay modes of η are in good agreement within their statistical uncertainties, the magnitude of which are ∼ 2% for η → 3π 0 and ∼ 1% for η → γγ. They also agree with the high-precision γp → ηp results presented here. Based on the comparison of our own results with each other and with the existing data in the region of the N (1535)1/2 − (the most well-known region), the general systematic uncertainty in our γp → ηp cross sections was estimated to be 4%. To take into account the statistical uncertainties in the estimation of the tagging efficiency of every individual tagger channel, used for the photon-flux calculation, those uncertainties were added in quadrature with our general systematic uncertainty. The typical magnitudes of the statistical uncertainties in the tagging efficiencies of the tagger channels for our data are between 1.4% at the η threshold and 2.5% at the largest energies. For every post-bremsstrahlung electron detected by a tagger channel, the tagging efficiency reflects a probability of the corresponding bremsstrahlung photon to pass through the photon collimator and to reach the target. The typical tagging efficiency of the tagger channels in the present experiment varied between 67% − 71%.
V. RESULTS
Since our results for the γp → ηp differential cross sections consist of 2400 experimental points, they are not tabulated in this publication, but are available in the SAID database [16] along with their uncertainties and the energy binning. In this section, we compare our results to the world data set.
In Fig. 5, our differential cross sections for four incident-photon energies are compared to previous measurements made at similar energies [17][18][19][20][21][22][23]. Some of these measurements [19,23,24] are quite recent, demonstrating the general desire of the resonance-physics community to obtain new γN → ηN data, which are needed for a better determination of the properties of the N * states. The lowest energy shown, E γ = 714.5 MeV (W = 1490.3 MeV), is close to the η-production threshold. The second energy, E γ = 772.9 MeV (W = 1526.7 MeV), is at the maximum of the total cross section. The third energy, E γ = 1026.8 MeV (W = 1675.4 MeV), is at a local minimum of the total cross section. The last energy shown, E γ = 1376.2 MeV (W = 1860.9 MeV) is close to the maximum of our incident-photon energy range. As seen in Fig. 5, all our results are in reasonable agreement with the previous measurements, but our statistical uncertain-ties are much smaller and the energy binning much finer. Larger discrepancies are observed between the data obtained close to the η-production threshold, but this can be explained by the difference in the energy binning of the data sets, bearing in mind the rapidly rising cross section close to threshold. The present total cross sections for γp → ηp are obtained by integration of the differential cross sections. In Fig. 6, our total cross sections are compared with previous measurements [17][18][19][20][21][22][23] over the full energy range presently measured. A part of this distribution is repeated in Fig. 7(a), showing the range from the threshold to the N (1535)1/2 − maximum in more detail. Our results for the total cross sections are in general agreement with the major previous results. The energy range lying above a c.m. energy of 1640 MeV (see also Fig. 7(b)) is especially important for untangling the six overlapping N * states and investigation of a possible narrow N * state in the mass range ∼ 1680 MeV. The N * (1680) was ex-tracted in Ref. [25] from the πN PWA and suggested to be a member of the exotic anti-decuplet. Indeed a resonant bump at ∼ 1680 MeV is observed in quasi-free γn → ηn [26][27][28]. However, for this reaction, the measured width of the bump was dominated by the experi-mental energy resolution. Inspection of our σ t (γp → ηp) energy dependence shows no evidence for a narrow bump related to a N * (1680) state in η photoproduction on a free proton. Rather our data show the existence of a shallow dip near W = 1680 MeV. However, such a situation may not contradict the existence of a narrow bump in η photoproduction on a neutron, since γn and γp couplings of the N * (1680) can be essentially different (as for an anti-decuplet member).
The full angular coverage of our differential cross sections allied with the small statistical uncertainties allows a reliable determination of the Legendre coefficients A i ,which was difficult to do with the previous data. This unprecedented detail of the energy dependence of the Legendre coefficients will be indispensable in untangling the properties of the N * states lying in the present energy range. In Fig. 8, we illustrate Legendre coefficients A 1 − A 3 (higher orders are relatively insignificant) as a function of the c.m. energy. The swing in A 1 from negative to positive values in the vicinity of W = 1680 MeV is intriguing. Since the first coefficient, A 0 , simply reflects the magnitude of the total cross section, it is not shown.
VI. IMPACT OF THE DATA ON PWA
To gauge the influence of our data and their compatibility with previous measurements, our differential cross sections have been included in a number of fits using the full SAID database for γp → ηp up to E γ = 2.9 GeV. The impact of our data on the SAID PWA can be understood from the comparison of the new SAID fit GE09, which involves our data, with the previous SAID fit E429 [16]. The other data included in the GE09 fit involve all previously published data except recent CLAS-g11a [19], CB-ELSA/TAPS [23], and LEPS [24] differential cross sections. Our data were also included in the PWA under the Reggeized η-MAID model (Regge-MAID) [30] that was extended to a photon energy E γ = 3.7 GeV (W = 2.8 GeV) by adding new resonances in the s-channel. Besides the resonances used in the original Regge-MAID [30], the new model includes five additional states from the fourth resonance group, namely N (1900)3/2 + , N (2000)5/2 + , N (2080)3/2 − , N (2090)1/2 − , and N (2100)1/2 + , which are needed to describe the latest data from CLAS-g11a [19] and CB-ELSA/TAPS [23]. The influence of these five states on the description of our data is very small. For the MAID solution without our data, we choose the η-MAID fit [29], in which E γ is limited to 1.9 GeV (W < 2.1 GeV). The η-MAID analysis involves only the data published up to 2002. The details of the new SAID and Regge-MAID PWAs will be the subject of future publications.
To search for the minimum χ 2 value in the SAID fits, an overall rescaling of the differential cross sections was permitted within limits specified by the experimental systematic uncertainties [31]. A similar rescaling of the data, but without possible adjustment of the partial waves, was applied in the η-MAID and Regge-MAID fits. Comparison of the χ 2 values from the two SAID fits, E429 and GE09, and from the two MAID fits, η-MAID and new Regge-MAID, is given in Table I. The separate contri- butions of individual data sets to the total χ 2 value are listed for each of the four γp → ηp analyses in Table II.
These indicate that the more recent data sets display a greater degree of consistency. However, the description of the CLAS-g11a data is worse with the new fit GE09, compared to the previous solution E429. Although in the overlapping energy range W = 1690 to 1875 MeV, our data and the CLAS-g11a data are in good agreement. In Fig. 9, we show our differential cross sections at 40 [19] energies and compare them with the results of each of the four PWA fits. In Fig. 10, a similar comparison is made for the excitation functions for eight production angles and for the full angular range. The number of the distributions shown is enough to illustrate the quality of our data, the main features of the γp → ηp dynamics at the measured energy range, and the impact of the present data on PWAs. The most noticeable effect of the present data on the new GE09 and Regge-MAID is due to very good measurements of the forward-angle cross sections for W in the range between 1545 and 1675 MeV. Earlier, this forward region either had been measured with worse accuracy or could only be reached by extrapolation. For completeness, in Fig. 11 we compare the GE09, E429, η-MAID, and new Regge-MAID solutions for the γp → ηp excitation function at the extreme production angles: forward (θ = 0 • ) and backward (θ = 180 • ). The new data along with the new fits definitely indicate the existence of a dip structure around W = 1670 MeV, which has already been seen in our total cross section (see Fig. 6) and becomes very pronounced at forward production angles of η (see Figs. 10 and 11). This feature was missed or questionable in the analysis of the previous data.
Traditionally, to illustrate resonance masses and widths, the total cross section is plotted as a function of the c.m. energy. As seen in Figs. 6 and 7, the γp → ηp total cross section rises sharply above the reaction threshold. Such behavior is usually attributed to the dominance of the N (1535)1/2 − resonance, having a mass close to the η production threshold (W = 1487 MeV) and a strong coupling to the ηN channel. Generally, the cross section for any process with a two-particle final state has the form [(p*/W) F(W)]. The first factor comes from the phase-space integration, and p * is the final-state relative momentum in the c.m. frame. The second factor F(W) is determined by amplitudes. The essential point is that W and F(W) depend explicitly only on (p * ) 2 , not on p * . In terms of the final-state parameters, W depends on masses and (p * ) 2 . Therefore, the nearthreshold structure of the cross section should look as a series in the odd powers of p * . Our data for γp → ηp are well described up to p * η ∼ 200 MeV/c as a 1 p * η + a 3 (p * η ) 3 with a 1 = (6.79 ± 0.09) × 10 −2 µb/(MeV/c) and a 3 = −(7.24 ± 0.22) × 10 −7 µb/(MeV/c) 3 (see Fig. 12). Contributions to the cubic term come both from P-wave amplitudes and from W -dependence of the S-wave amplitude, which is essential due to the near-theshold dominance of the S-wave resonance N (1535)1/2 − . Note that the characteristic momentum for changes in our fit is |a 1 /a 3 | ∼ 300 MeV/c, while the maximum of the resonance peak corresponds to p * η ∼ 175 MeV/c (see Fig. 12). The good quality of our data reveals itself in very small fluctuations of experimental points with respect to the fit. For comparison, a similar threshold behavior has been also observed in π − p → ηn [4], but the fluctuations there were larger due to lower precision of the data, and the coefficient a 3 could not be determined reliably. In addition, our fit gives implicit confirmation of small coupling of the ηN channel with the D-wave resonance N (1520)3/2 − , which could generate an essential term (p * η ) 5 .
Due to the dominance of the low-energy S-wave multipole, this contribution is nearly model-independent up to W = 1650 MeV. The modulus of the corresponding amplitude is plotted in Fig. 13 for both the SAID and MAID solutions. Phase differences are possible -these can be resolved in coupled-channel fits [32]. Figure 13 also shows the Breit-Wigner parameters, masses and widths, of two S 11 resonances as found in the SAID PWA solution SP06 for the πN elastic scattering [2]. Note that N (1650)1/2 − seems to be purely elastic, i.e., coupled only to the πN channel. If so, its contribution to η photoproduction should be small.
VII. SUMMARY AND CONCLUSIONS
The γp → ηp differential cross sections have been measured at the tagged photon facility of the Mainz Microtron MAMI-C using the Crystal Ball and TAPS spectrometers. The data span the photon-energy range 707 MeV to 1402 MeV and the full angular range in the c.m. frame. The accumulation of 3 × 10 6 γp → ηp → 3π 0 p → 6γp events allows the fine binning of the data in energy and angle, which will enable the reaction dynamics to be studied in greater detail than previously possible. The present data agree well with previous equivalent measurements, but are markedly superior in terms of precision and energy resolution.
The present cross sections for the free proton show no evidence of enhancement in the region W ∼ 1680 MeV, contrary to recent equivalent measurements on the quasifree neutron [26][27][28]. However, this does not exclude the existence of an N * (1680) state as hypothesized in Ref. [25]. In the region around W = 1680 MeV, we rather observe a dip structure that becomes more pronounced at forward production angles of η. This feature was missed or questionable in the analysis of the previous data. The interpretation of this dip depends on dynamics.
Our γp → ηp data points have been included in a new SAID (GE09) and Regge-MAID PWAs, to which they made an substantial contribution, particularly for forward angles. Comparing to the previous SAID fit, E429, and to the η-MAID fit, the description of all existing data by the new solutions, GE09 and Regge-MAID, is more satisfactory in the entire energy range.
We expect that the data presented in this paper will be invaluable for future partial-wave and coupled-channel analyses, in that they can provide much stronger constraints on the properties of the nucleon resonances from our energy region. Notations for the amplitude curves are the same as in Fig. 9. The vertical arrows indicate WR (Breit-Wigner mass) and the horizontal bars show the full and partial width Γ for ΓπN associated with the SAID solution SP06 for πN [2]. | 2010-09-25T15:05:44.000Z | 2010-07-05T00:00:00.000 | {
"year": 2010,
"sha1": "ca211bf47559f40472a328508d69bef9d6cf1ee9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1007.0777",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "213e8fd8b19b7e509d82b97e8a33b1ac26bcab19",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17484707 | pes2o/s2orc | v3-fos-license | Mcm10 proteolysis initiates before the onset of M-phase
Background Mcm10 protein is essential for initiation and elongation phases of replication. Human cells proteolyze Mcm10 during mitosis, presumably to ensure a single round of replication. It has been proposed that anaphase promoting complex ubiquitinates Mcm10 in late M and early G1 phases. Results In contrast to the previous work, we report that the degradation of Mcm10 is initiated at the onset of mitosis. Immunoblotting and immunofluorescence assays display that Mcm10 levels are low in all phases of mitosis. We report that Mcm10 degradation is not dependent on anaphase promoting complex. Further, the proteolysis in M-phase can be independently mediated by non-overlapping regions of Mcm10, apparently employing a redundant mechanism to ensure downregulation. Conclusions It is believed that the proteolysis of Mcm10 during mitosis is a vital mechanism to prevent aberrant initiation of replication and the present study describes the regulation of Mcm10 during this phase of the cell-cycle.
Background
DNA replication in eukaryotes begins with the assembly of the pre-replicative complex comprising of replication initiators, the origin recognition complex, Cdc6, Cdt1, and the replicative helicase, Mcm2-7 complex [1]. Increase in the activity of cyclin dependent kinases and loading of replication factor, Mcm10 marks the transition from G 1 to S phase. These events promote the loading of Cdc45, RPA and DNA polymerases at the replication origins to initiate DNA synthesis. The essential requirement of Mcm10 in DNA replication initiation and elongation has been exhibited across species [2][3][4][5][6][7]. Mutations in Mcm10 cause a decrease in initiation of replication, slow progression of DNA synthesis and stalling of replication forks during elongation [8]. Since Mcm10 is essential for replication initiation and elongation, its activity is regulated in a cell-cycle manner to ensure a single round of replication. S. cerevisiae Mcm10 protein is present in all phases of the cell cycle though its association with chromatin is regulated to ensure replication licensing [3].
Human Mcm10 protein is known to decrease in the early G 1 phase [9]. The levels of MCM10 mRNA have been evaluated as the cells pass from M-phase into G 1 phase. Though the Mcm10 protein decreased, the levels of MCM10 mRNA increased within the same time period. These results demonstrate that the decrease in the Mcm10 activity during the G 1 phase is not due to decrease in transcription but because of protein turnover. In this communication, we report that proteolysis of human Mcm10 protein initiates before the onset of mitosis. In an immunoblot with anti-Mcm10 antibody, we observed that the M-phase blocked cells have downregulated the Mcm10 protein. On the basis of single cell immunofluorescence, we show that asynchronously growing cells in different phases of mitosis have reduced levels of Mcm10. Also, Mcm10 downregulation in Mphase is independent of the APC recognition motifs: the destruction and the KEN box. We demonstrate that Mcm10 degradation is not dependent on anaphase promoting complex or on its recognition motifs but rather mediated by non-overlapping regions, apparently employing a redundant mechanism to ensure downregulation.
Cell-cycle proteolysis of Mcm10 initiates before the onset of mitosis
Published reports indicate that the nocodazole blocked cells retain Mcm10 protein which decreases in late Mphase and starts increasing 6 h after release [9]. When MG132 was added to nocodazole released cells, there was an increase in Mcm10 levels suggesting that the degradation that was occurring during the late M-phase was blocked by proteasome inhibitors. In another report by the same group, a stable HeLa cell line which expressed green fluorescent protein-tagged Mcm10 from a human cytomegalovirus immediate early promoter was established and authors observed that the GFP-Mcm10 protein was detectable by immunofluorescence during mitosis [10]. We looked at the levels of Mcm10 and observed that they were significantly reduced in nocodazole blocked U2OS cells ( Figure 1A). Mcm10 appeared after 4-6 hours of release from nocodazole block. This is in contrast to the results obtained by Hanaoka and coworkers, who observed Mcm10 band by immunoblotting with anti-rabbit antibody, which was raised against 127-512 aa region of Mcm10. Therefore, we wanted to rule out that the absence of Mcm10 signal was due to limited immunoreactivity of the antibody used by us. We have used a polyclonal rabbit antibody, Ab (N), which is raised against the full-length protein, but showed weak immunoreactivity against the Cterminal protein [11]. However, similar results were obtained with another polyclonal rabbit antibody, Ab (FL), which shows immunoreactivity against all regions of Mcm10 ( Figure 1A). As reported previously, the specificity of both these antibodies in immunoblotting assays have been established by RNAi (Additional file 1 and file 2: Figure S1A and S5A) [11]. In order to rule out that the decrease of Mcm10 is an artifact generated by nocodazole toxicity, we blocked the cells with other reagents in Mphase and assayed the stability of Mcm10. Vincristine binds to tubulin dimers and inhibits the assembly of microtubule structures, blocking cells in metaphase. Colchicine inhibits microtubule polymerization by binding to tubulin and thereby blocks cells in metaphase. Taxol hyper-stabilizes the beta subunit of tubulin, which is then unable to disassemble inducing a cell-cycle block at the metaphase/anaphase transition. U2OS cells blocked with any of these drugs displayed low levels of Mcm10 ( Figure 1C). On the basis of the above data, we conclude that human cells have low levels of Mcm10 in mitosis.
As reported previously, we observed that the drop in Mcm10 levels during mitosis is dependent on the proteasome (Additional file 3: Figure S2B). To rule out that loss of recognition of Mcm10 during mitosis could be due to epitope masking, we established that the Mcm10 antibody recognizes the mitotic forms of Mcm10. Since Mcm10 is naturally proteolyzed during mitosis we expressed the HA-tagged NTD+ID domain which is resistant to cell cycle-regulated degradation and tested whether the anti-Mcm10 antibody (Ab [FL]) recognizes this forms of Mcm10 during M-phase. U2OS cells expressing the NTD+ID domain of Mcm10 were blocked with nocodazole, released and harvested at regular intervals in order to collect cells in different phases of the cell cycle ( Figure 2A). As observed previously, an anti-HA antibody immunoblot confirmed that the NTD +ID domain of Mcm10 was resistant to degradation in nocodazole blocked cells. An anti-Mcm10 antibody immunoblot displayed that the levels of endogenous Mcm10 levels were low in M phase and increased around 8 h after nocodazole release but the NTD+ID domain recognized through the same antibody did not show a decrease in levels, displaying that the Mcm10 antibody recognizes the mitotic forms of Mcm10.
Single-cell imaging displays reduced levels of Mcm10 during mitosis
We also looked at Mcm10 levels in single cells as they passed through interphase and various phases of mitosis. HeLa cells were fixed and visualized for endogenous Mcm10 using rabbit polyclonal anti-Mcm10 antibody, Ab (N), in combination with an anti-rabbit TRITC secondary antibody while microtubules were visualized with a mouse monoclonal antibody to alpha-tubulin conjugated to FITC ( Figure 3A). The specificity of the antibody in immunofluorescence assays has been established by RNAi (Additional file 1: Figure S1B). From the asynchronously growing culture, we identified cells in prophase, metaphase, anaphase and telophase, and assayed for Mcm10 and alpha-tubulin localization (Figure 3A). We observed that the cells in prophase or prometaphase, with condensed chromatin and bi-polar centrosomes, were negative for Mcm10 signal ( Figure 3A, top panel). In the same fields, we could observe interphase cells that were positive for Mcm10 signal, ruling out errors in our visualization method (arrows in fields 1, 2 and 3 point to prophase cells while other cells in same fields are in interphase). As previously reported, not all interphase cells are positive for Mcm10 signal and presumably these are the cells in early G 1 phase. Similarly, cells in metaphase, identified by the presence of equatorial plate and polar microtubules, were also negative for Mcm10 signal ( Figure 3A, middle panel). Cells in anaphase and telophase identified by segregated sister chromatids and shortened kinetochore microtubules, were also negative for Mcm10 signal ( Figure 3A, third and bottom panel). To confirm that the loss of detection of Mcm10 is not due to the protein becoming soluble during mitosis and that our immunofluorescence protocol can detect soluble proteins during mitosis, we have compared the levels of Mcm10 and cyclin B in nocodazole blocked cells. HeLa cells arrested in prometaphase (marked by arrowheads) were negative for Mcm10 signal but retained the cyclin B signal, establishing the accuracy of our immunofluorescence assay (Additional file 3: Figure S2A). Therefore, our results demonstrate that the levels of Mcm10 protein are significantly reduced during mitosis.
It is known that nocodazole interferes with the polymerization of microtubules and cells treated with nocodazole enter mitosis but cannot form metaphase spindles thereby activating the spindle assembly checkpoint which causes an arrest in prometaphase. As explained earlier in Figure 1, HeLa cells were treated with colchicine, nocodazole, taxol or vincristine for 15 h to obtain cells arrested in prometaphase. Incubation with any of these drugs resulted in disorganized microtubules and condensed chromosomes, confirming a Mphase block ( Figure 3C). Prometaphase cells have been marked by arrowheads and we observed that these cells were negative for Mcm10 signal confirming that Mcm10 degradation has been initiated at this stage of mitosis. We determined the levels of endogenous Mcm10 and cyclin A in an asynchronously growing culture of HeLa cells by immunofluorescence using a rabbit anti-Mcm10 antibody, Ab (N), and mouse anti-cyclin A antibody respectively ( Figure 4A). Utilizing the same antibodies, we have demonstrated that the cyclin A levels begin to increase in the S phase and it is degraded in the Mphase (Additional file 4: Figure S3). We observed that Mcm10 was present till the G 2 phase as almost all the cells (94%) that retained cyclin A signal also stained positive for Mcm10 ( Figure 4A). Combining the above results, we establish that Mcm10 is present till the G 2 phase and is degraded around the G 2 /M boundary.
Live-cell imaging demonstrates that Mcm10 is absent during the M-phase
An alternate approach to address the levels of Mcm10 during mitosis would be to evaluate the cycling of Mcm10 in live cells. Since it has been reported that Mcm10 protein expression is not mainly regulated at the transcriptional level, we expressed EGFP-tagged Mcm10 from cytomegalovirus immediate early promoter [9]. HeLa cells were transfected with pEGFP C3-Mcm10 and two days after transfection, we observed a 140 kDa band with Mcm10 immunoblot, confirming the expression of EGFP-tagged Mcm10 (Additional file 5: Figure S4A). HeLa cells transfected with blank pEGFP-C3 vector displayed cytoplasmic EGFP signal while EGFP expressed in fusion with Mcm10 was guided to the nucleus (Additional file 5: Figure S4B). We evaluated the EGFP signal during mitosis, when the cells appear rounded in phase-contrast microscope. We expressed the EGFP-tagged full-length Mcm10 in 293 cells and tracked the EGFP signal and cell division of 293 cells . DNA was stained with DAPI (second panel) while the right panel is a merge of images obtained from FITC, Alexa 594 and DAPI immunofluorescence. Cell 1 is positive for FITC and negative for Alexa 594 signal while cell 2 is positive for Alexa 594 but negative for FITC signal and cell 3 is positive for both demonstrating that there is no bleed-through of fluorescent signals. (B) RNAi confirms the cyclin A immunofluorescence signal while Mcm10 signal has been authenticated previously [11]. HeLa cells were transfected with GL2 or CYCLIN A siRNA oligos and later processed for immunofluorescence with the mouse anti-cyclin A antibody. The scale bar is 20 microns.
( Figure 5). The M-phase is known to last around 1-1.5 h and as shown in Figure 5, we located a cell that is in mitosis and undergoes cytokinesis after 1 h 40 min (the two daughter cells have been marked by arrows). We interphase and therefore the progression of cells from G 2 to M could not be observed. We have reported that the 61 aa ZF motif (783-843aa) of Mcm10 is sufficient for M phase proteolysis and therefore we expressed EGFP-tagged ZF motif of Mcm10 and tracked the EGFP signal and cell division of HeLa cells by fluorescence and phase-contrast microscopy respectively as the cells progressed through mitosis. As shown in Figure S4C (Additional file 5), we located a mitotic cell that undergoes cytokines after 1 h 20 min (the two daughter cells have been marked by arrows). We observed that the cell was negative for Mcm10 during mitosis but the daughter cells accumulated Mcm10 after around 5 h. Therefore, utilizing a different cell-line, we establish that the full-length Mcm10 is low during mitosis. Summing up, using this antibodyindependent approach, we demonstrated that the Mcm10 levels are low during mitosis.
APC is not required for cell cycle degradation of Mcm10
APC/C or cyclosome is a complex of many proteins that functions as an E3 ubiquitin ligase during mitosis. Since Mcm10 is low in the M-phase, there is a distinct possibility that APC is involved in degradation of Mcm10. Notably, Mcm10 contains destruction box sequence (REQLAYLES) and a KEN box, motifs that are required for recognition of substrates by APC [12]. To test whether APC mediates the M-phase degradation of Mcm10, we silenced APC3 subunit of APC, which would effectively debilitate its ubiquitination ability. HeLa cells transfected with APC3 siRNA for three consecutive days were blocked in the M-phase with nocodazole for 15 h and subsequently released into nocodazole free medium. RNAi against APC3 decreased the target protein and mRNA but that did not increase the levels of Mcm10 in nocodazole blocked cells ( Figure 6). RNAi effectively inhibited the APC activity as evidenced by increase in the levels of cyclin B (compare lanes 3 with 6 in Figure 6A). This strongly suggests that APC is not involved in the cell-cycle degradation of Mcm10. We have assayed the Mcm10 levels in asynchronous populations of cells transfected with APC3, CUL1, CDH1, CDC20, BETA-TRCP and FBXW7 or control GL2 siRNA and observed that only minor variations were observed in Mcm10 levels (Additional file 2: Figure S5). We have previously reported that UV-irradiation specifically proteolyses Mcm10 and we have identified the E3 ubiquitin ligase mediates the UV-triggered and M-phase proteolysis of Mcm10 [11].
APC recognition motifs are not required for cell cycle degradation of Mcm10
We have previously utilized stable U2OS cells expressing HA-tagged Mcm10 (utilizing the pMX-retroviral vector that is based on the moloney murine leukemia virus) to determine the segments of Mcm10 that are essential for cell cycle-regulated degradation [11]. Fulllength Mcm10 can be broadly divided into N-terminal (NTD), inner (ID), linker (LNK) and C-terminal (CTD) domains (Figure 7). A coiled-coil motif is present within the N-terminus of Mcm10, which is required for homodimerization. The inner and C-terminal domains, which contain zinc finger (ZF) and winged helix (WH) motifs, bind to single-and double-stranded DNA, and the p180 subunit of DNA polymerase-alpha [13,14]. As reported previously, we observed that the HA-tagged full length Mcm10 protein behaves like the endogenous protein [11]. U2OS cells expressing HA-tagged full length Mcm10 were blocked with nocodazole, released and harvested at regular intervals in order to collect cells in different phases of the cell cycle. Endogenous Mcm10 levels were low in M phase and increased around 8 h after release from nocodazole block ( Figure 2B). HAtagged full-length Mcm10 showed a degradation pattern similar to that of the endogenous protein, which demonstrates that our assay for evaluating the degradation of different regions of Mcm10 is accurate.
We observed that the NTD+ID domain of Mcm10, which contains the KEN box, was resistant to cell cycleregulated degradation. In the present study, we tested whether the WH motif, that contains the destruction box, is proteolyzed during the M-phase. Stable U2OS cells expressing WH motif of Mcm10 (707-770 aa) was blocked with nocodazole, which arrested the cells in Mphase. The obtained mitotic cells were released from the block and harvested at regular intervals in order to collect cells in different phases of the cell cycle. The flow cytometry profile of propidium iodide-stained DNA and the levels of cyclin B demonstrate a block in M-phase and subsequent progression through the cell cycle (Figure 2D). As noted previously, Mcm10 was absent in nocodazole-released cells, demonstrating the natural proteolysis of Mcm10 in M phase which began to increase after 8 h, displaying the natural cycling of Mcm10 levels ( Figure 2C). Full-length Mcm10 expressed from the retroviral vector showed a degradation pattern similar to that of the endogenous protein, validating our assay for evaluating the cell cycle-regulated degradation of Mcm10. We observed that the WH motif fragment was resistant to cell cycle-regulated degradation. We confirmed that the nuclear localization signal expressed in fusion with the WH motif steered it to the nucleus and therefore change in cellular localization is not the reason for resistance to proteolysis ( Figure 2E). We have previously reported that the LNK domain which does not have either the KEN box or destruction box is proteolyzed as cells pass through M-phase. Therefore, we conclude that the KEN and destruction box of Mcm10 are neither essential nor sufficient for its M-phase proteolysis.
Mcm10 proteolysis utilizes independent non-overlapping signals
To identify the domains required for Mcm10 proteolysis during M-phase, we expressed different regions of Mcm10 and assayed their stability. As noted previously, the NTD+ID domain was resistant to cell cycle-regulated degradation, but the LNK and CTD domains were proteolyzed in M phase ( Figure 8A). Though the ZF motif was sufficient for M phase proteolysis of Mcm10, the CTD domain lacking the ZF motif was also degraded in M phase, signifying that though ZF motif is sufficient, it is not essential for M phase proteolysis of Mcm10 ( Figure 9A). We now wanted to identify the degron that is essential for degradation.
We further divided the CTD (607-875 aa) into three fragments 607-707 aa, 707-770 aa (WH motif) and 770-875 aa. We observed that the 607-707 aa and 707-770 aa fragment were resistant to proteolysis ( Figure 8A, panel 1 and results of WH motif in Figure 2C). However, the 770-875 aa fragment decreased in M-phase, demonstrating that the signal for degradation lies in this fragment. In order to identify the minimum sequence required for Mcm10 degradation, we further divided the 770-875 aa fragment into three parts: 770-783 aa, 783-843 aa and 843-875 aa. 770-783 aa fragment was partially stable indicating that there could be an incomplete degron in this fragment ( Figure 8A, panel 3). The 783-843 aa (ZF motif) and 843-875 aa decreased in M-phase demonstrating that there are non-overlapping regions that are sufficient for Mcm10 proteolysis ( Figure 8A, panel 4 and results of ZF motif in [11]). The ZF motif was divided into 3 segments each of 20 amino acids: 783-803 aa, 803-823 aa and 823-843 aa. We observed that 783-803 aa region was essential and sufficient for degradation while 803-823 aa and 823-843 aa were not proteolyzed ( Figure 8C and 8E). Therefore, we have narrowed to two regions that contain the degron: a 20 aa region (783-803 aa) within the ZF motif and C-terminal end of Mcm10 (843-875 aa).
Another independent region that is sufficient for Mcm10 downregulation is the linker domain (440-607 aa). We divided the linker domain into three parts: 440-471 aa, 471-525 aa and 525-607 aa. We observed that though the linker domain is decreased during M-phase, none of the three constituent fragments were proteolyzed ( Figure 9A). However, when we combined the 440-471 aa and 471-525 aa fragments, the degradation ability was restored ( Figure 9C, panel 1). Combining 440-471 aa with 525-607 aa or combing 471-525 aa with 525-607 aa did not result in M-phase degradation (Figure 9C, panel 2 and 3). Therefore, the degron in the linker domain lies within the stretch of 440-525 aa. Hence, it seems that Mcm10 degradation is mediated by nonoverlapping regions, apparently to ensure downregulation of Mcm10 even if proteolysis at any one region is somehow blocked. There is no apparent sequence similarity between the three minimal regions that are competent for M-phase proteolysis: 440-525 aa, 783-803 aa and 843-875 aa. This would suggest that the ubiquitination machinery is adaptable to recognize multiple sequences to ensure Mcm10 proteolysis. Recognition of many biological targets is defined by similar 2-D structures rather than primary sequence and that possibility cannot be ruled for recognition of different domains of Mcm10.
Discussion and Conclusions
We have observed that the Mcm10 levels are significantly reduced in nocodazole blocked cells. This is in contrast to the previous report where on the basis of estimation of Mcm10 levels in asynchronous and nocodazole released cells by immunoblots, the authors inferred that Mcm10 degradation begins after metaphase [9]. It is possible that the Mcm10 protein observed after nocodazole treatment was contributed by cells trapped at G 2 /M boundary. The stabilization of Mcm10 by MG132 in nocodazole released cells observed by them is likely due to non-degradation of Mcm10 in these cells. It has been suggested that a destruction box sequence (719-727 aa) could be recognized by the anaphase-promoting complex for ubiquitination. Additionally, Mcm10 also contains a KEN box sequence (71-73 aa) that is present in targets ubiquitinated by APC. However, the NTD domain, which contains the KEN box is not degraded during M-phase [11]. Similarly, the WH motif which contains the destruction box is stable during mitosis ( Figure 2C). However, the linker region and ZF motif, which do not have either destruction or KEN box, are proteolyzed during M-phase. In summation, the above data would suggest that the APC recognition motifs are not required for M-phase degradation of Mcm10. In this study, we have also demonstrated that inhibition of APC does not block cell-cycle degradation of Mcm10.
This study has discovered at least three independent regions of Mcm10 that are sufficient for proteolysis during M phase. Since the non-overlapping regions of Mcm10 sufficient for M-phase proteolysis did not display sequence similarity, it suggests of a limited degeneracy of the E3 ligase to allow for recognition of multiple substrates. CRL4 is known to mediate the ubiquitination of almost two dozen substrates, including Cdt1, PCNA and histones that do not share sequence similarity [15]. It is widely believed that the proteolysis of Mcm10 during mitosis is a mechanism to prevent aberrant initiation of replication and therefore it is possible that utilization of independent regions is a means to ensure downregulation of Mcm10 even if proteolysis at any one region is somehow blocked. The cloning of stable mutants of full length Mcm10 would help us evaluate the effect of stable Mcm10 on cell-cycle progression and genomic stability. It is believed that the proteolysis of Mcm10 during mitosis is a vital mechanism to prevent aberrant initiation of replication and the present study describes the regulation of Mcm10 during this phase of the cell-cycle.
Methods
Cell culture, chemicals, antibodies, cell synchronization and FACS analysis Cell lines were maintained in DMEM supplemented with fetal bovine serum and antibiotics. Specific chemicals and antibodies used in this study are mentioned in the supplementary methods (Additional file 6). HeLa cells were transfected with specific siRNA oligos consecutively for three days and were blocked with 40 ng/ml nocodazole for 15 h. Later, cells were released in drug-free medium and cells arrested in the M-phase were collected by 'mitotic-shake off'. The obtained mitotic cells were then re-plated in drug-free medium and harvested till 12 h to collect cells in different phases of the cell cycle. HeLa and U2OS cells were incubated with 40 ng/ml or 0.10 μg/ml nocodazole, 0.05 μg/ml or 0.2 μg/ml vincristine, 0.15 μg/ml or 0.5 μg/ml colchicine or 0.1 μg/ml or 0.3 μg/ml taxol for 15 h or 16 h respectively to obtain mitotic cells. For cell cycle analysis, the cells were fixed with 70% ethanol after washing with 1× PBS. Subsequently, the cell pellet was resuspended in 1× PBS with 0.1% Triton X-100, 20 μg/ml RNase and 70 μg/ml propidium iodide and the flow cytometry was performed.The flow cytometry data was acquired on Becton Dickinson FACS Calibur machine by Cell Quest Pro software. Cell cycle analysis was done by Dean/Jett/Fox method of FlowJo software. Finally the results were assayed using the Enhanced Chemiluminescence method.
RNAi silencing and reverse-transcriptase PCR
Silencing of genes was done by transfecting specific 40-80 nM siRNA duplex on three consecutive days and cells were harvested 24 h post the last transfection.The levels of protein and mRNA were evaluated by immunoblotting and reverse-transcriptase PCR respectively. For reverse-transcriptase PCR, RNA was extracted using TRIzol method and 0.25-1 μg RNA was used for cDNA synthesis. The primers used for PCR are described in the supplementary methods (Additional file 6)
Plasmid construction, transfection, immunoblotting and immunofluorescence
Full-length Mcm10 was subcloned into BglII and SalI sites of pEGFP-C3 vector, which carries the GFP sequence at the N-terminal of the vector. Similarly, Mcm10 cDNA was digested with either BglII and EcoRI or BglII and MfeI and cloned in BamHI and EcoRI sites of pMX-puro-NLS-HA vector. The sequences of the cloning primers are provided in the supplementary methods (Additional file 6). Stable U2OS cells expressing Mcm10 and its fragments were generated as described previously [11]. Cells of almost equal confluency were lysed in proportionate volumes of Laemmli buffer for immunoblotting. To demonstrate equal protein loading in each lane, immunoblotting was performed and a nonspecific protein band was displayed. For indirect immunofluorescence studies, HeLa cells grown on glass coverslips in DMEM supplemented with 10% fetal bovine serum (FBS) were fixed with 4% formaldehyde in PBS for 10 min and then permeabilized with 0.2% Triton X-100 in PBS for 5 min. Later, the cells were blocked with 10% FBS in PBS with 0.1% Tween-20. The cells were then stained with primary antibody (1:100 or 1:500 dilution) for an hour followed by either fluorescein isothiocyanate, Alexa-488 or Alexa-594-conjugated anti-rabbit or anti-mouse antibody (1:500 dilution) at room temperature. The coverslips were then mounted with a mounting reagent with DAPI and viewed under the Nikon TE2000-S inverted fluorescence microscope. Images were captured on Evolution VF (Media Cybernetics) 12-bit color digital camera using the 'Q capture Pro' software and contrast enhancements were identically done for all the images of a particular antibody/protein in an experiment. Some images were captured using the Zeiss LSM 510 confocal microscope and viewed using the Zeiss LSM Image Browser Version 4,2,0,121 software.
an AxioCam HRm digital CCD camera. (C) Time-lapse imaging analysis of asynchronous culture of HeLa cells expressing EGFP-Mcm10. HeLa cells were transfected with pEGFPC3-ZF Mcm10 and 24 h after transfection, cells were placed on a live cell imaging stage (37°C with 5% CO 2 ), and images were captured at 20 min intervals as described in Figure 4. Some of the representative images have been shown along with the time elapsed since the start of imaging. Top rows are phase-contrast images while the bottom rows are corresponding EGFP fluorescent images in dark field. The two arrows indicate the daughter cells after cytokinesis at 1 h 20 min. | 2014-10-01T00:00:00.000Z | 2010-10-28T00:00:00.000 | {
"year": 2010,
"sha1": "41055b6c4d14d89ecaaf714c0a74464502bfd7c7",
"oa_license": "CCBY",
"oa_url": "https://bmccellbiol.biomedcentral.com/track/pdf/10.1186/1471-2121-11-84",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "41055b6c4d14d89ecaaf714c0a74464502bfd7c7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
37087696 | pes2o/s2orc | v3-fos-license | Models, postulates, and generalized nomic truth approximation
The qualitative theory of nomic truth approximation, presented in Kuipers in his (from instrumentalism to constructive realism, 2000), in which ‘the truth’ concerns the distinction between nomic, e.g. physical, possibilities and impossibilities, rests on a very restrictive assumption, viz. that theories always claim to characterize the boundary between nomic possibilities and impossibilities. Fully recognizing two different functions of theories, viz. excluding and representing, this paper drops this assumption by conceiving theories in development as tuples of postulates and models, where the postulates claim to exclude nomic impossibilities and the (not-excluded) models claim to represent nomic possibilities. Revising theories becomes then a matter of adding or revising models and/or postulates in the light of increasing evidence, captured by a special kind of theories, viz. ‘data-theories’. Under the assumption that the data-theory is true, achieving empirical progress in this way provides good reasons for the abductive conclusion that truth approximation has been achieved as well. Here, the notions of truth approximation and empirical progress are formally direct generalizations of the earlier ones. However, truth approximation is now explicitly defined in terms of increasing truth-content and decreasing falsity-content of theories, whereas empirical progress is defined in terms of lasting increased accepted and decreased rejected content in the light of increasing evidence. These definitions are strongly inspired by a paper of Gustavo Cevolani, Vincenzo Crupi and Roberto Festa, viz., “Verisimilitude and belief change for conjunctive theories” (Cevolani et al. in Erkenntnis 75(2):183–222, 2011).
Introduction
The idea of truthlikeness or verisimilitude amounts to the claim that one theory may be closer, or more similar, to the truth than another.It was made an interesting topic in the philosophy of science by Popper (1963), who presented a very plausible, but failing explication of it.He proposed that 'closer to the truth' holds when the one theory has more true and less false consequences than the other.Miller (1974) and Tichý (1974) independently proved that a false theory, in the sense of a theory with at least one false consequence, could according to Popper's definition never be closer to the truth than another one.Ever since, there have been developed other accounts that circumvent this problem successfully.See Kuipers (1987) for an incomplete collection of approaches.See Niiniluoto (1998) for an important survey and Oddie (2014) for the most recent survey.Some global distinctions are in order.(1) Authors may put the emphasis on a quantitative definition, based on a distance-from-the-truth measure, notably Niiniluoto (1987), or on a qualitative definition of the comparative closer-tothe-truth claim (e.g.Oddie 1986;Zwart 2001).It is important to note that in a qualitative approach most theories will be incomparable, that is, in most cases one theory will be neither closer, nor less close, to the truth than another.The advantage of such an approach is that it focusses on safe cases of comparison, providing a sound point of departure for concretizations, e.g. a quantitative approach.(2) As a rule, authors focus on (truthlikeness with respect to) 'the actual truth', that is, the truth about what is (or was) actually the case.However, from a philosophy of science point of view, one may argue that scientists are aiming at theories that capture what is physically possible and what is not.In other words, they are aiming at 'the physical truth' or, more broadly, 'the nomic truth', which makes nomic truthlikeness the topic of investigation.(3) Finally, usually authors opt for a logical approach and conceive theories primarily as sets of sentences or propositions, but one may also opt for a structuralist approach and conceive theories primarily as sets of conceptual possibilities, represented by models or, more generally, by set-theoretic structures.
The 'nomic truth' can perhaps best be illustrated by my favorite toy example of theory oriented science (see Kuipers 2000, p. 143).To represent an electric circuit with several switches and bulbs one may use a language with elementary propositions that enable to indicate which switches are on and which are off and also to indicate which bulbs give light and which do not.Several of the conceptually possible states will be physically possible, and one of them will be the actual state.
Referring to Fig. 1, let p i for 1 ≤ i ≤ 4 indicate that switch i is on ( ) and ¬ p i that it is off ( ).Let q(¬q) indicate that the bulb lights (does not light).It is assumed that the bulb is not defective and that there is enough voltage.A possible state of the circuit can be represented by a conjunction of negated and un-negated p i 's.It is clear that there is just one true description of the actual state of the circuit as it is depicted, q& p 1 &¬ p 2 & p 3 & p 4 , according to the standard propositional representation.Hence, the example nicely illustrates, among others, that we consider 'the actual world' pri- marily as something partial and local, i.e., one or more aspects of a small part of the actual universe.However, it need not be restricted to a momentary state, it may also concern an actual trajectory of states in a certain time interval.In sum, the actual world is the actual world in a certain context.To represent the physically possible states by one proposition or theory one will have to design a complex proposition, the nomic truth, in the above case: All states in which this proposition is true are physically possible, all others are not.
All propositions that can be formulated in the indicated language may be considered as candidates for being this nomic truth and, at least intuitively, one proposition may be closer to the truth than another, viz.when it captures more physically possible states and less impossible ones.This may be a toy example for representing theory oriented science, but in present day epigenetics there is a close analogy: in fact, genes are considered as switches that may be on or off.However this may be, only the general tenet of the example is relevant: theory oriented science is ultimately aiming at characterizing what is e.g.physically, biologically or economically possible, and theories are tested by experiments which are realizations of possibilities.
In Kuipers (2000) I have presented a qualitative theory of nomic truth approximation in a structuralist way, in which 'the nomic truth' is more specifically conceived of as the true boundary between nomic, e.g.physical, biological, etc., possibilities and impossibilities within a target domain and a theory amounts to a classification of all conceptual possibilities as either nomically possible or nomically impossible.In the basic form of my account, a theory is defined as closer to the truth than another when it makes less classification mistakes, not merely in numbers, but in the strong sense that all its classification mistakes are shared by the other theory, which makes in addition some extra mistakes.
However, it recently turned out that the motivation of this formal definition is based on a very restrictive, but unnecessary assumption, namely that theories give a complete classification of all conceptual possibilities.In Kuipers (2014b) I have shown that the formal definition can already be motivated when a theory is only supposed to claim that it includes all nomic possibilities, and hence that all its non-members are nomic impossibilities.This new motivation is in terms of increasing truth-content and decreasing falsity-content and is strongly stimulated by a paper of Cevolani et al. (2011).Formally, it is not difficult to see that the formal definition can also be motivated in a similar way by assuming that a theory only claims that all its members are nomic possibilities (instead of claiming that it includes all nomic possibilities).
123
This paper exploits the possibility of dropping the restrictive assumption in a conceptually very attractive way by combining both indicated ways.The resulting theory of nomic truth approximation becomes conceptually in complete harmony with two prima facie opposing functions of theories.On the one hand, there is the Popperian or exclusion view that theories exclude certain (conceptual) possibilities from occurring or being realizable.On the other hand, there is the inclusion or representation view that theories represent, in relevant respects, certain possibilities as realized or realizable. 1he exclusion function is typically associated with speaking of the axioms, principles or postulates of theories.They have to be satisfied and hence they exclude together everything which does not satisfy them all.The representation function on the other hand is typically associated with speaking of (specified) models of theories.Seen from this perspective, I assume in Kuipers (2000) that the set of (representing) models of a theory coincides with the set of models, in the formal sense, of the (excluding) postulates of the theory.Although such a 'maximal' theory may be the ultimate aim of nomic theorizing, viz. the strongest true theory, it is not realistic to assume this of 'theories in development'.In sum, dropping the restrictive assumption makes it perfectly possible to separate models and postulates, in order to fully recognize the twofold function of theories in general and in aiming at truth approximation in particular.
Hence, in this paper a theory will in general be taken as 'non-maximal' in the following two-sided sense, viz.as a combination of a set of Models and a set of Postulates, where the former have to satisfy the latter, but need not exhaust the set of models of the joint Postulates.It will be convenient to represent the Postulates by the set of all its models: models (Postulates).Hence, a theory becomes a tuple of the following form: <M, P> with P = models(Postulates) and M(odels) being a subset of P This two-sided approach to theories can do justice to three different views in philosophy of science, viz.theorizing is mainly a matter of (1) formulating and revising postulates, or (2) designing and redesigning models, or (3) the two-sided combination of them.The generalized theory of truth approximation to follow will provide additional support for the two-sided view, but will leave room for both one-sided views.
It is important to note, as an aside, that the two-sided view on theories is also in perfect agreement with the hypothetical-deductive (HD) and deductive-nomological (DN) views on prediction and explanation, respectively.For the prediction and explanation of an event we start with representing (modelling) the situation in the relevant terms, as far as possible, but without the crucial event.This is derived by applying the relevant postulates, which amounts to completing or closing the model as far as required.Hence, prediction and explanation naturally appear as co-production of (partial) models and postulates, that is, of representation and exclusion.
The generalized theory of truth approximation presented below is based on the twosided view on theories and technically reduces to that of Kuipers (2000) by assuming M=P throughout.The representation claim will be associated with M and the exclusion claim with P. Truth approximation is construed in terms of increasing truth-content and decreasing falsity-content of the claims by adding or revising models and/or postulates in the light of increasing evidence.The generalized theory further follows Kuipers (2000) in reconstructing evidence as a 'data-theory' based on realized, hence nomic, possibilities and inductive generalizations based on them (and implying induced impossibilities).The evidence will guide the comparative assessment of the success of theories and the subsequent planning of new experiments to be performed leading to increasing evidence.Ultimately, the comparative success assessment may give good reasons not only for the inductive conclusion that empirical progress has been made by a revised version of the theory relative to the original one but even for the abductive conclusion that it is closer to the truth than the original and hence that truth approximation has been achieved.
We start with presenting an example of a two-sided theory (Sect.2) and then outline the essentials of any theory of (nomic) truth approximation (Sect.3).A relatively detailed presentation of the generalized theory follows (Sects.4-7), with emphasis on what is conceptually new relative to Kuipers (2000).We close by indicating a number of perspectives on concretization, an alternative interpretation, and a link with belief revision (Sect.8).
An example of a two-sided theory
The following, simplified, example illustrates the idea of a two-sided theory.Newton's theory can be represented on the one hand by the three laws of motion, i.e. by its (general) postulates, and on the other by various sets of models, e.g.(cpm-)models of classical particles mechanics (cpm) and models of classical rigid body mechanics, with specific sub-classes.
Focusing on the set or universe of conceptually possible systems of classical particle mechanics, U cpm , the general postulates determine together the set of all models satisfying them, P cpm (⊆U cpm ).Various sub-classes of P cpm are based on specific assumptions about the nature of such systems, notably so-called special force laws, e.g. the set of models P gcpm satisfying the law of gravitation, but also on many system specific assumptions.Hence, a specific model for a specific system is built up by starting bottom-up with system specific details and is top-down completed by applying the general and the special postulates, here called the GSP-closure of the model.Such a specific model may or may not be generalized to a subset of models for a specific subtype of cpm-systems.2For example, let M fo indicate the GSP-closure of objects of medium size and weight, falling from a medium distance on the earth. 3s soon as a particular model is claimed to represent a particular system (e.g a falling brick) it predicts behavior of the system in accordance with that model. 4If that Fig. 2 The classical theory of gravitational particle mechanics (still) restricted to falling objects model is disconfirmed some specific (bottom-up) assumption may have to be revised, but if similar models for similar systems are also disconfirmed, the whole relevant class of particular models may have to be revised, e.g. by revising the particular force law (top-down).In this way we see that revising a two-sided theory may be a matter of adding or revising postulates that exclude things (top-down) or of adding or revising models that are supposed to represent and hence to be included (bottom-up), or both.
Let us now assume that there is a sharp, but not yet specified, distinction or boundary between the set T gpm of physically possible systems of gravitational particle mechanics, as represented in U cpm , and the set of physically impossible systems of gravitational particle mechanics, that is, the complement of T gpm .Figure 2 illustrates the four subsets of U cpm introduced: P cpm , the subset P gcpm , and the latter's subset M fo , and finally the, unknown, target set T gpm .Now we may reconstruct truth approximation as a matter of aiming at establishing the true boundary between nomic possibilities and impossibilities, or the truth about such a boundary, in the present case, starting from the two-sided theory <M fo , P gcpm > with the claim M fo ⊆ T gpm ⊆ P gcpm , and revising it by adding or revising the models and/or adding or revising the postulates in the light of increasing evidence, with the ideal end the 'maximal and true' or the strongest true theory <M # , P # >, i.e. the theory for which M # = T gpm = P # .
Outline of a theory of (nomic) truth approximation
The paper is set up by following the cornerstones of any theory of truth approximation as developed in Kuipers (2000) and enriched by Cevolani et al. (2011) and Cevolani et al. (2013).
I Initial step: description of the target
• Assuming a domain of research and a vocabulary, clarify the target you are aiming at, that is, about which you aim to approach the truth.• Here, the target is the nomic truth, i.e. which conceptual possibilities are nomically possible in the domain of research and which are not.
II Logico-semantic steps: define closer to the truth (greater verisimilitude or truthlikeness) • Define your notion of theory, which we did already, and define next the notion of true (false) theories and hence of the strongest true theory, i.e. the truth.• Define the notions of the truth-content and the falsity-content of a theory.
• Define closer to the truth in terms of suitably specified notions of a larger truth-content and a smaller, or otherwise less problematic, falsity-content.III Epistemological steps: define more successful • Identify the kind of evidence that experiments provide, up to and including inductive generalizations.• Assuming that the evidence is accepted, define the therewith accepted content (or the successes) and rejected content (or the failures) of a theory.• Define more successful in terms of a larger accepted content and a smaller, or otherwise less problematic, rejected content.IV Theoretical step: from verisimilitude to success • Assuming the truth of the (accepted) evidence, prove the strongest 'success theorem'.Ideally this theorem amounts to: 'closer to the truth' unconditionally entails 'at least as successful' and in the long run even 'more successful'.V Methodological steps: from success to verisimilitude • Assuming that a new theory is at a certain moment more successful than the old one, propose and test the empirical progress hypothesis: the new theory (is and) remains more successful than the old one.• Assuming that after 'sufficient confirmation'5 the empirical progress hypothesis is accepted (for the time being), argue on the basis of the success theorem that the best explanation for this case of empirical progress is the truth approximation hypothesis that the new theory is closer to the truth than the old one, i.e. that this is a case of truth approximation.• Abductively conclude (for the time being) that the new theory is closer to the truth than the old one, i.e. that truth approximation has been achieved.
In this paper I will put the formal emphasis on steps II and III, corresponding to Sects. 5 and 6, because they are conceptually new, whereas essentially similar versions of the remaining steps have been elaborated in Kuipers (2000).Sections 4 and 6.1 introduce the crucial points of departure from Kuipers (2000) regarding the target of nomic research and of how evidence is conceived, respectively.
I Initial step: description of the target
Let U indicate the set of conceptual possibilities in a given context (e.g. the possible states, trajectories or transformations 6 of a system, or possible kinds of systems), Fig. 3 The set of conceptual possibilities U and the (unknown) subset of nomic possibilities T
U cT
T generated by a descriptive vocabulary V in which U is characterized as V's set of settheoretic structures, and in which subsets of U, e.g.X, Y, R, S, can be characterized.
Complements of sets will be indicated by a preceding 'c': e.g.cX.By the way, U should not to be taken as a set of possible worlds in the standard 'there is only one, all-inclusive, world' sense.Our possibilities are relative to a certain context or (type of) system(s), sometimes called 'small worlds'.They are only mutually exclusive and jointly exhaustive in the same case in the given context, e.g. in one state of a system.In this respect they can best be compared with the possible 'elementary outcomes' of an experiment in probability theory, e.g.throwing a die: although the six faces are mutually exclusive (and jointly exhaustive) in one experiment, they are not when different experiments are considered.However, there is also an important difference: instead of the six faces, not all our conceptual possibilities are supposed to be physically possible.
Let (bold) T indicate the subset of nomic, e.g.physical, possibilities, and hence cT the subset of nomic impossibilities.By the bold 'T' we indicate that we do not (yet) dispose of a characterization of it in terms of V, see the dashed ellipse in Fig. 3.The target of research is identifying, if possible, T's boundary in V-terms, indicated by (non-bold) T, hence T=T, assuming such a characterization exists, which I will do throughout in this paper.T will be called 'the (explicit) (nomic) truth', for reasons that will become clear.
Theories, their claims, and 'the truth'
For the logico-semantic steps we start with defining the notion of theories.In the present nomic context, theories are intended to (at least partially) characterize T. Recall that for that purpose a theory is conceived as a tuple <M, P> of subsets of U, defined in V-terms, with P = Models (Postulates) and the claims: "M ⊆ T", the inclusion or representation claim: all members of M are nomic possibilities, "T ⊆ P", i.e. "cP ⊆ cT", the exclusion claim: all non-members of P are excluded from being nomic possibilities (are nomic impossibilities), or, equivalently, no nomic possibility is excluded by the postulates of the theory.
The members of M are called the models of the theory and those of P the models of the postulates of the theory. 7 theory is consistent if M ⊆ P, i.e. when its two claims are compatible; it is inconsistent otherwise.Note that this notion of inconsistency is not the standard logical one, but here plausible and of course related.8This paper is restricted to consistent theories, leaving inconsistent ones for a future paper, for they seem interesting as well.A theory is maximal if M=P; it is non-maximal otherwise.Kuipers (2000) is restricted to maximal theories.Maximal theories in this sense seem also characteristic for the model-theoretic and structuralist9 or semantic view on theories.
It will be useful to also define two other extreme kinds of theories, besides maximal ones, so-called pure (or one-sided) theories.A theory <M, P> is a pure theory of postulates, or a pure exclusion theory, if M = ∅ and it is a pure theory of models, or a pure inclusion theory, if P=U.In these terms, a (non-pure or two-sided) theory <M, P> is a combination of a pure theory of models <M, U> and a pure theory of postulates <∅, P>, also called the M-side (or inclusion) theory and the P-side (or exclusion) theory, respectively.
Finally, a theory <M, P> is true if both claims are true, i.e.M ⊆ T ⊆ P, false otherwise.Now it is easy to see that there is at most one maximal or strongest true theory, called the true (nomic) theory or simply the (nomic) truth, viz. the one for which M=T=P.From now on, 'the truth' will refer to 'the nomic truth', except when otherwise stated.It results from the characterization of T in V-terms, if it exists.It will be indicated by <T, T>, or simply T, with non-bold 'T'.This T is the target of (theory-oriented) research.
It is also easy to check that there is at most one strongest true pure theory of models, viz.<T, U> and that there is at most one strongest true pure theory of postulates, viz.<∅, T>.Hence, <T, T> is the strongest true (two-sided) theory.
Truth-and falsity-content
In view of the two claims of a theory it is plausible how to define the truth-content and the falsity-content of a theory.Consider the M-side claim M ⊆ T. As far as M ∩ T Similarly, regarding the P-side claim T ⊆ P, that is, cP ⊆ cT, the sub-claim cP ∩ cT ⊆ cT is true and the additional sub-claim cP − cT ⊆ cT is false.Hence we define cP ∩ cT as its truth-content and cP − cT as its falsity-content.Their union equals cP, in which Popper's idea of the empirical content can be recognized easily: the set of possibilities which are excluded by the postulates.Table 1 summarizes these definitions.
In combination with Table 1 and Fig. 4 makes graphically clear which subsets are involved.
Closer to the truth (greater verisimilitude or truthlikeness)
The notions of truth-and falsity-content just defined allow for a straightforward definition of "closer to the truth" in terms of greater truth-content and smaller falsity-content.Focusing first on pure M-side theories, for example, one can define: <M * , U> is at least as close to the truth as the M-side theory <M, U> iff TC-clause: the truth-content of <M, U> is a subset of the truth-content of <M * , U> and FC-clause: the falsity-content of <M * , U> is a subset of the falsity-content of <M, U> A similar definition can of course be given for pure P-side theories. 123 Table 2 presents the combined definition of "<M * , P * > is at least as close to the truth as <M, P>".
It is not difficult to check that the four single clauses are independent as long as the theories are non-maximal.Moreover, as indicated, on both sides the two single difference clauses can be combined into the corresponding combined symmetric-difference clauses.It is easy to check that for maximal theories (M=P) the two combined clauses are formally equivalent 10,11 .
Figure 5 illustrates "<M * , P * > is at least as close to the truth as <M, P>".In combination with Table 2 the separated figures make graphically clear by shading which subsets have to be empty at the M-side (left) and at the P-side (right).Of course, the figures have to be conceived as combined, taking into account that M ⊆ P and M * ⊆ P * .Hence, there will appear single and double shaded subareas.
Although the two figures look formally similar, it is important to note that corresponding shaded areas are empty due to 'opposite' clauses.E.g. in the left figure the shaded area on the left is empty due to the TC-clause for the M-side, viz.M ∩ T ⊆ M * ∩ T, whereas in the right figure it is empty due to the FC-clause for the P-side, viz.cP * − cT ⊆ cP − cT, or, equivalently T − P * ⊆ T − P.
It is also not difficult to check that the P-side can be summarized by 'having at least as many true consequences (for P * ∪ T is included in P ∪ T12 ) and correctly allowing Finally, it is plausible to define 'closer to the truth' if, in addition to 'at least as close to the truth', at least one of the four required single set-theoretic inclusions can be replaced by a proper inclusion and hence if at least one of the two required combined inclusions in terms of symmetric differences is proper.
Empirical data
So far, we have been engaged with the logical problem of verisimilitude, that is, explicating 'closer to the truth', assuming that we dispose of the truth in one way or another.Now we turn to the epistemological problem of specifying empirical conditions that support the conclusion, at least for the time being, that one theory is closer to the unknown truth than another.As we will see, empirical progress is such a condition.
In the nomic context the empirical data are asymmetric in the following sense (cf.Kuipers 2000, p. 157).We can establish by experiments nomic possibilities.In fact, every experiment realizes by definition a nomic possibility.However, it is evident that we cannot establish nomic impossibilities in such a direct way.But in the empirical sciences we use to 'induce' nomic impossibilities indirectly by inductive (empirical) generalizations that we accept, for the time being, on the basis of 'sufficient' experimentation, notably by trying to realize counterexamples.For example, the observation of a black raven is the realization of a nomic possibility, whereas concluding that nonblack ravens do not exist is an inductive generalization, viz.all ravens are black.
Footnote 12 continued that all true consequences of <M, P> (at the P-side) are (true) consequences of <M * , P * >.For a detailed comparison between Popper's failing consequence-based approach and the 'model-based' approach, both for maximal theories, the reader is referred to Kuipers (2000, Chap. 8.1), where the latter is also translated in terms of consequences, leading to the identification of Popper's bad luck.We indicate the (asymmetric) data at a certain moment by <R, S>, where R indicates the set of realized nomic possibilities (e.g.realized physical possibilities) and S indicates the strongest law induced on the basis of R, more precisely, the set of conceptual possibilities that are not excluded by (the combination of) the accepted inductive generalizations.In this setup cS indicates the set of induced nomic impossibilities.Of course, we may always assume that R ⊆ S, for, if not, any element in R-S would represent a realized counterexample to S and hence to at least one of its constituting inductive generalizations.Moreover, if the experiments are correctly described by R (relative to the vocabulary) and if, in addition, S is correctly induced, we may conclude that R ⊆ T ⊆ S, whatever T is.Hence, we not only assume that <R, S> is a theory, a 'data-theory', but by accepting it we even assume that it is a true theory, that is, we have accepted the claims R ⊆ T and T ⊆ S (≡ cS ⊆ cT).Of course, the correctness assumptions are very substantial, in particular that of correct inductive generalizations.
Figure 6 displays <R, S> as a true theory.
Accepted and rejected content
To get plausible definitions of accepted and rejected content of a theory <M, P> in the light of an accepted data-theory <R, S> we have to confront the claims of the former, that is, M ⊆ T and cP ⊆ cT, with the accepted claims of the latter, that is, with the claims R ⊆ T and cS ⊆ cT.Let us first look at the M-side.Of the theory claim M ⊆ T, recall, with content M, we have, by accepting R ⊆ T, accepted the sub-claim M ∩ R ⊆ T, and hence the accepted M-content is M ∩ R, which may be called the set of realized examples.13However, by accepting cS ⊆ cT, we have also rejected the sub-claim M ∩ cS ⊆ T, and hence the rejected M-content is M ∩ cS(= M − S), which may be called the set of induced counterexamples.
Similarly for the P-side.Of the theory claim cP ⊆ cT, recall, with content cP, we have, by accepting cS ⊆ cT, accepted the sub-claim cP ∩ cS ⊆ cT, and hence the accepted P-content is cP ∩ cS, which may be called the set of induced examples.However, by accepting R ⊆ T, we have also rejected the sub-claim cP ∩ R ⊆ cT, Undecided (M-resp P-) content and hence the rejected P-content is cP ∩ R(= R − P), which may be called the set of realized counterexamples.Table 3 gives a full survey, adding the also plausible terminology of 'true and false positives (negatives)',14 the undecided content on both sides,15 and a numbering of the most relevant subsets.
Figure 7 illustrates all these concepts graphically, using the numbering of subsets in Table 3.
At least as successful relative to <R, S>
It is rather plausible to define now the idea that (the revised or new) theory <M * , P * > is at least as successful as (the initial) theory <M, P>, relative to accepted data-theory <R, S>, by requiring, that all successes (realized and induced examples) of <M, P> are successes of <M * , P * > and all failures (induced and realized counterexamples) of <M * , P * > are failures of <M, P>.Or, equivalently, at least as much accepted content (AC-clauses) and at most as much rejected content (RC-clauses) of the *theory on both sides, of course, not in terms of numbers but of subset conditions.The result is given in Table 4.
Fig. 7 Accepted and rejected M-and P-content of theory <M, P> in the light of accepted data-theory <R, S>.See Table 3 for the legend.Note It may be interesting to note that P-S represents the wrongly not excluded nomic impossibilities, of which the subset M-S represents the wrongly included nomic impossibilities a Since the clause is also equivalent to S ∪ P * ⊆ S ∪ P it can also be paraphrased by "all induced laws entailed by <M, P> are entailed by <M * , P * >, for any superset of S ∪ P is a superset of S ∪ P * .See Note 12 Of course, 'more successful' is defined by requiring, in addition, that at least one of the four single clauses is a proper inclusion.
Remaining steps: from verisimilitude to success, and vice versa
At this point we just repeat the remaining steps in setting up a theory of nomic truth approximation and briefly give some particular features of the specific case.
IV Theoretical step: from verisimilitude to success • Assuming the truth of the (accepted) evidence, prove the strongest 'success theorem'.Ideally this theorem amounts to: 'closer to the truth' unconditionally entails 'at least as successful' and in the long run even 'more successful'.
In the present case it is not difficult to check that 'ideally' applies, i.e. if <M * , P * > is closer to the truth than <M, P> and if the data-theory <R, S> is 123 true, this entails that <M * , P * > is at least as successful as <M, P> relative to <R, S>, and, under some probabilistic test assumptions,16 that it will become more successful in the long run.
V Methodological steps: from success to verisimilitude • Assuming that a new theory is at a certain moment more successful than an old one, propose and test the empirical progress hypothesis: the new theory (is and) remains more successful than the old one.
Note that the two claims of a theory lead to different kinds of predictions: whereas the "T⊆ P" claim leads to 'this is impossible' (hence, 'that must happen') predictions, the "M ⊆ T"-claim leads to 'this is possible / may happen'-predictions.With plausible consequences for differential predictions between <M, P> and <M * , P * >.
• Assuming that after 'sufficient confirmation' the empirical progress (EP-)hypothesis is accepted (for the time being), which is an inductive conclusion, argue on the basis of the success theorem that the best explanation for this case of empirical progress is the truth approximation (TA-) hypothesis that the new theory is closer to the truth than the old one, i.e. that this is a case of truth approximation.
The reverse ('from success to verisimilitude') consequences of the success theorem, i.e. the consequences of that theorem in view of an accepted EPhypothesis (and the data-theory on which it is based), are such that this situation not only suggests the TA-hypothesis, they also justify it to a substantial extent: If <M * , P * > is, in view of <R, S>, accepted as empirically progressive relative to <M, P>, then 1. the success theorem not only makes it perfectly possible that <M * , P * > is closer to the truth than <M, P>, due to it the TA-hypothesis would even explain the greater success, 2. it is impossible that <M * , P * > is further from the truth than <M, P> (and hence <M, P> closer to the truth than <M * , P * >), for otherwise, so shows the success theorem, <M * , P * > could not be more successful, 3. it is also possible that <M * , P * > is neither closer to nor further from the truth than <M, P>, in which case, however, another specific explanation has to be given for the fact that <M * , P * > has so far proven to be more successful, e.g.due to a biased choice of experiments.
Note that the TA-hypothesis provides a typical default explanation of EP, that is, an adequate explanation unless there turns out to be reason for another diagnosis.The third reverse consequence provides the room for 'unless' conditions and hence for future 'divided success', i.e. the old theory may get extra successes relative to the new one.For example, the experiments so far may turn out to have been biased in favor of the new theory, and hence new experiments breaking this bias may turn out to be in favor of the old one.
• Abductively conclude (for the time being) that the new theory is closer to the truth than the old one, i.e. that truth approximation has been achieved.
This final step will now be no surprise.In fact, it is a special case of a sophisticated form of 'inference to the best explanation' or 'inference to the best theory', viz.not as a, or even the, true theory, but as the theory which is the closest to the truth of the available theories.Of course, 'inference to the best theory' should be read as: inference to the best theory of all available theories beyond the data-theory <R, S>.17
Perspectives
To be sure, the above analysis is based on the simplest assumptions about the further nature of theories and their claims, for which reason I call it the basic version of the generalized theory.In this section I will briefly indicate four perspectives for concretization, as far as relevant, in line with those of Kuipers (2000).I will conclude with indicating an alternative interpretation and a connection with belief revision.
Refinement, capturing idealization and concretization
There are two plausible qualitative concretizations of the basic version of the two-sided approach, refinement and stratification (cf.Kuipers 2000, Chaps. 9-10, respectively).
Refinement makes it possible to account for the fact that one counterexample may be less severe than another, e.g. in the sense that it is less idealized than the other.
In Kuipers (2000, Chap. 10) I have presented such a refined approach to empirical progress and (nomic) truth approximation based on an underlying ternary similarity relation between possibilities, that is, structures, and hence called a structurelikeness relation.The relation "one structure is more similar to a third than another structure" may in particular take the form "one structure is less idealized, relative to a still more realistic structure, than another".In this way it was possible to explicate the idea of 'truth approximation by idealization and concretization'. 18Of course, the point of departure was then the strong claim, combining the inclusion and the exclusion method in an extreme complementary way.
From the present two-sided perspective, the refined definitions of 'closer to the truth' and 'more successful' seem to be primarily concretizations of the clauses corresponding to the representation function of inclusion theories.In the refined inclusion method new theories revise old theories by changing the models of the theories to some extent, e.g. by taking new factors into account that have been neglected before.The new models are claimed to be more similar to 'the true ones' than those before.Hence, although the exclusion method may appeal to Popperian intuitions, revising theories in the suggested refined way seems to reflect the representation part of scientific common sense.Therefore, it is plausible to think that an adequate modeling of scientific common sense has to take into account both intuitions.This is perfectly possible from the present perspective: two-sided theories for which the basic clauses are used for the exclusion subtheories, whereas the refined clauses are used for the inclusion subtheories.In terms of Zwart (2001), this is a way of combining the (Popperian) 'content-approach', with the 'similarity approach' to truth approximation, à la Niiniluoto (1987) and Oddie (1986).One may even speculate that Lakatosian research program thinking can be represented by such two-sided theories: progress is achieved by revising the corresponding inclusion theory more in particular by revising, e.g.concretizing, auxiliary hypotheses, however within the boundaries of the corresponding exclusion theory, forming the hard core.The suggested asymmetric way of dealing with refinement may seem plausible, in the practice of science refinement occurs on both sides.E.g.Einstein's postulates refine, relative to Newton's postulates, which possibilities are excluded from being realizable.
Since it is perfectly possible to translate the refinement of the model-side of theories to their postulates, by taking suitable complements, leading to the refinement of the postulate side, we prefer to conceive the refinement of truthlikeness of two-sided theories primarily in a symmetric way (Kuipers, manuscript).In this symmetric version, 'closer to the truth' is, roughly, defined by requiring on both sides a larger truth-content and a less problematic falsity-content, the latter in the sense that the falsity-content of the one theory is, in terms of the structurelikeness relation, more similar to (part of) the truth than the falsity-content of the other.From this symmetric version it is easy to derive the above suggested asymmetric version by 'idealizing' the exclusion side in order to get its basic version back.This can be done by assuming a 'trivial' similarity relation.Finally, it turns out that 'more successful' can best be refined in a somewhat weaker, but more plausible way than before, with the attractive consequence that the Success Theorem remains unconditionally valid: truth approximation in the refined sense entails at least as successfulness in the refined sense.
Quantification
It is important to note that 'closer to the truth' and 'more successful' in all forms so far dealt with are partial order relations.Hence, even in the basic version theories will frequently not be comparable in either direction.There are at least two plausible way-outs.From a methodological point of view it seems important to have a strategy to deal with cases of 'divided success', that is, when the one theory is more successful in some respects and the other in other respects.The qualitative ideal suggests to try to apply in this situation a kind of 'principle of dialectics', that is: try to improve both theories in one stroke.In other words, try to design a new theory, a synthesis, that is and remains more successful than both, that is, try to achieve genuine empirical progress, and hence, presumably, truth approximation with respect to both theories.
Another way to deal with the non-comparability problem is to design a quantitative concretization, in the present context, to begin with of the basic version.In a finite context it is even plausible to just count number of elements, leading to the quantitative symmetric difference definition of 'closer to the truth', e.g.referring to the P-side | (P * , T)| ≤ | (P, T)|.However, as soon as one wants to differentiate between the weight of 'successes' and 'failures' or if U is infinite, ad hoc elements, e.g.weighing factors and other parameters, are unavoidable, witness Niiniluoto's otherwise impressive approach (Niiniluoto 1987).In Kuipers (manuscript) I nevertheless present a general quantitative (two-sided basic) approach, viz. a so-called measure-theoretical one.It is largely in the spirit of Kuipers (2000, Chap. 12).It leads almost always to an ordering of two theories, however, with a non-deductive 'success theorem', in terms of expectation values, about the relation between truth approximation and the corresponding quantitative notion of empirical progress.But after sufficient confirmation of the corresponding empirical progress hypothesis, the theorem will support the abductive 'closer to the truth' conclusion substantially.
Stratification
The second important qualitative concretization of the basic version of the two-sided approach deals with stratification in terms of an observational and a theoretical level.It is more or less crucial for the realism/instrumentalism debate.In Kuipers (2014b) I have already presented stratification for exclusion theories, in line with Kuipers (2000, Chap. 9).It leads to some substantial weakening of the connection between empirical progress and truth approximation, but the connection remains remarkable.In Kuipers (manuscript) stratification for two-sided theories is elaborated.The crucial question for both sides is to what extent 'closer to the truth' on the theoretical level is projected on the observational level.Though formally similar, the possible exceptions have different methodological impact for the two sides, due to the asymmetric nature of evidence in the nomic context.Of course, stratification is also possible for the refined and quantified versions of 'closer to the truth', with specific limits to its projection on the observational level.
Inconsistent theories
Inconsistent two-sided theories, that is, theories where M is not a subset of P and hence some models are excluded by the postulates, may be based on good reasons for the models as well as for the postulates.Such theories may well be very useful for truth approximation.To begin with, starting with inconsistent theory <M, P> one may be heading for a consistent theory <M * , P * > such that the latter is 'side-wise' closer to the truth than the former.And even an inconsistent two-sided theory may be side-wise closer to the truth than another one, and hence be a step in the direction of the truth.This is formally perfectly possible.However, the question is to what extent <M, P> can still be meaningfully considered to be one theory, though inconsistent.How can M and P still be substantially related, i.e., share more than the vocabulary, when the models do not satisfy (all of) the postulates?Of course, one option is that the models may satisfy some approximate version of the postulates or only the most fundamental postulates.This is clearly something which needs to be investigated further.
I like to conclude with two other perspectives.
A monadic existential and a monadic nomic interpretation
The formal story with the above 'nomic' interpretation can easily be given a monadic (existential) interpretation,19 in which the members of U represent Q-predicates. 20A theory <M, P> now says that the members of M are instantiated and the members of cP are not, i.e. a theory corresponds to a 'partial or complete constituent' in the standard logical sense.However, not only a monadic existential interpretation is possible, but also a monadic nomic interpretation: <M, P> is then assumed to be claiming that it is nomically possible to instantiate the Q-predicates in M but not those in cP.The periodic table of elements can be seen as an example of the first interpretation, but even better of the latter.
Connection with belief revision
The main message of Kuipers (2014a) 21 is that Sven Owe Hansson's adaptation of AGM-rules for belief base revision (BRR) provides adequate means to connect belief revision with a very general form of basic truth approximation. 22From the perspective of the present paper it is now not difficult to derive from that paper how revision of two-sided theory <M, P> in the light of data theory <R, S> can be reconstructed as a combination of expansion and retraction.Assuming both theories to be consistent, it is easy to check that by contraction, i.e. weakening, of the claims of <M, P> as far as in conflict with those of <R, S>, we get the two-sided revised theory <M∩S, P∪R>.By successive expansion, i.e. strengthening, of the claims of this intermediate theory by the extra claims of <R, S> relative to those of <M, P> we get <(M∩S)∪R, (P∪R)∩S>.
It is also not difficult to check that the final theory is not only more successful than the original and the intermediate one (it is even maximally successful), but also that it is even closer to the truth than both, assuming of course that the data theory is true: R ⊆ T ⊆ S. Further investigation is needed to connect the refined version of two-sided nomic truth approximation, suggested above, and a correspondingly refined version of belief base revision.
Fig
Fig. 4 Truth-and falsity-content.The indicated intersections constitute the relevant truth-content and the indicated difference sets the relevant falsity-content
Fig. 5
Fig. 5 <M * , P * > is at least as close to the truth as <M, P>: shaded areas are empty
Fig. 6
Fig. 6 Data-theory <R, S> depicted as a true theory U
Table 3
Accepted and rejected M-and P-content of theory <M, P> in the light of accepted data-theory <R, S> <M, P> in the light of <R,
Table 2
<M * , P * > is at least as close to the truth as <M, P> <M * , P * > is at least as close to the truth as <M, P>
Table 4
<M * , P * > is at least as successful as <M, P>, relative to <R, S> <M * , P * > is at | 2018-04-03T03:56:56.628Z | 2015-09-28T00:00:00.000 | {
"year": 2015,
"sha1": "b2010fbf7cb6484ecc7f046669bfc7548e0ac0bf",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11229-015-0916-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1bcb5e4410a0a0668acf159afdc7560ece35f508",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Computer Science",
"Philosophy"
]
} |
55637384 | pes2o/s2orc | v3-fos-license | Correlation Pattern of Serum Lipid Parameters and a Biological Anti-Oxidant Potential Between Premenopausal and Perimenopausal Healthy Women
Objectives. Improved understanding of the associations among cardiometabolic, antioxidative, and menopausal status is crucial to prevent cardiovascular disease (CVD). For preventing the development of CVD in women, the association of serum lipid profile and antioxidant parameters during menopausal transition is of interest. The aim of the study is to evaluate the correlation between lipid and antioxidant levels especially in premenopausal and perimenopausal women. Methods. A total of 130 CVD-free healthy women; the premenopausal group (n = 51, mean 41 years) and perimenopausal group (n = 79, mean 49 years) were studied. A biological antioxidant potential (BAP) test was utilized for measuring antioxidant levels. The association between lipid and BAP levels was examined by linear correlation analyses. Results. The perimenopausal group showed a significantly higher low-density lipoprotein cholesterol (LDL-C) level than the premenopausal group (mean 123 vs. 111 mg/dL, p < 0.05), while there were no significant differences in triglyceride, high-density lipoprotein cholesterol and BAP levels between the groups. A significant inverse correlation existed between LDL-C and BAP levels in the perimenopausal group (β = -0.30, p < 0.05), but not in the premenopausal group. Conclusions. The correlation patterns between lipid parameters and antioxidant levels demarcated the premenopausal from perimenopausal stage. Increased LDL-C associated with decreased antioxidant levels in perimenopausal women may call early attention for cardiovascular
Introduction
A cardiovascular disease (CVD) risk in premenopausal women was well documented to be lower than postmenopausal women [1].One possible explanation for it is due to the sex hormonal changes during menopausal transition [2], and another is due to the alteration of serum lipid profile, for instance an increase of low-density lipoprotein (LDL) cholesterol (LDL-C) [3][4][5].Acceleration of atherogenic lipid profile is currently recognized among postmenopausal women, and so in preventing CVD, there is an increasing importance on understanding the characteristics of lipid profile especially at earlier stages of menopausal transition such as perimenopause [4].Little information is, however, available about the characteristics especially in perimenopausal women.In particular, antioxidant conditions were not sufficiently mentioned as a part of the pathophysiology of CVD development in perimenopausal women [5].
In general, antioxidant conditions are known to be associated with the CVD development [6] and positively associated with symptoms and clinical manifestations related to menopause [7].Recently, a biological antioxidant potential (BAP) test has been used as an easy handling and reliable assay in the clinical setting to measure total antioxidant capacity, which can identify the ability to reduce ferric ions to ferrous ions [8,9].The BAP test are widely recognized and used in clinical studies just as the ferric-reducing ability plasma assay [9] and the test is used to study oxidative stress-related diseases [10].Given an importance to discuss the association between blood lipid profile and BAP levels besides generic CVD-related parameters during menopausal transition for preventing CVD development, the present study aimed to investigate their correlation patterns between premenopausal and perimenopausal healthy women.
Studied subjects
A total of 130 Japanese women, who were diagnosed in the premenopausal group (n = 51, mean 41 years) and in perimenopausal group (n = 79, mean 49 years) were enrolled in this study.The study was approved by the institutional ethics committee and informed consent was obtained from all subjects.Subjects were recruited from women who were visiting our clinic for general medical examinations.Eligible subjects were healthy with no history of CVD, acute infectious disease, or severe liver/kidney disease, and were non-smokers, who were not taking medications including antioxidant supplements.Body mass index (BMI), mean blood pressure (MBP) and blood parameters were measured during a fasting period.Blood was sampled from premenopausal women during the follicular phase.
For a precise determination of menopausal status, subjects were diagnosed into premenopausal and perimenopausal groups based on the classification of the Stages of Reproductive Aging Workshop (STRAW) [11].According to the classification, the premenopausal group corresponded to the reproductive stage (Stage -3), and the perimenopausal group corresponded to the early and late menopausal transition stage (Stage -2 to -1).
Blood parameters
The serum levels of LDL-C, triglyceride (TG), high-density lipoprotein cholesterol (HDL-C), estradiol (E2) and follicle-stimulating hormone (FSH) were measured with standard methods.Hemoglobin A1c (HbA1c) level was measured with a high-performance liquid chromatographic method.These analyses were supplied in a single laboratory facility certified in Japanese laboratory system (Mitsubishi BCL Laboratory Co. Ltd., Tokyo, Japan).The BAP tests were implemented by the Free Radical Analytical System (Diacron, Grosseto, Italy) according to the analytical manual.In brief, a 20 μL of blood sample was dissolved in a colored solution obtained by mixing a source of ferric ions (FeCl3, ferric chloride) with a chromogenic substrate (a sulphur-derived compound).After a 5-minute incubation, the intensity of the discolored change was assessed by a photometer, and the amount of reduced ferric ions calculated.The BAP unit was expressed as mol/L of reduced Fe/L.
Statistical analyses
The difference between the groups was examined by student t-test.A simple correlation between outcome (lipid parameters) and other variables was examined by a Pearson's correlation test, and subsequently, a stepwise multiple regression analysis was performed in order to extract the variables correlated with outcome variables (lipid parameters).The data of TG, E2 and FSH were log-transformed for these analyses because of their skewed distributions.A p-value < 0.05 was considered significant.
Results
As listed in Table 1, the perimenopausal group showed a significantly higher level of age, MBP, LDL-C, HbA1c and FSH, as well as a significantly lower level of E2, than the premenopausal group.There were no significant differences in TG, HDL-C and the BAP levels between the groups.
As listed in Table 2, simple correlation tests and stepwise multiple-regression analyses revealed a significant positive correlation between LDL-C and TG or HbA1c levels in the perimenopause group.Furthermore, there found to be a significant inverse correlation between LDL-C and BAP levels in the perimenopause group.Any relative significant correlations of LDL-C with other parameters were not observed in the premenopause group.
Similar analyses revealed a significant positive correlation between TG and LDL-C, as well as a significant inverse correlation between TG and HDL-C levels in the perimenopause group.Any relative significant correlations of TG with other parameters were not observed in the premenopause group.
Similar analyses revealed a significant positive correlation between HDL-C and FSH, as well as a significant inverse correlation between HDL-C and BMI or TG levels in the perimenopause group.Any relative significant correlations of HDL-C with other parameters were not observed in the premenopause group.
Discussion
The present study investigated the association of serum lipid parameters with the BAP, besides generic CVD-related parameters, in premenopausal and perimenopausal healthy women.While most correlations between lipid parameters and generic CVD-related parameters were as expected, the significant inverse correlation pattern between LDL-C and BAP levels was found in perimenopausal women, but not in premenopausal women.This seems to demarcate a potential antioxidant linkage with the CVD development between the premenopausal and perimenopausal stage.
A high blood cholesterol, especially LDL-C, concentration is a CVD risk [2].The incident CVD after menopause can be partly induced by changes in the blood lipid levels that occur following menopausal transition [12].In the present study, of note, an inverse correlation between LDL-C and BAP levels was found only in the perimenopausal stage.While the increased trend of LDL-C with menopausal transition was also observed in the present study, this appears to be consistent with the result that such an increase level of LDL-C is significantly associated with damage to antioxidant molecules [13].Even though the LDL-C level was not so high in perimenopausal women of the present study, speculatively, LDL-C and/or LDL particle might be oxidatively modified under the perimenopausal state [12].Importantly, the present study finding may also provide a proper timing for the management of LDL-C levels in women [14].An increase of LDL-C can be caused by not only a simple biological aging but an alteration of sex hormones with menopausal transition [12], and the change in sex hormones is assumed to affect the correlation between LDL-C and BAP levels.A reduction of LDL-C was, indeed, documented in subjects with hormone replacement therapy using exogenous estrogens [15].In the preset study, E2 was not significantly extracted as an independent parameter of LDL-C.This might be partially due to a significant but small increase of LDL-C in perimenopausal women relative to that in premenopausal women in the present study.
A decrease of antioxidative conditions can also stem from an alteration of sex hormones during menopausal transition, and blood antioxidant capacity and antioxidant enzyme expression at a gene level were documented to positively correlate with E2 levels [16][17][18].On the other hand, the association between (anti)oxidative stress-related markers and sex hormones has been still controversially reported; that is, urinary isoprostane excretion was not correlated with endogenous estrogen levels in perimenopausal women [19] or total antioxidant ability was not correlated with E2 levels during menopausal transition [20].In the present study, expectedly, perimenopausal women exhibited a lower level of E2 than premenopausal women, while BAP levels were not significantly correlated with E2 both in premenopausal and perimenopausal women.Our present results were likely to coincide with the later studies [19,20].The discrepant results might be due to the difference in (anti)oxidative stress-related marker types measured across studies (the BAP test reflects a global antioxidative condition [8,9] and there are currently few studies using this test).There is also a thought that the blood E2 level is much lower than the necessary concentration of chemical antioxidants [21].Thus, the relationship between the antioxidant system and E2 levels has to be more investigated in humans.
The beneficial effect of hormone replacement therapy on the CVD development in earlier stages of menopausal transition was reported [22,23]; however, there is currently no full explanation and a long-term debate about the benefit and risk for CVD by hormone replacement therapy [24,25].Although our present study showed no apparent significant correlation between E2 and LDL-C or BAP levels, further investigations on changes in various biochemical factors, including lipids and (anti)oxidative stress-related markers, by hormone replacement therapy may provide relevant consideration of the therapeutic effect on the CVD development.
The present study had several limitations.The sample size was relatively small, the study design was cross-sectional, and CVD outcomes were not evaluated.While a single anti-oxidant biomarker was used in the present study, comparative studies using the other anti-oxidant biomarkers would be interesting.A prospective study in a larger population with long-term follow-up periods and/or intervention trail with various anti-oxidant biomarkers and antioxidants would be necessary to confirm the results of the present study.
In conclusion, the present study investigated the association of serum lipid parameters with the BAP level between premenopausal and perimenopausal healthy women.There was an increase of LDL-C associated with antioxidant conditions in perimenopausal women, and this might present us to require early attention for the management of cardiovascular health from this stage.
Table 2 .
Correlations between lipid and other parameters NE: not extracted, BMI: body mass index, MBP: mean blood pressure, LDL-C: low-density lipoprotein cholesterol, TG: triglyceride, HDL-C: high-density lipoprotein cholesterol,HbA1c: hemoglobin A1c, BAP: biological anti-oxidant potential, FSH: follicular stimulating hormone.Statistical significance: p < 0.05.Data are r-coefficients (by a Pearson's correlation test) and β-coefficients (by a stepwise multiple regression analysis).Triglyceride, estradiol, and FSH levels were log-transformed in these analyses because of their skewed distributions. | 2018-12-07T11:10:03.758Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "60123a6a9c915ef908b8f5eb17c3aaaabd7c153f",
"oa_license": "CCBYNC",
"oa_url": "http://www.jbiomed.com/v02p0034.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "60123a6a9c915ef908b8f5eb17c3aaaabd7c153f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52017342 | pes2o/s2orc | v3-fos-license | Usefulness of the Whole Blood Passage Time as a Predictor of Primary Cardiovascular Events in Patients With Traditional Cardiovascular Risk Factors
Background Recent clinical studies have reported that impaired hemorheology is a significant cardiovascular risk factor, but there has been no prospective study of its relationship with cardiovascular events. The aim of this prospective study was to assess the efficacy of whole blood passage time (WBPT), measured by a microchannel array flow analyzer (MC-FAN), as a predictor of primary cardiovascular events in patients with traditional cardiovascular risk factors. Methods The study enrolled 1,134 outpatients with traditional cardiovascular risk factors but no history of cardiovascular events (438 men and 696 women; mean ± standard deviation age, 67 ± 11 years). Based on the value of WBPT, the patients were assigned to one of three groups: L (low, WBPT < 50 s; n = 499), M (medium, WBPT 50 - 70 s; n = 295), or H (high, WBPT > 70 s; n = 340). The utility of the WBPT as a predictor of primary cardiovascular events was evaluated. Results During the follow-up period (median 81.9 months), major adverse cardiovascular events (MACE) occurred in 95 cases (L, 21 cases (4.2%); M, 24 cases (8.1%); H, 50 cases (14.7%); P < 0.001, log-rank test). In multivariate Cox regression analyses, the risk for MACE was significantly higher in group H than in group L (hazard ratio, 2.32; 95% confidence interval, 1.31 - 3.20; P < 0.01). A WBPT cut-off of 72.4 s yielded the largest area under the curve of 0.705 (95% confidence interval: 0.678 - 0.732), with a sensitivity of 51.7% and specificity of 85.4% for discriminating between those who did and did not experience MACE during the follow-up period. Conclusion This study showed that WBPT evaluated by a MC-FAN was a predictor of primary cardiovascular events in patients with traditional cardiovascular risk factors.
Introduction
In the management of outpatients, reducing the risk of cardiovascular event is based mainly on addressing traditional cardiovascular risk factors such as hypertension, diabetes mellitus, dyslipidemia, obesity, and smoking habits. However, although these risk factors are important, they do not explain every cardiovascular event [1][2][3]. It is therefore important to explore novel biomarkers of cardiovascular diseases.
The impairment of hemorheology is considered to be a further important factor in the incidence of cardiovascular events, in addition to atherosclerosis [4,5]. In recent years, a commercial device to evaluate hemorheology using microscopic images, the microchannel array flow analyzer (MC-FAN), has been introduced to the clinical setting [6]. The MC-FAN is simple and is superior to other methods of hemorheological evaluation in terms of the accuracy of channel dimensions and high reproducibility. MC-FAN can be used to measure the whole blood passage time (WBPT). Cross-sectional studies have reported significant relationships between an increase in WBPT and cardiovascular risk factors or cardiovascular disease [7][8][9][10]. However, there have been no prospective studies of the clinical usefulness of WBPT as a predictor of cardiovascular events. The aim of this prospective study was therefore to assess the efficacy of WBPT evaluated by the MC-FAN as a predictor of primary cardiovascular events in patients with traditional cardiovascular risk factors.
Participants
Between January 2008 and December 2009, 1,134 outpatients (696 women (61.4%) and 438 men (38.6%)) with traditional cardiovascular risk factors but no history of cardiovascular events were prospectively enrolled at the Hitsumoto Medical Clinic, Yamaguchi, Japan. The mean (± standard deviation) age was 67 ± 11 years. WBPT was measured as described below and the participants were assigned accordingly to one of three groups: L (low, WBPT < 50 s; n = 499), M (medium, WBPT = 50 -70 s; n = 295), or H (high, WBPT > 70 s; n = 340). The study protocol was approved by Local Ethics Com-
Evaluation of hemorheology using an MC-FAN
The participant's hemorheology was evaluated by measuring WBPT using an MC-FAN HR300 rheometer (MC Healthcare Inc., Tokyo), as previously described [6,8]. Briefly, the microchannel passage time for 100 µL of physiologic saline was first measured as a control. Then, the same measurement was determined for blood obtained from the participant with 100 µL of the heparinization sample. The WBPT was corrected for the passage time of the physiologic saline, SPT, as WBPT × 12/ SPT. The flow of blood cells through individual microchannels was observed and recorded using an inverted metallographic microscope, video camera, and video recorder. The width, length, and depth of the microchannel formation were 7, 30, and 4.5 µm, respectively. Examination was performed within 60 min of blood sampling. The inter-and intra-assay coefficients of variation for WBPT were 8% and 5%, respectively.
Evaluation of clinical parameters
The participant's body mass index was calculated as the weight in kilograms divided by the square of the height in meters; obesity was defined by the Japanese criteria of body mass index ≥ 25 kg/m 2 . Current smoking was defined as smoking at least one cigarette per day during the previous 28 days. Hypertension was defined as systolic blood pressure ≥ 140 mm Hg, diastolic blood pressure ≥ 90 mm Hg, or the use of anti-hypertensive medication. Dyslipidemia was defined as a low-density lipoprotein cholesterol level ≥ 140 mg/dL, a highdensity lipoprotein cholesterol level ≤ 40 mg/dL, a triglyceride level ≥ 150 mg/dL, or the use of anti-dyslipidemic medication. Diabetes mellitus was defined as a fasting blood glucose level ≥ 126 mg/dL or the use of anti-diabetic medication. The following blood parameters were measured: blood cell counts, plasma glucose, serum lipid concentrations, and serum highsensitivity C-reactive protein (hs-CRP) concentration. Blood samples were collected from the antecubital vein in the morning after 12 h of fasting. Total cholesterol and triglyceride concentrations were measured using standard enzymatic methods. High-and low-density lipoprotein cholesterol concentrations were measured using selective inhibition and Friedewald's formula, respectively [11]. Participants with a serum triglyceride concentration ≥ 400 mg/dL were excluded from the analysis, considering the accuracy of this method. Glucose concentrations were measured by the glucose oxidase method. The hs-CRP concentration was measured using high-sensitivity latexenhanced immunonephelometry. As a physiological marker of arterial function, the cardio-ankle vascular index (CAVI) was measured using a VaSera CAVI system (Fukuda Denshi), as described previously [12]. Briefly, brachial and ankle pulse waves were determined using inflatable cuffs, with the pressure maintained between 30 and 50 mm Hg to ensure that the cuff pressure exerted minimal effects on systemic hemo-dynamics. Of note, systemic blood and pulse pressures were simultaneously determined with the participants in the supine position. CAVI was measured after a 10-min rest in a quiet room. The value used for CAVI was the mean of the values for the left and right sides.
Follow-up
For this study, the follow-up period terminated in January 2018. The endpoint for this study was a major adverse cardiovascular event (MACE), a composite of cardiovascular death, non-fetal myocardial infarction, and non-fetal ischemic stroke. The median follow-up period to determine the incidence of MACE was 81.9 months (range, 4 -120 months).
Statistical analysis
Data were analyzed using Stat View-J 5.0 (HULINKS, Tokyo, Japan) and MedCalc for Windows version 14.8.1 (MedCalc Software, Ostend, Belgium). Data are presented as mean ± standard deviation. One-way analysis of variance (ANOVA) and the Kruskal-Wallis test were used for comparisons of the three groups. Post-hoc testing was performed using Fisher's protected least significant differences or the Mann-Whitney U-test with the Bonferroni correction. Event-free survival rate curves were plotted using Kaplan-Meier analysis and the differences between the curves were evaluated using the longrank test. Multivariate analysis was performed using multivariate Cox regression analysis. Receiver operating characteristic (ROC) curves were constructed and the Youden Index was used to determine the optimal cut-off for WBPT for predicting the participants who experienced a MACE. P < 0.05 was considered statistically significant. Table 1 presents the characteristics of the participants at registration. The mean WBPT for groups L, M, and H were 40.3, 59.6, and 79.4 s, respectively. The following factors were significantly higher in H than in M or L: the proportions of men, current smokers, and participants with diabetes mellitus, and the mean hematocrit, fasting blood glucose concentration, hs-CRP concentration, and CAVI values. Figure 1 shows the Kaplan-Meier curve for the incidence of MACE. The median follow-up period was 81.9 months. During follow-up, 95 participants experienced at least one MACE (L, 21 participants (4.2%); M, 24 participants (8.1%); H, 50 participants (14.7%)). The Kaplan-Meier curve confirmed that group H had a higher incidence of MACE compared to groups M and L (log-rank test, P < 0.001). pants who experienced MACE than in those who did not, and levels of renin-angiotensin system (RAS) inhibitor and statin use were considerably lower.
Discussion
Previous studies have reported relationships between the incidence of cardiovascular disease and traditional risk factors Continuous values are mean ± SD. WBPT: whole blood passage time; SBP: systolic blood pressure; DBP: diastolic blood pressure; LDL: low-density lipoprotein; HDL: high-density lipoprotein; FBG: fasting blood glucose; hs-CRP: high sensitivity C reactive protein; CAVI: cardio-ankle vascular index; RAS: renin-angiotensin system. *P < 0.001 vs. group L, **P < 0.01 vs. group L, ***P < 0.05 vs. group L, # P < 0.001 vs. group M, ## P < 0.05 vs. group M. such as male sex, aging, and diabetes mellitus [13][14][15]. Consistent with these previous findings, the results of the present study found these factors to be independent predictors for MACE. In addition, this study found that high WBPT was a further independent predictor for MACE. WBPT showed sig-nificant associations with smoking habits, hs-CRP concentration, and CAVI. The study also indicated the clinical efficacy of using a combination of the WBPT and CAVI as a predictor for MACE. Several cross-sectional studies have assessed WBPT in healthy populations or in patients with cardiovascular risk factors, with the results suggesting that the cut-off value for cardiovascular risk is 50 -70 s [10,[16][17][18]. Based on these previous results, the present study prospectively divided the participants into three groups according to two simple cut-off levels of 50 s and 70 s. The results showed that participants with WBPT > 70 s were a population at high risk of primary cardiovascular events. In the ROC analysis, a WBPT cut-off of 72.4 s yielded the largest area under the curve of 0.705 for discriminating between those who did and did not experience MACE during the follow-up period. Thus, for primary cardiovascular disease prevention, we perform examinations or intervention therapy for any patients with WBPT exceeding approximately 70 s.
MC-FAN and Primary
There are several possible mechanisms to explain why smoking affects hemorheology, such as platelet activation, an increase in leukocyte adhesion ability, and elevation of plasma viscosity [19,20]. There have been several clinical studies of the relationship between smoking habits and WBPT [8,16,18,21]. Shimada et al. reported a positive correlation between WBPT and the daily consumption of tobacco or the Brinkman index, and three months of smoking cessation significantly reduced WBPT [21]. In contrast, in this study the prevalence of smoking increased with an increase in WBPT. Even though a smoking habit was not found to be an independent predictor for MACE in this study, the results of this and previous studies have indicated that smoking cessation is strongly recommended to improve hemorheology.
Hs-CRP is used as a marker of inflammation, and several epidemiological studies have indicated that a high hs-CRP level is a predictor of cardiovascular disease [22,23]. The results of the present study also found that high hs-CRP (≥ 0.1 mg/ dL) was an independent predictor for MACE in patients with traditional cardiovascular risk factors. One explanation for hs-CRP level being a cardiovascular risk factor is thought to be chronic inflammation in the arterial walls, which contributes to the development of atherosclerosis, including plaque instability [24][25][26]. Several mechanisms by which inflammation causes impairment of blood rheology have been proposed, including platelet aggregation and the elevation of plasma viscosity [27,28]. In this study, the hs-CRP levels were significantly higher in group H than in groups M and L. RAS inhibitors and statins have been reported to reduce inflammation in vivo [29][30][31], with a reported reduction in the incidence of primary cardiovascular events [32,33]. However, the use of medications such as RAS inhibitors or statins was approximately 30% degree in group H. Positively using such drugs with anti-inflammatory effects for patients with high hs-CRP concentration, especially those with high WBPT, may help reduce the incidence of cardiovascular.
CAVI provides a novel marker of systemic arterial stiffness that is independently associated with blood pressure [12]. A number of studies have reported the clinical usefulness of CAVI as a cardiovascular risk factor [34][35][36]. In the present study, CAVI > 9 was one of the strongest predictors of MACE of all the explanatory variables. In addition, CAVI levels were higher in the participants with higher WBPT values. There have been several reports of a significant relationship between WBPT and physiological markers of arterial stiffness [37,38]. In addition, a study has reported a significant relationship between WBPT and endothelial dysfunction [39], and Endo et al reported that CAVI reflected endothelial dysfunction as es- Figure 3. Multivariate Cox regression analysis for major adverse cardiovascular events using a combination of the WBPT and CAVI. The participants were divided into four groups according to cut-off levels of WBPT = 72.4 s and CAVI = 9, and a multivariate Cox regression analysis was performed. Having values above the cut-off for one of these factors (WBPT > 73.4 s or CAVI > 9) was associated with significantly higher HRs for major adverse cardiovascular events (HR: 3.18, 95% CI: 1.29 -7.44, P < 0.01; HR: 3.36, 95% CI: 1.31 -7.60, P < 0.01, respectively) than having values below these cut-offs. The HR was higher still when both factors were above the cut-off levels (HR, 10.62; 95% CI, 5.38 -21.31; P < 0.001) compared with both factors being below the cut-offs. Adjustment factors are sex, age, diabetes mellitus, hs-CRP, and statin use. *P < 0.01 vs. patients with WBPT as ≤ 72.4 s and CAVI as ≤ 9; **P < 0.001 vs. patients with WBPT as ≤ 72.4 s and CAVI as ≤ 9. WBPT: whole blood passage time; CAVI: cardio-ankle vascular index; HR: hazard ratio, CI: confidence interval; hs-CRP: high sensitivity C reactive protein. with a sensitivity of 51.7% and specificity of 85.4% for discriminating between those who did and did not experience major adverse cardiovascular events during the follow-up period. WBPT: whole blood passage time.
MC-FAN and Primary Cardiovascular Events
Cardiol Res. 2018;9(4):231-238 timated by brachial artery flow-mediated vasodilatation [40]. Thus, the results of this and previous studies suggest that impaired hemorheology affected arterial function such as arterial stiffness or endothelial dysfunction, thereby increasing the incidence of cardiovascular disease.
To evaluate the clinical efficacy of using a combination of hemorheology and arterial function biomarkers, the participants of this study were divided into four groups based on the cut-off values of WBPT = 72.4 s and CAVI = 9 and a multivariate analysis was performed. The HR for experiencing a MACE with both high WBPT and high CAVI compared with both low WBPT and low CAVI was approximately 3 points higher than the HR for either high WBPT or high CAVI but not both. Thus, the participants with high WBPT and CAVI were considered to be a high-risk population for primary cardiovascular events. Previous studies have shown that lifestyle or medical interventions affected both hemorheology and arterial function [8,21,[41][42][43][44][45][46], and the methods for measuring WBPT and CAVI are simple and take little time in clinical practice. We therefore check these two markers in our patients and administer intervention therapy such as lifestyle modification or medication for those with high WBPT and CAVI. In this way, we expect to efficiently reduce primary cardiovascular events in patients with traditional cardiovascular risk factors.
Limitations
This study had several limitations. First, it was conducted at a single center and its findings cannot be generalized to all medical centers. Second, WBPT was measured only once, at registration. A further investigation of the association between serial changes in WBPT and primary cardiovascular events is needed. Finally, further studies concerning patients with high WBPT and traditional cardiovascular risk factors are warranted to determine whether aggressive intervention therapy, such as lifestyle modification or medication, reduces the incidence of primary cardiovascular events.
Conclusions
This study demonstrated that WBPT evaluated by MC-FAN as a marker of hemorheology was a predictor of primary cardiovascular events in patients with traditional cardiovascular risk factors. The predictive value for the incidence of cardiovascular events was increased by using a combination of WBPT and CAVI as a marker of arterial function. | 2018-08-18T21:15:58.427Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "317ad4a0f4ea16cdc4cf1906c1d2aeeb2efa3fc9",
"oa_license": "CCBYNC",
"oa_url": "https://cardiologyres.org/index.php/Cardiologyres/article/download/763/814",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "317ad4a0f4ea16cdc4cf1906c1d2aeeb2efa3fc9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267850452 | pes2o/s2orc | v3-fos-license | Association of non-obstructive dyspnoea with all-cause mortality and incident chronic obstructive pulmonary disease: a systematic literature review and meta-analysis
Background Controversy exists regarding the association between non-obstructive dyspnoea and the future development of chronic obstructive pulmonary disease (COPD) and mortality. Therefore, we aimed to evaluate the association of non-obstructive dyspnoea with mortality and incident COPD in adults. Methods We searched PubMed, Embase, and Web of Science to identify studies published from inception to 13 May 2023. Eligibility screening, data extraction, and quality assessment of the retrieved articles were conducted independently by two reviewers. Studies were included if they were original articles comparing incident COPD and all-cause mortality between individuals with normal lung function with and without dyspnoea. The primary outcomes were incident COPD and all-cause mortality. The secondary outcome was respiratory disease-related mortality. We used the random-effects model to calculate pooled estimates and corresponding 95% confidence interval (CI). Heterogeneity was determined using the I² statistic. Results Of 6486 studies, 8 studies involving 100 758 individuals fulfilled the inclusion and exclusion criteria and were included in the study. Compared with individuals without non-obstructive dyspnoea, individuals with non-obstructive dyspnoea had an increased risk of incident COPD (relative risk: 1.41, 95% CI: 1.08 to 1.83), and moderate heterogeneity was found (p=0.079, I2=52.2%). Individuals with non-obstructive dyspnoea had a higher risk of all-cause mortality (hazard ratio: 1.21, 95% CI: 1.14 to 1.28, I2=0.0%) and respiratory disease-related mortality (hazard ratio: 1.52, 95% CI: 1.14 to 2.02, I2=0.0%) than those without. Conclusions Individuals with non-obstructive dyspnoea are at a higher risk of incident COPD and all-cause mortality than individuals without dyspnoea. Further research should investigate whether these high-risk adults may benefit from risk management and early therapeutic intervention. PROSPERO registration number CRD42023395192.
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is a multifaceted pulmonary ailment distinguished by persistent respiratory symptoms, including dyspnoea, cough, expectoration, and/or exacerbation. 1 2The Global Burden of Disease study 2017 indicated that COPD is the third leading cause of death and disability worldwide. 1 24][5] Accurate and early identification of individuals at risk of COPD, also known as pre-COPD, is the foundation for effective management. 3AT IS ALREADY KNOWN ON THIS TOPIC ⇒ Previous studies have explored the association between non-obstructive dyspnoea and incident chronic obstructive pulmonary disease (COPD), but they have yielded inconsistent results.A pooled analysis of the association of non-obstructive dyspnoea with incident COPD and mortality has not been performed.
WHAT THIS STUDY ADDS
⇒ In individuals with normal spirometry, the presence of dyspnoea was associated with higher risks of incident COPD and all-cause mortality.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ In this review, we found that non-obstructive dyspnoea is related to an increase in incident COPD and mortality.This indicates that this type of individual can be considered as a special clinical subtype of pre-COPD, which has guiding significance for early screening, follow-up, and management.
Open access
Dyspnoea is the subjective experience of lack of air or breathing discomfort. 6 7We usually use the modified Medical Research Council (mMRC) Dyspnoea Scale to quantify dyspnoea severity. 8Dyspnoea is the main symptom of COPD, but some individuals with normal lung function also have dyspnoea (non-obstructive dyspnoea). 3 9-13][11][12] Using data from the European Community Respiratory Health Survey, De Marco et al found that dyspnoea was not associated with incident COPD when lung function was normal. 9In 2018, Kalhan et al reached the same conclusion. 10However, other studies have produced mixed results. 11 12Therefore, whether normal lung function with dyspnoea is associated with COPD development remains controversial.][15] To our knowledge, a pooled analysis of the association of non-obstructive dyspnoea with incident COPD and mortality has not been performed.To accurately identify individuals with pre-COPD, an up-to-date synthesis of data from existing studies is needed to quantitatively evaluate these associations.Bearing this in mind, we aimed to perform a comprehensive systematic review and meta-analysis to assess the association of non-obstructive dyspnoea with incident COPD and mortality.
Data sources and search strategy
In this systematic review and meta-analysis, two reviewers (YH and HF) independently performed a comprehensive search of Embase, Web of Science, and PubMed to identify studies published from inception to 13 May 2023, with the following search terms: 'dyspnoea', 'shortness of breath', 'normal lung function', 'normal pulmonary function', 'normal spirometry', 'without airflow obstruction', 'without airflow limitation', 'preserved lung function', 'preserved pulmonary function', and 'preserved spirometry'.The references of relevant studies were also manually checked to identify other potentially related studies.Online supplemental material 1 shows the search details used for all of the databases.The Preferred Reporting Items for Systematic reviews and Meta-Analyses statement was followed in the conduct and reporting of this study. 16he protocol is registered in the International Prospective Register of Systematic Reviews (registration number: CRD 42023395192).The abstract of this study was previously presented at the 27th congress of the Asian Pacific Society of Respirology. 17
Study selection
Two reviewers (YH and HF) independently reviewed the studies.Discussions or consultations with a third researcher (FW) were used to resolve any disagreements or uncertainties.For primary inspection, the titles and abstracts were screened.Studies were mainly excluded due to the analysis of obstructive dyspnoea and the presence of data that could not be extracted.The second inspection involved full-text review and article selection based on the inclusion and exclusion criteria.Studies were included if they (1) provided data to calculate hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause mortality and respiratory-related mortality, or relative risks (RRs) and 95% CIs for COPD development, in individuals with non-obstructive dyspnoea compared with individuals with normal lung function without dyspnoea; (2) were independent studies; and (3) were prospective cohort studies or retrospective cohort studies.Studies that replicated previously published research data were not considered independent.Exclusion criteria include (1) acute dyspnoea or dyspnoea with a clear cause; (2) The primary endpoints of the study were COPD development and all-cause mortality.The secondary endpoint was respiratory disease-related mortality.This review adopted the Global Initiative for Chronic Obstructive Lung Disease (GOLD) definition of COPD. 1 A postbronchodilator forced expiratory volume in 1 s (FEV 1 )/forced vital capacity (FVC) ratio of <0.70 was the preferred definition of COPD.Previous studies have shown that prebronchodilator lung function and postbronchodilator lung function have the same ability to assess the risk of long-term mortality. 18Therefore, studies using a prebronchodilator FEV 1 /FVC of <0.70 to define COPD were also considered for inclusion in this study.Dyspnoea was defined by self-reporting in surveybased studies or studies using the mMRC Dyspnoea Scale Questionnaire, with a minimum rating of ≥1, or even≥2. 8
Data extraction and risk-of-bias assessment
The data included first author, year of publication, region, study design, sample size, age of study participants, follow-up period, dyspnoea definition, normal lung function definition, COPD definition, RRs and 95% CIs, and HRs and 95% CIs, which were extracted and entered by two independent reviewers (YH and HF).Two reviewers independently checked the accuracy of the extracted information.The Newcastle Ottawa Scale (NOS) score ranges from 0 to 9 (9 being the best quality), with a total score of ≥7 being considered good quality. 19Two reviewers (YH and HF) independently conducted the risk-of-bias assessment based on the NOS score, including the selection of the cohort study, comparability of the study, and outcome of the study.Any divergences were settled by discussion or by consulting a third researcher (FW).
Data synthesis and statistical analysis
To describe the outcome of incident COPD, we used RRs and 95% CIs for quantitative synthesis.The calculation formula (RR=OR ÷ [(1 − p0) + (p0×OR)]) was employed to convert ORs to RRs in instances where ORs were used in the studies.The p0 is the incidence of results of interest in the reference group. 20We used HRs and 95% CIs in our quantitative synthesis to depict the results of mortality due to all causes and respiratory diseases.From the eligible studies, we preferentially extracted and used the results of multiple-factor adjustment.However, we also used single-factor uncorrected results when only the results without multiple-factor correction were found in the study.To calculate the pooled effect sizes and 95% CIs, we used the random-effects model based on the fact that these studies were conducted in a variety of settings and among different populations. 21Heterogeneity was determined using the I² statistic.Values of 0%-24% represented no heterogeneity, 25%-49% were considered low heterogeneity, 50%-74% were considered moderate heterogeneity, and values of ≥75% indicated substantial heterogeneity. 22If the number of included studies reached≥10, we planned to perform a funnel plot analysis by plotting the ORs of the individual studies against their variance to detect the risk of publication bias. 23Egger's test was also used to assess the funnel plot asymmetry for incident COPD, all-cause mortality, and respiratory mortality with at least 10 studies included.Further subgroup analyses were planned to examine crucial variables that might affect incident COPD and mortality, and to assess sources of heterogeneity.The planned subgroups included smoking status, follow-up year, baseline age, and sex.We used Stata/SE V.15.1 (Statacorp LP, College Station, TX, USA) to conduct this meta-analysis.A p-value of <0.05 was considered statistically significant, and all statistical tests were two-sided.
Patient and public involvement
No patients were involved.
Search results and study characteristics
As shown in figure 1, the flow diagram represents the systematic selection and search process.Of the 6479 studies identified on PubMed, Embase, and Web of Science, as well as the seven additional studies from previous meta-analyses and systematic reviews (online supplemental material 2), 5214 studies remained after duplicate removal.After checking the titles and abstracts, 16 articles remained eligible for full-text reading.We ultimately included eight studies that fulfilled the inclusion Open access and exclusion criteria. 9-15 24The reasons for exclusion included not distinguishing participants with normal lung function, unavailable full texts, and not reporting the outcome of interest.
Table 1 shows the characteristics of the included studies.A total of 100 758 individuals were included in the meta-analysis.The average follow-up period was more than 5 years.With the exception of one study, which was a retrospective study, all studies were prospective studies. 12All individuals included in this review were from the general population.One study recruited only women, whereas all other studies included both men and women. 12All studies were published between 2005 and 2023.This review included six studies with a prebronchodilator FEV 1 /FVC of <0.70 as the main definition of COPD, one study with a postbronchodilator FEV 1 /FVC of <0.70 and an FEV 1 of ≥80% predicted value as the main definition of COPD, and one study with a physician-based diagnosis of COPD as the main definition of COPD. 12 24ix studies with mMRC scores of ≥1 as the main definition of dyspnoea, one study with an mMRC score of ≥2 as the main definition of severe dyspnoea, and one study with self-reported breathing difficulties as the main definition of dyspnoea were included. 12 24Table 2 presents the results based on the NOS scores of the included studies.The included studies scored 7-9 on the NOS, indicating good methodological quality.
Association between non-obstructive dyspnoea and incident COPD Four studies involving 12 273 individuals examined the association between non-obstructive dyspnoea and incident COPD.All four of these studies adjusted for multiple confounding factors.The results were presented as RRs and 95% CIs in one study, and as ORs and 95% CIs in the remaining three studies.We accounted for the incidence of COPD by converting the OR to the RR.Compared with normal lung function without dyspnoea, the pooled analysis identified a higher risk of incident COPD in individuals with non-obstructive dyspnoea (RR: 1.41, 95% CI: 1.08 to 1.83, p=0.011) with moderate heterogeneity (I 2 =52.2%,Tau 2 =0.044, p=0.079) (figure 2).Fewer than 10 studies were included, which was not sufficient to evaluate publication bias.
Association between non-obstructive dyspnoea and all-cause mortality/respiratory disease-related mortality Three studies involving 88 485 individuals examined the association between non-obstructive dyspnoea and allcause mortality.Multiple confounders were adjusted for in all studies, and the results were presented as HRs and 95% CIs.In individuals with normal spirometry, the presence of dyspnoea was associated with a higher risk of allcause mortality (HR: 1.21, 95% CI: 1.14 to 1.28, p<0.001) with no heterogeneity (I 2 =0.0%,Tau 2 =0.000, p=0.618) compared with individuals without dyspnoea.The association between non-obstructive dyspnoea and respiratory disease-related mortality was examined in two studies.Compared with individuals with normal lung function without dyspnoea, individuals with non-obstructive dyspnoea had a higher risk of respiratory disease-related mortality (HR: 1.52, 95% CI: 1.14 to 2.02), with no heterogeneity (I 2 =0.0%,Tau 2 =0.000, p=0.340) (figure 3).Only eight studies met the inclusion requirements and therefore we did not evaluate publication bias.
Subgroup analysis
As a result of the limited number of studies included, the subgroup analysis was not conducted.
DISCUSSION
To the best of our knowledge, this systemic review and meta-analysis is the first to quantitatively synthesise current evidence on the prognosis of non-obstructive dyspnoea and respiratory health in adults.In this comprehensive meta-analysis of eight studies involving more than 100 000 participants, a major finding emerged.In individuals with normal spirometry, the presence of dyspnoea was associated with higher risks of incident COPD and all-cause mortality.
The GOLD Report in 2001 proposed an 'at risk' stage (GOLD stage 0), which only included the respiratory symptoms of chronic cough and sputum production. 25owever, not all individuals with normal lung function and respiratory symptoms will develop COPD, and thus GOLD 0 was delisted from the 2006 GOLD classification. 26In 2021, Han et al proposed the concept of pre-COPD, 3 meaning that individuals are at high risk of COPD, including non-obstructive dyspnoea.According to data from the European Community Respiratory Health Survey II, De Marco et al found that nonobstructive dyspnoea was not associated with incident COPD in young adults. 9However, Lindberg et al observed conflicting results. 11Substantial controversy followed these discordant results regarding the important, but yet unsolved, puzzle of whether individuals with normal lung function with dyspnoea are more likely to develop incident COPD than those without dyspnoea.Moreover, whether dyspnoea should be considered as one of the specific definitions of pre-COPD is unclear.In this review, we found that non-obstructive dyspnoea is related to an increase in incident COPD and mortality.This indicates that this type of individual can be considered as a special clinical subtype of pre-COPD, which has guiding significance for early screening, follow-up, and management.Notably, individuals with non-obstructive chronic bronchitis, [27][28][29] emphysema, 30 airway remodelling, 31 and small airway disease 32 33 are among the pre-COPD population.Therefore, comprehensive evaluation is needed when managing the pre-COPD population.
Our study did not perform a subgroup analysis because the number of relevant studies was small and the minimum requirements were not met.Lindberg et al found that dyspnoea is a significant risk factor for incident COPD In the NOS score, except for comparability, which can be rated up to 2 stars, the other items can be rated up to 1 star, with a full score of 9 stars.Higher scores indicate higher quality research.
Open access
Open access in men, but not in women. 11Whether dyspnoea demonstrate sex differences remains unknown.Knowledge in this area is still lacking, and further studies are needed to enhance our understanding of dyspnoea with COPD.Dyspnoea has various causes, and therefore further etiological investigations are necessary.In individuals with normal spirometry, dyspnoea may be caused by exercise or physical activity, pulmonary infection, inflammatory lung diseases, pulmonary embolism, pulmonary allergic reaction, cardiovascular disease, anaemia or even psychological factors (anxiety or panic).Therefore, clinicians should screen and exclude dyspnoea caused by other diseases and psychological factors before managing individuals with non-obstructive dyspnoea with pre-COPD to avoid delayed management.At present, no clear evidence indicating that drugs can alter COPD progression is available.Our study focused on identifying high-risk individuals who retained normal lung function, increasing attention to non-obstructive dyspnoea, strengthening follow-up and lung function testing, and even drug therapy to allow patients to benefit from early treatment.Early Open access intervention for individuals who are at risk of COPD is a crucial next step. 4he pathophysiological mechanism of COPD caused by dyspnoea is still unclear, but reasonable assumptions can be made.Dyspnoea is a symptom that may indicate underlying health conditions, such as respiratory and cardiovascular diseases, which can contribute to an increased mortality risk.Dyspnoea is often a manifestation of underlying diseases, such as COPD, heart failure, pulmonary hypertension, or interstitial lung disease.These conditions can significantly impact pulmonary function and overall health, leading to an increased mortality risk.Dyspnoea can also limit an individual's ability to engage in physical activity and exercise, which is associated with various health benefits.Reduced physical activity can lead to deconditioning, muscle weakness, and an increased risk of other health complications. 34dditionally, decreased exercise tolerance can result in a sedentary lifestyle, which is associated with high mortality rates. 35Furthermore, dyspnoea often occurs due to inadequate oxygenation of the body, and impaired lung function leads to reduced oxygen uptake and increased carbon dioxide retention.Our research team recently found that ventilatory inefficiency was associated with small airway dysfunction, which is a key pathological feature in patients with COPD. 36
Strengths and limitations
One notable strength of this review is that the majority of the included studies exhibited a high quality of evidence and appropriately adjusted for confounding variables, which reduced the impact of these confounding variables on the association between non-obstructive dyspnoea and the observed health risk.Moreover, we used strict inclusion and exclusion criteria and pooled the data using the random-effects model to explain the variance between the studies.
This study also has some limitations.First, the number of available studies was small, and therefore we could not perform multiple subgroup analyses and funnel plot analyses to investigate the associations between nonobstructive dyspnoea and the risks of incident COPD and mortality.Future cohort studies are needed to analyse the associations of dyspnoea with COPD events and respiratory health outcomes in specific subgroups (male sex, female sex, never smokers, ever smokers, current smokers, follow-up years, and baseline age groups).Second, we could not access the data of individuals to exclude potential confounders.Third, the cause of the augmented risk observed in our investigation remains ambiguous.Whether the increased risk stemmed from non-obstructive dyspnoea or the progression from nonobstructive dyspnoea to COPD during the follow-up period remains uncertain.Finally, the majority of the included studies used prebronchodilator lung function as a diagnostic tool for COPD, whereas most studies now use the GOLD criterion of postbronchodilator FEV 1 /FVC ratio to diagnose COPD.However, previous studies have shown that using prebronchodilator and postbronchodilator lung function is equally valuable in distinguishing long-term mortality risk. 37Therefore, the results of this study are unlikely to have been influenced by the use of prebronchodilator lung function to diagnose COPD.
CONCLUSIONS
This systematic review and meta-analysis comprehensively and rigorously summarised the data of eight studies involving 100 758 individuals to examine the association of non-obstructive dyspnoea with COPD incidence and allcause/respiratory-related mortality risk.Individuals with non-obstructive dyspnoea were more likely to develop incident COPD and were at a higher risk of mortality than those without dyspnoea.Our research findings support the inclusion of non-obstructive dyspnoea in the pre-COPD population for enhanced follow-up, management, and intervention.
Open access
of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial.See: http://creativecommons.org/licenses/by-nc/4.0/.
Figure 1
Figure 1 Preferred reporting items for systematic reviews and meta-analyses flow diagram of systematic search and selection.
Figure 2 Figure 3
Figure 2 Forest plot of the risk of incident chronic obstructive pulmonary disease in individuals with non-obstructive dyspnoea compared with individuals without non-obstructive dyspnoea.Larger boxes indicate studies with larger sample sizes and larger weight.The combined effect size estimate takes into account both the individual study estimates and their respective weights.The pooled effect size estimate, indicated by the diamond at the bottom of the forest plot, provides an overall summary of the effect across all included studies.RR, relative risks
Table 1
Characteristics of all studies included in the meta-analysis BD, bronchodilator; COPD, chronic obstructive pulmonary disease; FEV 1 , forced expiratory volume in 1 s; FVC, forced vital capacity; mMRC, modified Medical Research Council Dyspnoea Scale; SD, standard deviation.
Table 2
Newcastle-Ottawa Scale and quality assessment of all studies included in the meta-analysis Study Selection (stars awarded) | 2024-02-25T06:17:13.147Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "bef861963360fd502d77bc2cc474712e4f8f8b22",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3c1632012b5f7bbc121c234431ca3a4d67b9ef34",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259010660 | pes2o/s2orc | v3-fos-license | Sedation with Sevoflurane versus Propofol in COVID-19 Patients with Acute Respiratory Distress Syndrome: Results from a Randomized Clinical Trial
Background: Acute respiratory distress syndrome (ARDS) related to COVID-19 (coronavirus disease 2019) led to intensive care units (ICUs) collapse. Amalgams of sedative agents (including volatile anesthetics) were used due to the clinical shortage of intravenous drugs (mainly propofol and midazolam). Methods: A multicenter, randomized 1:1, controlled clinical trial was designed to compare sedation using propofol and sevoflurane in patients with ARDS associated with COVID-19 infection in terms of oxygenation and mortality. Results: Data from a total of 17 patients (10 in the propofol arm and 7 in the sevoflurane arm) showed a trend toward PaO2/FiO2 improvement and the sevoflurane arm’s superiority in decreasing the likelihood of death (no statistical significance was found). Conclusions: Intravenous agents are the most-used sedative agents in Spain, even though volatile anesthetics, such as sevoflurane and isoflurane, have shown beneficial effects in many clinical conditions. Growing evidence demonstrates the safety and potential benefits of using volatile anesthetics in critical situations.
Introduction
Coronavirus disease (COVID-19) has become a significant worldwide challenge for health care providers. The wide variety of COVID-19 symptoms developed have ranged from mild headache or isolated cough to severe respiratory failure. Many patients affected by COVID-19 developed ARDS and required ICU admission for invasive mechanical ventilation (IMV). Oxygenation impairment and increased mortality rates have characterized this worldwide health crisis [1,2]. Impaired oxygenation in the ARDS context usually requires IMV support, and sedation is an integral part of therapy for this kind of patient. Titratable light-to-deep sedation may control neurological manifestations and help optimize ventilatory settings and endotracheal tube tolerance, although sometimes it is also necessary to use neuromuscular relaxants. Within the variety of sedative agents used in ICUs, there are two large groups classified according to administration route: intravenous and inhaled agents. Intravenous drugs represent routine clinical practice worldwide [3,4]: benzodiazepines (midazolam, lorazepam, and diazepam), propofol, and ketamine are commonly combined with opioids to achieve analgo-sedation. In contrast, volatile anesthetics (isoflurane and sevoflurane) agents on oxygenation in patients with ARDS due to COVID-19 infection. The initial idea was to recruit all consecutive patients admitted in participant critical care units, but emergency situations, patients' social problems, lack of organization, and workers on sick leave made it difficult to perform. Our team achieved poor patient recruitment, but we decided to analyze these data so we can draw some conclusions to continue with our study.
Study Population, Setting and Data Collection
After review and approval by the ethics committee of the INCLIVA Health Research Institute, our team developed a multicenter, national, randomized 1:1, controlled, parallel, open study registered as NCT04359862, in which patients with ARDS due to COVID-19 were included in the first 24 h after diagnosis (Figure 1). The participant centers in this trial were four third-level hospitals in Spain which were referents during the COVID pandemic: Hospital Clínic Universitari of Valencia (Valencia), Consorcio Hospital General Universitario of Valencia (Valencia), Hospital Universitario Ramón y Cajal (Madrid), and Hospital Universitario La Paz (Madrid). The initial sample size took into consideration the high number of daily admissions in our units, but organizational problems made the recruitment rate much lower than expected. This trial was carried out during the year 2020. Each patient who met the inclusion criteria and none of the exclusion criteria was proposed to participate in the study. When the informed consent was signed by the patient (or relatives when patient could not sign), randomization was performed to a treatment arm: sedation using sevoflurane (SEV) or propofol (PROP). In the PROP group, propofol was administered with volumetric pumps (Alaris GW and GP Plus), and sedation levels were evaluated using BIS ® technology (Medtronic Covidien, Spain) . In the SEV group, sevoflurane was administered through an AnaConDa ® device coupled to a Con-traFluran ® scavenging device, and CAM was measured with a SedLine ® monitor. Inclusion criteria were: age ≥ 18 years old, need for sedation, ARDS due to COVID-19, and accepted informed consent by the patient or a relative. Exclusion criteria included intracranial hypertension, allergy to any sedative agent, tidal volume < 250 mL, previous malignant hyperthermia or risk of developing malignant hyperthermia, hepatic failure, neutropenia, pregnancy, or chemotherapy in the previous month. All randomized patients received remifentanil as an analgesic and cisatracurium as a neuromuscular relaxant. A lung-protective ventilation strategy was carried out: VT 6 mL/kg, PEEP > 5 cmH2O, Plateau pressure < 30 cmH2O, respiratory rate < 35 rpm, and I:E ratio ≤ 1:2 [24]. This study protocol is publicly available for verification [25].
Patients were anonymized using a numerical code for data collection. Variables recorded included: anthropometric values, respiratory and hemodynamic parameters, and blood and bronchoalveolar fluid samples at day of randomization (D0), at 24 h (D1), and Each patient who met the inclusion criteria and none of the exclusion criteria was proposed to participate in the study. When the informed consent was signed by the patient (or relatives when patient could not sign), randomization was performed to a treatment arm: sedation using sevoflurane (SEV) or propofol (PROP). In the PROP group, propofol was administered with volumetric pumps (Alaris GW and GP Plus), and sedation levels were evaluated using BIS ® technology (Medtronic Covidien, Spain). In the SEV group, sevoflurane was administered through an AnaConDa ® device coupled to a ContraFluran ® scavenging device, and CAM was measured with a SedLine ® monitor. Inclusion criteria were: age ≥ 18 years old, need for sedation, ARDS due to COVID-19, and accepted informed consent by the patient or a relative. Exclusion criteria included intracranial hypertension, allergy to any sedative agent, tidal volume < 250 mL, previous malignant hyperthermia or risk of developing malignant hyperthermia, hepatic failure, neutropenia, pregnancy, or chemotherapy in the previous month. All randomized patients received remifentanil as an analgesic and cisatracurium as a neuromuscular relaxant. A lung-protective ventilation strategy was carried out: V T 6 mL/kg, PEEP > 5 cmH 2 O, Plateau pressure < 30 cmH 2 O, respiratory rate < 35 rpm, and I:E ratio ≤ 1:2 [24]. This study protocol is publicly available for verification [25].
Patients were anonymized using a numerical code for data collection. Variables recorded included: anthropometric values, respiratory and hemodynamic parameters, and blood and bronchoalveolar fluid samples at day of randomization (D0), at 24 h (D1), and at 48 h (D2) (see Table 1). Thirty days after randomization (D30), the following data were collected: duration and control of mechanical ventilation (MV), ventilator-free days (VFD), length of ICU stay, and mortality at D2 and D30.
Hypothesis and Objectives
The hypothesis of this clinical trial was that sedation using sevoflurane in patients with ARDS associated with COVID-19 infection improves oxygenation. The primary objective included the evaluation of oxygenation in randomized patients during the first 48 h, measured via PaO 2 /FiO 2 . The secondary objective included assessment of mortality rates at D30.
Statistical Analysis
All statistical comparisons were based on the intention-to-treat principle. Sample size was calculated using results of previous works [7,14]. The Student's t-test and the Mann-Whitney test were used as appropriate. Chi-square and Fisher's tests were used for categorical variables. Differences in PaO 2 /FiO 2 for D1 and D2 between the two treatment arms were determined using mixed regression analysis for repeated measures. This method also included an ANCOVA-type design; baseline values of the PaO 2 /FiO 2 variable were added to the regression model as covariates, interacting with the treatment variable. The analysis was also adjusted for the potential of autocorrelation between repeated measures in the same subject and for the nesting effect (due to the study center) through "random intercept" effects. The Kaplan-Meier method, Cox regression, and restricted median survival time (RMST) were used to compare D30 survival rates between the two study groups. Statistical analysis was performed using Stata version 16.1 (StataCorp. 2021. Stata Statistical Software: Release 16. College Station, TX, USA: StataCorp LP).
Primary Results
Analyses were based on a total of 17 patients: 10 patients in the PROP arm and 7 patients in the SEV arm. In Valencia, HCUV recruited seven patients and Consorcio Hospital General Universitario recruited three. In Madrid, Ramón y Cajal Hospital recruited one patient and La Paz Hospital recruited six. For each patient, multiple variables were collected on D1 and D2, and mortality was noted on D30. Regarding the patients' baseline characteristics, no statistically significant differences were found in any of the variables listed in Table 2. No statistically significant difference was found regarding several comorbidities (stroke, arterial hypertension, diabetes mellitus, dyslipemia, smoking habit, chronic kidney injury (CKI), and previous corticosteroid therapy). The calculated severity index (via SAPS-II [26] and LIS [27]) did not show differences between groups. Patients in the PROP arm spent an average 16.4 days in the ICU, and patients in the SEV arm spent an average 20.6 in the ICU (p = 0.563); the average days under IMV were 13 (PROP) vs. 14.6 (SEV), with the average IMV-free days in the ICU being 5.1 vs. 5.2, respectively. The mean number of days from randomization to death was 28 vs. 30, respectively (p = 0.495). Regarding ventilatory settings, no significant differences were found at baseline (Table 3) or at D1 (Table 4). At D2 (Table 5), statistically significant differences among groups were found; respiratory acidosis developed in the SEV group, probably related to differences in end-expiratory lung volumes. Figure 2 illustrates the changes in PaO 2 /FiO 2 between the two treatment arms at the baseline (before treatment), and at D1 and D2 (post-treatment), as previously shown in Tables 2-5. As shown in Figure 2, there were differences between the groups before randomization, with higher PaO 2 /FiO 2 values in the SEV group (p = 0.246). Differences persisted at D1 and minimized at D2. Figure 3 presents the core of the analysis: the effect of the randomized treatment on PaO2/FiO2 (at D1 and D2, post-randomization). It was adjusted by the baseline value of PaO2/FiO2 according to the ANCOVA design. Using mixed regression analysis in the context of ANCOVA, the results showed an improvement in the PaO2/FiO2 ratio in the SEV arm at D1 and D2; however, results were statistically significant only at D1** (Figure 3). Figure 3 presents the core of the analysis: the effect of the randomized treatment on PaO 2 /FiO 2 (at D1 and D2, post-randomization). It was adjusted by the baseline value of PaO 2 /FiO 2 according to the ANCOVA design. Using mixed regression analysis in the context of ANCOVA, the results showed an improvement in the PaO 2 /FiO 2 ratio in the SEV arm at D1 and D2; however, results were statistically significant only at D1** (Figure 3).
Effect over 30-Day Mortality
Kaplan-Meier analysis (Figure 4) showed an intersection of survival curves along the trace, which made it difficult to interpret and find the meaning of the Log-rank test (p = 0.584). Therefore, we tested a time-dependent effect for the PaO 2 /FiO 2 -mortality relationship. On average, patients treated using sevoflurane survived 1.66 days longer than those treated using propofol when patients were followed up at D30 ( Figure 5). Unfortunately, this difference did not achieve statistical significance (95% CIs = −11.00 to 14.33). Figure 6 shows a time-dependent effect with a higher mortality risk at the beginning of the study for the SEV arm, but this risk decreased to become protective in the following period. Unfortunately, confidence intervals reached the 1-line on the Y-edge, so the analysis did not achieve statistical significance. Figure 3 presents the core of the analysis: the effect of the randomized treatment on PaO2/FiO2 (at D1 and D2, post-randomization). It was adjusted by the baseline value of PaO2/FiO2 according to the ANCOVA design. Using mixed regression analysis in the context of ANCOVA, the results showed an improvement in the PaO2/FiO2 ratio in the SEV arm at D1 and D2; however, results were statistically significant only at D1** (Figure 3). FOR PEER REVIEW 8 of 14
Effect over 30-Day Mortality
Kaplan-Meier analysis ( Figure 4) showed an intersection of survival curves along the trace, which made it difficult to interpret and find the meaning of the Log-rank test (p = 0.584). Therefore, we tested a time-dependent effect for the PaO2/FiO2-mortality relationship. On average, patients treated using sevoflurane survived 1.66 days longer than those treated using propofol when patients were followed up at D30 ( Figure 5). Unfortunately, this difference did not achieve statistical significance (95% CIs = −11.00 to 14.33). Figure 6 shows a time-dependent effect with a higher mortality risk at the beginning of the study for the SEV arm, but this risk decreased to become protective in the following period. Unfortunately, confidence intervals reached the 1-line on the Y-edge, so the analysis did not achieve statistical significance. The prognostic effect of PaO2/FiO2 differences between D1 and D2 compared to the basal values for the SEV and PROP arms was tested using sensitivity analysis. Figure 7 shows Cox regression analysis results for D30 mortality, in which no statistical significance was found for changes in PaO2/FiO2. This showed that the effect of treatment on mortality did not depend on changes in PaO2/FiO2 measured at D1 and D2 compared to basal values. The prognostic effect of PaO2/FiO2 differences between D1 and D2 compared to t basal values for the SEV and PROP arms was tested using sensitivity analysis. Figur shows Cox regression analysis results for D30 mortality, in which no statistical sign cance was found for changes in PaO2/FiO2. This showed that the effect of treatment mortality did not depend on changes in PaO2/FiO2 measured at D1 and D2 compared basal values. The prognostic effect of PaO 2 /FiO 2 differences between D1 and D2 compared to the basal values for the SEV and PROP arms was tested using sensitivity analysis. Figure 7 shows Cox regression analysis results for D30 mortality, in which no statistical significance was found for changes in PaO 2 /FiO 2 . This showed that the effect of treatment on mortality did not depend on changes in PaO 2 /FiO 2 measured at D1 and D2 compared to basal values.
x FOR PEER REVIEW 10 of 14
Limitations
The main limitation of this clinical trial was its sample size. The health emergency situation in which this project was carried out led to the loss of recruitment and clinical data (due to work overload and difficulties in obtaining informed consent). In addition, assessment of oxygenation, levels of inflammatory mediators, and mortality in patients admitted to the ICU for ARDS associated with COVID-19 were probably influenced by unknown factors due to the incipient development of infection by this virus. Moreover, the study's initial protocol included cytokine level measurements; however, these could not be assessed due to laboratory issues (overwhelmed by COVID-19 tests) and storage limitations.
Generalizability and Interpretation
Intravenous agents (mainly propofol and benzodiazepines (3)) have been the mostemployed deep sedation drugs in ICUs worldwide. That is surprising, as the use of benzodiazepines has been associated with decreased ventilator-free days, increased risk of delirium, and worse long-term outcomes [28,29], so non-benzodiazepine strategies should be preferred for ICU sedation.
The COVID-19 pandemic emptied all hospital sedative stocks in just a few months. The main reason for this was that ICU capacities were overrun with an increase in invasively mechanically ventilated ARDS patients who required deep sedation combined with muscle relaxation to achieve a depth of sedation sufficient to avoid patient-ventilator dys-
Limitations
The main limitation of this clinical trial was its sample size. The health emergency situation in which this project was carried out led to the loss of recruitment and clinical data (due to work overload and difficulties in obtaining informed consent). In addition, assessment of oxygenation, levels of inflammatory mediators, and mortality in patients admitted to the ICU for ARDS associated with COVID-19 were probably influenced by unknown factors due to the incipient development of infection by this virus. Moreover, the study's initial protocol included cytokine level measurements; however, these could not be assessed due to laboratory issues (overwhelmed by COVID-19 tests) and storage limitations.
Generalizability and Interpretation
Intravenous agents (mainly propofol and benzodiazepines (3)) have been the mostemployed deep sedation drugs in ICUs worldwide. That is surprising, as the use of benzodiazepines has been associated with decreased ventilator-free days, increased risk of delirium, and worse long-term outcomes [28,29], so non-benzodiazepine strategies should be preferred for ICU sedation.
The COVID-19 pandemic emptied all hospital sedative stocks in just a few months. The main reason for this was that ICU capacities were overrun with an increase in invasively mechanically ventilated ARDS patients who required deep sedation combined with muscle relaxation to achieve a depth of sedation sufficient to avoid patient-ventilator dyssynchrony. At the beginning of the pandemic, some groups emphasized that affected patients required high sedative doses; although some possible underlying reasons for this were unknown, rational reasons included that patients were younger, without co-morbidities, and most of them needed to toggle prone position.
This situation necessitated opening the therapeutic arsenal to other options, such as volatile anesthetics and multimodal sedative approaches, to avoid adverse effects related to propofol (such as hyperlipidemia or propofol infusion syndrome (PRIS)) or the abuse of neuromuscular relaxants or analgesics (mainly opioids). Inhaled sedation (using isoflurane or sevoflurane) had already demonstrated faster and improved recovery after prolonged sedation [30,31], minimized delirium, an analgesic sparing effect [32][33][34], decreased pulmonary inflammation [14,35], improved oxygenation in patients with ARDS [5], and decreased mortality in long-term ventilated patients [36]. Sevoflurane has been used in many ICUs since AnaConDa© was designed and the accuracy of the pharmacokinetic model was published [37]. Some guidelines included its use in critically ill patients with ARDS for a moderate-to-deep sedation more than ten years ago [38,39]. Still, many Spanish ICUs did not switch to inhaled sedation until they ran out of propofol; the main reasons provided were that staff were unfamiliar with the volatile agent or its specific device (AnaConDa©), even though it was widely used for intraoperative anesthetic maintenance.
The primary role that propofol and midazolam have in ICUs increases the complexity of changing the routine from intravenous sedation in order to assess the spectrum of sedative agents that we currently use. Profiling the potential benefit of a hypnotic agent over oxygenation is complicated given the interference of multiple variables that probably act as confounding factors in a critically ill patient. However, it is important to keep in mind that the profile of these drugs (that act at so many levels) must be recognized in order to adapt decisions based on the patient being treated [40]. In addition to its function as sedative agent, sevoflurane has many intrinsic characteristics with potential therapeutic benefits that could be especially relevant to ICU patients: it is an easy-totitrate drug with shorter wake-up times, it has enhanced effect over analgesics (decreased use of opioids) and neuromuscular relaxants, and it has fewer vasopressor requirements compared to midazolam and propofol. All of these potential benefits should be taken into consideration [34,41].
Our study had inherent design limitations that made it difficult to draw categorical conclusions. However, this study highlights the feasibility of using sevoflurane as a primary sedative agent in ARDS patients. Both PROP and SEV treatment arms were comparable regarding patients' characteristics, co-morbidities, and ventilatory settings. Regarding the PaO 2 /FiO 2 ratio, it tended to improve in the SEV group both at D1 and D2; however, results were statistically significant only for D1. The SEV arm experienced longer ICU stays and longer days under MV; however, the number of days from randomization to death was longer in the SEV group. Moreover, patients on sevoflurane survived longer than those on propofol when patients were followed up at 30 days, even if results were not statistically significant. The reason for not achieving significance could be both low sample size and days under inhaled sedation; a retrospective study in surgical ICU ventilated patients (n = 128) with inhaled drugs used for more than 96 h demonstrated more ventilator-free days at day 60, more hospital-free days at 6 months, and decreased mortality compared with patients under intravenous sedation receiving midazolam or propofol [36]. Our results are consistent with previous analysis [30,[42][43][44][45], which found no differences between inhaled and intravenous sedation in deaths or length of ICU stay. An international retrospective study including 10 ICUs published in 2022 [46] found no association between inhaled sedation in COVID-19 patients and the number of ventilator-free days through to day 28; this suggests that the effect of treatment on mortality probably does not depend on the resulting changes in the PaO 2 /FiO 2 ratio.
Regarding oxygenation, studies performed in mice, rat, and pig models of ARDS found that inhaled agents reduced alveolar and systemic levels of pro-inflammatory cytokines [7,14,[47][48][49], improved arterial oxygenation, and decreased lung alveolar oedema [7,50]. Results of this work agree with previous publications in which the potential benefit of sevoflurane over oxygenation was observed [15,40]; however, more studies recruiting a higher number of patients are needed to support the use of inhaled agents. Few studies registered on the ICH GCP website include objectives regarding the study of sevoflurane in patients with moderate to severe ARDS diagnoses. Even so, volatile agents (sevoflurane and isoflurane) are used as alternatives to intravenous sedation in ICUs by an increasing number of physicians [51] as monotherapy or as part of a combined therapy [52]. There are some detractors because of volatile agents' potential adverse events [53], but there is relevant literature that supports their feasibility and safety of use, without the risk of tolerance or effects on renal or liver function [30,34,45,54,55].
In our unit's experience, managing one more drugs, whether or not they are superior to another agent, allows us to provide alternatives that can be beneficial to our various patients. Therefore, while waiting for new studies, the use of inhaled sedation with sevoflurane as the first line in patients affected by ARDS must be considered, and not only as second or third-line treatment, as recommended recently [28].
Conclusions
This study has demonstrated that sedation using sevoflurane improved oxygenation and increased survival times in patients affected by ARDS due to COVID-19 infection compared to propofol. Hence, in patients with ARDS who require sedation, sevoflurane is a safe and effective option that, in addition to its main purpose, has a beneficial effect on oxygenation and survival. Therefore, it could be considered as a first-choice strategy for this patient profile. Informed Consent Statement: Informed consent was obtained from all subjects, or relatives in charge, involved in the study. All signed consents are attached to each patient's medical records.
Data Availability Statement: Data supporting reported results are available from the corresponding author at any time.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-06-02T15:18:43.143Z | 2023-05-31T00:00:00.000 | {
"year": 2023,
"sha1": "2745833f02d6f21089b13c59891ad8473c8bb03e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/jpm13060925",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cfa1c7168cd55b0de776c3ea0c8c55d8960783a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4451626 | pes2o/s2orc | v3-fos-license | Evaluation of rapid post-mortem test kits for bovine spongiform encephalopathy (BSE) screening in Japan: Their analytical sensitivity to atypical BSE prions
ABSTRACT A classical type of bovine spongiform encephalopathy (C-BSE), recognized in 1987, had a large impact on public health due to its zoonotic link to variant Creutzfeldt-Jakob disease by the human consumption of dietary products contaminated with the C-BSE prion. Thus, a number of countries implemented BSE surveillance using rapid post-mortem test kits that were approved for detection of the C-BSE prion in the cattle brain. However, as atypical BSE (L- and H-BSE) cases emerged in subsequent years, the efficacy of the kits for the detection of atypical BSE prions became a matter of concern. In response to this, laboratories in the European Union and Canada evaluated the kits used in their countries. Here, we carried out an evaluation study of NippiBL®, a kit currently used for BSE screening in Japan. By applying the kit to cattle brains of field cases of C-BSE and L-BSE, and an experimental case of H-BSE, we showed its comparable sensitivities to C, L-, and H-BSE prions, and satisfactory performance required by the European Food Safety Authority. In addition to NippiBL®, two kits (TeSeE® and FRELISA®) formerly used in Japan were effective for detection of the L-BSE prion, although the two kits were unable to be tested for the H-BSE prion due to the discontinuation of domestic sales during this study. These results indicate that BSE screening in Japan is as effective as those in other countries, and it is unlikely that cases of atypical BSE have been overlooked.
INTRODUCTION
Transmissible spongiform encephalopathies (TSEs) are fatal neurodegenerative disorders that cause neuronal cell death and spongiosis in the brain of several mammalian species. In human beings, TSEs emerge in such forms as Creutzfeldt-Jakob disease (CJD), Gerstmann-Str€ aussler-Scheinker syndrome, fatal familial insomnia, and kuru. The causative agent is considered to be solely protein, referred to as 'prion', whose major constitutes are diseaseassociated forms of prion protein (PrP Sc , PrP refers to prion protein). 1,2 PrP Sc is a conformational isoform of glycosylphosphatidylinositolanchored, non-pathogenic cellular prion protein (PrP C ) encoded by the host gene, and it is partially resistant to proteolytic digestion by proteinase K (PK). 2,3,4 A key event in prion propagation is the conversion of endogenous PrP C to PrP Sc , and PrP Sc accumulates in the central nervous system of patients and animals. The cycles of conversion are triggered by preexisting PrP Sc as seeds, where the seeds are initially acquired by unknown processes, due to mutation of the PrP C gene, or by the intake of external PrP Sc . Accordingly, TSEs emerge as sporadic diseases of unspecified backgrounds, hereditary diseases, or infectious diseases. 4 Bovine spongiform encephalopathy (BSE) is a TSE of cattle. A classical type of BSE (C-BSE) was first reported in 1987 in the United Kingdom, 5 and its growing epidemic was recognized later in other countries by infection with the BSE prion through feeding contaminated meat-and-bone meal. Importantly, the epidemic of C-BSE posed an ensuing threat of zoonotic infection of humans after emergence of variant CJD cases in the 1990s, considered to be caused by the human consumption of beef products contaminated with the C-BSE prion. 6,7 This prompted a number of countries to implement protective measures including BSE surveillance using rapid post-mortem test kits. These kits are based on the enzyme-linked immunosorbent assay (ELISA), Western blot, or immunochromatography, which detect PrP Sc accumulated in the medulla oblongata at the level of the obex in cattle brains after proteolytic digestion and elimination of PrP C , or using antibodies that specifically recognize the conformation(s) of PrP Sc . The performance of these kits in detection of the C-BSE prion was evaluated and approved by the European Commission and European Food Safety Authority (EFSA). [8][9][10] Along with C-BSE, two novel atypical forms of BSE named H-type (H-BSE) and L-type BSE (L-BSE) were identified by the mid 2000s. 11,12 Cases of L-and H-BSE are less common than those of C-BSE, but they have been reported in several countries of the European Union (EU) as well as Japan, Canada, and United States of America. [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25] Prions of atypical BSE are distinguished biochemically and pathologically from the C-BSE prion. The risk of transmission of atypical BSE prions to human beings is still under investigation. The L-BSE prion has been thought to be more virulent than the C-BSE prion in experimental transmission to non-human primates and transgenic mice expressing a human form of PrP C . [26][27][28][29][30] On the other hand, other studies showed inefficient transmission of the L-BSE prion to the transgenic mice, and inefficient in vitro conversion of a human form of PrP C by the L-BSE prion 31,32 So far, the H-BSE prion failed to be transmitted to the transgenic mice. 29,31 Nevertheless, it is sensible to determine if the rapid tests in current use are also valid for the atypical BSE prions. From this point of view, evaluation studies were carried out on seven tests used in EU countries and three tests used in the Canadian national BSE surveillance program. 33,34 In Japan, ELISA test kits such as 'Platelia Ò ' and 'TeSeE Ò BSE test kit' (Bio-Rad Laboratories, Inc., Hercules, CA, USA), 'FRELISA Ò BSE test kit' (Fujirebio Inc., Tokyo, Japan), and 'NippiBL Ò BSE test kit' (Nippi Inc., Tokyo, Japan) were mainly used in the BSE screening, and NippiBL Ò is now the only available kit. [8][9][10][35][36][37] In harmonization with reports from EU and Canadian laboratories as described above, we carried out an evaluation study of NippiBL Ò , together with TeSeE Ò and FRELISA Ò , to assess their competence to detect atypical BSE prions.
Performance of the Kits to Detect the C-BSE Prion
In a similar way to the preceding evaluation studies by EU and Canadian laboratories, the present study was designed to examine the performance of rapid ELISA tests to detect atypical L-and H-BSE prions in comparison with C-BSE prion. 33,34 Hence, we began with reviews of the performance of three kits (NippiBL Ò , TeSeE Ò , and FRELISA Ò ) using samples prepared from the brain of a C-BSEaffected cow, although the performances of these kits for detection of the C-BSE prion were already approved elsewhere. [8][9][10]38 We also included the BetaPrion Ò BSE test kit (Analytik Jena AG -AJ Roboscreen GmbH, Leipzig, Germany) as a reference kit to compare our results with those of EFSA and EU laboratories. 9,33 The EFSA has defined two sensitivity criteria: the 'diagnostic sensitivity' is the ability to recognize confirmed positive test samples as positive, while the 'analytical sensitivity' is a detection limit of positive samples serially diluted by negative brain tissues (i.e., the dilution limit for detection). 39 Consistent with the previous evaluation, all kits in the present study determined the brain samples positive for the C-BSE prion as positive. [8][9][10]33,35,38 Thus, the kits fulfilled the criteria of diagnostic sensitivity. In terms of the 'analytical sensitivity', Fig. 1 shows signal response profiles of the kits using serially diluted positive samples (referred to here as dilution-response profiles). Among the four kits, BetaPrion Ò had the highest analytical sensitivity under our experimental conditions, achieving a detection limit at a 1:1,024 dilution (2 10 dilution) of the brain positive for the C-BSE prion ( Fig. 1 and Table 1). NippiBL Ò , whose detection limit was a 1:256 dilution (2 8 dilution), was the next after BetaPrion Ò . FRELISA Ò and TeSeE Ò (in the conventional assay protocol) followed in descending order (Table 1). In analysis by a four-parameter logistic model, the dilutionresponse profile of each kit was fitted to a regression curve with an adjusted R 2 value higher than 0.938 ( Fig. 1A to D).
Evaluation of the Kits for Atypical L-and H-BSE Prions
After reviewing the performances of the kits to detect the C-BSE prion, we then applied them to the brain tissues derived from an L-BSE affected cow ( Fig. 2A to D). The results are summarized in Table 1. All kits distinguished the samples positive for the L-BSE prion and the samples of normal brains, without false-negative or false-positive signals ( Table 1). Among the kits, BetaPrion Ò showed the best analytical sensitivity by reaching a detection limit at a 1: 64 dilution (2 6 dilution) of the brain of the L-BSE cow (Fig. 2D). Analytical sensitivities of NippiBL Ò , FRELISA Ò , and TeSeE Ò followed in the same descending order as determined for the C-BSE prion (Table 1). Next, we examined the performance of NippiBL Ò using the brain samples containing the H-BSE prion in a similar way, and obtained its detection limit at a 1: 16 dilution (2 4 dilution) of the brain (Fig. 2E, and Table 1). On fourparameter logistic model analysis, the dilutionresponse profiles of the kits for the brain samples of atypical L-and H-BSE cows were fitted to regression curves with adjusted R 2 values higher than 0.949 ( Fig. 2A to E).
The positive samples in the present study were prepared by serial dilution of 40% (w/v) stock homogenates of the brains of the C-, L-, and H-BSE cows (see Materials and Methods), and the stock homogenates contained different concentrations of PrP Sc from each other. To determine the relative concentration of PrP Sc in the stock homogenates, the homogenates were digested by PK and subjected to Western blot analysis for quantification of PrP Sc (Fig. 3A). Figure 3B shows correlations between the amounts of brain tissues and total signal intensities of PrP Sc , in which the total signal intensity of PrP Sc represents a sum of the intensities of the non-, mono-, and di-glycosylated forms of PrP Sc in Fig. 3A. The analysis showed that comparable signal intensities of PrP Sc were detected in the homogenates corresponding to 12.5 mg of brain tissue of the C-BSE cow, 100 to 200 mg of brain tissue of the L-BSE cow, and 40 to 50 mg of brain tissue of the H-BSE cow ( Fig. 3A and B). Accordingly, the relative concentration of PrP Sc in the stock homogenates in the brains of C-, L-, and H-BSE cows were calculated to be approximately 2 4 : 1: 2 1.5 . In parallel, Table 1 indicates that all kits showed a detection limit for the C-BSE sample 2 3 to 2 5 -times higher than that for the L-BSE sample, and NippiBL Ò showed a detection limit for the H-BSE sample 2 1 -times higher than that for L-BSE. The parallelism in the relative concentration and detection limits of PrP Sc in the C-, L-, and H-BSE samples was an FIGURE 1. Dilution-response profiles of the kits using the brain homogenate of a C-BSE cow. Raw data were plotted as the mean § SEM (standard error of the mean) from a set of triplicate wells. The dotted lines indicate thresholds of positivity defined by the manufacturers' protocols. Non-linear curve fitting was applied to the row data using a four-parameter logistic model.
indication of the invariable reactivity of each kit to the three types of BSE prions, though the overall sensitivities were different among the kits.
Assessment of the Digestion Condition of NippiBL Ò
NippiBL Ò employs a unique protocol of treating the brain tissues with a mixture of protease at 56 C for 10 min for digestion before applying them to the ELISA assay, whereas the other kits in the present study digest the tissues by PK at 37 C. 35 Although NippiBL Ò possessed adequate analytical sensitivity for the detection of atypical BSE prions ( Fig. 2C and E, and Table 1), the line of evidence that PrP Sc of atypical BSE prions was less resistant to proteolytic digestion under stringent conditions prompted us to examine if the digestion condition of NippiBL Ò did not cause rapid or irregular decay of the signal intensities of PrP Sc when applied to atypical BSE prions. 17,40,41 To achieve this, we carried out a time-course study in which the brain samples of the C-and L-BSE cows were processed according to the protocol of NippiBL Ò but with different duration of digestion for up to 20 min before applying the samples to the ELISA assay. As expected, the signal intensities obtained from the C-BSE brain samples remained at a stable level even when the samples were digested for 20 min (Fig. 4A). Under the condition, the L-BSE samples (high, mid, and low concentrations of PrP Sc ) showed a gradual and time-dependent decrease of the signal intensities during the digestion ( Fig. 4A and B). However, the signals were sustained, and did not show a sudden fall or fluctuation that would potentially compromise the accuracy and reproducibility of the analysis. The narrow values of the standard error of the mean for the signal intensities of the triplicate L-and H-BSE samples in the dilution-response profiles ( Fig. 2C and E) supported this observation.
DISCUSSION
In addition to the prior approval of three rapid post-mortem BSE test kits used in Japan for detection of the C-BSE prion, we examined the analytical performance of the kits for the detection of atypical BSE prions. Among the three kits, TeSeE Ò and FRELISA Ò were used The number of wells in triplicate that showed positive signals at the detection limits. b) Evaluation of the kits was carried out with sets of serially diluted positive samples and normal brain samples. c) Weights of tissues that were processed and applied to a single well according to the manufacturers' protocols.
d)
Two independent tests were carried out for C-BSE and L-BSE samples, respectively. e) One well was positive, and one well was pseudo-positive (i.e., the absorbance was between the threshold of positivity and [threshold -10%] value). Two wells were positive, and one well was pseudo-positive (i.e., the absorbance was between the threshold of positivity and [threshold -10%] value). FIGURE 2. Dilution-response profiles of the kits using the brain homogenates of L-and H-BSE cows. Raw data were plotted as the mean § SEM from a set of triplicate wells. The dotted lines indicate thresholds of positivity defined by the manufacturers' protocols. Non-linear curve fitting was applied to the row data using a four-parameter logistic model. (A -D) TeSeE Ò , FRELISA Ò , NippiBL Ò , and BetaPrion Ò tested on the samples prepared from the L-BSE cow, respectively. (E) NippiBL Ò tested on the samples prepared from the H-BSE cow.
until 2014, and NippiBL Ò has been used since 2006 and is currently the only available kit.
Our study showed that the kits correctly judged the positive samples as positive, and the negative samples prepared from normal brains as negative (Table 1). Although testing a large number of independent samples was beyond the scope of the present study, the results fulfilled the 'diagnostic sensitivity' and 'specificity of the tests' that the European Commission and the EFSA have defined as the ability to determine specimens of true positive animals to be positive, and true negative animals to be negative. 8,39 With respect to the analytical sensitivity, the EFSA regulations require appropriate tests to be within a maximal 2 log10 inferiority range of the most sensitive test. 39 In this regard, IDEXX HerdChek Ò BSE-scrapie (IDEXX Laboratories, Inc., Maine, USA) is viewed currently as the most sensitive test. 9,33,34,38 Due to import regulations, we could not include IDEXX HerdChek Ò in the present study. Instead, based on the results of previous studies showing that TeSeE Ò (short protocol) and BetaPrion Ò satisfied the EFSA requirements for the detection of L-and H-BSE prions, 33,34 we considered that FRELISA Ò and NippiBL Ò , whose sensitivities were between TeSeE Ò and BetaPrion Ò , met the EFSA requirements. In fact, when examined using the brain samples of the L-BSE cow, the detection limit of NippiBL Ò was only 2 2 factors lower than that of BetaPrion Ò (Fig. 2, and Table 1). Apart from the analytical sensitivity, it might be intriguing to consider how much tissue is FIGURE 3. Western blot analysis after PK digestion to determine the relative amounts of PrP Sc in the stock homogenates of the brain. (A) The stock homogenates of the brains of the C-, L-, and H-BSE cows were digested by PK, and aliquots of volumes of the digests corresponding to the indicated weights of tissues were subjected to Western blot analysis. PrP Sc was detected using anti-PrP antibody 12F10, with the aid of a chemluminescent detection reagent and a cooled CCD camera imaging system. The letters non-, mono-, and di-denote the non-, N-mono-, and N-di-glycosylated forms of PrP Sc . (B) Signal intensities of the non-, mono-, and di-glycosylated forms of PrP Sc in each lane in (A) were measured by ImageGuage software, combined as a total signal intensity of PrP Sc , and plotted in relative magnitude by taking that of 50 mg of the C-BSE brain tissue as 10.0. required for a single well of the ELISA plates since the amount differs depending on the kits. In BetaPrion Ò , for example, a well contains the PK-digested sample corresponding to 28 mg of brain tissue. Thus, the detection limit of BetaPrion Ò for the authentic C-BSE brain tissue at a 1:1,024 dilution suggests that BetaPrion Ò can detect PrP Sc in as little as 27 mg of the brain of the C-BSE cow used in the present study (Table 1). In NippiBL Ò , a digested sample corresponding to 10 mg of brain tissue is applied to a well; thus, the detection limit of NippiBL Ò at a 1:256 dilution was equivalent to the ability to detect PrP Sc in 40 mg of the brain of the C-BSE cow (Table 1). Importantly, despite the kits showing different overall analytical sensitivities, each kit showed no preferential reactivity to a particular type of BSE prions; TeSeE Ò and FRELISA Ò had comparable reactivities to C-and L-BSE prions, and NippiBL Ò had comparable reactivity to C-, L-, and H-BSE prions.
The medulla oblongata at the level of the obex is specified as a general specimen for rapid post-mortem BSE tests. 39,42 The distribution and deposition of PrP Sc vary in the regions of the brain. 41 For example, we found by Western blot analysis that the cerebral specimens of the L-BSE cow we used in the present study contained an approximately 3-fold lower amount of PrP Sc than the medulla oblongata at the level of the obex of this cow (data not shown). Similar to the preceding evaluation studies, the test specimens in the present study were not prepared from the medulla oblongata at the level of the obex but from other regions of the brains of the C-, L-, and H-BSE cows (see Materials and Methods). 33,34,38 Despite the usage of specimens of the brain region not specified by the protocols, we think the results (A) Brain samples of the C-BSE and L-BSE cows were digested according to the manufacturer's protocol but for an extended time. For the L-BSE brain, samples of three different dilutions by normal brains were tested (a: 2 ¡0.6 dilution, b: 2 ¡1 dilution, c: 2 ¡2 dilution). The arrowhead at the top of (A) indicates the digestion time set by the protocol (10 min). Data were plotted as the mean § SEM from duplicate wells. The dotted lines indicate the thresholds of positivity defined by the protocol. The samples of normal brain (i.e., negative control) gave rise to negative signals throughout the assay. (B) Only data on the brain samples of the L-BSE cow in (A) were plotted for clarity.
of the present study strongly support the eligibility and efficacy of the test kits for the purpose of BSE screening.
In the present study, the samples for TeSeE Ò , FRELISA Ò and BetaPrion Ò were prepared by mixing the stock homogenates of the brains of the BSE affected cows in saline and the brain homogenate of normal cows in saline. The mixtures were centrifuged to discard the supernatant, and the pellet fractions of tissues (1-volume) were added with 2-volumes of the 1x concentrated homogenization buffers supplied by the kits (see Materials and Methods). This protocol provided an advantage in accurate dilutions of PrP Sc ranging from 2 0 to 2 11 in a specified volume of the pellet fractions, but it discarded soluble components of the tissues by centrifugation. Although we have not examined effects of the loss of soluble components on the performances of the kits, we conceived the ultimate goal of the present study was achieved because PrP Sc was expected to be retrieved in the pellet fractions almost quantitatively. 43 Also, in comparison with the manufacturers' protocols in which the concentrations of the homogenization buffers after addition to the tissues are at 0.80 § 0.02x, the above protocol brought the concentrations of the buffers after addition to the pellet fractions to 0.67x. However, this did not seem to affect the performance of the test kits, since the buffers are 5% glucose (TeSeE Ò ) or 50 mM Tris buffer supplemented with collagenase and DNase (FRELISA Ò ). After addition of the homogenization buffers, the samples were homogenized and digested by PK according to the manufacturers' protocols, so that the concentrations of PK and the other components such as urea and detergents were at the same as those indicated by the manufacturers. With regard to NippiBL Ò , the homogenization buffer contains Triton X-100 and urea, and the homogenization of the brain tissues is carried out in the buffer premixed with the proteases supplied with the kit. The manufacturer's protocol instructs that the concentration of the homogenization buffer and the proteases is at 0.90 § 0.02x after addition to tissues. 35 In the present study, the concentrations of the buffer and the proteases after addition to the samples were as follows: 0.83x for the samples at a 2 ¡1 dilution; 0.86x for the samples at a 2 ¡2 dilution, 0.87x for the samples at a 2 ¡3 dilution, 0.89x for the samples at 2 ¡4 and 2 ¡5 dilutions, and 0.90x for the samples at 2 ¡6 to 2 ¡11 dilutions.
An issue specific to NippiBL Ò is that it treats the brain tissues with a more aggressive digestion before applying them to the ELISA assay. 35 We showed that the digestion condition of NippiBL Ò did not compromise the accuracy of detecting atypical BSE prions (Fig. 4). Conceivably, PrP Sc of atypical BSE prions is largely resistant to the digestion condition of NippiBL Ò , or PrP Sc might be degraded to some extent but the degrading peptides retain the epitopes of the antibodies for effective detection. With regard to NippiBL Ò , we incidentally found that the freezing and thawing of brain tissues in the NippiBL Ò homogenization buffer significantly impaired the resistance of PrP Sc to the proteases of the kit (data not shown), possibly due to the effect of components in the homogenization buffer such as Triton X-100 and urea. 35 Of course, the manufacturer's protocol does not instruct users to freeze brain specimens in the homogenization buffer prior to the digestion. This should be avoided.
In conclusion, the present study showed that NippiBL Ò is suitable for the detection of C-, L-, and H-BSE prions. Also, TeSeE Ò and FRELISA Ò , which were discontinued from use in Japan in 2014, were appropriate for the detection of C-and L-BSE prions. These results support the effectiveness of the current BSE surveillance program in Japan, and it is unlikely that cases of atypical BSE have been overlooked due to the test method being used.
Brain Tissues
Brain tissues negative for the BSE prion (normal brain) were a pool of medulla oblongata tissues proximal to the level of the obex collected from four cows. These specimens were obtained from local abattoirs in Japan, and were determined to be negative for BSE by Western blot and histopathological analyses at the National Institute of Infectious Diseases.
Brain tissues positive for the C-BSE prion were from the thalamus of a field case of a C-BSE cow identified in Ireland (case number: H02-551). The specimen was provided by Prof. W. Hall (University College of Dublin, Ireland) after permission for import given by the Ministry of Agriculture, Forestry and Fisheries, Japan.
Brain tissues positive for the L-BSE prion were from the cerebrum of a field case of an L-BSE cow identified in Japan. The cow (case number: JP24) was positive for BSE by routine screening using the Platelia BSE kit at a local meat-inspection laboratory, and determined as an L-BSE case by confirmatory analysis at the National Institute of Infectious Diseases. 16 The DNA sequence of the PrP coding region of the cow had a synonymous codon of asparagine 192 (AAT) compared with that of Bos taurus PrP in a public database (accession number: AJ298878, AAC for asparagine 192 ). 16 Because no field case of H-BSE has been found to date in Japan, we utilized a cow experimentally infected with the H-BSE prion by intracranial administration (experimental code number: 9458). 44 Tissues positive for the H-BSE prion were from the brain stem. The DNA sequence of the PrP coding region of this cow was identical to that of Bos taurus PrP in the database (accession number: AJ298878). 44
Sample Preparation
Stock homogenates of 40% (w/v) brains of C-BSE, L-BSE, and H-BSE cows were prepared in saline (Otsuka Pharmaceutical Factory, Inc., Tokushima, Japan). Homogenization was carried out by vigorous shaking of the brains with ceramic YTZ Ò balls (2.7-mm diameter beads, Nikkato Co., Osaka, Japan) at 2,500 rpm for 5 min in a Multi-beads shocker Ò tissue disruptor (Yasui Kikai Co., Osaka, Japan). Stock homogenate of the normal brain was prepared in a similar way at a final concentration of 20% (w/v). For NippiBL Ò , aliquot weights of tissues of normal brains were mashed according to the manufacturer's protocol. Due to the scarcity of brain specimens of atypical BSE cows, the stock homogenates were serially diluted as described below.
TeSeE Ò , FRELISA Ò , and BetaPrion Ò : The 40% stock brain homogenates of C-BSE and L-BSE cows were added to an equal volume of saline, then serially diluted to a 2 base logarithm up to 2 ¡11 with 20% (w/v) normal brain homogenate to obtain 900 mL of 20% (w/v) brain homogenate (i.e., each sample contained 180 mg of brain tissues, in which the net amounts of the BSE-positive tissues were serially diluted). The samples were centrifuged at 19,000 xg at 4 C for 30 min, 600 mL of the supernatant was discarded, and the pellet fractions (equivalent to 180 mg of the brain per tube) were stored at ¡75 C until use. To obtain pellets of negative control tissues, aliquots of 900 mL of the 20% stock homogenate of the normal brain were centrifuged in the same way. Before the test, the pellet fractions were added to 600 mL of the homogenization buffer supplied by the kits to reconstitute 900 mL of 20% brain homogenate. The samples were then homogenized as instructed in the manufacturers' protocols by using the ceramic beads supplied with the kits of TeSeE Ò and BetaPrion Ò , or by using ceramic YTZ Ò balls (1.5-mm diameter beads, Nikkato Co.) for FRELISA Ò . The homogenates were dispensed in triplicate tubes by the volumes indicated in the manufacturers' protocols (i.e., dispense 250 mL per tube for TeSeE Ò , 250 mL for FRELISA Ò , and 200 mL for BetaPrion Ò ) for digestion with PK, and the assay was performed according to the manufacturers' protocols.
NippiBL Ò : The kit employs a unique protocol for sample preparation. The dissected brain tissues are mashed through mesh-bottomed cups (BioMasher), the mashed tissues are then directly added to 9-volumes of homogenization buffer supplemented with the proteases of the kit, and disrupted by ceramic beads to prepare 10% homogenate which is ready for digestion at 56 C. 35 To adapt to the protocol, test samples were prepared in the following way to contain 70 mg of the total brain tissues but serially diluted amount of the BSE-positive tissues: a sample at a 2 ¡1 dilution was prepared by the 122 K. Hagiwara et al. addition of mashed normal brain (35 mg) with 88 mL of the 40% homogenates of BSE-positive brains (equivalent to 35 mg tissue), a sample at a 2 ¡2 dilution was prepared by the addition of mashed normal brain (52 mg) with 44 mL of the 40% homogenates of BSE-positive brains (equivalent to 18 mg tissue), and samples at 2 ¡3 to 2 ¡11 dilutions were prepared by the addition of 70 § 5.6 mg (mean § SD) of mashed normal brain with appropriate volumes of the 40% homogenates of BSE-positive brains. Pieces of 70 mg of the normal brains were mashed and used as negative controls. The samples were added to 9-volumes of the homogenizing buffer containing the proteases to adjust the tissue concentration to 10%. Then, as instructed by the manufacturer's protocol, the samples were homogenized by using the ceramic beads that were supplied with the kit, and subjected immediately to digestion. 35 After digestion, the samples were dispensed to the wells of the kit in triplicate, and the assay was performed according to the manufacturer's protocol.
Execution of the Test
In the present study, we examined one rapid test kit for either of the brain samples of C-, L-, or H-BSE cow per day, and did not carry out simultaneous examination of different kits or different brain samples on the same day. To minimize variability, only two operators participated in the assay. A Model 680 microplate reader (Bio-Rad Laboratories, Inc.) and an ARVO X4 microplate reader (PerkinElmer Inc., Waltham, MA, USA) were used to measure the absorbance indicated by the manufacturers' protocols (450 nm for reading; 620 nm for reference). All procedures were carried out according to the biosafety guidelines of the National Institute of Infectious Diseases, and the National Institute of Animal Health.
Data Analysis
True-positive and pseudo-positive thresholds were defined by the manufacturers' protocols.
If more than two wells in the triplicate wells were positive or pseudo-positive, the overall result was judged as positive. The detection limit was defined as the maximum dilution factor where the overall result was positive. Fitting analysis by a four-parameter logistic model was carried out using Prism 6 software (GraphPad Software, Inc., La Jolla, CA, USA) with the constraint of the maximum absorbance being less than 3.5. Adjusted R-squared (adjusted R 2 ) values were calculated by Prism 6 software using the following equation: Adjusted R 2 D 1 -[SS residuals /(n-K)]/[SS total /(n-1)], where SS residuals is the sum of squares of the difference of each point from the fitted-curve, SS total is the square of the difference of the points from the mean of all absorbance, n is the number of data points, and K is the number of parameters fitted by the regression analysis.
Western Blot Analysis
The stock homogenates of the C-, L-, and H-BSE brains were diluted to 20% (w/v) with saline, and then 50 mL of the 20% (w/v) homogenates (equivalent to 10 mg of tissues) were added with an equal volume of a buffer consisting of 4% zwittergent Ò 3-14 (Merck Millipore, Darmstadt, Germany), 1% lauroylsarcosine sodium salt (Sigma-Aldrich, St. Louis, MO, USA), 100 mM NaCl, 50 mM Tris-HCl (pH 7.5). The samples were added to 0.625 mL of 80 mg/mL collagenase (Wako Pure Chemical Industries, Osaka, Japan), and incubated at 37 C for 30 min. After brief sonication, the samples were added to 1 mL of PK (at a final concentration of 50 mg/mL; Roche Diagnostics, Basel, Switzerland), and incubated at 37 C for 30 min. The digestion was stopped by the addition of 4-(2-aminoethyl)benzenesulfonyl fluoride at a final concentration of 2 mM (Roche Diagnostics). Following the addition of 50 mL of a mixture of 2-butanol and methanol (5/1, v/v), the samples were centrifuged at 18,000 xg for 10 min at 23 C. The pellet was dissolved in lithium dodecyl sulfate sample buffer (Thermo Fisher Scientific Inc., Novex TM , Carlsbad, CA, USA) supplemented with 80 mM dithiothreitol, heated at 100 C for 5 min, and aliquots of the samples were subjected to gel electrophoresis using a NuPAGE Ò Novex TM 12% Bis-Tris gel (Thermo Fisher Scientific Inc., Invitrogen TM ) and NuPAGE Ò MOPS-sodium dodecyl sulfate running buffer (Thermo Fisher Scientific Inc., Novex TM ). After electrophoresis, proteins were transferred to an Immobilon-P PVDF membrane (Merck Millipore) at 220 mA for 60 min using Trisglycine buffer (Bio-Rad Laboratories, Inc.) supplemented with 20% methanol. The membrane was incubated at 4 C overnight with anti-prion protein 12F10 antibody (epitope: G 153 SDYEDRYYRENMHRYPNQ 171 of bovine PrP; Cayman Chemical, Ann Arbor, MI, USA) at 0.16 mg/mL in Can Get Signal Ò -1 immunoreaction enhancer solution (Toyobo Co., Ltd., Osaka, Japan). 45 After washing the membrane with 0.05% Tween 20 in phosphatebuffered saline, the membrane was incubated at room temperature for 2 h with horseradish peroxidase-conjugated AffiniPure F(ab')2 antimouse IgG (Jackson ImmunoResearch Laboratories, Inc., PA, USA) at 0.1 mg/mL in Can Get Signal Ò -2 solution (Toyobo Co., Ltd.). Detection was carried out using SuperSignal TM West Dura Extended Duration Substrate (Thermo Fisher Scientific Inc., Thermo Scientific TM ) and a FluorChem IS-8044 imaging system (ProteinSimple, San Jose, CA, USA). Captured images were stored as TIFF files, and signal intensities were quantified by ImageGauge software (Fuji Photo Film, Tokyo, Japan).
Examination of the Effects of the Digestion Condition of NippiBL Ò
To examine the effects of the digestion condition of NippiBL Ò on stability of signal intensities of atypical BSE prions, a sample of the C-BSE cattle brain diluted to 2 ¡5 by mashed negative brains, and three samples of the L-BSE cattle brain diluted to 2 ¡0.6 (i.e., 1.5-fold), 2 ¡1 , and 2 ¡2 by mashed negative brains were prepared using the method described above. These dilutions were chosen with the expectation of signal intensities (i.e., absorbance at 450 nm) between 1.0 and 3.0, based on the data of the dilution-response profiles of NippiBL Ò shown in Fig. 2. The samples were processed according to the protocol of NippiBL Ò , but by setting the digestion time at 5, 10, 15, and 20 min. Normal brain tissue was processed in the same way. After digestion, the samples were dispensed to the duplicate wells of the kit, and developed for detection according to the manufacturer's protocol.
ABBREVIATIONS
BSE bovine spongiform encephalopathy CJD Creutzfeldt-Jakob disease EFSA European Food Safety Authority ELISA enzyme-linked immunosorbent assay PK proteinase K PrP C cellular prion protein PrP Sc disease-associated forms of prion protein TSE transmissible spongiform encephalopathy
DISCLOSURE OF POTENTIAL CONFLICTS OF INTEREST
No potential conflicts of interest were disclosed. | 2018-04-03T03:21:33.147Z | 2017-03-04T00:00:00.000 | {
"year": 2017,
"sha1": "8c334d7cb385d868c5a3179cc00ed647919c21e4",
"oa_license": null,
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19336896.2017.1300731?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a0bac7a11e3acbedd5f3de27ba792994c647d17d",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119670012 | pes2o/s2orc | v3-fos-license | Classification of almost Yamabe solitons in Euclidean spaces
In this paper, we completely classify almost Yamabe solitons on hypersurfaces in Euclidean spaces arisen from the position vector field. Some results of almost Yamabe solitons with a concurrent vector field and almost Yamabe solitons on submanifolds in Riemannian manifolds equipped with a concurrent vector field are also presented. Moreover, we classify complete Ricci solitons on minimal submanifolds in non-positively curved space forms. For almost Yamabe solitons, all of results in this paper can be applied to Yamabe solitons.
Introduction
(M, g, v, ρ) is called a Yamabe soliton if it satisfies where L v g is the Lie-derivative, R is the scalar curvature of M and ρ is a constant. If ρ > 0, ρ = 0, ρ < 0, then a Yamabe soliton (M, g, v, ρ) is called a shrinking, a steady or an expanding Yamabe soliton, respectively. A Yamabe soliton (M, g, v, ρ) is called a gradient Yamabe soliton if v is the gradient of some function f on M. We denote a gradient Yamabe soliton by (M, g, f, ρ).
E. Barbosa and E. Ribeiro introduced a generalization of Yamabe solitons in [1] as follows.
An almost Yamabe soliton (M, g, v, ρ) is called a gradient almost Yamabe soliton if v is the gradient of some function f on M. We denote a gradient almost Yamabe soliton by (M, g, f, ρ for any vector field X on M, where ∇ is the Levi-Civita connection on M. One of the most important example of Riemannian manifolds with a concurrent vector field is Euclidean spaces. Because, the position vector field on Euclidean spaces satisfies (3). Riemannian manifolds endowed with concurrent vector fields have been studied (cf. [4], [9] and [10]). In this paper, we completely classify almost Yamabe solitons on hypersurfaces in Euclidean spaces arisen from the position vector field v. We denote the tangential and the normal components of v by v T and v ⊥ , respectively. Theorem 1.3 (Theorem 5.1). Any almost Yamabe soliton (M, g, v T , ρ) on a hypersurface in a Euclidean space E n+1 is contained in either a hyperplane or a sphere.
The remaining sections are organized as follows. Section 2 contains some necessary definitions and preliminary geometric results. In section 3, we show that any Yamabe soliton (M, g, v, ρ) with a concurrent vector field v is a gradient expanding Yamabe soliton with ρ = −1 and the scalar curvature is zero. In section 4, we consider almost Yamabe solitons on submanifolds in Riemannian manifolds endowed with a concurrent vector field. Section 5 is devoted to the proof of Theorem 1.3. Finally in Appendix, we completely classify complete gradient Ricci solitons on minimal submanifolds in non-positively curved space forms.
Preliminaries
Let (N,g) be an m-dimensional Riemannian manifold and (M, g) be an n-dimensional submanifold in (N,g). We denote Levi-Civita connections on (M, g) and (N,g) by ∇ and∇ , respectively.
For any vector fields X, Y tangent to M and η normal to M, the formula of Gauss is given bỹ where ∇ X Y and h(X, Y ) are the tangential and the normal components of∇ X Y . The formula of Weingarten is given bỹ where −A η (X) and D X η are the tangential and the normal components of∇ X η. A η (X) and h(X, Y ) are related by The mean curvature vector H of M in N is given by For any vector fields X, Y, Z, W tangent to M, the equation of Gauss is given byg where Rm andRm are Riemannian curvature tensors of M and N, respectively. The equation of Codazzi is given by If N is a space of constant curvature, then the equation of Codazzi reduces to
Almost Yamabe solitons with a concurrent vector field
Firstly, we show a formula of almost Yamabe solitons which is useful for study of almost Yamabe solitons. Proof. Since where R ij is the Ricci curvature of M. By applying ∇ l to the both side of (5), we obtain Taking the trace, we obtain (4).
Proposition 3.2.
If an almost Yamabe soliton (M, g, v, ρ) has a concurrent vector field v, then M is a gradient almost Yamabe soliton with R = ρ + 1.
Proof. Firstly, we show that an almost Yamabe soliton with a concurrent vector field is a gradient almost Yamabe soliton. Set Then we have for any vector field X on M. Secondly, we show that R = ρ + 1. Since v is a concurrent vector field, we have (7) L v g = 2g.
By applying Proposition 3.2 to Yamabe solitons, we can get the following. Proof. Since ρ is constant, by Proposition 3.2 and (4), we have R = 0 and ρ = −1.
Almost Yamabe solitons on submanifolds
In this section, we assume that (N,g) is a Riemannian manifold endowed with a concurrent vector field v and (M, g) is a submanifold in (N,g). We denote the tangential and the normal components of v by v T and v ⊥ , respectively.
To classify almost Yamabe solitons on a submanifold, we show the following Lemma which will be used in the proof of Proposition 4.3 and Theorem 5.1.
for any vector fields X, Y on M.
Proof. Since v is a concurrent vector field and by using formulas of Gauss and Weingarten, we have for any vector field X on M. By comparing the tangential and the normal components of (10), we obtain From the definition of Lie-derivative and (11), we have for any vector fields X, Y on M. Combining (12) with (2), we obtain (9). Proof. Set Then we have for any vector field X on M. Proof. From Proposition 4.2, we know that any almost Yamabe soliton on a submanifold is a gradient almost Yamabe soliton. Let {e 1 , · · · , e n } be an orthonormal frame on M. From Lemma 4.1, we have Since M is minimal and taking the trace, we obtain n(R − ρ − 1) = ng(H, v ⊥ ) = 0.
Classification of almost Yamabe solitons in Euclidean spaces
In this section, we give the proof of Theorem 1.3, namely, we completely classify almost Yamabe solitons on hypersurfaces in Euclidean spaces arisen from the position vector field. Let v be the position vector field on Euclidean spaces.
where A N (e i ) = κ i e i , i = 1, · · · , n. So we have Taking the summation, we obtain Comparing (13) and (14), we have Therefore M is a totally umbilical submanifold with A N (e i ) = αe i and h satisfies h(X, Y ) = αg(X, Y )N. Now we have 0 =∇ X (g(N, N)) = 2g(∇ X N, N) = 2g(D X N, N).
Therefore D X N = 0. So we obtain for any vector fields X, Y, Z on M. From the equation of Codazzi, we have X(α)Y = Y (α)X.
Taking X and Y linearly independent, we conclude that α is a constant. Case 1: α = 0. From∇ X N = 0, N, restricted to M, is a constant in E n+1 and we havẽ This shows thatg(v, N) is constant when v and N is restricted to M. Therefore M is contained in the hyperplane normal to N. Case 2: α = 0. We havẽ This shows that the vector field v+α −1 N, restricted to M, is a constant in E n+1 . Therefore M is contained in the sphere.
Appendix
In this appendix, we completely classify complete gradient Ricci solitons on minimal submanifolds in Euclidean spaces or hyperbolic spaces. (M, g, f, ρ) is called a gradient Ricci soliton if it satisfies (see for example [5]), Some recent progress on the subject can be found in [2]. In [3], B. L. Chen showed that any complete gradient Ricci soliton has non-negative scalar curvature R ≥ 0. Case 1: c = 0. By (16), 0 ≤ R = −|h| 2 . Therefore, M is a totally geodesic submanifold and it is an affine subspace in E n+1 . | 2017-11-13T06:12:18.000Z | 2017-11-13T00:00:00.000 | {
"year": 2017,
"sha1": "2600ebd92a7d32e60f6b9dadfb1797edf57b785f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1711.04428",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2600ebd92a7d32e60f6b9dadfb1797edf57b785f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
8870810 | pes2o/s2orc | v3-fos-license | Restoration of tumor suppressor functions by small-molecule inhibitors
Over the last decades, accumulating data have advanced our understanding of the mechanism of action of tumor suppressor proteins and therapeutic strategies to restore tumor suppressor pathways have emerged as a promising approach for cancer therapy. Based on our recent findings on bridging integrator-1 (BIN1), we outline potential advantages and disadvantages of chemical activation of tumor suppressors.
Small-molecule inhibitors continue to be at the leading edge of cancer therapeutics. The discovery of Gleevec (STI-571), a tyrosine kinase inhibitor, was a milestone achievement in clinical oncology and this inhibitor has demonstrated remarkable efficacy in Philadelphia chromosome-positive (Ph C ) chronic myeloid leukemia. 1 Since then, mechanism-based approaches have been used to specifically target various kinases and/or downstream oncogenic pathways that are critically involved in cell cycle progression and tumorigenesis. However, in addition to this approach, a more recent and novel use of small-molecule inhibitors has emerged as a promising endeavor in the field of cancer chemotherapy. Here, we briefly review the mechanistic basis of restoration of a tumor suppressor and its potential complications for cancer therapy.
The tumor suppressor function mediated by the retinoblastoma 1 protein (RB1) is principally attributed to its interaction with the E2F transcription factor 1 (E2F1). The RB1/E2F1 complex represses a number of E2F1-dependent transcriptional target genes that are required for the transition from G 1 to S phase in the cell cycle. Because RB1 is inactivated by phosphorylation mediated by the G 1 cyclin-dependent kinases 4 and 6 (CDK4 and CDK6), restoring RB1 function by inactivating CDK4/6 is theoretically an obvious approach. Although structural similarities among a number of CDK family members hampered the development of a CDK4/6-specific inhibitor for many years, some agents, including palbociclib (PD-0332991), have recently demonstrated promising results in Phase I/II clinical trials for human malignancies, including breast cancer. 2 Above and beyond RB1, another tumor suppressor that is critical for numerous growth inhibitory pathways is tumor protein p53 (TP53, best known as p53). The abundance of wild-type p53 protein is massively reduced as a result of ubiquitin-dependent and human homolog of double minute 2 (HDM2)-mediated degradation of p53. Therefore, dissociation of p53 from the p53/HDM2 complex is a reasonable strategy for rescuing p53 function. Based on the crystallographic structure of the p53/HDM2 peptide complex, small p53 peptides that mimic the region of p53 sufficient for HDM2 binding and small-molecule HDM2 antagonists have been shown to disrupt the p53/HDM2 interaction in vitro and in vivo. Some of these, including MI-219, Nutlin-3, and RG7112, have been found to be effective preclinically and have consequently moved into Phase I/II clinical trials. 3 Although proteasome inhibitors such as bortezomib (PS-341) may not be as specific for stabilizing p53 as these HDM2 inhibitors, other growthinhibitory gene products, including the cyclin-dependent kinase inhibitor 1B (CDKN1B or p27, Kip1) protein, can also be degraded in an ubiquitin-dependent manner. 4 Therefore, it may be advantageous to re-establish a broad spectrum of growth-inhibitory functions by blocking the proteasome pathway.
Although the approach of re-establishing tumor suppressor function in tumors as a therapeutic option is mechanistically intriguing, there are potential dilemmas associated with the systemic restoration of tumor suppressor function. Tumor suppressor genes are frequently mutated or deleted in cancer patients, and given that some of the mutant genes acquire oncogenic potential, this approach may simply reboot a mutant (i.e., oncogenic) tumor suppressor. Even if a tumor suppressor gene is intact, its function should not depend on other cancer-susceptible proteins. For example, the cyclin-dependent kinase inhibitor 2A (CDKN2A) gene is not frequently deleted in cancer cells, but is inactivated by DNA methylation. However, epigenetic reactivation of the CDKN2A gene may not be an effective approach if RB1 and/or p53 are deficient, because the tumor suppressor functions of the products of the 2 alternative reading frames of CDKN2A-p16 INK4A and p14 ARF proteins-largely depend on RB1 and p53, respectively. 5 Therefore, for the tumor suppression approach to be fully effective, it will be important to identify a non-mutated (or non-deleted) tumor suppressor whose function does not rely on other tumor suppressors that might be already mutated or deleted.
Bridging integrator-1 (BIN1) was originally identified as a c-MYC oncoproteininteracting tumor suppressor. 6 The BIN1 gene itself is rarely mutated or deleted, but is frequently silenced in human cancer cells. Moreover, BIN1 acts as a tumor suppressor in vitro and in vivo in the absence of RB1 and p53. 7 We recently demonstrated that BIN1, whose gene promoter is activated by E2F1, directly interacts with E2F1 and represses its transcription, implying that a negative-feedback loop regulates BIN1 gene expression. 8 Interestingly, we found that E2F1 is poly(ADP-ribosyl)ated by poly(ADP-ribose) polymerase 1 (PARP1) and that PARP1 inhibition unlocks the E2F1-BIN1 negative-feedback loop to vigorously activate the BIN1 gene, which induces G 2 /M arrest in the cell cycle and/or apoptosis. 8 Because of this so-called 'synthetic lethality,' PARP inhibitors have been actively used for clinical trials in breast cancer 1 and 2 (BRCA1/2)deficient breast and ovarian cancers. 9 However, it was unclear why PARP inhibitors alone also show therapeutic efficacy, even in cancer cells expressing wild-type BRCA1/2. Based on our recent data, 8 the restoration of BIN1 by PARP inhibitors may offer a mechanistic rationale for expanding the clinical usage of PARP inhibitors over a wider range of tumor types, regardless of the status of RB1, TP53, and BRCA1/ 2 genes (Fig. 1).
Chemotherapy and radiotherapy are conventional treatments for eradicating tumors, but cancer often develops therapeutic resistance over time. Given that many tumor suppressors are proapoptotic in response to DNA damaging agents, it would be clinically pertinent to increase the chemo-and radiosensitivities of cancer by combining standard treatments with agents that can restore the activity of silenced tumor suppressors, provided they are not mutated or deleted, in human malignancies. 10
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed. | 2018-04-03T00:05:43.384Z | 2015-01-23T00:00:00.000 | {
"year": 2015,
"sha1": "3da25e2346db047b651fca1d86a85c776b660d7a",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/23723556.2014.991225?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "3da25e2346db047b651fca1d86a85c776b660d7a",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
209500777 | pes2o/s2orc | v3-fos-license | Positively and negatively hydrated counterions in molecular dynamics simulations of DNA double helix
The effects of positive and negative hydration of counterions (Na+, K+, and Cs+) incorporated into the hydration shell of the DNA double helix have been studied using molecular dynamics approach. The results show that the dynamics of the hydration shell of counterions depends on region of ion localization around the macromolecule. The longest residence times have been observed for water molecules near the counterions that are localized in the minor groove of the double helix: about 30 ps in the case of Na+ counterions and about 7 ps in the case of K+ and Cs+ counterions. In the major groove and outside the double helix it is essentially lower. The counterions constrain water molecules too strong, and as the result the effect of negative hydration for K+ and Cs+ counterions was not observed in the simulations. The analysis show that the effects of counterion hydration may be described better by using the water models with lower dipole moments.
Introduction
The DNA is a polycationic macromolecule with the double helix structure that under the natural conditions is stabilized by water molecules and metal ions (counterions) forming the ion-hydration shell [1]. The ion-hydration shell has different physical properties in different regions of the macromolecule: inside the minor and major grooves of the double helix, and outside DNA [2][3][4][5]. The counterions, which are metal ions (Na + , K + ) and organic positively charged molecules (polyamines), neutralize the negatively charged atomic groups of DNA, making disorder or stabilizing the waters structure inside the macromolecule. The interplay between water molecules and counterions is shown to be important for counterion distribution around the DNA double helix [6,7]. To understand the role of counterions in mechanisms of DNA biological functioning the effects of ion hydration should be studied.
The ions organize water molecules into the hydration shells with the structure that depends on ion type. With this regard metal ions are usually classified as positively hydrated and negatively hydrated ions [8]. In the case of positively hydrated ions (Li + , Na + , and Mg 2+ ) the water molecules in the hydration shells are highly ordered, and the mean residence time of the molecule in the hydration shell of the ion is much higher than in the bulk, where a water molecule is surrounded by the other water molecules. Therefore, these ions are also known as the structure making ions [9]. In the case of negatively hydrated ions (K + , Rb + , and Cs + ) the mean residence time of a molecule in hydration shell of the ion is lower than in the bulk, and the structure of the hydration shell is more friable as in the bulk water. Therefore, these ions are also known as the structure braking ions [9].
The structure of hydration shell of DNA is essentially different form the structure of liquid water and depends on a region of the double helix [2,3,5,10]. In particular, in the minor groove of the double helix the mean residence time of water molecule is characterized by the highest values that may reach to about 100 ps [10]. In some cases of nucleotide sequence the water molecules can bridge in the minor groove the atoms of different nucleotides that was observed in crystallographic experiments as a spine of hydration [2,11]. In the major groove the hydration shell is friable and the dynamics of water molecules are characterized by several times lower values of the residence times than in the minor groove, and in the regions near the phosphate groups of the DNA backbone even less [5,10]. The hydration shell of DNA macromolecule is an important component that may be considered as the integral part of the double helix structure.
The intrusion of counterions into the hydration shell of DNA rebuild its structure and influences the dynamics. The structure of water solutions of metal ion may be described within the framework of statistical theory of electrolytes [12] that may be extended for the consideration of the ion-hydrate shell of DNA. There also exist some polyelectrolyte models that qualitatively describe the distribution of counterions around the double helix [13,14]. In the such models the macromolecule is presented as a chain of charged beads or as a uniformly charged cylinder amerced into the charged continuum. These models explain the effect of counterion condensation on DNA that was observed experimentally [15][16][17][18]. From another side, the structure of DNA with counterions may be presented as the ionic lattice [19,20]. The existence of the lattice-like structure of DNA with counterions has been proved by the observation of the modes of ion-phosphate vibrations in the low-frequency Raman spectra of DNA (< 200 cm −1 ) [19][20][21][22][23][24]. The concept of ion-phosphate lattice has been proven to be useful for the description of different effects of DNA-counterion interaction [25,26].
Despite the success of already existed approaches, which allow a general outline of some structural and dynamical properties of the DNA-counterion systems, they are not enough to describe the effects of counterions hydration. In this regard the method of classical molecular dynamics seems the most appropriate in the present time for the development of problem understanding. The molecular dynamics studies [4,6,7,[27][28][29][30][31] show that the character of counterion distribution around the double helix and their localization in characteristic binding sites of DNA depend on sequence of nucleotide bases and region of the double helix. These features of DNA-counterion localization are governed by the interplay between counterions and water molecules in many respects [6,7,29]. In particular, the study of counterion hydration [7] show that the interaction of structure making ions with DNA occurs via water molecules of the hydration shell mostly, while the structure breaking ions may squeeze through DNA hydration shell to the groove bottom and form long lived complexes with the atoms of nucleotide bases. Thus, for the understanding the interplay between water molecules and counterions in the hydration shell of DNA double helix the molecular dynamics simulations of positively and negatively hydrated counterions should be carried out.
The goal of the present work is to study the character of hydration of positively and negatively hydrated counterions that are localized in different regions of the DNA double helix. To solve this problem the atomistic molecular dynamics simulations of DNA with positively hydrated (Na + ) and negatively hydrated (K + , Cs + ) counterions have been studied. The radial distribution functions of water molecules with respect to the ions were built, and the potentials of mean force were derived. The residence times of water molecules in the hydration shell of counterions have been estimated. The results show that the dynamics of the hydration shell of counterions depends on a region of the double helix, where the ion is localized. The effects of counterion hydration have been shown to be better described with the use of the water models having lower dipoles moments.
Materials and methods
The analysis of the structure and dynamics of the hydration shells of counterions, localized in different regions of the double helix, has been done through molecular dynamics simulations [7]. The simulations [7] were carried out for the DNA double helix with the nucleotite sequence d(CGCGAATTCGCG) that is known as the Drew-Dickerson dodecamer [2]. This fragment of DNA is characterized by the narrowed minor groove in the region with AATT nucleotide sequence (Fig. 1a). The major groove is visibly wider comparing to the minor groove. The DNA macromolecule was immersed into the water box 64×64×64Å with the metal ions of defined type: Na + , K + or Cs + . The number of counterions was 22 that was equal to the number of the DNA phosphate groups, making the system electrically neutral. As a result three systems of DNA water solution with the counterions of different type were studied: Na-DNA, K-DNA, and Cs-DNA.
The computer simulations [7] were performed using NAMD software package [32] and CHARMM27 force field [33,34]. The length of all bonds with hydrogen atoms was taken rigid using SHAKE algorithm [35]. The TIP3P water model [36] and the Beglov and Roux parameters of ions have been used [37]. The total lengths of the trajectory for each system was more than 200 ns. The simulation data were analyzed after 100 ns of equilibration. The details of the simulation process are described in [7].
In the present work the VMD software [38] was used for the analysis and visualization. Using the plug-in [39] implemented to VMD, the radial distribution functions (RDFs) have been calculated by the following formula: were p(r) is the average number of atom pairs, found at the distance within (r ÷ r + ∆r); N p is the number of pairs of selected atoms; V is the total volume of the system; ∆r is the width of histogram bins which in the present work was taken equal to 0.5Å. The average number of atomic pairs has been calculated every 10000 time steps that is 500 frames per the nanosecond. The RDFs have been built for each nanosecond of simulation trajectory and than the mean RDFs have been obtained. The RDFs have been built for oxygen atoms of water molecules with respect to the ions localized in different regions of the double helix: in the minor and major grooves (RDF minor Ion and RDF major Ion ), near the phosphate groups (RDF ph Ion ), and in the bulk (RDF bulk Ion ). The counterion has been considered to be localized in some region of the double helix if it was within 5Å of one of the reference atoms. The reference atoms of DNA that have been used in the present study are shown on the Figure 1b and DNA region Adenine Guanine Thymine Cytosine
Results
Radial distribution functions. The obtained averaged radial distribution functions of water molecules with respect to the counterions (ion-water RDFs) are characterized by two maximums: the first is intensive and the second is weak (Fig. 2a). The position of maximums are governed by the size of counterion and water molecule, therefore the shifting of the maximums to larger distances is observed as counterion size increases. The intensity of the first and the second maximums depends on a region of the double helix where the counterion is localized.
The only exception is observed in the case of the first maximum for RDFs of Na + counterions that have approximately the same hight for all considered regions of counterion localization.
In the same time, in the case of K + and Cs + counterions the difference is essential in the case of the both the first and the second maximums.
The RDFs of water molecules with respect to water molecules (water-water RDFs) are characterized by strong first maximum and flat curve after (Fig. 2b). The second maximum is very weak and hardly visible. The obtained shape of the RDFs is characteristic for the TIP3P water model [36,40]. The difference between water-water RDFs for the case of different regions of the double helix is observed only for the first peak that has always lower intensity in the case of water molecules in the minor groove.
Potential of mean force. A water molecule in the hydration shell of the ion is trapped in the potential well that is characterized by the potential barrier (Fig. 3). In the present work the potential barrier is estimated using the potential of mean force (PMF) derived from the radial distribution functions: k B is the Boltzmann constant, T is the temperature. The calculated potentials of mean force are shown on the Figure 3. The obtained potential functions are characterized by two potential wells. In the present work the dynamics of water molecule in the first hydration shell is in the scope of interest, therefore the first potential well and the potential barrier (∆E) between the first and the second potential wells have been studied.
It is seen that in the case of ion-water PMF their shape and depth are different in the case of different counterions. In the case of Na + the potential well is the deepest, while in the case of Cs + it is the smallest (Fig. 4a). The difference of ion-water PMFs is also observed for different regions of the double helix, where the counterion may be localized. In the same time, the water-water PMF are rather similar, and the difference is hardly visible for different regions of the double helix.
Using the obtained PMFs, the parameters describing the energy of counterion hydration were calculated using the formula (2). The resulted values of the potential barriers for water molecule in the hydration shell of the ion (∆E ion ) are the highest in the case of Na + ions, while in the case of K + and Cs + ions the values of ∆E ion are about two times lower. Such behaviour is the result of different size of the ions. The potential barrier ∆E is the highest in the case of counterions in the minor groove, and it is the lowest in the case of counterion near the oxygen atoms of the phosphate groups and in the bulk ( Table 2). The energy barrier of water molecule in the hydration shell of counterion in the bulk is essentially higher than the energy of water molecule (∆E ion > ∆E w ).
The calculated values of the potential barrier in the Table 2 ions, respectively. It is seen that the barriers ∆E ion , obtained in the present work, are rather close to the values [41], but in general overvalued. The reason is may be that in the work [41] water model and ion parameters were different from that have been used in the present work. The difference of the energy barriers for water molecule in the hydration shell of counterion and in the bulk (dE = ∆E ion − ∆E w ) determines the character of counterion hydration. The structure making (positively hydrated) ions have dE > 0, while the structure braking (negatively hydrated) ions are characterized by dE < 0. From the Table 2 follows that the values of dE are positive for all counterions. For example, in the case of ions in the bulk water the values of dE are 2.15 kcal/mol, 0.91 kcal/mol, and 0.34 kcal/mol for Na + , K + , and Cs + counterions, respectively. In the same time, the experimental data reveal that among considered counterions only sodium is positively hydrated, dE = 0.25 kcal/mol [8], while potassium and cesium are negatively hydrated ions, dE = −0.25 kcal/mol and dE = −0.33 kcal/mol, respectively [8]. The reason of the difference of obtained energy values and experimental data may be related to the parametrization of water models that will be discussed in the following section.
Residence time. The potential barrier ∆E determines the average residence time of water molecule τ that is usually described by the equation of Arheniuns type [8]. In the present work it is presented in the following form: where τ 0 is the characteristic time of approaching of the molecule to the potential barrier ∆E. The coefficient 2 in the formula (3) appears because in our approach we consider that being at the top of the potential barrier the water molecule may leave the hydration shell or return back to the ion with the equal probability. The value of τ 0 is estimated from the law of energy conservation for the finite motion: where µ is the mass of a water molecule; x = r − r a is the displacement for the equilibrium position; x min and x max are the amplitude displacements of the mass of water molecule from the equilibrium position r a ; E 0 is the amplitude energy that may have a water molecule by vibrating in the potential well (Fig. 3). The potential function E(x) is determined from the potential of mean force as the approximation by the polynomial function: where E a is the depth of the potential well; C 2 , C 3 , C 4 are the fitting parameters. The amplitude displacement (x min and x max ) were determined from the condition: E(x) = E 0 (Fig. 3). Taking into consideration the Boltzmann low of equidistribution of energy by the degrees of freedom the value of amplitude energy of vibration has been determined as follows: E 0 = E a + k B T . By substituting (5) to the equation (4) the elliptic integral in the resulted equation is obtained, which has been calculated numerically. The calculated residence times of water molecules in the hydration shell of the ion are within the range from about 2 ps to 30 ps (Table 3). The longest residence time is observed for the case of sodium counterions, while in the case of potassium and cesium ions it is several times lower. The dependence of τ values on the region of counterion localization is also observed. The largest values of the residence time are in the case of the ion localization in the minor groove Table 3: The residence times (τ ) and the half-period of vibration (τ 0 ) in ps for water molecules in the hydration shell of counterion and surrounded by other water molecules.
System
Na-DNA K-DNA Cs-DNA Ion-water τ τ 0 τ τ 0 τ τ 0 Minor gr. 29.58 0. 10 of the double helix (τ minor ), while in the major groove they are shorter (τ major ), and the lowest values near the phosphate group of the macromolecule backbone (τ ph ): τ minor > τ major > τ ph . The comparison of our results with the results of molecular dynamics simulations of alkali metal ions in water solutions [41] show the obtained residence times have qualitatively the same dependence on the ion size. However, the τ values in the Table 3 are much lower than in the work [41]. The reason is that the values of residence times have been determined by different methods. In the method that was used in the present work the residence times have been calculated directly from the mechanistical approximation of the motion of water molecule in the potential well that was obtained from on the basis of the potential of mean force. In the work [41] the residence time is calculated using time correlation functions. These two approaches are not equivalent and the additional analyzes should be done to find where these two approaches meet each other.
Thus, the results for DNA with the positively hydrated Na + , and negatively hydrated K + and Cs + counterions show that the dynamics of water molecules in the hydration shells of counterions depends on their localization around the double helix. In particular, the longest residence time was observed for a water molecule near the counterion that is localized inside the minor groove of the double helix, and it is longer than for the case of a water molecule near the same ion but in a bulk water. This difference may be due to the confined space inside the double helix and due to the structured system of water molecules that is formed in DNA grooves. In the same time, the results clearly show that the obtained energy barriers for water molecules near the ions are too high, making the hydration shell too rigid. The counterions Na + , K + , and Cs + in the simulated systems are positively hydrated, and the effect of negative hydration for K + , and Cs + was not observed.
Discussion
To explain the reason of high values of the potential barriers the possible influence of water model should be analyzed. The TIP3P water model that was used in the simulations is characterized by the dipole moment value 2.35 D, while the experimental value for water molecule in gas phase is 1.86 D and in liquid phase is 2.95 D [44]. In this regard, let us analyze the potential barrier as a function of dipole moment. For this purpose the potential of mean force has been estimated as the change of free energy of water molecule after its replacement from the hydration shell of the ion to the bulk water as follows: where ∆H and T ∆S are the enthalpy and entropy contributions, and ∆G 0 is some constant part of the free energy change.
The enthalpy contribution is featured mostly by the interaction of water molecule with the ion. In the work [42] the energy of water molecule near the ion was successfully described by presenting the water molecule as a dipole in the field of the ion. In our model the repulsion between water molecule and ion at small distances is also taken into consideration. As the result the enthalpy change may be presented as a sum of average dipole-dipole (U i−d (r)) and repulsion (U rep (r)) terms: Under the room temperatures the direction of dipole vector in the electric field of the ion may be described by the Boltzmann distribution. Taking this into consideration an average ion-dipole interaction may be presented in the following form: where L(α) = coth α − α −1 is the Langevin function, and Here q is the charge of the ion; ε is the dielectric constant of the media near the ion; ε 0 is the dielectric constant of vacuum; d is the dipole moment of water molecule. The repulsion between water molecule and ion is described by the potential in Born-Mayer form that is often used for the description of interaction of the ions in ionic crystals [43] and the energy of DNA ion-phosphate lattice [23]: where A and b are the parameters describing repulsion between ion and water molecule as hard cores.
To determine the entropy contribution to the change of the free energy we take into consideration that the motions of dipole moments of water molecules around the ion are hindered and due to the electrostatic field the molecules are highly oriented. Therefore, we assume that the entropy increases with ion-water distance the same as the average direction of water dipole that is described in our model by the Langevin function L(α). As a result the change of entropy is presented as follows: where s 0 is the entropy of water molecule in the bulk. The parameters A and s 0 we derive from the condition for maximum and minimum at the distances r a and r b : d∆G dr | r=ra = 0, d∆G dr | r=r b = 0. The energy contribution ∆G 0 is featured by the interaction energy with other water molecules of the system that includes the both enthalpy and entropy contributions. The estimation of this contribution is a complex problem and it is not essential for the study of the potential barrier. In the present work it is determined from the condition ∆G(r c ) = 0, here r c is some point where the potential of mean is equal to zero. Taking this into consideration and using the equations (7) - (12), the change of the potential of mean force may be written in the following from: where B = −A/k B T ; ∆g 0 = −L(α c )(α c + s 0 ) + Be −(rc−ra)/b , and α c = α(r c ).
For the estimations the repulsion parameter is taken the same as in the case of the crystals of alkali metal ion that is b ≈ 0.3Å. The temperature is taken the same as in the molecular dynamics simulations T = 300 • K. The dipole moment d = 2.35D was taken the same as in TIP3P model of water molecule. The values of equilibrium distances (r a ) and the barrier distance (r b ) were taken from the Table 2. The distance r c is defined as: r c = (r a + r b )/2. The dielectric constant has been determined using the dielectric function [45], developed for the description of the electrostatic interactions in nucleic acids: ε( r) = 78 − 77(0.0128 r 2 + 0.16 r + 1)e −0.16 r , where r is the distance between charges in Angstroms. At the distance about (2 ÷ 4) A this function gives the value within the range ε ≈ (1.3 ÷ 3).
The estimations performed by the formula (12) show that due to the competition of the electrostatic and entropy contributions to the potential barrier occurs (Fig. 5a). The value of the potential barrier decreases as the size of the counterion increases. The same character of the energy dependence for the case of the first hydration shell is obtained in our molecular dynamics simulations (Figure 4 and Table 2). The potential barrier ∆E = g(r b ) − g(r a ) have been calculated by the formula (12) for different values of the dipole moment (Fig. 5b). The values of the potential barriers that correspond to the dipole moments of different water models [46][47][48][49][50] are shown by points. The results show that the potential barrier for water molecule in the hydration shell of the ion increases linearly as the value of dipole moment. Taking this into consideration it is expected that the models of water molecule with lower dipole moments should give more accurate description of the hydration effects of counterions.
Conclusions
The dynamics of water molecule in the hydration shell of the positively (Na + ) and negatively (K + and Cs + ) hydrated counterions around the DNA double helix has been studied using molecular dynamics approach. The potential barriers and the residence times of water molecules near the counterions have been calculated. The results show that the dynamics of water molecules in the hydration shell of counterions depends on their localization around the double helix that is the manifestation of the interplay between water molecules in the hydration shell of DNA and counterion. The longest residence time of water molecule has been observed for the case of counterion in the minor groove of the double helix. It is about 30 ps for the positively hydrated Na + counterion and about 7 ps for negatively hydrated K + and Cs + counterions. In the major groove and outside the double helix it is essentially lower. In the simulations the considered counterions constrain water molecules too strong in the hydration shell making them positively hydrated, and the effect of negative hydration in the case of K + and Cs + counterion was not obtained. The analysis, performed within the framework of the developed phenomenological model, has been showed that the strength of the hydration shell is proportional to the value of dipole moment of water model. The water models with lower dipole moments are expected to give better description of the effects of counterion hydration. | 2019-12-26T14:04:45.000Z | 2019-12-26T00:00:00.000 | {
"year": 2019,
"sha1": "9987cb4068830d0f3e298b40c6dbef0da43a2c2d",
"oa_license": null,
"oa_url": "https://ujp.bitp.kiev.ua/index.php/ujp/article/download/2019678/1625",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "918b6f9394453751c15fc010afaee526ff9dd002",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Chemistry"
]
} |
202750140 | pes2o/s2orc | v3-fos-license | IoT Technologies for Augmented Human: a Survey
Internet of Things (IoT) technology has delivered new enablers for improving human abilities. These enablers promise an enhanced quality of life and professional efficiency; however, the synthesis of IoT and human augmentation technologies has also extended IoT-related challenges far beyond the current scope. These potential challenges associated with IoT-empowered Augmented Human (AH) have so far not been well-investigated. Thus, this article attempts to introduce readers to AH concept as well as summarize notable research challenges raised by such systems, in order to facilitate reader's further interest in this topic. The article considers emerging IoT applications for human augmentation, devices and design principles, connectivity demands, and security aspects.
I. INTRODUCTION
Present efforts in human augmentation (sometimes referred to as Human 2.0) focus on the creation of cognitive and physical improvements as an integral part of the human body 1 . These improvements are enabled by specially designed devices, such as leg or hand prosthesis, implants, artificial vision connected to the neural system of an organism, augmented reality glasses, hearing aids, and insulin pumps. Artificially recreated or extended abilities may improve quality of life and even give some competitive advantages for users.
Currently, the progress in human augmentation is driven by the interconnected Internet of Things (IoT) devices. The performance of these devices relies heavily on communication technologies. Commonly, such devices are located in close proximity to the human body. More specific applications may utilize bio-integrated devices, for example, neurally-controlled artificial limbs. All the devices used by an individual form an integrated ecosystem and should work coherently, which enabled by appropriate communication technologies. Depending on the type of the device the utilized communication technology may vary from traditional wireless protocols such as Bluetooth or Wi-Fi to highly specific technologies, such as electromagnetic or molecular nanonetworks. Therefore, a network of assisting devices can be considered as a highly heterogeneous Body Area Network (BAN). In addition to local communication, applications of Augmented Human (AH) require an internet connection (e.g., to be aware of context, offload of difficult computational tasks, upgrade software). As a whole, the concept of the Augmented Human creates a new segment of communication challenges, since the reliable 1 https://www.gartner.com/it-glossary/human-augmentation/ performance of communication technologies in such systems is the essential enabler for the users' well-being.
Presently, the research on AH is spread across many different communities. From the perspective of communication technologies, devices for human augmentation have a lot in common with wearable electronics. However, being a branch of the IoT concept, AH devices perhaps provide the most critical class of services, because humans do not exist independently but rather as a part of human-centric AH systems in which one is trained [1]. A failure in an AH application would cause chaos in this system and make the human vulnerable. Inherently, a communication failure will reduce a users physical or cognitive abilities. Thus, the AH applications require comprehensive analysis from the technological perspective to define their place in emerging network services.
This paper provides an overview of the IoT technologies for AH and defines relevant research challenges in this innovative area.
The article is organized in the following way: Section II provides an overview of AH applications. Section III considers the aspects and trends in device design. In Section IV we consider connectivity demands of the AH applications. Section V we discuss the security concerns for AH applications, and conclude the article in Section VI.
II. APPLICATIONS OF AH TECHNOLOGIES
Attempts to recover or improve human abilities began in ancient times. The majority of these attempts aimed at replacing a lost body part with an artificial one, for example, a leg or hand prosthesis. Some enthusiastic inventors aimed to go beyond the natural capabilities of the human organism by developing "upgrades", such as wings for flying. These two vectors of augmentation development are still relevant and form an augmentation continuum as shown in the Fig.1. Initially, the majority of efforts towards human augmentation were focused on the improvement of physical abilities, while in the 20th century, due to progress in microelectronics, augmentation has been extended by advanced sensing and cognitive improvements. Small-sized electronic devices are capable of assisting in performing specific tasks, e.g., a hearing aid assists people with auditor disorders or the use of AR glasses capable of providing both navigation support and object recognition. [2] A. Objectives of AH applications In a general case, AH systems assisting in daily routines [3] using electronic devices, which are connected to a single BAN via different communication technologies. The network of interconnected wearables serves as a technological layout for high-level applications of AH, which enable physical augmentation, advanced sensing, and mental assistance Fig. 2.
Physical augmentation aims at enhancement of an individuals ability to move and manipulate objects. Examples of tools used for physical augmentation include exoskeleton, artificial arms and legs, or even a jet pack. Failures in physical augmentation can be hazardous for human safety and health; therefore physical augmentation tools should be capable of providing basic functionality even when network service or other resources are unavailable.
Sensory abilities allow a person to be aware of the environment and the context surrounding them. Sensory abilities may include vision, touch, hearing, smell, and taste. Augmentation may facilitate these senses by amplifying them or, in the case of having lost a sense, augmentation allows a transformation of the characteristics of one sensory modality into stimuli of another sensory modality [4], e.g., visualizing speech or smells.
Mental (or cognitive) augmentation provides data processing assistance and facilitates decision making. An illustrative example of a cognitive augmentation is a personal planning application, where users can save time and resources when planning daily routines. The application may plan optimal logistics during the day, select and book lunch at the highest quality and yet affordably priced restaurant within the defined location, find parking spaces and car charging plugs, integrate recommended physical activity into the days timeline, and automatically revise plans in accordance with changing conditions (automatic negotiations with involved parties and reconfiguration of schedule). All these functions can be performed in a background mode, increasing the efficiency of a working day and saving time for creative activities or leisure. Currently, cognitive augmentation is the most familiar branch of human augmentation because of its widespread use in mobile applications. As one may observe, technologically such applications rely on machine learning [5] and entirely hinge on information about the environment, while also being significantly dependent on an internet connection.
B. Classification of AH applications
Taxonomy of AH applications include three major classes: (i) supporting independent living (e.g., for the aging population or people with impairments); (ii) facilitating the professional performance; (iii) self-efficiency and entertainment.
The applications which support independent living allow users to satisfy their basic daily needs without the assistance of other people. In addition, such applications monitor users' health conditions in real-time and increase their safety (e.g., by protecting aging people from occasional fall). As a result, nursing costs can be considerably reduced while also improving the quality of life for both aging and disabled people.
The AH applications for improving professional performance focus on augmenting the abilities relevant to the professional areas of an individual. For instance, an exoskeleton for a worker allows for moving heavy weights without harmful consequences for the spine. Another illustrative example comes from emergency response, where AH may enhance the performance of rescue team members by providing augmented sensing (e.g., sensing of hazard gases, utilizing thermal vision), empowered physical abilities (e.g., exoskeleton), and efficient decision making (e.g., AI-assisted operation).
The entertainment class aims to provide unusual user experiences (e.g., flying with a jet pack) or an immersive experience of extreme situations (e.g., virtual reality gaming) without physical risks to the user.
It is worth noting that AH systems may encompass the entire context where a person exists [6], which include interaction with proximate entities such as buildings, city infrastructure, and other individuals. Communicating with each other, these form an integrated smart environment (Fig.3).
C. Challenges
Ethical and social aspects. Innovative technologies for human augmentation will undoubtedly bring new challenges in ethical and social fields. Augmented features may become trendy especially among a younger generation [7], or the border between artificial and natural abilities may become blurred. It is not evident which specific issues will become a part of the agenda, nevertheless, ethical and social aspects require primary consideration.
Human-machine interaction. User experience issues are of the utmost importance when designing IoT systems for human augmentation. The IoT-empowered AH systems should provide functional but straightforward user interfaces to avoid certain user groups (e.g., older people) from feeling uncomfortable when using such systems (due to the system complexity).
III. DEVICES AND DESIGN PRINCIPLES
Technological challenges related to AH devices are primarily shaped by design principles, which rely on users' demands and expectations. Regarding the wearable IoT devices (which AH systems are), the users' expectations are primary: small size and weight of devices, enhanced reliability, and long battery lifetime.
A. Flexible Hybrid Electronics
Flexible Hybrid Electronics (FHE) integrates devices from thinned flexible materials with electric circuits in formats that can be thin, light-weight, flexible, bendable, conformal, potentially stretchable and disposable [8]. FHE offer notable advantages over the conventional electronic systems that are made of bulky and rigid materials [9]. Recent advancements in the advanced materials and soft mechanics have enabled a successful integration of rigid, miniaturized chips with flexible/stretchable circuit interconnects. Such FHE in AH applications enhances signal processing, memory, and wireless power transfer in wearable systems [10], [11]. For example, in real-time monitoring of health parameters, FHE enables biofriendly devices on biological tissues, such as artificial human skin, or internal organs with time-dynamic motions [12]. In general, implementation of FHE in AH enabling improved wearability and performance for the devices, and as a result, facilitating their use among individuals.
B. Reduced size of the devices
Due to the progress in nanotechnologies, IoT wearables can be deployed at the nano level (named as Nanonetworks) [13].
Such nanodevices employ unique properties of graphene, which allows a significant size decrease for electronic elements, including antennas, processors, receivers and transmitters [14]- [16], as well as sensors and actuators [17], [18]. The graphene-based nanoantennas enable communication in the THz frequency band [15]. However, the distance of communication in the THz frequency band is substantially limited by the high signal power losses during propagation [19], [20]. The distance of communication will not exceed 2 meters, even in an air environment with minimal humidity; if the communication is performed in an environment with a high concentration of liquids, such as a human body, the distance of transmission will decrease to several millimeters [21] establishing new challenges related to enabling communication within such networks. These graphene-based devices (antennas and transceivers for THz communication) are small enough to be integrated into biological systems (on the border between the organism and the environment) and can be easily integrated into modern communication devices (e.g., smartphones) as they are based on existing electronic technologies.
C. Improving energy effeciency
A power unit used in wearables typically the most significant contributor to both the size and weight of the devices [22]. As a consequence, developers must balance between size and the capability for autonomous operation when designing wearables. A majority of devices are designed with the priority given to size and weight, and thus have minimal operation time between recharging [23]. However, users' expectations continue to move toward fully autonomous devices without recharges or other maintenance operations. To address these demands, recent research efforts have targeted enhanced battery lifetime through improving the energy efficiency of the devices. Significant energy costs in wearables come from network functions, data acquiring, and processing [24]- [26].
The networking overheads of wearable devices was investigated in [27], [28]. More specifically, these works considered digital traffic generated by the wearable network in realtime mode. Results of the study demonstrated that network resource utilization in wearable systems is extremely low due to signaling overheads. However, the efficiency can be improved if an advanced data management algorithm is utilized on the BAN gateway. One such algorithm was proposed and evaluated in [29]. The reported results demonstrated improved networking efficiency by approximately 80 percent via the reduction in network signaling overheads, while the performance of applications decreased negligibly. Despite the notable improvements in networking, the energy efficiency of the considered systems is far from optimal and has massive potential for further improvement.
From the perspective of data processing, a drastic improvement is the promise of Approximate Computing (AC) [30]. Approximate Computing is inspired by the Pareto Principle according to which, roughly 80 percent of the effects come from 20 percent of the causes. Regarding the wearable networks, this principle can be formulated in the following way: capturing just 20 percent of the data may enable 80 percent of the applications performance. It should be noted that the actual percentage can be different; however, the general principle remains the same a minority of efforts provides the majority of results.
Wearable applications work with noisy data; thus they are natively resilient to error [31], moreover, most of the applications do not require extremely precise results, thus the paradigm of an acceptable margin of error as introduced by AC promises significant energy-efficiency gains for AH systems.
D. Reliability
Reliability issues need to be addressed long before a device could be considered for any mission-critical application. However, the reliability and validity of existing wearable devices is concerning. The majority of available devices are not verified in terms of accuracy and reliability [32]. Recent tests among wearables showed significant variations of accuracy with error margins of up to 25 percent [33].
In addition to device reliability, but by no means less important, is the enabling quality of server platforms. Possible adverse effects from a cloud server failure are widely discussed in the literature [22], [34], [35], and can be considerably mitigated via placement optimization [36].
E. Challenges
Developing networks of nanodevices. Recent developments in nanotechnologies have enabled tiny-sized devices with both sensor and actuator functionality. However, due to multiple limitations, these devices are not capable of supporting standard communication protocols, including medium access control, routing, and security. Although networking among nanodevices is widely discussed in the literature, commercially available solutions have yet to be delivered, which keeps the door open for transferring theoretical findings to the real world.
Power supply. Emerging AH systems should fully utilize the benefits of efficient wireless power harvesting and energy transmission [37], as well as low energy technologies, for reducing a users routine in its relation to the charging of devices.
Requirements for the devices and testing specifications. Despite, the notable progress in provisioning reliable AH operation, there is a lack of systematic perspective on the reliability of mission-critical IoT systems. This gap is expected to be fulfilled by the efforts of international standardization bodies (e.g., SG11 ITU-T) which perform extensive work towards the standardization of unified testing procedures for such systems.
Balancing the trade-offs between energy efficiency and accuracy. Implementation of AC promises a reduction of energy consumption by computing and sensing blocks of AH systems. However, the balance between energy efficiency and application performance should be clearly defined.
IV. CONNECTIVITY DEMANDS OF AH
The connectivity demands of AH include intra-BAN and inter-BAN considerations and cover physical interfaces, networking architecture, and AH integration in emerging network infrastructure (5G/5G+).
A. Multi-tier networking architecture
To enable the sustainable operation of AH devices, and context-awareness, AH systems must support multiconnectivity when operating in a multi-tier network environment (Fig.4).
Fig. 4. Integrated smart environments
Intra-BAN communications integrate all personal devices of an individual into one network. Such a network can operate in a distributed way or can be orchestrated by the head node (e.g., smartphone or body gateway). The orchestrated BAN is less reliable, because the fault of orchestrating devices causes a disruption of the whole BAN, while in a case of a distributed approach the network is resilient to the faults of individual devices [38]. On the contrary, an orchestrated BAN demonstrates better quality of service (QoS) and energy efficiency [39]- [41].
Inter-BAN communication covers the interaction between devices of two or more individuals. Such interaction often relies on device-to-device communication (D2D) and is required for enabling synchronization among AH systems when users collaborate. This type of communication is commonly characterized by a higher temporal and spatial dynamic (e.g., link blockages and outages). To improve the stability of sessions, the communication links can be established via assisting robot relays, such as drones. Drones can be considered as a part of a personal IoT ecosystem where they contribute to sensor augmentation, providing additional information about the environment. Simultaneously they may serve as relays for reliable D2D communication (e.g., connection with a user around the corner). In total, communication with infrastructure in direct mode allows an AH to be context-aware without a load on the mobile infrastructure. An internet connection via the mobile network infrastructure can be used for accessing cloud servers and other devices, which are not reachable via D2D communication (e.g. offloading of computation to the edge) [42].
Wired interfaces enable improved reliability and stable quality of connections, which can be fundamentally important for critical elements of AH. Moreover, wired devices (almost unsusceptible to radio interference) reduce the problem of the radio-noisy environment when plenty of wireless devices are working in close proximity (ultra-dense scenario). In addition, a wired connection can be used for an energy supply which is a notable advantage of such systems. However, the low flexibility of wired networks significantly limits their utilization in AH systems. Wired connections are currently selected exclusively for intra-BAN communication. The most suitable niche for wired communications is related to cases where connected devices are not expected to be moved considerably in relation to each other. For example, elements of an exoskeleton, elements of smart textile, and sensors embedded in the skin and connected via smart tattoo.
Recent advances in inductive links and intrabody links may establish a new branch of communication technologies for AH systems, based on using human body tissues as a transmission medium (e.g., molecular communication) [13], [43].
C. Increasing throughput
Initially, wireless technologies for machine type communications (interconnected devices) were developing with a focus to low rate traffic (e.g., telemetry), and a limited density of devices in a network. Presently, due to the reduced size of wearables, the density of connected devices can be significant. In addition, their services have spread far beyond simple telemetry, and they now use media extensively (e.g., AR/VR video services). As a result, IoT devices are generating a notable portion of data in the network, which can be expected to continue to increase in the future.
Supporting a high data rate among wireless wearable devices, especially in dense deployment (e.g., crowded streets of the city, stadiums) is a challenging task. The primary concern is interference when many devices operate simultaneously. As an alternative to an extensively employed microwave spectrum, it is proposed to use millimeter-wave (mmWave) links [44]. Due to the higher spectrum and less interference (because of greater signal loss at these frequencies), mmWave links are considered as a solution for the mitigation of interference and throughput concerns in emerging wearable networks [45].
D. Augmented Human in 5G/5G+ landscape
The connectivity challenges of AH in 5G/5G+ networks are driven by the spontaneous forming, maintaining and termination of heterogeneous networks of AH devices, and traffic flow balancing. Mission-critical communication has already been deeply investigated and discussed in the literature. However, network demands in the considered scenarios all occur in predefined locations (e.g., manufacturing, transportation hubs, medical facilities) [46]- [48], while network demands of AH applications are characterized by a high degree of temporal and spatial variations [49]. Therefore, conventional static network planning methods are inefficient for AH and require development of adaptive methods.
In comparison with legacy mobile networks, 5G systems bring a considerable shift in the quality of services offering Ultra-Reliable Low-Latency Communication (URLLC) for delay-sensitive applications, which open new horizons for AH applications. More specifically, the dynamic demands of AH are expected to be addressed in 5G by utilizing mobile access points (e.g., cell on wheels, aerial access point) and traffic offloading on D2D mesh networks. Additionally, connectivity of AH can be considerably enhanced by utilizing multiband access (e.g., using sub-6 GHz and millimeter-wave bands of 5G NR simultaneously).
Nevertheless, the mission-critical services natively supported by 5G systems require standardization efforts to meet AH demands. These efforts should result in a prioritized network service of AH applications and support interoperability between AH systems in 5G and beyond.
E. Challenges
D2D mesh networking. Secure inter-BAN multi-hop D2D communications are required for supporting merging and consequent splitting of AH systems employed by different users during their collaborative activities. Merging of AH systems means the incorporation of corresponding BANs. Such connectivity on the fly, requires robust devices identification method, neighbors discovery, routing, automatic choice and assignment of devices acting as a heterogeneous gateway to connect devices with different radio access technologies. For the last, but not least, it is essential to incentivize users to share their resources and participate in mesh networks; otherwise, the performance of the meshes will be very limited.
Health Concerns. Wide use of wireless wearables raises concerns related to the effects caused by the high-frequency electromagnetic waves on peoples health. The sensitivity of human tissues and skin to electromagnetic radiation, as well as long term effects caused by wireless devices, needs to be analyzed carefully.
Enabling directional wireless communications in BAN Directional wireless communication is extensively discussed in the literature as a feature of emerging air interfaces operating using high frequencies (e.g., millimeter-wave or THz communication). Utilization of directional antennas in wearable networks significantly increases the complexity of wireless interfaces, but promising lower interference among devices and gigabits-per-second rates (if mmWave links used) [45]. To enable directional links in BAN, research challenges related to beamforming techniques must be addressed [50].
Adaptive network management mechanisms. A novel signaling architecture is required for capturing and predicting AH demands, in order to enable real-time network adaptation to varying demands of AH application and the varying available network resource. The promising solutions for addressing this challenge may come from the synthesis of machine learning approach and SDN/NFV technologies.
V. SECURITY CONSIDERATIONS
Applications of AH bring security concerns to the top, as security breaches in such enablers can have dramatic results to both the infrastructure and the individuals who rely on them. International standardization bodies are considering security challenges architecturally [51]- [53]. Following this, Fig. 5 summarized in a layered manner, common security threats relevant for AH applications.
A. Physical level
Attacks on the physical level may disrupt the normal operation of connected devices even if the high level (MAC, network and application) is well designed. For example radio frequency (RF) jamming may imply interruption of wireless communication using high power radio signals of the same frequency as used by AH devices. RF jamming may entirely block communication or interfere with it. The later may exhaust batteries of wearables due to additional energy costs required for numerous retransmissions, using higher transmit power, idle listening, etc.
A wireless medium is essentially a broadcast one, which makes such systems vulnerable to eavesdropping (e.g., attackers may eavesdrop on ongoing transmissions and hijack the contents or spoof the other user) [54].
B. Network and MAC levels
A significant issue of network and MAC security is caused by a lack of robust device identification methods [54]- [56]. There are many solutions proposed for the identification [57]. These technologies can be classified into two groups: virtual identifiers and physical identifiers. Currently, the most popular identifiers (IMEI, MAC-address) are recorded into the memory of the device [58], which makes them vulnerable to cloning and tampering [59]- [62]. An alternative recently was proposed a concept of hybrid identifier [58], which is significantly more resilient to tampering and potentially may address the issue of reliable device identification in the network. A reliable identification method is required to enable the blocking of untrusted devices on MAC and network levels which reduces the risk of attacks based on accessing the network, including Sybil, tampering server or client interface, spoofing (e.g., DNS, ARP), signaling storms (redundant signaling messages), traffic bursts (e.g., extensive request or data forwarding), and desynchronization.
Concerning the network level, IEEE 802.15.6 defines three levels of security with the focus on critical applications of BAN. According to the standard, each security level has different properties, security levels, and data frame formats. The lowest level of security is provided on level 0, which employs an unsecured data frame for communication. This level has no mechanism for data integrity, confidentiality and privacy protection, and replay defense. The next level provides authentication for enhancing security; however, data is not encrypted. Thus confidentiality and privacy issues are not addressed. Finally, the third level enables authentication and encryption, providing maximal security. The required security level can be selected when a new device associates BAN. The security mechanisms proposed in IEEE 802.15.6 support both unicast and multicast [63].
C. Application level
Software (including firmware) quality and immutability are primary concerns at the application level. Most common attacks exploit the vulnerabilities of software to inject malicious code or logical bombs, performing DoS attacks, and sniffing. It is worth noting that untrusted software producers may incorporate malicious code or a logical bomb into their application by default, which can make the user vulnerable. Beyond this, the most successful attacks at the application level are based on social engineering. This type of attacks exploits users' weaknesses, which is often much easier than hacking a well-designed application.
D. Challenges
Software secured from social engineering attacks. Software utilized in AH applications should be design in such a way to a users' actions by enabling protection from social engineering attacks, by limiting their rights in a system. Recent machine learning algorithms are expected to enable monitoring and dynamic protection from social engineering attacks [64].
Standardization of requirements and testing procedures. Security and reliability requirements for AH applications have to be standardized to provide a validated design framework for developers. New applications can then be considered as ready for AH services if an appropriate testing campaign has certified their conformity to the standard.
Device identification and validation Counterfeit devices still have a notable share in the market. Such devices may operate incorrectly and reduce the performance of the system as a whole. Thus, it is especially important to provide a robust device identification system for AH applications. Such a system can be used for blocking counterfeit and untrusted devices in the network which facilitates the security of the applications.
VI. CONCLUSION
Communication technologies of the past decades notably shaped social and lifestyle changes. Internet-related technologies accelerated lifestyle via efficient and prompt information exchange. Currently, in the era of IoT one may observe how connected devices have become fully autonomous, delivering advanced services to their users. Emerging IoT applications are facilitating human augmentation via enhanced sensing, increased physical power, or cognitive performance. These applications form a new area for research and development, promising to become one of the most impactful technologies in the foreseeable future.
This paper covered the main aspects of IoT technologies for human augmentation and identified possible future research directions. The topic of human augmentation is highly interdisciplinary; thus, the defined challenges are not limited to communication technologies only, and their mitigation requires efforts in ethics, security, and natural sciences. Only collaborative work on this topic enables real opportunities for human wellbeing via IoT augmentation. | 2019-09-24T21:20:40.000Z | 2019-09-24T00:00:00.000 | {
"year": 2019,
"sha1": "9610e0e6996811e2733709def10698f74bb621b7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1909.11191",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9610e0e6996811e2733709def10698f74bb621b7",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
254338135 | pes2o/s2orc | v3-fos-license | Euterpe oleracea Mart (Açaizeiro) from the Brazilian Amazon: A Novel Font of Fungi for Lipase Production
Lipases (EC 3.1.1.3) are hydrolases that catalyze triglycerides hydrolysis in free fatty acids and glycerol. Among the microorganisms that produce lipolytic enzymes, the entophytic fungi stand out. We evaluated 32 fungi of different genera, Pestalotiopsis, Aspergillus, Trichoderma, Penicillium, Fusarium, Colletotrichum, Chaetomium, Mucor, Botryodiplodia, Xylaria, Curvularia, Neocosmospora and Verticillium, isolated from Euterpe oleracea Mart. (Açaizeiro) from the Brazilian Amazon for lipase activity. The presence of lipase was evidenced by the deposition of calcium crystals. The endophytic Pestalotiopsis sp. (31) and Aspergillus sp. (24) with Pz 0.237 (++++) and 0.5 (++++), respectively, were the ones that showed the highest lipolytic activity in a solid medium. Lipase activity was rated in liquid medium, in a different range of temperatures (°C), pH and time (days). The values obtained in the production of lipase by the endophytic fungi were 94% for Pestalotiopsis sp. (31) and 93.87% for Aspergillus sp. (24). Therefore, it is emphasized that the endophytic fungus isolated the E. oleracea palm may be a potential candidate to produce enzymes of global commercial interest.
Introduction
Brazil has around 20% of the world's biodiversity [1], and the Amazon is majorly responsible for this biodiversity, due to the high specificity of the environment, which contributes to a diversified species of microorganisms; however, the microorganisms present and their interactions with other organisms are poorly understood [2][3][4].
Euterpe oleracea Mart. (Arecaceae), commonly known as açaí, is a palm tree typically found in the Amazon region, naturally found in North Brazil, especially in the Pará, Amazonas and Amapá states, and which has great importance because of the economic value of its fruit pulp [5]. Numerous advances have been made in recent years to demonstrate the health benefits of açai pulp and seed from E. oleracea Mart [6]. Studies have confirmed it to be one of the most potent antioxidants [7] and anti-inflammatory food sources available, attributable to a class of flavones, as well as other polyphenols, lignans and saccharides [8].
Endophytic fungi are microorganisms that live inside the host plant tissues without causing diseases [2]. Some studies show that fungi endophytes are capable of producing a large number of important bioactive metabolites, for example, the taxol, an important anticancer drug, is produced by fungus endophytic Taxomyces andreanse, isolated from Taxus brevifolia bark [9]. The endophytic fungi can be also a source of different enzymes, such as, amylases, quietness, proteases, asparaginases, celluloses, laccase and lipases with biotechnological interest [10][11][12].
The first description of endophytic fungi from E. oleracea palm leaves, of the Amazon region, was performed by Rodrigues [13]. In opportunity, Rodrigues also described the occurrence of a new genus, such as, Letendraeopsis palmarum, and new species of the genus Idrella isolated from E. oleracea palm [14]. The enzymatic potential (cellulolytic and amylolytic) of endophytic fungi isolated from Euterpe precatoria Mart. was shown by Batista [15]; Recently, the extract with effect antimicrobial from endophytic fungi of E. precatoria against Staphylococcus aureus, Streptococcus pneumoniae, Enterococcus faecalis, Escherichia coli and Klebsiella pneumoniae human pathogens was examined [16]. In addition, endophytic fungi from E. precatoria were used as antagonistic agents towards Colletotrichum gloeosporioides in the control of anthracnose in açaí leaflets [17]. McCulloch et al. [18] reported the genome, in a single chromosome, of Lactococcus lactis strain AI06, isolated from the mesocarp of the açaí fruit (Euterpe oleracea) in eastern Amazonia, Brazil. This strain is an endophyte of the açaí palm and also a component of the microbiota of the edible food product. However, studies about endophytic fungi from E. oleracea are scare in the literature.
Lipases (EC 3.1.1.3) are catalytic enzyme of hydrolases reaction, on the carboxylic ester bond and catalyze the reaction of hydrolysis esterification and interesterifications of fats with excellent performance [19,20]. These enzymes correspond to the third biggest selling group in the world [21], your applications goes from the production of detergents, degreasers [22,23], textile products, to paper, [10,23]. Lipase can be produced by animals, plants and microorganisms, the enzymes produced by the last one being more stable when compared to other sources [19].
Some studies are being carried out to explore environments that have not been studied, especially located in the Amazon region. Therefore, this article aims to demonstrate the potential enzymatic (lipolytic) activity of endophytic fungi from the fruit of Euterpe oleracea Mart (açaizeiro).
Isolation and Identification of Endophytic Fungi from the Fruits of E. oleracea Mart
The botanical material (fruits and leaves) of E. oleracea was collected and given by the Brazilian Agriculture and Livestock Research Company (Empresa Brasileira de Pesquisa Agropecuaria-EMBRAPA/Amapá), from the coordinates (N 00 • 22 55 e O 51 • 01 40 ) during the period of August 2018. The fungi used in this research were isolated from their fruits, roots and leaves and were stocked according to [24]. All microbiologic manipulation activity was conducted inside the laminar flow cabinets.
Identification and Conservation of Endophytic Fungi
The morphological identification was conducted at the genus level by macro-morphological grouping, by observing the characteristics of each individual such as its appearance, form, color and the consistency of its colonies. In order to visualize the microscopical structures, specially designed coverslips were made in microcultures, using a blue pigment of lactophenol (0.5%) across the surface of the coverslips. An optical microscope (OLYMPUS ® BX41) was used for capturing these images. The images were amplified by 200-400× and compared to the ones found in the specialized literature [13]. Thereafter, the macromorphological and micromorphological analysis identified the following genera: Pestalotiopsis sp., Aspergillus sp., Trichoderma sp., Penicillium sp., Fusarium sp., Colletotrichum sp., Chaetomium sp., Mucor sp., Botryodiplodia sp., Xylaria sp., Neocosmospora sp. and Verticillium sp.
Determining the Enzymatic Activity
The microorganisms were precisely inoculated in the center of the Petri dish (90 cm) and incubated in B.O.D in a regulated environment of 28 • C with a photoperiod of 12 h. The measurement of the colony and the halo diameter were expressed in centimeters (cm) and conducted once per 24 h for 5 consecutive days. All the testing was conducted in triplicate.
Lipase Production in a Liquid Environment
After the sorting with the ones that were isolated in a solid medium, we opted to optimize the lipase production in a liquid environment using only the fungi Pestalotiopsis sp. (31) and Aspergillus sp. (24). In order to cultivate those in a liquid medium, Erlenmeyer flasks (250 mL) were used, containing 25 mL of growth medium with 5 mycelial discs (8 mm of diameter) of the fungi Pestalotiopsis sp. (31) and Aspergillus sp. (24). After cultivating them, those fungi were incubated for different days (3, 6 and 9 days) under different temperatures (25, 30 and 35 • C) and different pH (5, 7 and 9) in an orbital shaker at 150 rpm. The fermented broth was vacuum-filtered with a filter paper of 80 g.m −2 . Its biomass was vacuum-drained until its weight reached a constant weight in an average of about 96 h. The filtered broth devoid of cells was used to determine the lipolytic activity; for this end, olive oil was used as the carbon font and inductor of the lipase production. the medium used possessed the following composition: 37g/L of peptone, 1.11 g/L of magnesium sulfate, 1.85 g/L of potassium phosphate, 1.85 g/L of sodium nitrate and 14 mL/L of olive oil. The experiments were realized in triplicate.
Quantification of Lipolytic Activity in a Liquid Medium
The lipolytic activity was measured according to the methodology described and adapted by Mayordomo et al. [25]. The lipolytic activity was conducted using 250 µL of a solution containing 200 mg Triton X-100, 50 mg of gum arabic and phosphate buffer (0.1 M) at pH = 7.5 for a final volume of 50 mL. Following this, 250 µL of the enzymatic broth was added to a 45 µL solution of palmitate of p-nitrophenol (p-NPP) diluted in isopropanol (10 mL); thereafter, the reaction was taken to a water bath at 40 • C at 30 min. Then, 0.5 mL of Trizma base 2% (m/v) was added. The quantification of the lipolytic activity was realized starting with the solution p-NPP that releases nitrophenol (NP), quantified by absorbance at 398 nm in a spectrophotometer (Perkinelmer-Lambda35). One unit of activity was defined as the amount of enzyme required to hydrolyze 1 µmol of p-NPP per minute under the conditions described.
Statistical Analysis Experimental Design and Statistical Model
In this study, a three-level and three-variable Box-Behnken factorial design was applied to determine the best combination of variables for determination of lipase production using isolated endophytic fungi. The pH of the medium, time (days) and temperature ( • C), which were identified to have strong effects on the response in preliminary one-factor-at-a-time experiments, were taken as the variables tested in a 15-run experiment to determine their optimum levels. Independent variables were designated as x1, x2 and x3, and their level values are shown in Table 1. The polynomial equation used for the three variables is given below: where Y is the predicted response; β0 is model constant; β1, β2 and β3 are the linear coefficients; β11, β22 and β33 are the quadratic coefficients; β12, β13 and β23 are the interaction coefficients; and x1, x2 and x3 are independent variables. Table 1. Three independent variables used in Box-Behnken factorial design.
Factor
Name Temperature ( • C) 25 30 35 The optimal condition was determined considering the lipase production content (BD%) as the response. The software STATISTICA ® (version 10, Statesoft-Inc., Tulsa, OK, USA, trial version, 2011) was used for experimental design, data analysis and determination of optimal conditions. ANOVA was used for the evaluation of the significance of independent variables' influence and interactions. Pareto charts were applied to obtain the significance of the impact of tested variables on mentioned responses.
Isolating and Purifying Endophytic Fungi
The strains of endophytic fungi were isolated from E. oleracea by making use of fragments of its plant tissue (fruits, roots and leaves), Figure 1. A total of 32 fungi of different morphological genera were isolated, including: Pestalotiopsis sp., Aspergillus sp., Trichoderma sp., Penicillium sp., Fusarium sp., Colletotrichum sp., Chaetomium sp., Mucor sp., Botryodiplodia sp., Xylaria sp., Curvularia sp., Neocosmospora sp. and Verticillium sp. Figure 2 shows the endophytic fungi isolated and grown in Petri dishes after the process of purifying their lineages.
The taxon Pestalotiopsis is characterized by spores with pigmented median cells, divided by four eusepta (true septum), with 2-3 apical appendages resulting from tubular extensions of the apical cell and a central basal appendage [27]. However, the genus Pestalotiopsis is complex and can be difficult to classify at the species level, because characteristics such as fruiting structure, length and conidia morphology tend to vary within species and also with any change in the environment [28]. The colonies were characterized by having white coloration, vigorousness, cottony mycelium, formation of black masses of conidia and abundant sporulation. The taxon Pestalotiopsis is characterized by spores with pigmented median cells, divided by four eusepta (true septum), with 2-3 apical appendages resulting from tubular extensions of the apical cell and a central basal appendage [27]. However, the genus Pestalotiopsis is complex and can be difficult to classify at the species level, because characteristics such as fruiting structure, length and conidia morphology tend to vary within species and also with any change in the environment [28]. The colonies were characterized by having white coloration, vigorousness, cottony mycelium, formation of black masses of conidia and abundant sporulation. The taxon Pestalotiopsis is characterized by spores with pigmented median cells, divided by four eusepta (true septum), with 2-3 apical appendages resulting from tubular extensions of the apical cell and a central basal appendage [27]. However, the genus Pestalotiopsis is complex and can be difficult to classify at the species level, because characteristics such as fruiting structure, length and conidia morphology tend to vary within species and also with any change in the environment [28]. The colonies were characterized by having white coloration, vigorousness, cottony mycelium, formation of black masses of conidia and abundant sporulation. Rodrigues [14] recorded the first occurrence of the genus Pestalotiopsis in the Amazon region as endophytic in açaí leaves.
Additionally, the identification of Aspergillus has traditionally been based on morphological characterization [29]. Macromorphological characteristics include colony color on various culture media, colony diameter, colony reverse color, production of exudates and soluble pigments. The micromorphological characterization mainly related to the form of serialization of the conidial head, the size of the vesicle, the morphology of conidia and the presence of cells [30].
The Aspergillus conidiophore is simple, usually aseptic and ends in a vesicle, where the phialides are inserted. Some species can produce Hülle cells or sclerotia. Many species of Aspergillus have teleomorphs and reproduce sexually [30].
Screening of Lipase-Producing Endophytic Fungi
The determinant factors that make the qualitative enzymatic test selection viable include the direct correlation between the halo size and the degradative capacity of the microorganisms. The Table 2 show the enzymatic activity of the isolated endophytic fungi from endophytic E. oleracea (açaizeiro). The results from this selection allow to foresee the enzyme production yields, indicating the presence of a determined substance through the detection of some specific activity [31]. The isolated fungi with the largest activity halo were selected for the enzymatic activity determination step in a liquid environment. From the alignment of lipase production of endophytic fungi in Petri dishes, the isolated strains show the formation of calcium crystal halos around the colonies (Figure 3), indicating the production of lipase by these fungi. Most of the lipase-producing fungi are isolated from industrial or domestic oily leftovers, which have been contaminated with grease and oil and living and dead animals [32,33]. From this study, it is also possible to affirm that endophytic fungi can exhibit an interesting lipolytic activity. The lipasic activity is frequently measured by the release of either fatty acids or glycerol, and the use of a solid medium with inducting substrates such as vegetable oil, standard triglycerides, Tween 80 and coloring agents was already described in the literature, aiming at the pre-selection of lipase-producing microorganisms [34]. Many of these genera fungi were described in the literature as potential lipase producers, such as Trichosporon, Botrytis, Pichia, Fusarium, Aspergillus, Mucor, Rhizopus, Penicillium, Geotrichum, Tulopsis and Candida [35].
Tarci et al. [30] isolated Aspergillus sp. DPUA 1727 from both the maize and soil and it studied in order to produce lipase using agro-industrial waste as an inductor. It was shown that agro-industrial waste can be used for this purpose mainly if it presents a higher percentage of fatty acid esters (>80%).
Experimental Design for Lipase with Endophytic Fungi Pestalotiopsis sp. (30) and Aspergillus sp. (24)
The lipase production by the endophytic fungi Pestalotiopsis sp. (30) and Aspergillus sp. (21) from different experimental assays of the experimental planning protocol [32] concluded that the factorial planning Box-Behnken Design (BBD) might be the ideal tool in order to optimize the experimental conditions for endophytic fungi. Moreover, another significant advantage of using BDD instead of other techniques is the budget, due to the fact that it demands a smaller number of experimental executions and less time and, consequently, a smaller use of supplies [36].
The values of the variables, the levels used in the experiments and the results obtained are shown in Table 3. The variation between the maximum and minimum values obtained was from 93.18 to 94% for Pestalotiopsis sp. (31) and from 93.12 to 93.87% for Aspergillus sp. (24), where the highest percentages represent the higher relative production of lipase by the endophytic fungi compared to the negative control group, with the best responses (greater amount of lipase) in assay 4 for Pestalotiopsis sp. (31) and Aspergillus sp. (24) (pH 5; temperature of 35 • C; and time of 6 days). The coefficients of determination of the models (R 2 ) were 0.82 for the tests of the endophytic fungus Pestalotiopsis sp. and 0.86 for the Aspergillus sp. (24). Model proficiency is demonstrated if R 2 is equal to 0.75 or greater than this value [37]. The values obtained for the relative production of lipase by the endophytic fungi were quantified from the equation Y = 0.004489*X + 0.1323 generated by the standard curve ( Figure 4). Pareto charts of the standardized effects were generated that reveal the significant effects of the medium pH, growth time (days) and temperature ( • C), both linear and quadratic, with the endophytic fungi Pestalotiopsis sp. (31) and Aspergillus sp. (24), where the bar length represents the absolute importance of the effects estimated according to the values used in the tests. The vertical line represents the boundary between significant and insignificant effects with a 5% risk of error. The effects are significant at a 95% confidence level in the experimental domain studied (p < 0.05) as shown in Figure 5. As shown in Figure 5, three effects were statistically significant (p < 0.05) for relative lipase production with the endophytic fungus Pestalotiopsis (31) sp. For lipase activity from the endophytic Aspergillus sp. (24), evidently, the temperature variable (L) represents the most decisive factor for improving the production of lipase, and in these cases, as the values generated by the Pareto graph were positive (4.950 and 6.025), they show that the higher the temperature used in the reactions, the better. For the reaction conditions for the fungi, results can be observed in experiments 4, 8 and 12 of the experimental design matrix ( Table 3).
The quadratic model correlating the factors utilized in the runs for lipase production by endophytic fungi allows the projection of three response surface graphics for the optimization of the results. The response surface graphic for the lipase production of the endophytic fungi Pestalotiopsis sp. (31) (Figure 6) was generated to determine the crossing between two conditions and analyze its effects over the obtained results. As shown in Figure 5, three effects were statistically significant (p < 0.05) for relative lipase production with the endophytic fungus Pestalotiopsis (31) sp. For lipase activity from the endophytic Aspergillus sp. (24), evidently, the temperature variable (L) represents the most decisive factor for improving the production of lipase, and in these cases, as the values generated by the Pareto graph were positive (4.950 and 6.025), they show that the higher the temperature used in the reactions, the better. For the reaction conditions for the fungi, results can be observed in experiments 4, 8 and 12 of the experimental design matrix ( Table 3).
The quadratic model correlating the factors utilized in the runs for lipase production by endophytic fungi allows the projection of three response surface graphics for the optimization of the results. The response surface graphic for the lipase production of the endophytic fungi Pestalotiopsis sp. (31) (Figure 6) was generated to determine the crossing between two conditions and analyze its effects over the obtained results. As shown in Figure 5, three effects were statistically significant (p < 0.05) for relative lipase production with the endophytic fungus Pestalotiopsis (31) sp. For lipase activity from the endophytic Aspergillus sp. (24), evidently, the temperature variable (L) represents the most decisive factor for improving the production of lipase, and in these cases, as the values generated by the Pareto graph were positive (4.950 and 6.025), they show that the higher the temperature used in the reactions, the better. For the reaction conditions for the fungi, results can be observed in experiments 4, 8 and 12 of the experimental design matrix ( Table 3).
The quadratic model correlating the factors utilized in the runs for lipase production by endophytic fungi allows the projection of three response surface graphics for the optimization of the results. The response surface graphic for the lipase production of the endophytic fungi Pestalotiopsis sp. (31) (Figure 6) was generated to determine the crossing between two conditions and analyze its effects over the obtained results. The lipase production in response to the temperature and pH in the reactional medium ( Figure 6A) reveals that a better production of this enzyme can be obtained due to the increase in temperature (from 30 to 35 °C) combined with both pH 5.0 and pH 9.0. When crossing the growth time and the pH of the medium ( Figure 6B), it is possible to observe that the increase in the time interval used generates a good influence on the responses, increasing from 3 to 9 days with pH 5.0 and 9.0. In Figure 6C, the interaction between temperature and growth time shows that increases in both time and temperature are important in the results of relative lipase production.
Likewise, three response surface plots were obtained to optimize the experimental conditions for the endophytic fungus Aspergillus sp. (24) (Figure 7). Lipase production in feed at the temperature used and the pH of the rational medium ( Figure 7A) show that the increase in temperature (>30 °C) combined with pH 5 and 9 directly influences the response. In Figure 7B, the relationship between time, pH and reaction time shows that, for a greater production of lipase, it is necessary to increase the reaction days (>6.0) using pH 5 or 9. In the interaction between reaction and temperature ( Figure 7C), the best conditions obtained were at temperatures of 30 and 35 °C, mainly in the growth time of 9 days. It is important to highlight that the time and pH factors in the variables used in this experimental design were not statistically significant, making it evident that the temperature was the most relevant factor in this method.
Lipases are known for being efficient and stable catalyzers in many culture mediums and for acting in a diverse range of organic solvents. Many studies identified the production of enzymes by endophytic fungi in solids and liquids, just like the uses of those in transesterification reactions of many different lipidic biomasses [2,22,38].
In a study realized by Souza et al. 2018 [39] utilizing cotton oil as a substract, the Preussia africana isolated from Handroanthus impetiginosus showed lipolytic activity with 5.9 U/mL. Stemphylium lycopersicie isolated from Humiria balsamifera and Sordaria sp. isolated from Tocoyena bullata not only showed maximum activity for lipase production with 110 U/mL, but they also promoted the esterification reaction for the synthesis of ethyl oleate [2]. The lipase production in response to the temperature and pH in the reactional medium ( Figure 6A) reveals that a better production of this enzyme can be obtained due to the increase in temperature (from 30 to 35 • C) combined with both pH 5.0 and pH 9.0. When crossing the growth time and the pH of the medium ( Figure 6B), it is possible to observe that the increase in the time interval used generates a good influence on the responses, increasing from 3 to 9 days with pH 5.0 and 9.0. In Figure 6C, the interaction between temperature and growth time shows that increases in both time and temperature are important in the results of relative lipase production.
Likewise, three response surface plots were obtained to optimize the experimental conditions for the endophytic fungus Aspergillus sp. (24) (Figure 7). Lipase production in feed at the temperature used and the pH of the rational medium ( Figure 7A) show that the increase in temperature (>30 • C) combined with pH 5 and 9 directly influences the response. In Figure 7B, the relationship between time, pH and reaction time shows that, for a greater production of lipase, it is necessary to increase the reaction days (>6.0) using pH 5 or 9. In the interaction between reaction and temperature ( Figure 7C), the best conditions obtained were at temperatures of 30 and 35 • C, mainly in the growth time of 9 days. It is important to highlight that the time and pH factors in the variables used in this experimental design were not statistically significant, making it evident that the temperature was the most relevant factor in this method.
Lipases are known for being efficient and stable catalyzers in many culture mediums and for acting in a diverse range of organic solvents. Many studies identified the production of enzymes by endophytic fungi in solids and liquids, just like the uses of those in transesterification reactions of many different lipidic biomasses [2,22,38].
In a study realized by Souza et al. 2018 [39] utilizing cotton oil as a substract, the Preussia africana isolated from Handroanthus impetiginosus showed lipolytic activity with 5.9 U/mL. Stemphylium lycopersicie isolated from Humiria balsamifera and Sordaria sp. isolated from Tocoyena bullata not only showed maximum activity for lipase production with 110 U/mL, but they also promoted the esterification reaction for the synthesis of ethyl oleate [2]. Between the factors that influenced the enzymatic activity, the medium's pH is among the most significant variables in this study. The pH has a vital role in manutention of metabolism of fungi, taking part in a diverse range of biological functions. In the enzymatic process, each enzyme shows maximum activity at specific pH values. Moreover, the applicability of those enzymes in the biotechnological field depends on its stability in different pH ranges. In this sense, several studies report that endophytic fungi are efficient producers of stable enzymes with variable pH (alkaline and acid), such as lipases.
The endophytic Aspergillus sojae isolated from the plant Plectranthus amboinicus produced stable lipases, with maximum activity at pH 6 under a temperature of 27 °C [40]. Rocha et al. (2020) [2] isolated the fungi Stemphylium lycopersici from the leaves of a Humiria balsamifera and noted that under a temperature of 30° C and pH 7 it was a great lipase producer. According to [41], the fungi Preussia africana isolated from Handroanthus impetiginosusi, when cultivated under the conditions of pH 7 and a temperature of 37 °C, was an excellent lipase producer. Additionally, [42] optimized the lipase production process by making use of the fungi Aspergillus niger (MTCC 872) and observed that the maximum production of this enzyme was correlated to the temperature of 40 °C and pH 6.
The optimization of reaction conditions through experimental design has been a great tool for bioassays with fungi, as conducted in the production of lipase with three selected strains: Candida guilliermondii, Penicillium sumatrese and Aspergillus fumigatus. Enzymatic active were optimized through the experimental design, with excellent yield [43,44], by the endophytic fungus Penicillium bilaiae. Additionally, [45] applied a factorial experiment to optimize the production of extracellular enzymes by the endophytic fungus Alternaria alternata.
Conclusions
This study isolated 32 endophytic fungi from Euterpe oleracea (fruits, leaves and roots) from different genera. The isolated fungi of the endophytic Pestalotiopsis sp. (31) and Aspergillus sp. (24) were the ones that showed the highest lipolytic activity in a solid medium. Through experimental planning, the isolated Aspergillus sp. (24) and Pestalotiopsis sp. (31) have shown how the variable of pH affects the lipolytic activity, with the medium containing pH 9 having the most significance. Therefore, this highlights that the endophytic fungi isolated from the palm tree E. oleracea might be potential candidates for enzyme production of global commercial interest. Between the factors that influenced the enzymatic activity, the medium's pH is among the most significant variables in this study. The pH has a vital role in manutention of metabolism of fungi, taking part in a diverse range of biological functions. In the enzymatic process, each enzyme shows maximum activity at specific pH values. Moreover, the applicability of those enzymes in the biotechnological field depends on its stability in different pH ranges. In this sense, several studies report that endophytic fungi are efficient producers of stable enzymes with variable pH (alkaline and acid), such as lipases.
The endophytic Aspergillus sojae isolated from the plant Plectranthus amboinicus produced stable lipases, with maximum activity at pH 6 under a temperature of 27 • C [40]. Rocha et al. (2020) [2] isolated the fungi Stemphylium lycopersici from the leaves of a Humiria balsamifera and noted that under a temperature of 30 • C and pH 7 it was a great lipase producer. According to [41], the fungi Preussia africana isolated from Handroanthus impetiginosusi, when cultivated under the conditions of pH 7 and a temperature of 37 • C, was an excellent lipase producer. Additionally, [42] optimized the lipase production process by making use of the fungi Aspergillus niger (MTCC 872) and observed that the maximum production of this enzyme was correlated to the temperature of 40 • C and pH 6.
The optimization of reaction conditions through experimental design has been a great tool for bioassays with fungi, as conducted in the production of lipase with three selected strains: Candida guilliermondii, Penicillium sumatrese and Aspergillus fumigatus. Enzymatic active were optimized through the experimental design, with excellent yield [43,44], by the endophytic fungus Penicillium bilaiae. Additionally, [45] applied a factorial experiment to optimize the production of extracellular enzymes by the endophytic fungus Alternaria alternata.
Conclusions
This study isolated 32 endophytic fungi from Euterpe oleracea (fruits, leaves and roots) from different genera. The isolated fungi of the endophytic Pestalotiopsis sp. (31) and Aspergillus sp. (24) were the ones that showed the highest lipolytic activity in a solid medium. Through experimental planning, the isolated Aspergillus sp. (24) and Pestalotiopsis sp. (31) have shown how the variable of pH affects the lipolytic activity, with the medium containing pH 9 having the most significance. Therefore, this highlights that the endophytic fungi isolated from the palm tree E. oleracea might be potential candidates for enzyme production of global commercial interest. Data Availability Statement: All data is provided in full in the results section of this paper.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the present study. | 2022-12-07T16:02:52.776Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "03b8c76a87d6bb3d6c947ebaf90d472e1e155602",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/12/2394/pdf?version=1669976940",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e22a1bc31a4373e83d065cf37406aec93a40faa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253332221 | pes2o/s2orc | v3-fos-license | FIBRE SOURCING FOR THE NIGERIAN PULP MILLS: EVALUATION OF SUITABILITY INDICES OF SELECTED NIGERIAN RAINFOREST WOOD FIBRES
FIBRE SOURCING FOR THE NIGERIAN PULP MILLS: EVALUATION OF SUITABILITY INDICES OF SELECTED NIGERIAN RAINFOREST WOOD FIBRES. To find a lasting solution to the problem of suitable fibre for pulp and papermaking in Nigeria, fibre suitability indices of nineteen wood species native to the rainforest zone of Nigeria were evaluated. Matured stems of the species were sourced and prepared for maceration. The fibre characteristics of the wood were carried out following ASTM D-1030-95 and ASTM D-1413-61. The fibres obtained were observed with the aid of a microscope and measurements of their morphology were done. A minimum of 25 fibres were measured for each species for accuracy. Selected morphological indices such as Runkel Ratio (RR), Flexibility Coefficient (FC), Slenderness Ratio (SR) as well as Rigidity Coefficient (RC) of the wood fibres were estimated. The results showed that the fibres lenght fall under short (1.05–1.36), medium-long (1.52–1.75), and long (2.0 mm) fibres criteria. All derived morphological indices showed significant variations from species to species. All fibres are not rigid and exhibited good SR with moderate rigidity and good felting power. They were all elastic; R . heudolotii and P . macrocarpa exhibited high elastic nature. They all have FC ≥ 50 and pass the RR ≤ 1, acceptable value for paper-making fibre except P . biglobosa and M . excelsa . The flexibility coefficients are in the range of 0.50 and 0.81. All the species pass the SR > 33 acceptable value for paper-making fibres. The species if harnessed as fibre blends in pulp and paper making furnish will help to solve the problem of inadequate long fibres for paper production in Nigerian pulp mills.
I. INTRODUCTION
Nigeria is one of the largest wood producers in Africa and a major exporter of timber resources (Obiora, Jonah, Ikenna, & Christian, 2019;FAO, 2004). The nation's forest industry, especially the paper sub-sector seems to be the worst-performing among all the industries (Nigerian Voice, 2010). The federal government established three paper mills in the 1970s which include; Nigerian Paper Mill in Jebba, Nigerian Newsprint Manufacturing Company, Oku Iboku, and Iwopin Pulp and Paper Company. According to reports, all three mills started well but could not sustain operation and eventually closed down in 1996 (Azeez, Andrew & Sithole, 2016;Udohitinah & Oluwadare, 2011). The cause of the failed investments has been attributed majorly to the inadequate supply of long-fibre for pulp and paper production (Oluwadare, 2007) due to the absence of long fibred raw materials in Nigerian forests. This then necessitated heavy dependence on imported pulp fibres. Although several species have been suggested to be used as a feasible solution to address the inadequate long fibre problem the suggestions were never implemented until all the mills finally closed down. Efforts to develop a sustainable pulp and paper industry have proved abortive because of the high dependence on imported long-fibre pulp (Ogunwusi, 2013). As far back as the 1980's, approximately US $ 85 million was required to import 85,000 tons of long fibre pulp required by the three integrated pulp and paper mills in Nigeria (Makinde, 2004;Egbewole, & Rotowa, 2017). Thus, Nigeria currently depends on the importation of writing, duplicating, printing, and kraft papers including newsprint (Ogunwusi & Onwualu, 2013).
Nigerian forests are characterised by mixed tropical hardwood species whose fibre lengths are short. The morphology of the fibres is important index in evaluating the suitability of fibre for pulp and paper-making Dinwoodie (1965). A number of hardwood species have been studied by various researchers (Ogunwusi, 2002;Osadare, 2001;Ogunkunle & Oladele, 2008;Oluwadare, & Sotannde, 2007;Ogunjobi, Adetogun, & Omole, 2014) were reported to be suitable sources of fibre for paper making. Although the suitability rating by these authors was based only on the fibre lengths of the species. It is not that hardwoods are typically bad for paper production. Papers from hardwood pulps are generally lower in strength because of their shorter fibres than those of softwoods with longer fibres.
Long softwood fibres give essential strength, while short hardwood fibres are used in furnishes to provide good printability and stiffness to the end product. Analysis of the morphology of fibres and their derived indices are important factors in estimating the pulp quality of any fibre material (Dinwoodie, 1965). The morphology of the fibre and its derived indices correlates with most of the strength properties of pulp. Repetition A fibre with thinner cell walls will collapse more easily than a fibre with thicker cell walls. Collapsed fibres are more flexible and have a higher area available for bonding. Collapsed fibres create a network with much higher density and lower bulk. Thus, the paper will have a higher tensile strength, compression strength, burst strength, tensile stiffness, and elasticity. The flexibility of the fibres has a large influence on the tensile strength, density, porosity and light scattering of the paper. Fibre cell lumen size and cell wall thickness affect the rigidity and strength properties of the papers (Panshin & de Zeeuw 1980). Fibres with a large lumen and thin walls tend to flatten to ribbons during paper-making with enhanced inter-fibre bonding between fibres, consequently having good strength characteristics (Oluwadare, 1998;Osadare, 2001).
Suitable indices such as cell wall thickness, Runkel ratio, flexibility coefficient, slenderness ratio, and rigidity coefficient which determine the suitability of any fibrous material for pulp and paper making have not been well documented for the wood species under study. Most Nigerian woods are lacking in this aspect. To effectively use these species as raw materials in pulp and paper-making furnish, reliable knowledge of their suitability based on the derived indices is essential. In this study, the morphological indices of 18 hardwood species were analysed to determine if they could serve as fibre blended with long softwood fibre or recycled paper pulps. This intends as to increase the raw material base for the moribund Nigerian paper mills, whose major problem is inadequate fibre raw material.
A. Wood Collection and Preparation
Eighteen different wood species were collected from sawmills in Akure, Ondo State. Akure falls within the rainforest zone of Nigeria. The wood Samples were first identified by the saw millers. The literature further substantiated the local identification to obtain their corresponding scientific names. A list of the species used in the study with their local names is provided in Table 1. All of the wood samples were taken from mature wood. Special care was taken to ensure that species were accurately identified using macroscopic and microscopic anatomical features such as colour, density, and press.
Fibre characterisation of the wood species
Fibre characterisation of the wood samples was carried out following ASTM D-1037-12 (2020) and ASTM D-1413ASTM D- -61 (2007. Small slivers having radial and tangential dimensions of 2 and 5 mm, respectively, from each of the wood species were macerated with acetic acid and hydrogen peroxide (1:1) and boiled in a water bath at a temperature of 100°C for 10 minutes following a procedure adopted by Ogbonnaya, Roy-Macauley, Nwalozie & Annerose (1997). Some macerated fibres were randomly selected and mounted on slides and then observed under a Reichert Microscope. The fibre length, fibre diameter and lumen width of unbroken fibres were measured using an eyepiece micrometer after calibrating with a stage micrometer. Some derived values such as the cell wall thickness, Slenderness Ratio, Flexibility coefficient, Runkle ratio, and Rigidity coefficient were computed from the measured fibre dimensions following the method of Sadiku, Oluyege, and Ajayi (2016) as shown below. Twenty-five fibres were measured from each representative sample slides.
C. Analysis
Variations in the fibre morphology and derived values were evaluated by analysis of variance at p ≤ 0.05. Duncan Multiple Range Test was used to compare mean values for the different species. The evaluated fibre morphology and derived values were then ranked based on the suitability of each species.
A. Fibre Morphology
The mean values of the Fibre Length (FL), Fibre Diameter (FD), Lumen Width (LW) and Cell Wall Thickness (CWT) is presented in Table 2. The fibre length, diameter, lumen width and Cell Wall Thickness varied significantly from species to species. The fibre length varied from 1.05 mm to 2.48 mm. N. diderichii had the longest fibre of 2.48 mm followed closely by R. heudollotii (1.7 mm), while A. indica had the shortest fibre length (0.8 mm) and was closely followed by P. macrocarpa (1.05 mm). The fibre diameter varied from 15.22 µm to 51.68 µm. R. heudolotii had the largest (51.68 µm) fibre diameter, while M. excelsa had the smallest (15.22 µm) ( Table 2). The wood species lumen width (LW) varied from 8.89 µm to 43.82 µm. R. heudolotii had the largest lumen width of 43.82 µm, while the smallest (8.89 µm) was recorded for M. excelsa. Cell wall thickness also varied from 2.61 µm to 6.92 µm. P. macrocarpa had the thinnest (2.61µm) fibre cell wall while N. diderichii had the thickest fibre wall of 6.48 µm based on the Duncan multiple range test (Table 3).
Generally, the influence of species was profound on the fibre properties of the wood species according to ANOVA result (Table 2). There were significant statistical variations in the fibre morphologies of the wood species (Table 2). However, the fibre length of M. excelsa, A. boonei, T. africana and A. zygia are statistically similar. T. monaldeph and T. superba; P. angolensis and C. albidium had similar fibre lengths, respectively ( (Table 3).
The wood fibres in this study fall into short (1.05-1.36), medium-long (1.52-1.75), and long (2.0 mm) fibres (Table 3). This finding further substantiates the report of Illvessalo-Pfaffli (1995) that fibre length and width of both woody and non-woody plants vary depending on the species and the plant part from which the fibre is derived. Hurther (2001) also reported that the average length of fibres in hardwoods is about 1 mm and in coniferous wood is about 3 mm. Similar observations by Kpikpi (1992) and Uju and Ugowoke (1997) reported of less than 1.60 mm fibre lengths in some Nigerian hardwood species. All the fibre length of the species falls in the same range as those reported for Guinea savannah species except N. diderrichi which had 2.48 mm (Sadiku & Abdukareem, 2019). R. heudolotii had a larger fibre diameter, T. monaldepha, R. heudolotii, P. angolensis, T. africana, P. macrocarpa, and V. doniana had extremely wider lumen than the reported Guinea savannah woods. However, their cell wall thickness falls in the same range.
The fibre morphological properties are important quality parameters for pulp and paper properties. They are mostly correlated with the physical and mechanical properties of paper. Fibre length is one of the major factors controlling the strength properties of paper (Riki et al., 2019). The fibre length affects the tensile strength, breaking strain and fracture toughness of dry paper and is important for wet web strength (Retulainen et al., 1998). Also, fibre length has been discovered to influence paper sheet formation and its uniformity.
Fibre length is associated with a number of bonding sites available on an individual fibre. It also affects certain characteristics of pulp and paper, such as tear resistance, tensile power and folding power (Fatriani and Banjarbaru, 2017).Generally, both long and short fibres are needed for good papers. Most of the fibres of the species in this study are short except for N. diderichii having long fibres. A long fibred material can have more fibre joints and create a stronger network than a shorter fibre (Riki et al., 2019). Although shorter fibres decrease tensile stiffness. However, the shortening of fibres will improve the formation if well beaten. Therefore, the beating of the wood species in this study will increase the fibres surface and flexibility, which will aid good paper formation.
B. Derived Fibre Morphological Indices of the Wood Species
Some indices are usually calculated to determine the suitability of any fibrous material for pulp and paper production. According to Veveris et al. (2004), the Slenderness Ratio also termed Felting Power, if less than 70 for any fibrous material is not valuable for quality pulp and paper production. Low slenderness ratio means reduced tear strength. Fibre flexibility dictates the burst and tensile strength as well as the development of the paper properties that affects printing. High elastic fibres with high flexibility can collapse easily and flatten to produce good surface area contact while elastic fibres collapsed partially to give relative contact and fibre bonding (Riki et al., 2019).
Good quality papers are produced when the Runkel Ratio is less than one. Fibres with higher Runkel Ratio are stiffer, less flexible and form bulkier paper of low bonded areas than fibres with lower Runkel Ratio (Veveris et al., 2004). The higher the Coefficient of Rigidity the lower the tensile power of the paper, conversely the lower the coefficient of rigidity the higher the tensile power of paper. The mean values of the Slenderness Ratio (SR), Flexibility Coefficient (FC), Runkel Ratio (RR) and Rigidity Coefficient (RC) are presented in Table 4. ANOVA result ( Table 2) showed that there were significant variations in all the derived values among the 18 wood species (Table 4).
Generally, the most slender fibre is R. heudolotii while the most flexible fibre is that of P. macrocarpa judging from the FC value of 0.81 (Table 4). M. excelsa had the highest RR of 1.5 while R. heudolotii had the lowest RR of 0.1. However, some of the wood species showed similarities in their derived values. Generally, the most suitable wood for pulp and paper production based on Runkel Ratio is R. heudolotii due to the lowest RR value of 0.1 (Table 4). R. Dinwoodie (1965), the basis for establishing the suitability of raw material for pulp and paper making is that the Runkel Ratio must be less than one. All the species had RR less than 1 except M. excelsa and P. biglobosa. This indicates that the two species are unsuitable for pulping considering their Runkel Ratio as they are relatively higher than the standard (Xu, Wang, Zhang, Fu & Wu, 2006) (Table 4). A higher Runkel ratio gives lower burst, tear and tensile indexes (Bektas, Tutus & Eroglu, 1999). Fibre with a high Runkel ratio value is stiff, less flexible and forms bulkier paper of low bounded area than the lower ratio fibre. Therefore, it is expected that P. biglobosa and M. excelsa produce poor paper. The RR values reported in this work are similar to other Nigeria timbers reported in previous works (Ezeibekwe, Okeke, Unamba & Ohaeri, 2009;Awaku, 1994;Ogunkunle, 2010;Oluwadare and Sotannde, 2007;Ajuziogu, Nzekwe & Chukwuma, 2010;Sadiku and Abdukareem, 2019).
The fibre flexibility (elasticity coefficient, or Istas coefficient) of the species are 0.50-0.81. Depending on the elasticity rate, fibres were grouped into four Istas followings, Heremans & Roekelboom (1954) and Bektas et al. (1999) grouping. According to this grouping, all the species are not rigid. They were all elastic with R. heudolotii and P. macrocarpa exhibiting high elastic nature. All the wood species have their flexibility/elasticity coefficient ≥ 50 and are therefore included in the elastic fibre group (Table 7). Rigid fibres do not have efficient elasticity and are not suitable for paper production except for cardboard production (Akgül and Tozluoğlu, 2009). It is expected that pulp made from all the wood species would have a greater inter-fibre bond and hence greater tensile strength, which favours those properties that affect printing (Ogunjobi et al., 2014). This range is almost similar to Brindha, Vinodhini & Alarmelumangai (2012), where 0.60 (60%) was reported as well as similar for some and higher than some Nigerian Guinea Savannah Timbers (Sadiku and Abdukareem, 2019). Considering the FC > 0.55 (55%) acceptable value for paper-making fibre (Bektas et al., 1999), all the species would be suitable. However, a flexibility ratio between 50 and 70 implies that the fibres can easily be flat and give good paper with highstrength properties (Brindha et al., 2012).
Fibre slenderness significantly influenced the pulp sheets breaking length, bursting, tearing and stretching (Ogunjobi et al., 2014). All the species had good Slenderness Ratio as they all pass the SR > 33 acceptable value for paper-making fibre according to Xu et al., (2006). However, Bektas et al. (1999) show that if Slenderness Ratio is lower than 70, it is invaluable for quality pulp and paper production (Bektas et al., 1999). But, if the Slenderness ratio is higher than 70, it can be utilized for pulp and paper production. Generally resistance to tearing increases with increasing fibre slenderness. Paper made from all the species is expected to have increased tear strength suitable for wrapping and packaging purposes (Sankia et al., 1997).
The Rigidity Coefficient (RC) of the fibres varied from 0.16 to 0.7. The RC might be associated with fibre cell wall thickness and fibre diameter used to obtain the equation for RC. These fibres are less rigid compared to those of Guinea savannah species (Sadiku & Abdukareem, 2019). This value is in the range of those reported for Eucalyptus tereticornis (0.63) and Eucalyptus camadulensis (0.53) and Eucalyptus grandis (0.33) (Dutt & Tyagi, 2011) which are conventional paper-making fibres as well as juvenile beech (25.85%) and black pine (13.30%) woods. As the fibre rigidity increases, the physical resistance properties of paper weaken (Akgül & Tozluoğlu, 2009). As hard wood generates thick wall fibres, their Rigidity Coefficients are mostly higher (Hus, Tank & Goksal, 1975). The RC in this study is higher than those reported by the various researchers, which may be due to the ages of the trees from which the woods of these species were cut. Therefore, observed species shows higher RC and they may not be used conveniently for producing high quality writing and printing papers, compared with these of low RC which will be less stiff, more flexible and form lower bulk and well bonded paper. Increasing fibre rigidity results in a decrease in fibre bonding, which results in stiffer, less flexible and form bulkier paper with a lower bonded area, coarse surfaced and containing a large amount of void volume (Dutt & Tyagi, 2011).
F-factor is the fibre length ratio to the fibres cell wall thickness. According to Akgül and Tozluoğlu (2009), the higher the F-factor, the better the fibre is for paper-making. The F-factors reported in this study are extremely lower than those reported for both soft and hardwoods. 140.38 and 240.55 were reported for beech and black pine juvenile woods (Akgül & Tozluoğlu (2009);25.92 and 206.78 for two Populus species (Kar, 2005) which are hardwood species and 606.66 and 410.34 were reported for Pinus brutia and Cedrus libani which are softwood species respectively (Erdin, 1985). The F-factor was low for all the species under study as the F-factors did not exceed 0.61 or 61%. The lower values compared to those reported for hardwoods by other researchers may be attributed to the short length of the fibres and the higher cell wall thickness of the fibres.
C. Classification of the Fibres and the Suitability Rating of the Wood Species for Pulp and Paper Production
The classification was done following Metcalfe andChalk (1983) andAnonymous (1984). They classified fibres below 1.60 mm as short while those above 1.60 mm in length as long. Judging from the fibre morphology, we classified the fibres into four classes: <1.00mm as extremely short fibres; 1.00-1.49 mm short fibres; 1.50-1.99 mm medium long and > 2.00 mm long fibres (Table 5).
Species for Pulp and Paper's Production based on their Derived Morphological Indices
The flexibility coefficient, otherwise known as elasticity coefficient, or Istas coefficient, is a function of the elasticity of the wood fibres. Depending on the elasticity rate, fibres are grouped into four following Istas et al. (1954) and Bektas et al. (1999) grouping (Table 4.17). The wood species were thus grouped following the classification as outlined in Table 7. All the wood fibres are elastic with R. heudolotii and P. macrocarpa have highly elastic fibres.
Similarly, the Runkel Ratio is the most important and primary parameter needed to find the suitability of any raw material for pulp and paper. The standard for this ratio is one (1). Any RR values greater than 1 is termed poor (does not favour pulp strength properties). Favour pulp strength properties are usually obtained when the value is below the standard value. All the wood fibres are excellent pulp and paper materials judging from their RR values. Two species; M. excelsa and P. biglobosa had RR values that were greater than 1 (Table 4.12) F-factor shows the flexibility of fibres. The highest F-factor was observed for P. macrocarpa while the least was for A. indica. (Table 4). The high F-factor values for P. macrocarpa and R. heudolotii place these species well in the ranking at the upper limit among the selected hardwood species. The rigidity coefficient put C. albidium to be less suitable due to its highest rigidity (0.70) while R. heudolotii was most suitable considering the Rigidity Coefficient. In terms of slenderness ratio, P. macrocarpa had the best rating while R. heudolotii was the poorest considering the Slenderness Ratio. (1954) and Bektas et al. (1999)
IV. CONCLUSION
There were significant variations in all the measured fibre properties and derived values. Each of the 18 wood species either falls into short (1.05-1.36), medium long (1.52-1.75) or long (2.0 mm) fibre categories. All the fibres were elastic. All the wood is suitable for papermaking based on > 33 SR acceptable value for paper-making fibres. However, P. biglobosa and M. excelsa are not suitable considering their RR, which are greater than 1. C. albidium is unsuitable due to its highest rigidity, while R. heudolotii was most suitable considering its low Rigidity Coefficient. Regarding Slenderness Ratio, P. macrocarpa had the best rating while R. heudolotii had the poorest. | 2022-11-05T15:52:18.043Z | 2022-10-31T00:00:00.000 | {
"year": 2022,
"sha1": "c529c15c4845386e6987ed4e4102194fa5b49a92",
"oa_license": "CCBYNCSA",
"oa_url": "http://ejournal.forda-mof.org/ejournal-litbang/index.php/IJFR/article/download/6120/5849",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2ec875ef0d810462548b52a0bbad135a67607110",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": []
} |
18536594 | pes2o/s2orc | v3-fos-license | Posterior decompression and short segmental pedicle screw fixation combined with vertebroplasty for Kümmell’s disease with neurological deficits
The aim of this study was to investigate the treatment of Kümmell’s disease with neurological deficits and to determine whether intravertebral clefts are a pathognomonic sign of Kümmell’s disease. A total of 17 patients who had initially been diagnosed with Kümmell’s disease were admitted, one patient was excluded from this study. Posterior decompression and vertebroplasty for the affected vertebrae were conducted. Pedicle screw fixation and posterolateral bone grafts were performed one level above and one level below the affected vertebrae. Vertebral tissue was extracted for histopathological examination. The mean time of follow-up was 22 months (range, 18 to 42 months). The anterior and middle vertebral heights were measured on standing lateral radiographs prior to surgery, one day postoperatively and at final follow-up. The Cobb angle, the visual analog scale (VAS) and the Frankel classification were used to evaluate the effects of the surgery. The VAS, anterior and middle vertebral heights and the Cobb angle were improved significantly one day postoperatively and at the final follow-up compared with the preoperative examinations (P<0.05). No significant differences were observed between the one-day postoperative results and those at final follow-up (P>0.05). The neurological function of all patients was improved by at least one Frankel grade. All patients in this study exhibited intravertebral clefts, and postoperative pathology revealed bone necrosis. One patient (not included in this study) showed an intravertebral cleft, but the pathology report indicated a non-Hodgkin’s lymphoma. The intravertebral cleft sign is not pathognomonic of Kümmell’s disease. Posterior decompression with short-segment fixation and fusion combined with vertebroplasty is an effective treatment for Kümmell’s disease with neurological deficits.
Introduction
Delayed post-traumatic vertebral collapse, characterized by painful kyphosis that develops several weeks or months following an injury after a symptom-free period, was first publicly presented by the German surgeon Hermann Kümmell in 1895 (1). The development of the disease has three phases. In the first phase, patients initially experience back pain, which subsides and leads to an asymptomatic period. In the second phase, the pain recurs weeks to months after the initial incident without further apparent trauma. In the third phase, patients develop progressive angular kyphosis. With the advent of radiography, the progressive angular kyphosis was attributed to a delayed post-traumatic vertebral compression fracture. The outstanding radiological findings of Kümmell's disease consist of an intravertebral cleft (either intravertebral vacuum cleft or fluid collection) combined with a collapsed vertebra. More recently, multiple synonymous terms have been used to describe Kümmell's disease, including delayed post-traumatic vertebral collapse (2,3), vertebral osteonecrosis (4,5), intravertebral pseudarthrosis (4,6), fracture non-union (6) and intravertebral cleft (7). However, whether intravertebral clefts are a pathognomonic sign of Kümmell's disease is controversial. Certain studies have demonstrated that intravertebral clefts are a benign sign, whereas others have reported that intravertebral clefts occur rarely in patients with spinal infection and in patients with multiple myeloma (8,9). For Kümmell's disease with persistent pain and without neurological symptoms, percutaneous vertebroplasty (PVP) (4,10) or kyphoplasty (PKP) (5,6,11) achieves good results. For patients with neurological deficits, PVP and PKP are unsuitable. In the past, anterior decompression with bone grafting fusion (12), posterior decompression with pedicle subtraction osteotomy (PSO) (2,3,13,14) or a combined anterior and posterior approach operation (14) were used, but these procedures have long surgery times and cause increased hemorrhage and multiple complications. In the current study, posterior decompression and vertebroplasty were used to treat Posterior decompression and short segmental pedicle screw fixation combined with vertebroplasty for Kümmell's disease with neurological deficits the affected vertebrae. Pedicle screw fixation and posterolateral bone grafts were performed at one level above and one level below the affected vertebrae for Kümmell's disease with neurological deficits and improved results were achieved. Through a review of the literature (15)(16)(17)(18) combined with our own study, we intend to further investigate whether intravertebral clefts are a pathognomonic sign of Kümmell's disease and to determine a suitable treatment method for Kümmell's disease with neurological deficits.
Materials and methods
Patients. The cohort consisted of 16 Surgical procedure. The patients were operated on under general anesthesia and placed in the prone position. Pillows were used to support the upper chest and pelvis and the operating table was adjusted to enable maximum extension of the spinal column. This postural reduction generally restored most of the body height of the fractured vertebrae. Using a standard posterior midline approach, pedicle screws were placed promptly into the vertebrae one level above and below the affected vertebra through a distraction rod to restore the vertebral body height further. The diseased vertebral laminae and ligamentum flavum were resected to decompress the spinal cord. A puncture needle was driven into the affected vertebral body to establish a working channel. A biopsy needle was used to collect a specimen. Under fluoroscopic guidance, bone cement was injected into the vertebral body. Intraoperative exploration revealed no compression of the dural sac and no leakage of bone cement in the spinal canal. Posterolateral fusion with autogenous bone grafts from the decompression laminectomy was performed.
Evaluation. Vertebral height was measured in millimeters along the vertebral borders at the anterior and middle of the vertebral body. The Cobb angle was measured as the angle between the upper endplate of the upper vertebra of the fractured vertebra and the lower endplate of the lower vertebra of the fractured vertebra. The visual analog scale (VAS), which ranges from 0 (no pain) to 10 (maximal pain), was used to assess pain severity. Frankel classification was used to assess neurological status and the development of surgical complications was observed.
Statistical analysis. SPSS 17.0 statistical software (SPSS, Inc., Chicago, IL, USA) was used for analysis. The data are presented as the mean ± standard deviation. One-way ANOVA was used to evaluate the changes in the VAS, Cobb angles and vertebral body heights based on the data obtained preoperatively, one day postoperatively and at final follow-up. A multiple comparison was conducted using the least significant difference test. P<0.05 was considered to indicate a statistically significant difference.
Results
Preoperative standing lateral radiographs and intraoperative prone position lateral radiographs, as well as preoperative extension and flexion radiographs of five patients, were compared and it was identified that vertebral height varied with postural changes. Stress view radiographs showed that vertebral height decreased with flexion and increased with extension ( Fig. 1). All 16 patients presented with intravertebral cleft signs during the preoperative examination. The following radiological patterns were identified as signs of an intravertebral cleft: i) a gas-filled transverse band in the vertebral body on a conventional radiograph (5 cases); ii) a gas-filled transverse band in the vertebral body on a CT image (9 cases, 3 of which exhibited adjacent intradiscal gas at the same time) and iii) a gas or fluid signal on an MRI scan (preoperative MRI scans of all 16 patients). In one case, a lumbar MRI T2-weighted image revealed mixed signals of gas and liquid at T12. After 8 min, the thoracic MRI T2-weighted image displayed an apparent inconsistency and a hyperintense fluid signal at T12 (Fig. 2).
The mean surgery time was 110 min (range, 90-140 min), and the mean estimated blood loss was 250 ml (range, 150-500 ml). The mean volume of polymethylmethacrylate (PMMA) was 7.2 ml (range, 4.5-12 ml). A spinal dural tear occurred in one case. Intraoperative biopsies from all 16 cases reported bone necrosis (Fig. 3). Clinically, one patient was identified who had no neurological deficits (and so was excluded from the group), whose CT displayed the vacuum phenomenon (Fig. 4A) and whose MRI scan displayed a liquid signal (Fig. 4B); the pathology report revealed non-Hodgkin's lymphoma (Fig. 4E).
The patients underwent follow-up after 18-42 months (mean, 22 months). The mean VAS score, the anterior and middle height of the affected vertebrae and the Cobb angle improved significantly from prior to the surgery to one day postoperatively (P<0.01). The improvement was maintained from one day postoperatively to the final follow-up (P>0.05; Table I). No patient received a grade A under the Frankel classification. Preoperatively, two patients were classified as grade B, five were grade C and nine were grade D. One day postoperatively, one patient was grade B, three were grade C, seven were grade D and five were grade E. At final follow-up, two patients were grade C, five were grade D and nine were grade E. The neurological function of each patient was improved by at least one level at the final follow-up (Table II). One patient developed a superficial skin infection. No obvious loosening of internal fixation, breakage or bone cement displacement occurred.
Discussion
Maldague et al (19) first reported the intravertebral vacuum cleft sign, and the authors considered gas accumulation (vacuum cleft sign) in the vertebral body on plain X-rays as pathognomonic of Kümmell's disease. The vacuum phenomenon is more evident in the extended position and may reduce or disappear in the flexed position. The gas noted on the plain radiographs was expected to be hypointense on both the MRI T1 and T2 sequences. However, the majority of authors have reported either a homogeneous fluid or gas signal on the MRI sequences of patients with the intravertebral vacuum phenomenon. Malghem et al (8) plausibly explained this phenomenon. Patients with the vacuum sign were serially imaged, and the MRI demonstrated that the initially gas-filled cleft appeared hypointense. However, following prolonged supine positioning, a hyperintense signal appeared on the T2 sequences, indicating the presence of fluid instead of gas. We also observed this phenomenon in one patient. The lumbar MRI T2-weighted image showed a mixed signal of gas and liquid at T12. After 8 min, the thoracic MRI T2-weighted image showed a hyperintense liquid signal at T12, which suggests that the contents (fluid and gas within the vertebral body) are variable over time.
Whether intravertebral clefts are a pathognomonic sign of Kümmell's disease is controversial. Certain studies have demonstrated that intravertebral clefts are a benign sign, whereas others have reported that intravertebral clefts occur rarely in patients with spinal infections and in patients with multiple myeloma (8,9). We identified a patient (excluded from
A B A B
the study) with a CT that displayed a vacuum phenomenon (Fig. 4A) and an MRI that displayed a liquid sign (Fig. 4B).
The patient was diagnosed with Kümmell's disease based on the clinical and radiological signs. The vacuum cleft was filled well with PMMA ( Fig. 4C and D). However, the pathology report revealed non-Hodgkin's lymphoma (Fig. 4E). To the best of our knowledge, no non-Hodgkin's lymphoma with vacuum cleft has been reported. Therefore, intravertebral clefts are not pathognomonic of Kümmell's disease, but they are highly suggestive of the disease. Thus, we consider that it is necessary to confirm Kümmell's disease with bone necrosis under biopsy. The pathogenesis of the vertebral vacuum phenomenon remains controversial and it has been mainly theorized to involve vertebral avascular necrosis (4,19), vertebral fracture nonunion or pseudarthrosis (6) or intradiscal gas leakage through the endplate fractured into the vertebral body (20). In the current study, only two patients had factors that predispose to bone necrosis (long-term corticosteroid application history). The remaining patients had no other predisposing factors. The theory of vertebral avascular necrosis alone does not explain the pathogenesis of the disease. In the current study, nine patients exhibited a gas signal in the affected vertebral body based on CT but only three cases had gas in the adjacent disk. Therefore, the theory that intravertebral gas originates from the adjacent disk alone does not explain the intravertebral vacuum phenomenon. In addition, we compared the preoperative standing lateral radiographs and intraoperative prone lateral radiographs, as well as the preoperative extension and flexion radiographs, of five patients. We found that vertebral height varied with postural changes, in accordance with the report by Yang et al (6). These findings support the theory of vertebral fracture nonunion or pseudarthrosis. Thus, we advocate the complete filling of the cleft with cement to maximize stabilization of the pseudarthrosis. In the current study, the mean amount of cement injected was 7.2 ml. According to the literature, as well as our imaging results and clinical data, the pathogenesis of the vertebral cleft phenomenon requires a combination of avascular bone necrosis, fracture non-healing and adjacent intradiscal gas diffusion.
The treatment strategies for Kümmell's disease differ between patients with neurological symptoms and those without neurological symptoms. For patients without neurological symptoms, the objective is to eliminate motion at the fracture site and restore the spinal curvature. Certain authors have reported that PVP (4,8) or PKP (5,6,9) achieves good clinical results for Kümmell's disease without neurological symptoms. For neurologically impaired patients, the aim of surgery is to decompress the spinal cord, restore the spinal physiological curvature and maintain spinal stability. The surgical modes include anterior, posterior or combined anterior and posterior approaches. Anterior decompression and fusion with intervertebral tricortical graft or ceramic glass spacers has favorable results. These procedures are the most efficient for decompressing the spinal cord since the locus of pathology (deficient anterior and middle spinal columns) is directly addressed, and they provide anterior column support. Anterior approach surgery has a high fusion rate (95.5-100%) and the postoperative kyphosis correction angle has a mean of 10.4-18˚. At final follow-up, the corrected degree decreased by 4.8-8˚. The drawback of the anterior approach in pleural and extrapleural operations is that it may cause pulmonary complications in injuries of the thoracolumbar junction, where most cases of intravertebral vacuum occur, and it may affect gastrointestinal function in retroperitoneal surgery. Moreover, in the anterior approach, the stabilization of the spine may fail due to the osteoporotic bone. Surgeries that use the posterior approach include decompression and PSO (2,3,13,14). The fusion rate of the posterior approach operation is 62.5-100%, and the immediate postoperative kyphosis correction angle is 14.6-25.7˚. The average loss of correction at final follow-up is 2.4-8.8˚. PSO surgery often requires the fixation of the vertebral bodies above and below the affected vertebra; thus, adjacent vertebral disease often occurs. A combined anterior and posterior approach has a good fusion rate (100%), with a kyphosis angle correction of 11.2˚ postoperatively and a loss of 4.2˚ at final follow-up. However, the surgery time is longer (351 min) and the blood loss is higher (2892 ml) (14).
Patients with Kümmell's disease with neurological symptoms are often older and have a variety of diseases; thus, the patients do not easily tolerate the aforementioned surgical methods. Therefore, the development of a minimally invasive and effective treatment is required. Surgeons have performed open posterior decompression and short-segment fixation for Kümmell's disease with neurological symptoms, followed by vertebroplasty (15)(16)(17)(18) or kyphoplasty (17) under direct visualization. This surgical method provides several advantages. Posterior decompression relieves nerve compression with short segment fixation and fusion reduces the fusion segment and the influence of long segmental spinal function. Vertebral bone cement provides anterior support to minimize posterior pedicle screw stress. Furthermore, bone cement leakage may be avoided under direct vision. Matsuyama et al (18) used calcium phosphate cement, which polymerizes at lower temperatures. The results included effective pain relief (from 8.6, preoperatively, to 2, postoperatively, on the VAS), nerve function and kyphosis restoration (vertebral height from 41% preoperatively to 74% postoperatively and 68% at final follow-up). In the current study, we used PMMA for vertebroplasty which achieved effective pain relief (the mean preoperative VAS score of 8.49 was reduced to 2.09 one day postoperatively and 2.29 at final follow-up) and good postoperative kyphosis correction (the anterior and central vertebral body height were enhanced by ~1 cm and Cobb's angle correction was 18.29˚ one day postoperatively). Follow-up examinations were conducted for ≥18 months. At the final follow-up, a slight reduction in the vertebral height and a kyphosis correction of 1.11˚ were observed compared with those at one day after the surgery. However, these differences were not statistically significant. The patients recovered neurologically, and nerve function improved by least one Frankel grade at final follow-up. The mean surgery time was 110 min (range, 90-140 min) and the mean estimated blood loss was 250 ml (range, 150-500 ml). Thus, posterior decompression with short-segment fixation and fusion combined with vertebroplasty is an effective treatment for Kümmell's disease with neurological symptoms, especially for patients who are not able to tolerate long surgery times and massive blood loss. However, a previous study hypothesized that the osteolysis rate among patients with Kümmell's disease is greater than the rate of bone callus formation. Following PVP or PKP, accelerated osteolysis occurs and may displace the bone cement (21). Two case reports have focused on bone cement displacement following PVP (22) or PKP (23) alone for Kümmell's disease without neurological deficits. Therefore, greater numbers of patients and longer follow-up times are required to verify the efficiency of posterior decompression with short segmental pedicle screw fixation and fusion combined with vertebroplasty for Kümmell's disease with neurological deficits. | 2016-05-12T22:15:10.714Z | 2012-11-26T00:00:00.000 | {
"year": 2012,
"sha1": "2e08c5e8b5d7d04af37bea122a50dd221aa3d670",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/etm/5/2/517/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e08c5e8b5d7d04af37bea122a50dd221aa3d670",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85560747 | pes2o/s2orc | v3-fos-license | Correlative Experimental and Theoretical Investigation of the Angle-Resolved Composition Evolution of Thin Films Sputtered from a Compound Mo 2 BC Target
The angle-resolved composition evolution of Mo-B-C thin films deposited from a Mo2BC compound target was investigated experimentally and theoretically. Depositions were carried out by direct current magnetron sputtering (DCMS) in a pressure range from 0.09 to 0.98 Pa in Ar and Kr. The substrates were placed at specific angles α with respect to the target normal from 0 to ±67.5◦. A model based on TRIDYN and SIMTRA was used to calculate the influence of the sputtering gas on the angular distribution function of the sputtered species at the target, their transport through the gas phase, and film composition. Experimental pressureand sputtering gas-dependent thin film chemical composition data are in good agreement with simulated angle-resolved film composition data. In Ar, the pressure-induced film composition variations at a particular α are within the error of the EDX measurements. On the contrary, an order of magnitude increase in Kr pressure results in an increase of the Mo concentration measured at α = 0◦ from 36 at.% to 43 at.%. It is shown that the mass ratio between sputtering gas and sputtered species defines the scattering angle within the collision cascades in the target, as well as for the collisions in the gas phase, which in turn defines the angleand pressure-dependent film compositions.
Introduction
Mo 2 BC is classified as a nanolaminated material with an orthorhombic structure [1][2][3].It shows a unique combination of mechanical properties, such as an elastic modulus of 470 GPa, a ratio of bulk and shear moduli of 1.73, and a positive Cauchy pressure, which are required for hard and wear-resistant coatings with moderate ductility [3,4].Bolvardi et al. [5] successfully synthesized crystalline Mo 2 BC at 380 • C by high power pulse magnetron sputtering (HPPMS) [6] compared to a required temperature of 580 • C during direct current magnetron sputtering (DCMS) [7].The lower deposition temperature for the synthesis of a crystalline thin film by HPPMS was attributed to a larger adatom mobility induced by ion bombardment during HPPMS.
The compositional evolution of binary Ti-B thin films was investigated experimentally and with a Monte Carlo model based on TRIDYN (dynamic transport of ions in matter) and TRIM (transport of ions in matter) codes [12].It was shown that the Ti/B ratio strongly depends on the gas pressure and target-substrate distance, which in a product is proportional to the number of collisions sputtered species experience within the gas phase.The model was extended to Cr-Al-C thin films-a ternary system [29].
Van Aeken et al. [36] developed a Monte Carlo code SIMTRA for the simulation of sputtered particle trajectories in a gas-phase within a definable 3D setup.Collision modelling by interatomic potentials and thermal motion of background atoms are included within the code.
From the above, it can be learned that the deviation of the chemical composition of a thin film and multi-element target can be controlled by the sputtering pressure and gas type.
Within this work, experimental data were compared to a model based on TRIDYN and SIMTRA utilized for Mo-B-C thin films to understand how the gas phase transport affects the thin film chemical composition in a system with large mass differences of the multi-element target constituents.
Experimental Details
Mo-B-C thin films were deposited in a high vacuum chamber assembled from a DN160 six-way cross.A base pressure of <1.1 × 10 −4 Pa was achieved before all depositions with a combination of a rotary-vane (Edwards E2M28, Edwards, Burgess Hill, UK) and a turbomolecular pump (Pfeiffer Vacuum TPU 240, Aßlar, Germany).A self-built magnetron with Ø 90 mm was placed in the center of the chamber.A 6 mm thick Mo 2 BC compound target (Plansee Composite Materials GmbH, Lechbruck am See, Germany) with the composition of 54.3 at.%, 24.2 at.%, and 21.5 at.% of Mo, B, and C, respectively, bonded on a Cu backing-plate, was utilized for the investigations.The target contained a major Mo 2 BC phase with minor Mo 2 C and MoC phases (Figure 1), as measured by a Bruker D8 Discovery general area detector diffraction system (GADDS, Bruker, Billerica, MA, USA) with Cu(Kα) radiation at 40 kV and 40 mA with a constant incident angle of ω = 15 • .The thin films were deposited for 1 h onto grounded, not intentionally heated Si (100) substrates with a size of approximately 15 × 15 mm 2 arranged at different angular positions with respect to the target normal of α {0 • , ±22.5 • , ±45 • and ±67.5 • } (Figure 2).The target-substrate distance was kept constant at 70 mm with respect to the target center point.The DC power of 100 W was applied by an ADL 1.5 kW DC power supply (ADL Analoge und Digitale Leistungselektronik GmbH, Darmstadt, Germany).The Ar and Kr pressures utilized in the depositions are summarized in Table 1.
of the chamber.A 6 mm thick Mo2BC compound target (Plansee Composite Materials GmbH, Lechbruck am See, Germany) with the composition of 54.3 at.%, 24.2 at.%, and 21.5 at.% of Mo, B, and C, respectively, bonded on a Cu backing-plate, was utilized for the investigations.The target contained a major Mo2BC phase with minor Mo2C and MoC phases (Figure 1), as measured by a Bruker D8 Discovery general area detector diffraction system (GADDS, Bruker, Billerica, MA, USA) with Cu(Kα) radiation at 40 kV and 40 mA with a constant incident angle of ω = 15°.The thin films were deposited for 1 h onto grounded, not intentionally heated Si (100) substrates with a size of approximately 15 × 15 mm 2 arranged at different angular positions with respect to the target normal of α ϵ {0°, ±22.5°, ±45° and ±67.5°} (Figure 2).The target-substrate distance was kept constant at 70 mm with respect to the target center point.The DC power of 100 W was applied by an ADL 1.5 kW DC power supply (ADL Analoge und Digitale Leistungselektronik GmbH, Darmstadt, Germany).The Ar and Kr pressures utilized in the depositions are summarized in Table 1.The chemical composition of the deposited films was measured by energy dispersive X-ray spectroscopy (EDX) attached to a JEOL JSM-6480 scanning electron microscope (SEM, JEOL, Tokyo, Japan).The electron gun of the SEM was operated at an acceleration voltage of 5 kV.Each sample was measured 10 times.The statistical uncertainty associated with this EDX quantification of Mo, B, and C was less than or equal to 5% relative deviation.To overcome the unknown systematic uncertainty for light elements in EDX, the samples deposited at 0.66 Pa Ar with α = 0°, −22.5°, −45°, and −67.5° were quantified by time-of-flight elastic recoil detection analysis (ToF-ERDA) and used as a standard for the respective positions.The statistical uncertainty for all ToF-ERDA was <0.4% absolute.In ToF-ERDA, the relative systematic uncertainties in the specific energy loss of the constituents and primary ions of the target are assumed to range from 5% to 10%.Hence, the lower bound of the total measurement uncertainty for the EDX analysis with ToF-ERDA quantified standards ranges from 7% to 11%.The chemical composition of the deposited films was measured by energy dispersive X-ray spectroscopy (EDX) attached to a JEOL JSM-6480 scanning electron microscope (SEM, JEOL, Tokyo, Japan).The electron gun of the SEM was operated at an acceleration voltage of 5 kV.Each sample was measured 10 times.The statistical uncertainty associated with this EDX quantification of Mo, B, and C was less than or equal to 5% relative deviation.To overcome the unknown systematic uncertainty for light elements in EDX, the samples deposited at 0.66 Pa Ar with α = 0 • , −22.5 • , −45 • , and −67.5 • were quantified by time-of-flight elastic recoil detection analysis (ToF-ERDA) and used as a standard for the respective positions.The statistical uncertainty for all ToF-ERDA was <0.4% absolute.In ToF-ERDA, the relative systematic uncertainties in the specific energy loss of the constituents and primary ions of the target are assumed to range from 5% to 10%.Hence, the lower bound of the total measurement uncertainty for the EDX analysis with ToF-ERDA quantified standards ranges from 7% to 11%.
Simulation Details
The angular-resolved chemical composition of the thin films was simulated with a Monte Carlo model based on TRIDYN [37,38] and SIMTRA [36] for the sputtering process and the gas phase transport, respectively.
TRIDYN
The impinging ion energies of Ar + and Kr + ions in the TRIDYN simulation were set according to the experimentally measured target voltages (Table 1).To address the dependence of the surface binding energy from the surface chemistry, a matrix model was introduced [38] and modified [29] for a system containing three elements, as presented in Equation (1), where SBE i is the surface binding energy of the i-th target element at a given target concentration c, c i is the concentration of the i-th target element, and SBV i-j is the surface binding potential of the i-th and j-th elements.SBV i-j are assumed to be constant.Calculated angular distribution functions (ADF) and energy distribution functions (EDF) of the sputtered species are utilized in SIMTRA.
For the determination of the surface binding potentials, an approach based on the energy conservation law [29,38] was used and will in the following be called the energy conservation law approach.In addition, a DFT ab initio-based approach has been employed.
Energy Conservation Law Approach
The surface binding potential of pure elements SBV i-i is assumed to be equal to the enthalpy of sublimation ∆ sub H i .The surface binding potential of the atom pairs SBV i-j is calculated using Equation ( 2), where ∆ f H Mo n B m C o is the enthalpy of formation of the ternary compound and a and b are the stoichiometric factors of the elements i and j.
The energy of formation per formula unit (f.u.) of ∆ f H Mo 2 BC = −1.132eV f.u.used in the simulations was calculated by Bolvardi et al. [4].The enthalpies of sublimation of 6.83, 5.73, and 7.51 eV for Mo, B, and C are given in the elements.datfile of TRIDYN, respectively.In addition, enthalpies of sublimation of 6.81, 5.75, and 7.37 eV for Mo, B, and C, respectively, can be found in [39].
Ab Initio Approach
In addition to the TRIDYN approach, an ab initio approach based on DFT was used for the determination of the respective surface binding potentials.DFT calculations were implemented within the Vienna ab initio simulation package (VASP) [40,41].Perdew-Burke-Ernzerhof (PBE) adjusted generalized gradient approximation (GGA) [42] was used for all calculations with projector augmented wave potential [43].In addition, the tetrahedron method for total energy using Blöchl-corrections [44] and the reciprocal space integration using the Monkhorst-Pack scheme [45] were applied.The utilized k-point grid was 4 × 4 × 4 for the (100) and (001) surfaces and 6 × 2 × 6 for the (010) surface.The cut-off energy was set to 500 eV with an electronic relaxation convergence of 0.01 meV.Considering the matrix model presented in Equation ( 1), the energy required to remove atoms of specific surfaces with different chemical compositions needs to be calculated.( 100) and (001) surfaces, as well as different surface terminations of the (010) surface, are considered in the calculation and are illustrated in Figure 3. Subsequently, atoms are removed from the surface, creating a vacancy.The change in energy is considered to be the surface binding potential of the atom within the respective surface, as shown in Equation ( 3).E i is the energy of the atom i after being removed from the surface, E surface,j vac,i is the energy of surface j with the vacancy of atom i, and E surface j is the energy of surface j without a defect.Within DFT, the surfaces were simulated by a vacuum layer on top of the unit cell with the height of approximately 10 Å for (100) and (001) and 17 Å for (010) surfaces.Calculated SBVs for both approaches are presented in Equations ( 4) and ( 5).
SBV energy conservation law = the surface, E vac,i surface,j is the energy of surface j with the vacancy of atom i, and E surface j is the energy of surface j without a defect.Within DFT, the surfaces were simulated by a vacuum layer on top of the unit cell with the height of approximately 10 Å for (100) and (001) and 17 Å for (010) surfaces.Calculated SBVs for both approaches are presented in Equations ( 4) and (5).
SBV energy conservation law =
SIMTRA
Within SIMTRA simulations, 1 × 10 7 particles for Mo and 5 × 10 6 particles for B and C corresponding to a 2:1:1 target composition were transported.For the simulation setup, a cylinder with a diameter of 0.16 m and a length of 0.334 m was used.The target was positioned in the center of the simulation chamber.Seven circular substrates with a radius of 5 mm were arranged in the chamber corresponding to the actual experimental setup.The gas temperature was set to 300 K.The atomic interaction was described with the Lenz-Jensen screening function implemented in SIMTRA.Gas motion and diffusion is considered within the gas transport.The racetrack profile of the target used for the experimental work was measured by a profilometer and taken into account for the simulations.The simulations were carried out in vacuum (pAr = 1 × 10 −9 Pa) and in Ar and Kr gaseous atmosphere at pressures utilized in the experiments (Table 1).Atoms redeposited on the target during deposition are not sputtered again within the simulation.To overcome this virtual loss of particles, atoms redeposited on the target are distributed on all surfaces within the utilized simulation chamber with respect to the initial particle distribution, including the influence of the angular distribution function.For this, the ratio of deposited atoms on a substrate divided by the total number of sputtered atoms was multiplied by the number of deposited atoms on the target surface and added to the specific substrate.
SIMTRA
Within SIMTRA simulations, 1 × 10 7 particles for Mo and 5 × 10 6 particles for B and C corresponding to a 2:1:1 target composition were transported.For the simulation setup, a cylinder with a diameter of 0.16 m and a length of 0.334 m was used.The target was positioned in the center of the simulation chamber.Seven circular substrates with a radius of 5 mm were arranged in the chamber corresponding to the actual experimental setup.The gas temperature was set to 300 K.The atomic interaction was described with the Lenz-Jensen screening function implemented in SIMTRA.Gas motion and diffusion is considered within the gas transport.The racetrack profile of the target used for the experimental work was measured by a profilometer and taken into account for the simulations.The simulations were carried out in vacuum (p Ar = 1 × 10 −9 Pa) and in Ar and Kr gaseous atmosphere at pressures utilized in the experiments (Table 1).Atoms redeposited on the target during deposition are not sputtered again within the simulation.To overcome this virtual loss of particles, atoms redeposited on the target are distributed on all surfaces within the utilized simulation chamber with respect to the initial particle distribution, including the influence of the angular distribution function.For this, the ratio of deposited atoms on a substrate divided by the total number of sputtered atoms was multiplied by the number of deposited atoms on the target surface and added to the specific substrate.
Experiment
The angle-and pressure-dependent film compositions for both sputtering gases, Ar and Kr, are presented in Figure 4.The target composition is indicated by black solid lines.For both sputter gases, the angle-dependence of Mo is convex, while the lighter elements B and C show a concave angle-dependence.At α ≤ 22.5 • (Figure 2), a deficiency of the heavy element (Mo) and a surplus of light elements (B and C) is measured.Mo exhibits a deficiency of up to 18 at.%,while B and C exhibit a surplus of up to 9 at.% with respect to the target composition.The opposite trend is observed for α ≥ 45 • .Hence, the film composition while sputtering from a Mo 2 BC target is angle-dependent, which was previously observed by Olsen et al. [35] for sputtering (metallic) alloy targets.They explained mass-dependent angular distribution functions by backscattering of light elements on the heavier elements within the collision cascade in the target [35], resulting in an enrichment of lighter elements in directions normal to the target surface.Obviously, Mo cannot be backscattered due to reflective collisions with lighter elements, such as B and C.
Experiment
The angle-and pressure-dependent film compositions for both sputtering gases, Ar and Kr, are presented in Figure 4.The target composition is indicated by black solid lines.For both sputter gases, the angle-dependence of Mo is convex, while the lighter elements B and C show a concave angledependence.At α ≤ 22.5° (Figure 2), a deficiency of the heavy element (Mo) and a surplus of light elements (B and C) is measured.Mo exhibits a deficiency of up to 18 at.%,while B and C exhibit a surplus of up to 9 at.% with respect to the target composition.The opposite trend is observed for α ≥ 45°.Hence, the film composition while sputtering from a Mo2BC target is angle-dependent, which was previously observed by Olsen et al. [35] for sputtering (metallic) alloy targets.They explained mass-dependent angular distribution functions by backscattering of light elements on the heavier elements within the collision cascade in the target [35], resulting in an enrichment of lighter elements in directions normal to the target surface.Obviously, Mo cannot be backscattered due to reflective collisions with lighter elements, such as B and C. Comparing the Mo content of Ar and Kr depositions, a clear pressure-dependence can be seen for Kr, while no significant composition changes were obtained for Ar.For Kr sputtering at α = 0°, the Mo content changes from 36 at.% at 0.09 Pa to 43 at.% at 0.96 Pa.The chemical variation at α = ±45° is less distinct, while at α = ±67.5°, the Mo content variations are within the measurement error.For gas phase scattering of B and C in Kr, the opposite trend is observed regarding the angledependent composition variation.However, the chemical variations due to pressure changes are within the measurement error.It is evident that an increase in pressure leads to a chemical Comparing the Mo content of Ar and Kr depositions, a clear pressure-dependence can be seen for Kr, while no significant composition changes were obtained for Ar.For Kr sputtering at α = 0 • , the Mo content changes from 36 at.% at 0.09 Pa to 43 at.% at 0.96 Pa.The chemical variation at α = ±45 • is less distinct, while at α = ±67.5 • , the Mo content variations are within the measurement error.For gas phase scattering of B and C in Kr, the opposite trend is observed regarding the angle-dependent composition variation.However, the chemical variations due to pressure changes are within the measurement error.It is evident that an increase in pressure leads to a chemical composition closer to the nominal target composition and hence, stoichiometry.In an effort to determine the cause for the here observed sputtering gas-induced composition deviations, simulations were carried out, which allow for an independent analysis of composition deviations caused by sputtering of the target and scattering during the gas phase transport.
Simulations
The angle-and pressure-dependent film compositions with surface binding potentials (SBV's) determined by the energy conservation law and ab initio approaches, as discussed above, are presented in Figure 5 for depositions in Ar.
Coatings 2019, 9, x FOR PEER REVIEW 7 of 13 determine the cause for the here observed sputtering gas-induced composition deviations, simulations were carried out, which allow for an independent analysis of composition deviations caused by sputtering of the target and scattering during the gas phase transport.
Simulations
The angle-and pressure-dependent film compositions with surface binding potentials (SBV's) determined by the energy conservation law and ab initio approaches, as discussed above, are presented in Figure 5 for depositions in Ar.The trend of the experimentally determined angle-and pressure-dependent film composition depicted in Figure 4 is reproduced.The angle-dependence of Mo is convex, while B and C show a concave angle-dependence.Films at α ≤ 22.5° exhibit a deficiency of the heavy Mo and an enrichment of light B and C. As in the experimental data for α > 45°, an opposite trend is observed.The maximum difference in SBVs determined by the energy conservation law and ab initio approaches is 32%.This SBV difference leads to composition differences of less or equal to 0.9 at.% and 1.1 at.% for Mo sputtered in Ar (Figure 5) and Kr (not shown), respectively.The magnitude of these composition differences cannot be resolved by EDX as the expected experimental errors are larger than the composition differences.For all simulations discussed below, SBVs determined by the ab initio approach were employed.
Pressure changes affect the target voltage and hence the ion energies impinging on the target , an opposite trend is observed.The maximum difference in SBVs determined by the energy conservation law and ab initio approaches is 32%.This SBV difference leads to composition differences of less or equal to 0.9 at.% and 1.1 at.% for Mo sputtered in Ar (Figure 5) and Kr (not shown), respectively.The magnitude of these composition differences cannot be resolved by EDX as the expected experimental errors are larger than the composition differences.For all simulations discussed below, SBVs determined by the ab initio approach were employed.
Pressure changes affect the target voltage and hence the ion energies impinging on the target (see Table 1).The influence of the ion energy on the ADF is illustrated in Figure 6.Within these simulations, scattering events during gas phase transport are deliberately not considered by utilizing an Ar pressure of 10 −9 Pa.Hence, these simulations only describe sputtering, specifically the effect of the kinetic energy of Ar + and Kr + on the angle-dependent composition of the sputtered flux.These simulations will therefore be referred to as initial ADFs.Increasing the kinetic energy of Ar + from 314 to 401 eV (by 27%) results in absolute mean composition differences of less than or equal to 0.4 at.% for all simulations.Hence, the absolute, ion energy-induced composition changes in the sputtered flux are on average one order of magnitude smaller than the expected measurement error and hence could not be resolved by EDX measurements.
Coatings 2019, 9, x FOR PEER REVIEW 8 of 13 simulations will therefore be referred to as initial ADFs.Increasing the kinetic energy of Ar + from 314 to 401 eV (by 27%) results in absolute mean composition differences of less than or equal to 0.4 at.% for all simulations.Hence, the absolute, ion energy-induced composition changes in the sputtered flux are on average one order of magnitude smaller than the expected measurement error and hence could not be resolved by EDX measurements.The initial ADF of Mo sputtered by Ar + (see Figure 6) exhibits a convex distribution, resulting in an Mo deficiency of 8 at.% at α = 0° with respect to a nominal Mo content of 50 at.%.At α = ±67.5°,a surplus of 5 at.%Mo is obtained.Both light elements exhibit a concave distribution, resulting in a surplus of 4 at.% at α = 0° and a deficiency of 3 at.%at α = ±67.5°with respect to a nominal light element content of 25 at.%each.Sputtering by Kr + (Figure 6) leads to more pronounced convex and concave distributions for heavy and light elements, respectively.The Mo deficiency and surplus are increased to 14 at.% and 8 at.%, respectively.For both light elements, a surplus of 7 at.%and a deficiency of 4 at.%can be found at α = 0° and ±67.5°, respectively.Compared to Ar, the sputteringinduced differences of ADF in Kr result in larger deviations between the composition of the target and the angle-dependent sputtered flux.These results can be rationalized based on the above discussed mass-dependent reflective collisions within the target surface.In the collision cascade, only B and C can be backscattered by Mo, leading to a preferential ejection of B and C close to the target normal.Mo cannot be backscattered due to a reflective collision with lighter B or C.
Simulations of the film composition that take, in addition to sputtering at the target, the The initial ADF of Mo sputtered by Ar + (see Figure 6) exhibits a convex distribution, resulting in an Mo deficiency of 8 at.% at α = 0 • with respect to a nominal Mo content of 50 at.%.At α = ±67.5 • , a surplus of 5 at.%Mo is obtained.Both light elements exhibit a concave distribution, resulting in a surplus of 4 at.% at α = 0 • and a deficiency of 3 at.%at α = ±67.5 • with respect to a nominal light element content of 25 at.%each.Sputtering by Kr + (Figure 6) leads to more pronounced convex and concave distributions for heavy and light elements, respectively.The Mo deficiency and surplus are increased to 14 at.% and 8 at.%, respectively.For both light elements, a surplus of 7 at.%and a deficiency of 4 at.%can be found at α = 0 • and ±67.5 • , respectively.Compared to Ar, the sputtering-induced differences of ADF in Kr result in larger deviations between the composition of the target and the Coatings 2019, 9, 206 9 of 14 angle-dependent sputtered flux.These results can be rationalized based on the above discussed mass-dependent reflective collisions within the target surface.In the collision cascade, only B and C can be backscattered by Mo, leading to a preferential ejection of B and C close to the target normal.Mo cannot be backscattered due to a reflective collision with lighter B or C.
Simulations of the film composition that take, in addition to sputtering at the target, the scattering events within gas phase transport into account, are shown in Figure 7.The Ar or Kr pressures are identical to the experimental pressures depicted in Table 1.Generally, the experimentally-determined angle-dependent film composition data are consistent with the simulation results.Significant differences between the initial ADF and the ADF obtained after scattering during transport in the gas phase are obtained for Ar and Kr as the pressure is increased by one order of magnitude.An increase in Mo content at α = 0° of 4.7 at.% and 9.7 at.% and for both light elements a decrease of 3 at.%and 5 at.% can be obtained in Ar and Kr, respectively.At α = ±67.5°,no significant pressure-induced impact on the chemical composition can be observed.Generally, the pressureinduced variations in chemical composition are more pronounced in Kr and are in good agreement with the experimentally-determined data.Comparison to the EDX composition measurement error indicates that the pressure-dependent composition variations simulated in Ar cannot be resolved experimentally.
To identify the cause of the here discussed angle-and pressure-dependent film composition variations, the angle-resolved average trajectory lengths of the sputtered species are calculated.The average trajectory length, d, is the mean distance a particle travels from sputtering at the target to deposition at the substrate surface and is maximized for scattering events at large scattering angles and short mean free paths.The pressure-dependence of d is shown in Figure 8 for Ar and Kr.
Increasing the Ar pressure by one order of magnitude results in a relative increase of d at α = 0° of 59%, 31%, and 42% for Mo, B, and C, respectively.The same change in Kr pressure results in a relative increase of d at α = 0° of 111%, 25%, and 29% for Mo, B, and C, respectively.Hence, the average Mo trajectory length is up to 34% larger in Kr than in Ar.
The average number of Mo collisions at the maximum Ar and Kr pressures at α = 0° is 19.4 and 22.9, respectively, exhibiting a relative difference of 18.2%.As the pressure-induced increase in Significant differences between the initial ADF and the ADF obtained after scattering during transport in the gas phase are obtained for Ar and Kr as the pressure is increased by one order of magnitude.An increase in Mo content at α = 0 • of 4.7 at.% and 9.7 at.% and for both light elements a decrease of 3 at.%and 5 at.% can be obtained in Ar and Kr, respectively.At α = ±67.5 • , no significant pressure-induced impact on the chemical composition can be observed.Generally, the pressure-induced variations in chemical composition are more pronounced in Kr and are in good agreement with the experimentally-determined data.Comparison to the EDX composition measurement error indicates that the pressure-dependent composition variations simulated in Ar cannot be resolved experimentally.
To identify the cause of the here discussed angle-and pressure-dependent film composition variations, the angle-resolved average trajectory lengths of the sputtered species are calculated.The average trajectory length, d, is the mean distance a particle travels from sputtering at the target to deposition at the substrate surface and is maximized for scattering events at large scattering angles and short mean free paths.The pressure-dependence of d is shown in Figure 8 for Ar and Kr.At a constant average number of Mo collisions of 19.4, the pressure-induced increase in d of Mo is 23% larger in Kr than in Ar.At an average number of collisions of 10.7 and 12.4 for B and C, respectively, a pressure-induced increase in d of 1.3% and 0.7% was obtained for B and C, respectively.Hence, it is deduced that the average scattering angle of Mo is significantly larger in Kr than in Ar and that the evolution of the angle-and pressure-dependent film composition is determined by the average scattering angle of Mo.Assuming energy and momentum conservation, a mass-dependent expression for the maximum scattering angle of a particle with a mass larger than the gas species is given by Equation ( 6) [47,48], where ϑ is the maximum scattering angle and and are the masses of Mo and the gas atom, respectively.Consequently, maximum scattering angles for Mo of 24.6° and 60.4° in Ar and Kr, respectively, were obtained for the masses of 95.96, 39.95, and 83.80 amu for Mo, Ar, and Kr, respectively [49].Hence, the above deduced larger average scattering angle for Mo in Kr as compared to Ar is caused by the mass ratio between the sputtering gas and Mo.
The simulations carried out within this work allowed an independent consideration of the sputtering process at the target surface, as well as the scattering events within the gas phase transport.Pressure variations over one order of magnitude insignificantly influence the sputtering process, whereas the mass of the impinging ion exhibits a strong impact on the initial ADF.Sputteringinduced differences between the target and thin film composition caused by Kr + are larger compared to sputtering with Ar + , which is in agreement with the sputtering experiments at low pressures.
Gas phase scattering events induced-variations in film chemical composition depend on both the gas pressure and mass of the gas atom.The average trajectory length was shown to be a good indicator for the impact of scattering.To unravel the relative contribution of numbers of collision and average scatter angle, simulations with an identical number of collisions in Ar and Kr of 19.4,10.7, and 12.4, for Mo, B, and C, respectively, were conducted.In Kr compared to Ar a dominant pressure- Increasing the Ar pressure by one order of magnitude results in a relative increase of d at α = 0 • of 59%, 31%, and 42% for Mo, B, and C, respectively.The same change in Kr pressure results in a relative increase of d at α = 0 • of 111%, 25%, and 29% for Mo, B, and C, respectively.Hence, the average Mo trajectory length is up to 34% larger in Kr than in Ar.
The average number of Mo collisions at the maximum Ar and Kr pressures at α = 0 • is 19.4 and 22.9, respectively, exhibiting a relative difference of 18.2%.As the pressure-induced increase in average trajectory length d is caused by the number of collisions, as well as the average scattering angle, simulations were conducted where the number of collisions was kept constant to unravel the contribution of the average scattering angle.For each element, one additional simulation was conducted at a specific Kr pressure (0.89 Pa for Mo and 1.10 Pa for B and C) to match the number of collisions computed for scattering in 0.98 Pa of Ar, which are 19.4,10.7, and 12.4, for Mo, B, and C, respectively.
At a constant average number of Mo collisions of 19.4, the pressure-induced increase in d of Mo is 23% larger in Kr than in Ar.At an average number of collisions of 10.7 and 12.4 for B and C, respectively, a pressure-induced increase in d of 1.3% and 0.7% was obtained for B and C, respectively.Hence, it is deduced that the average scattering angle of Mo is significantly larger in Kr than in Ar and that the evolution of the angle-and pressure-dependent film composition is determined by the average scattering angle of Mo.Assuming energy and momentum conservation, a mass-dependent expression for the maximum scattering angle of a particle with a mass larger than the gas species is given by Equation ( 6) [47,48], where ϑ max is the maximum scattering angle and m Mo and m i are the masses of Mo and the gas atom, respectively.Consequently, maximum scattering angles for Mo of 24.6 • and 60.4 • in Ar and Kr, respectively, were obtained for the masses of 95.96, 39.95, and 83.80 amu for Mo, Ar, and Kr, respectively [49].Hence, the above deduced larger average scattering angle for Mo in Kr as compared to Ar is caused by the mass ratio between the sputtering gas and Mo.
ϑ max = arcsin m gas m Mo (6) The simulations carried out within this work allowed an independent consideration of the sputtering process at the target surface, as well as the scattering events within the gas phase transport.Pressure variations over one order of magnitude insignificantly influence the sputtering process, whereas the mass of the impinging ion exhibits a strong impact on the initial ADF.Sputtering-induced differences between the target and thin film composition caused by Kr + are larger compared to sputtering with Ar + , which is in agreement with the sputtering experiments at low pressures.
Gas phase scattering events induced-variations in film chemical composition depend on both the gas pressure and mass of the gas atom.The average trajectory length was shown to be a good indicator for the impact of scattering.To unravel the relative contribution of numbers of collision and average scatter angle, simulations with an identical number of collisions in Ar and Kr of 19.4,10.7, and 12.4, for Mo, B, and C, respectively, were conducted.In Kr compared to Ar a dominant pressure-induced increase in d of 23% for Mo, compared to 1.3% and 0.7% for B, and C, respectively, was obtained.Hence, the significantly larger average trajectory length of Mo in Kr as compared to Ar at the same number of collisions can be rationalized by the larger average scattering angle of Mo, which in turn controls the evolution of the angle-and pressure-dependent film composition.
Conclusions
The evolution of the angle-resolved composition of Mo-B-C thin films deposited from a Mo 2 BC compound target was investigated experimentally and theoretically as a function of the Ar and Kr pressure.Samples were positioned in a specific angular arrangement from α = 0 • to ±67.5 • with respect to the target normal with a fixed target substrate distance.
Considering the simulated mass-dependent initial angular distribution functions, a convex distribution for Mo was observed, whereas B and C exhibited concave distributions as a consequence of reflective collisions in the collision cascade.B and C can only be backscattered by the heavy Mo leading to the preferential ejection of B and C close to the target normal.Obviously, Mo cannot be backscattered due to a reflective collision with a lighter element.
Within experiments and simulations, the observed change in angle-resolved composition as a result of a by one order of magnitude increased Ar pressure was lower than the expected measurement error and hence, cannot be resolved by EDX.On the contrary, sputtering by Kr + results in significantly larger deviations between the target and the film composition.These deviations can be rationalized based on reflective collisions in the collision cascade.As the Kr pressure is increased, scattering during transport in the gas phase results in angular resolved compositions that approach the target composition.Furthermore, based on considering the relative contributions of the number of collisions and scatter angle to the average trajectory length, it is inferred that the significantly larger average trajectory length of Mo in Kr compared to Ar can be rationalized by an on average larger scattering angle of Mo.It is shown that the mass ratio between sputtering gas and sputtered species defines the scattering angle within the collision cascades in the target, as well as for the collisions in the gas phase, which in turn define the angle-and pressure-dependent film compositions.
Figure 1 . 13 Figure 1 .
Figure 1.XRD pattern of the powder-metallurgically manufactured Mo 2 BC compound target.Small phase fractions of Mo 2 C and MoC were detected.
Figure 3 .
Figure 3. Considered (100), (001) surfaces and (010) surface terminations for the determination of the surface binding potentials in the ab initio approach.The colored spheres represent Mo atoms in purple, B atoms in green, and C atoms in brown.The figure was made with VESTA [46].
Figure 3 .
Figure 3. Considered (100), (001) surfaces and (010) surface terminations for the determination of the surface binding potentials in the ab initio approach.The colored spheres represent Mo atoms in purple, B atoms in green, and C atoms in brown.The figure was made with VESTA [46].
Figure 4 .
Figure 4. Angle-resolved composition evolution of the deposited thin films within the pressure range from 0.1 to 1.0 Pa.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.The average oxygen content was less than 1.5 at.% for all depositions and not considered further.The target composition is marked by the black horizontal lines.
Figure 4 .
Figure 4. Angle-resolved composition evolution of the deposited thin films within the pressure range from 0.1 to 1.0 Pa.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.The average oxygen content was less than 1.5 at.% for all depositions and not considered further.The target composition is marked by the black horizontal lines.
Figure 5 .
Figure 5.The simulated angle-resolved composition of thin films with the Ar pressure range from 0.09 to 0.98 Pa.Considered surface binding energies of the two approaches (left) energy conservation law and (right) ab initio.The ideal stoichiometric target composition is marked by the black horizontal lines.
Figure
Figure The simulated angle-resolved composition of thin films with the Ar pressure range from 0.09 to 0.98 Pa.Considered surface binding energies of the two approaches (left) energy conservation law and (right) ab initio.The ideal stoichiometric target composition is marked by the black horizontal lines.The trend of the experimentally determined angle-and pressure-dependent film composition depicted in Figure 4 is reproduced.The angle-dependence of Mo is convex, while B and C show a concave angle-dependence.Films at α ≤ 22.5 • exhibit a deficiency of the heavy Mo and an enrichment of light B and C. As in the experimental data for α > 45• , an opposite trend is observed.The maximum difference in SBVs determined by the energy conservation law and ab initio approaches is 32%.This SBV difference leads to composition differences of less or equal to 0.9 at.% and 1.1 at.% for Mo sputtered in Ar (Figure5) and Kr (not shown), respectively.The magnitude of these composition
Figure 6 .
Figure 6.Angle-resolved composition evolution of the sputtered flux for different impinging ion energies of Ar + (left) and Kr + (right) ions.The first energy value pertains to Ar + sputtering, the second value to Kr + sputtering.The ideal stoichiometric target composition is marked by the black horizontal lines.
Figure
Figure Angle-resolved composition evolution of the sputtered flux for different impinging ion energies of Ar + (left) and Kr + (right) ions.The first energy value pertains to Ar + sputtering, the second value to Kr + sputtering.The ideal stoichiometric target composition is marked by the black horizontal lines.
Coatings 2019, 9 , 13 Figure 7 .
Figure 7. Angle-resolved evolution of simulated film compositions considering sputtering at the target, as well as scattering during gas phase transport.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.The ideal stoichiometric target composition is marked by the black horizontal lines.
Figure 7 .
Figure 7. Angle-resolved evolution of simulated film compositions considering sputtering at the target, as well as scattering during gas phase transport.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.The ideal stoichiometric target composition is marked by the black horizontal lines.
Figure 8 .
Figure 8. Simulated average trajectory length of the sputtered atoms transported through the gas phase at given Ar (left) and Krypton (right) pressures.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.
Figure 8 .
Figure 8. Simulated average trajectory length of the sputtered atoms transported through the gas phase at given Ar (left) and Krypton (right) pressures.The first pressure value pertains to the Ar depositions, the second value to the Kr depositions.
Table 1 .
Ar and Kr gas pressures and measured target voltages which correspond to impinging ion energies for Ar + and Kr + ions. | 2019-03-25T19:31:36.181Z | 2019-03-22T00:00:00.000 | {
"year": 2019,
"sha1": "cca4c040016f7c8bf82c2ce286de28bad84d6bfe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6412/9/3/206/pdf?version=1553250334",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cca4c040016f7c8bf82c2ce286de28bad84d6bfe",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
263653674 | pes2o/s2orc | v3-fos-license | Defect inspection in semiconductor images using FAST-MCD method and neural network
Most defect inspection methods used in semiconductor manufacturing require design layout or golden die images. Unlike methods that require such additional information, this paper presents a method for automatic inspection of defects in semiconductor images with a single image. First, we devise a method to classify images into four types: flat, linear, patterned, and complex using a cosine similarity. For linear and patterned images, we obtain defect-free images that retain the structure. A flat image is then obtained by subtracting the defect-free image from the input image. The FAST-MCD method then estimates the parameters of the inlier distribution of the flat image and uses them to detect defects. A segmentation neural network is used to detect defects in complex images. Unlike conventional methods that only work on a specific structure, our method classifies structures and finds defects in each structure. We use 16 defective images in our experiments, where our method detects all 16 defective images, while the conventional methods detect fewer defective images.
Introduction
Speed, accuracy, and repeatability are required for defect inspection in semiconductor manufacturing.These requirements are becoming more stringent as the fabrication process has become more sophisticated in recent years.Defects in semiconductors affect the appearance, functionality, efficiency, and stability of devices.Manual inspection is subjective, and its precision depends on the inspector's condition, such as eye fatigue.Therefore, automatic optical inspection continues to improve to detect defects and increase yield in semiconductor manufacturing [12,20].Non-destructive visual inspection is critical in the industry to assist or replace subjective and repetitive manual inspection processes.
B Chang-Ock Lee colee@kaist.eduJinkyu Yu hortensia@kaist.ac.krSonghee Han shee33.han@samsung.com 1 Department of Mathematical Sciences, KAIST, Daejeon 34141, Korea 2 Samsung Electronics, Yongin, Gyeonggi-do 17113, Korea Defect inspection methods in semiconductor images can be classified into four types: model-based algorithm, neural network, Die-to-Database (D2DB) method, and Dieto-Die (D2D) method.Many algorithms have been developed to find anomalies in various images, e.g., phase only transform [1], principal component analysis [6], selfsimilarity [9], discrete cosine transform [33], independent component analysis [34], and A-contrario detection [17].Most of these methods assumed a specific structure, such as flat or patterned, and were proposed to fit the structure.Recently, as neural networks have shown good performances in imaging problems, many methods using neural networks have been proposed [10,55,57,60].However, unlike many neural network problems, semiconductor images do not have benchmark data.Neural networks also have the disadvantage in that they are ambiguous to interpret results.Most methods for finding defects, especially in semiconductor manufacturing, are D2DB methods or D2D methods.Traditional D2DB methods [31,36,48] require preprocessing to align the database and an image.Inspection is then performed using the aligned database.There have been attempts to apply neural network [30,39,42] to the D2DB method, but an alignment step is still needed.Traditional D2D methods [21,50,64] use golden die images to make a difference image to find defects.Like the D2DB methods, neural network [3] is applied to the 123 / Published online: 3 October 2023 The International Journal of Advanced Manufacturing Technology (2023) 129:1547-1565 D2D method, but golden die images are still needed to train the network.In addition, the D2DB and D2D methods are very sensitive to the alignment process.Multiple scanning image method [37] is possible if other sensor images are available.
In this paper, we present a method for inspecting defects that removes the ambiguity of neural networks as much as possible with one image without additional information.First, we present a method for classifying images into four types: flat, linear, patterned, and complex using a cosine similarity.A flat image is an image in which the background excluding defects is almost constant with Gaussian noise.A linear image is an image that is shift invariant in a certain direction.If a particular shape appears repeatedly with a certain period, it is a patterned image.A complex image is the one in which all three of the above characteristics are absent.For linear images and patterned images, we reconstruct defect-free images.Then, a flat image is created by subtracting defect-free image from the input image.Under the assumption of Gaussian noise, a histogram of a flat image follows a normal distribution.Defects are considered as outlier in the distribution and could affect the parameters, so we need to minimize the influence of defects by estimating the inlier distribution.This distribution can be estimated using the minimum covariance determinant (MCD) method [46].The MCD method is a highly robust estimator for multivariate location and scatter.The MCD method finds the part of the data with the minimum covariance determinant consisting only of inlier data.Then, defects are found by threshold from the inlier distribution.We use a segmentation neural network for complex images.Figure 1 shows the typical four images and their defect regions.
The rest of this paper is organized as follows: In Sect.2, we briefly review the literature of model-based algorithm for the single image inspection and explain basic tools.Section 3 describes the classifier that classifies images into four types and two ways to remove the structure for linear and patterned images.A segmentation neural network is also described in Sect.3.There are experimental results for several data sets in Sect. 4. We conclude this paper with remarks in Sect. 5.
Previous works
This section briefly introduces conventional methods for finding anomalies in a single image u ∈ R h×w .As mentioned in Sect. 1, most of these methods work on specific structures such as flat or patterned.
Works for flat images
For flat images, there are several ways to find defects.The simplest method [6] uses the mean and standard deviation of the image.Let μ u and σ u be the mean and standard deviation of the image u, respectively.Then, the binary image ỹ representing defects is obtained using the threshold as follows: The constant c is usually assigned a value between 3 and 5.Because the mean and standard deviation of the entire image are used, the results will vary if the image has a large defect.Therefore, a method for estimating the inlier distribution without being affected by defects is needed, which is presented in Sect.2.2.2.
Another method is to divide the image into two regions using linear discriminant analysis (LDA) [50].This method finds the optimal threshold t * .Let C 0 (t) = {u i j | u i j < t} and C 1 (t) = {u i j | u i j ≥ t} where u i j is the value of u at the pixel (i, j).Let μ i (t) and σ 2 i (t) be the mean and variance for the set C i (t) for i = 0, 1.The farther apart the means of the two sets, C 0 (t) and C 1 (t), the smaller the variance of the sets and the better the division.That is, the object function J (t) can be written as .
Then, we find the value t * which maximizes the object function J (t).Since this method always divides the image into two sets, it is not suitable for applying to defect-free images.
Works for linear images
There is a method to find defects in directional textured images [6].This method uses the principal component analysis (PCA) to separate defects and background structures.
To apply PCA, first the average of the column vectors of the image matrix is set to zero.The normalized eigenvalues are then used to find the directional textured background.If the normalized eigenvalue is greater than 1, the principal component represents defect-free background.Otherwise, the principal component represents defects.This method is invariant to horizontal or vertical shifting, rotation, and illumination changes of the directional texture.However, in the case of an image with a vertical linear structure as shown in Fig. 1b, the linear structure is removed in the process of setting the average of the column vectors to zero, so the defect is judged to be the main structure.Therefore, this PCA-based method is not suitable for images with vertical linear structures.
Works for patterned images
There are several methods to find anomalies in patterned images.The simplest method [9] is to first choose an appropriate patch size for each image.Then, it checks how often the patch centered on each pixel appears in the image.For each patch q, we find the k most similar patches q i for i = 1, . . ., k.Then, the reconstructed patch q is obtained by averaging {q i } as where a is a constant.This method is highly sensitive to the patch size.
Another method finds the lattice vectors which generate the pattern.If the lattice vectors generating the pattern are known, it is easy to remove the pattern.Traditional methods for detecting pattern repetition are to use autocorrelation [32] or fast Fourier transform (FFT) [53].The autocorrelation of an image u, denoted by ac ∈ R h×w , is defined as This autocorrelation ac has the largest value at the origin.Therefore, global thresholding is not suitable for finding peak points.There is a method [32] to find peak points, which uses local maxima of smoothed ac as the peak points.The basic idea of the method using the FFT is that the frequency with the maximum value of FFT is related to the number of repetitions.When the number of repetitions of the pattern is large enough, for example, if (x * , y * ) is the index at which the FFT has the maximum value, the periods in the x and y directions can be approximated by h/x * and w/y * , respectively, so that (h/x * , 0) and (0, w/y * ) can be used as lattice vectors.However, if the image has fewer repetitions of the pattern, (i.e., small x * , y * ), then h/x * and w/y * cannot be said to be approximations of the periods.Therefore, this FFTbased method is not suitable for images with few repeated patterns.
Cosine similarity
In this section, we briefly review the cosine similarity which is widely used in image problems such as face verification and clustering [24,40,58,62].In this paper, it will be used to classify images and find periods in the case of patterned images.
Let u ∈ R h×w be an image, K ∈ R k×l be a kernel, and 1 k×l be a matrix of size k ×l with all entries equal to 1.Then, the cosine similarity C S ∈ R h×w is calculated as where * is the convolution and • is the Frobenius norm.When computing the convolution, we use reflection padding on u to get a cosine similarity C S of the same size as the image u.Note that the square, square root, and division operations are calculated on entry-by-entry.A large entry in C S means that the image u has a kernel-like structure near the same indices.
Minimum covariance determinant (MCD) method
This section describes a statistical technique for estimating inlier distribution.Since the average and covariance matrix are extremely sensitive to outliers, a robust estimator is essential and MCD method [46] is one of the most widely used estimators [22,23].In this paper, for defective images, the MCD method will be used to estimate the inlier distribution without being affected by defects.For the sake of completeness, we introduce the MCD method.Let {x i } n i=1 be a finite sample of data in R d with a distribution F, where d is the number of random variables.The MCD is determined by choosing a subset S = {x i j } s j=1 of size n/2 ≤ s ≤ n, which minimizes the determinant of covariance matrix computed from the subset S.Then, α = 1 − s/n is a portion of samples that is not contained in the subset S. Assume that the distribution F has a density of the form where g : R + → R + is a non-increasing function.Then, F is an elliptically symmetric, unimodal distribution.From the average μ S and covariance matrix S of the MCD-solution S, the average μ and covariance matrix of inlier distribution can be obtained by with the consistency factor c α as where q α > 0 satisfies Here, denotes the gamma function (see [5] for more details).However, calculating all covariance determinants of n s subsets is too difficult.
FAST-MCD method
In this section, we introduce the FAST-MCD method [47] to quickly find the MCD-solution S. First, we consider the Mahalanobis distance which measures how much each sample point x i deviates.For a given average vector μ and covariance matrix , the Mahalanobis distance of a point x is defined as The main part of the FAST-MCD method is called the concentration step (C-step), which is described in Algorithm 1. Through the C-step, it holds that det( k ) ≥ det( k+1 ).Since the sequence {det( k )} is monotone and bounded below, it converges.However, there is no guarantee that det( k ) converges to det( S ) for the MCD-solution S. Therefore, the FAST-MCD method has different limits for different choices of S 1 (see [47] for more details).Despite the lack of theory for the convergence to det( S ), the FAST-MCD method has been applied to various fields [2] and empirically proven to produce good results [61].In Table 1, we show by example that the FAST-MCD method estimates the inlier distribution well.
Algorithm 1 C-step in the FAST-MCD method.
Let S 1 be an initial subset of size s.Compute the mean vector μ 1 and covariance matrix 1 for S 1 .
S k+1 : the subset of s vectors selected in order of smallest Mahalanobis distance.
Compute the mean vector μ k+1 and covariance matrix k+1 for S k+1 .end while
Methods
In this section, we introduce a method of inspecting defects in a single image.First, we propose a method using the cosine similarity to classify images into four types: flat, linear, patterned, and complex.For linear and patterned images, we present how to reconstruct defect-free image.A flat image with the structure removed is obtained by subtracting defectfree image from input image.Then, we use the FAST-MCD method to detect defects in flat images.Finally, a segmentation neural network detects defects in complex images.Figure 2 shows the flow chart of our whole algorithm.
Image classification
For convenience, we assume a gray scale image.First, we divide an image u ∈ R h×w into M × N subimages.Then, we calculate the cosine similarity C S i with the kernel K i = i th subimage for i = 1, . . ., M N .A large entry in C S i means that the image u has a K i -like structure near the same indices.We find the region P i = {(x, y) | C S i (x, y) > t i for x = 1, . . ., h and y = 1, . . ., w} for a threshold t i and call it the repeated region.The C S i 's of the flat image and the pattern image are different.Since C S i depends on the structure of the image, the threshold t i to obtain P i must be set adaptively for the image.Therefore, t i is selected a value between the maximum value 1 and the minimum value of C S i , and the results for various ratios between the maximum and minimum values are in Appendix A.1.Here, we use t i = 0.85 + 0.15 min C S i .For each P i , we consider the centroid of K i .Then, we overlap the repeated regions {P i } based on the centroid of each K i and call it the overall repeated region If M or N is so large that the kernel K i becomes smaller than the repeating pattern, the cosine similarity C S i cannot find the pattern.In Appendix A.2, there is a one dimensional example to show that a kernel with a small size cannot find the pattern.
In Appendix A.1, we compute the moment tensor I of the connected region R containing the center of the domain [1, h] × [1, w].If an image has a linear structure, P has a long connected region R, with the large axis ratio defined as the ratio of large and small eigenvalues of I , in the dominant direction defined as the direction of the major eigenvector.If the axis ratio is greater than 25, we determine that the image is linear as discussed in Appendix A.1.
For an image to have a pattern, it must be repeated at least three times.If the pattern is repeated three times in one direction, then P has five high value regions in a straight line.If the pattern is repeated three times in a triangular shape, then P has seven high value regions and can form three straight lines, each containing three high value regions.For each high value region, we can extract a peak point as the centroid of the region.That is, P of a patterned image has at least five peak points on one or two straight lines.
For a Gaussian noised flat image, the histogram of d 2
M
for the MCD-solution S follows the chi-squared distribution.Jensen-Shannon divergence (JSD) is commonly used to measure the distance between two distributions x and y [14]: Note that this JSD(x, y) is bounded by log 2. If the JSD between the histogram of d 2 M for the MCD-solution S and the chi-squared distribution is less than 5 log 2/100, we determine that the image is flat (see Appendix A.3 for more details).Now, images can be classified by using the information of the repeated region as follows: 1. Linear image: Axis ratio of the connected component R ≥ 25. 2. Patterned image: At least 5 peak points in P forming one or two straight lines.3. Flat image: JSD between the histogram of d 2 M for the MCD-solution S and the chi-squared distribution ≤ 5 log 2/100.Figure 3 shows the cosine similarities and overall repeated regions for four types of images.The axis ratio of R is displayed at the top of P. Note that the second row, which has a linear structure, shows a higher axis ratio than others.For the third row with a patterned structure, the yellow line in the last column represents the line passing through five or more peak points including the center of P. Figure 4 shows the results of our image classification method.
Defect inspection by image type
As mentioned in Sect. 1, for linear and patterned images, we present two methods to reconstruct defect-free images.The difference between the defect-free image and the input image becomes a flat image containing defects.For flat images, we use the FAST-MCD method to estimate the inlier distribution and find the defects.The segmentation neural network is applied to inspect complex images.
Removal of structure in linear images
An image in which the axis ratio of the connected component R is greater than 25 is judged to have a linear structure.For linear images, we compute the direction of the major 4 Examples of images for four types: flat [37], linear [11,37,52], patterned [41,63,67], and complex [18,50,51].(All flat images and third linear image have permission from IOP Science, and first and second complex images have permission from Elsevier and Springer, respectively) eigenvector of the moment tensor I to get the dominant direction.The defect-free image can be obtained by taking the median of average intensity of the dominant line and the intensities at both ends along the dominant line.If the line has no defects, the average value is chosen as the median.Otherwise, one of the two end values is chosen as the median.Then, we can obtain a flat image by subtracting the defectfree image from the input image.Finding defects in the flat image can be done as in Sect.3.2.3.Figure 5b shows a long connected region R colored by green.It has an axis ratio 77.039 > 25 and a vertical major eigenvector.Hence, it is classified as a linear image.Figure 5d shows the defectfree image obtained using the dominant lines with the same direction as the major eigenvector.In Fig. 5e, the defects are prominent in the flattened image where the linear structure is removed.
Removal of structure in patterned images
If an image is not linear, we consider a straight line passing through the center of overall repeated region P.As mentioned in Sect.3.1, if there exist one or two straight lines passing through at least five peak points, the image is judged to have a patterned structure.For a patterned image, we extract two lattice vectors {w 1 , w 2 } (see Appendix A.4 for details on how to extract the lattice vectors).Depending on the pattern of the image, one lattice vector can be the zero vector.Using the two lattice vectors {w 1 , w 2 }, we create lattice points (see Algorithm 2).We overlap the image u ∈ R h×w so that the top left of the image is located at each lattice point.After taking the average values of the overlapped images, a defect-free image can be obtained by cropping the averaged image of size h × w in the middle.Since the defects do not appear repeatedly, the average image gives a defect-free Algorithm 2 Lattice points generation. Assume , and , with the notation x for the smallest integer greater than x.else[ image.Then, a flat image can be obtained by subtracting the defect-free image from the input image.Finding defects in the flat image can be done as in Sect.3.2.3.Figure 6 shows a graphical description of lattice point generation.We overlap the image u ∈ R h×w so that the top left of the image is located at each lattice point.The figure shows when the top left of the input image is placed on the orange dot.After the overlapping process, the averaged image is obtained.Then, we can obtain a defect-free image by cropping the green box. Figure 7c shows the lattice points generated from Algorithm 1.The lattice points appear regularly in the upper right corner of each patterned circle.In Fig. 7e, the defects are prominent in the flattened image where the patterned structure is removed.
Detecting defects in flat images
For the image which is not linear or patterned, we check whether the image is flat.To do this, we compute the JSD between the histogram h S (x) of d 2 M for the MCD-solution S and the probability density function of chi-squared distribution.If the JSD is less than 5 log 2/100, then the image is judged to have a flat background.This section describes how to find defects in flat images using the FAST-MCD method in Sect.2.2.3.
A histogram of a flat image with Gaussian noise follows a normal distribution N (μ, ) with g(r 2 ) = e −0.5r 2 (2π) d/2 having a negative derivative.Therefore, the consistency factor c α in (2) can be used to estimate the inlier distribution.Table 1 shows the estimate results of the gray scale flat image in Fig. 1a with for estimating the inlier distribution.Since we assume the Gaussian noise, the square of the Mahalanobis distance, d 2 M (x i , μ, ), of inlier part follows a chi-squared distribution.We find defects with a threshold d 2 M (x i , μ, ) > χ 2 1, p .Here, p can be adjusted according to the level of defect detection.For example, p = 0.99 means that approximately 1% of the area is detected in the defect-free flat image.For our purpose, a defect-free image should be judged to be defectfree.Therefore, we use the threshold for detecting defects as p = 1 − 1/(4hw) for an h × w image.Here, the use of p = 1 − 1/(4hw) means that about 0.25 pixel is detected in a defect-free flat image, regardless of the size of the image.From now on, we will use α = 0.25 and p = 1−1/(4hw) for gray scale images.Figure 8b shows that the MCD-solution S contains no defects.Figure 8d shows that the histogram of d 2 M for S is similar to the chi-squared distribution.(i.e., MCD-solution S follows a Gaussian distribution).
Detecting defects in complex images
An image that is not judged flat, linear, and patterned is called complex and inspected for defects through a segmentation neural network.Let f θ : R h×w → [0, 1] h×w be a segmentation network with parameters θ, which gives a probability output.Let y = f θ (x) be the probability output of an input image x.Let ŷ be a ground-truth (label) segmentation region of the input x: ŷi j = 1 if (i, j) belongs to the target region and 0 otherwise.
The dice score (DS) is used to measure the performance of segmentation problems, which is defined by .
It takes maximum value 1 when A = B. Similarly, DS(1 − A, 1 − B) gives the performance of background
is used to evaluate multiple class segmentation [8], and it can be used to measure the tiny segmentation.It reduces the well-known correlation between the dice overlap and region size.From this GDS, the generalized dice loss is widely used in small segmentation problems in the form [54]: Originally, the weights w D and w B are determined by the ratio of the total training dataset, like weighted cross entropy loss, thus the same weights w D and w B are used for all images.But there is a difference between cross entropy loss and dice loss.Since the cross entropy loss is calculated for each pixel, weights can be given using the number (area) of pixels of each class in the total training dataset.On the other hand, as the dice loss is calculated for each image, it is not appropriate to use fixed weights for dataset with various sizes of class areas.Our training dataset contains images with various defect sizes, such as 0.0002 ≤ 1 hw i j ŷi j ≤ 0.3889.So, instead of using fixed weights, we use adaptive weights for each image We also use the boundary loss [29] L B (y, ŷ) = i j φ( ŷ) i j (y i j − ŷi j ) , where φ( ŷ) is the signed distance function as Here, ∂ ŷ is the boundary of ŷ.Then, our loss function is a weighted sum of these two losses [29]: During training, the weight λ was initially set to 1 and decreased gradually to 0.5 at the end of training.The optimizer Adam is used to minimize the loss function with parameters β 1 = 0.9, β 2 = 0.999, = 10 −8 , and learning rate 0.001.The network architecture based on U-Net [45] with ResNet [19] is shown in Fig. 9.
Figure 10d shows that the histogram of d 2 M for S and the chi-squared distribution are different.This means that some structure is present in the input image.Remark 1 We might consider applying the segmentation network to all images.We observed that in most images, the neural network does not give as accurate results as our proposed mixed method.Also, we do not know what the neural network will do for new, untrained structural images.
Pre-processing and post-processing
Before applying the proposed method, we take a denoising step.In denoising methods based on isotropic diffusion, diffusion at the edge can smear the edge and remove the texture of the object.However, denoising methods based on anisotropic diffusion consider both spatial distance and intensity difference, thus preserving edges while reducing noise in non-edge regions.We use Perona-Malik anisotropic diffusion [44], the most popular model, to denoise the image: where κ is a constant.We use κ = 0.14 and the time increment= 0.1 with five iterations.
If a defect appears on the boundary of the image, it is not known whether it is an actual defect or a part of the structure.Therefore, if a defect appears on the boundary of the image, it is excluded.For the remaining defects, we perform the morphological opening and closing to remove the dot defects (noise) and to connect nearby defects, respectively.We use structuring elements with a radius of 1 pixel for opening and a radius of 5 pixels for closing for 256 × 256 images.If there is a hole inside the defect, we fill it in during the post-process.
Experimental results
The proposed method was implemented to evaluate the performance of defect inspection for images with various structures.Since there is no testing database for semiconductor defects, we used 171 images in the literature [4, 11, 13, 15, 16, 18, 21, 25-28, 35, 37, 38, 41, 49-52, 56, 59, 63, 65, 67]; see [66] for specific image information.The size of images under inspection is 256 × 256.To train the network, we use the following data augmentation strategy: • Normalization and the use of complement, • Eight types of rotation and flipping.
For a 256 × 256 gray scale image u, we use the normalized image ū = (u − μ)/(10σ ) + 0.5 and the complementary image ūc = 1 − ū, where μ and σ 2 are the average and the variance of u, respectively.Then, we rotate the image by π/2, π and 3π/2 radians and flip these vertically.The network is trained with 155 × 2 × 8 = 2, 480 defective images, and the validation dataset has 16 defective images and corresponding defect-free images.We created the defect-free images using the exemplar-based inpainting method [7] and manual processes with some graphical tools.Figure 11 shows the examples of defect-free images generated with graphical tools.We chose the parameters of the neural network with the highest G DS( ỹ, ŷ) (3) for the 16 defective images in the validation dataset, where ỹ is a threshold result for y ≥ 0.5.The weights are the same as in (4).The network is trained on 200 epochs with a batch size of 64.
Figure 12 shows the defect inspection results for defective images.We show the input images, ground truths, results of our method, neural network-1 [55], neural network-2 [57], and self-similarity method [9] in order from left to right column.We generated the ground truth with threshold and some manual process.Since we do not have a design layout, we cannot implement the D2DB inspection methods.The neural network-1 (NN-1) extracts features using 2D convolution with kernel size of 5×5, ReLU activation function, and 2×2 max pooling.The NN-1 method gives 32 × 32 outputs.To find the region in 256 × 256 input images, we use the bicubic interpolation.The second neural network-2 (NN-2) method which takes 512 × 512 images as inputs has a W-shape cascaded autoencoder architecture.Each U-shaped autoencoder uses a dilated 2D convolutions and ReLU activation function to extract features and has skip connections.To use this network architecture, we obtained 512 × 512 images using the bicubic interpolation again.We trained the parameters in the networks with our dataset.The number of parameters in the network is 15.4M, 61.9M, and 11.7M in order of NN-1, NN-2, and our method.NN-2, a full machine learning method, is inferior to our method despite using 5 times more parameters.For the self-similarity method, NFA= 10 −10 was used.The self-similarity method is known to work well for images with repetitive structures.However, if a specific structure does not appear repeatedly even though it is not a defect, the self-similarity method judges it as a defect.Table 2 shows the mean of GDS, standard deviation (std) of GDS, mean of IoU, and std of IoU for 16 defective validation images.
Here, IoU score is another metric mainly used in segmentation problems with the formula: where |•| denotes the number of pixels.We also provide the number of true positives for 16 defective validation images and the number of false positives for 16 defect-free validation images.
All model-based algorithms were implemented in MAT-LAB.All neural networks were implemented in Python with PyTorch [43], and all computations were performed on a cluster equipped with Intel Xeon Gold 6148 (2.4GHz, 40C) CPUs, NVIDIA RTX 3090 GPUs, and the operating system Ubuntu 18.04 64 bits.
Conclusion
Unlike the defect inspection methods mainly used in semiconductor manufacturing, we proposed a method of inspecting defects in a single image.Our method consists of classifying images and detecting defects according to each type.The cosine similarity, the moment tensor, and JSD were used to classify the image types.We proposed two methods for removing structures: one for a linear structure and the other for a repeated pattern.For the linear structured image, we found the dominant angle and removed the linear structure by subtracting the median of the average intensity and those at both ends on the dominant line.For the repeated patterned image, we selected two lattice vectors and made the lattice points.By overlapping the input image at each lattice points and averaging them, we obtained the defect-free reference image.From the difference image between the input image and the reference image, we found a flat image.The FAST-MCD method is used to detect defects in flat images.For an image with complex structure, we found the defects using a segmentation network.
Among the existing methods, most model-based inspection methods for a single image assume a special image structure (e.g., flat or patterned).Our method has the advantage of being more general in that it classifies the types of images and finds defects according to the types.Depending on the type of image with defect, it will be possible to know in which process the defect occurred.Machine learning methods have a disadvantage that it is difficult to explain the reason for the result.However, our method reduced the ambiguity of the results by classifying images into four types and then detecting defects in flat, linear, and patterned images by statistical methods and applying machine learning only to complex images.
A.1 Selection of the parameter t i
Here, we experimentally show the role of the parameter t i used in the image classification in Sect.3.1.We obtain the repeated region P i = {(x, y) | C S i (x, y) > t i for x = 1, . . ., h and y = 1, . . ., w} using the threshold t i .We overlap P i based on the centroid of each kernel and call it the overall repeated region P ⊂ [1, h] × [1, w].Since the value of C S i at the centroid of K i is always 1, P i contains the centroid of K i .It means that P contains the center of domain [1, h] × [1, w].For R, the connected region containing the center of domain, we compute the moment tensor I as where the components are defined as For the moment tensor I , we compute eigenvalues and corresponding eigenvectors.If an image has a linear structure, P has a long connected region R, with the large axis ratio defined as the ratio of large and small eigenvalues, in the Figure 13 shows the overall repeated region P with various t i = (1−r )+r min C S i for r = 0.1, 0.15, 0.2, 0.25, 0.3.The axis ratio of R is displayed at the top of each P. The maximum axis ratio of nonlinear images is 7.53, and the minimum one of linear images is 41.83.Therefore, we take the value 25 as the threshold for determination of linear images.Linear images always have an axis ratio greater than 25, regardless of t i values.The peak points of the patterned images are aligned on specific lines.
A.2 Kernel size for cosine similarity
Here, we give a one-dimensional example of the cosine similarity for different kernel sizes.Consider a long vector in which (10110) is repeated.Then, the cosine similarities for the three types of one-dimensional kernels are as follows: Algorithm 3 GHT method of extracting the lattice vectors [32].
Let {v i } n i=1 be the peak points with vector representation in an ascending order of lengths, where v 1 = (0, 0).Set score matrix , with the notation [a] for the rounded integer of a.
. end for end if end for end for Let the entry with the highest value in L be î, ĵ .Select the pair of vectors with the smallest length in {v If the kernel does not include a pattern as in the first case, the repeated region of C S cannot find the pattern.On the other hand, if the kernel represents a pattern as in the second and third cases, the repeated region of C S finds the pattern.Therefore, we suggest small M and N values like 2, 3, and 4 so that the kernel includes the pattern.
A.3 JSD between histogram and chi-squared distribution
We explain the details of the JSD between the histogram of Let {l i } be a set of straight lines passing through the center of P, in descending order of the number of peak points in l i .
For the same number of peak points, the line with the smaller distance between the points comes first.for i = 1 : 2 do Let {v j | j = 1, . . ., n i } be the vectors in the line l i .Find the minimum length vector vi in l i .for j = 1 : n i do .
end for Let ŵi = E 1 [a j ] v j .end for Select the pair of vectors with the smallest length in { ŵ1 , ŵ2 , ŵ1 + ŵ2 , ŵ1 − ŵ2 } as lattice vectors {w 1 , w 2 }. χ 2 1, p is defined to satisfy P(X > χ 2 1, p ) = p for X ∼ χ 2 1 .However, for defective images, the MCD-solution S appears outside the defects, and the maximum value of d 2 M in the MCD-solution S increases.The increment depends on the size of the defects.Therefore, we use a slightly modified chisquared distribution as follows.Let f 1 (x) be the probability density function for χ 2 1 distribution and l be the maximum value of d 2 M in the MCD-solution S.Then, the cropped probability density function f 1 (x) is In Sect.3.1, we measure the JSD between h S (x) and f 1 (x) to check whether h S (x) is close to f 1 (x) or not. Figure 14 shows the JSDs for flat and complex images.For flat images, JSD is below 0.01 log 2. For complex images, JSD is greater than 0.1 log 2. Therefore, we take the value 0.05 log 2 as the threshold for determination of flat images.
A.4 Lattice vector extraction
In this appendix, we briefly describe how to extract the lattice vectors from the overall repeated region P in Sect.3.2.2.Before starting, the locations of peak points are regarded as vectors originating from the center of domain [1, h]× [1, w].The main idea of the generalized Hough transform (GHT) method in [32] of extracting lattice vectors is to build a parallelogram grid with each pair of linearly independent vectors and score how close the peak points are to the grid (see Algorithm 3).The usage of 1/ v i leads to a high score in L for v i with small length.However, this method only uses the vectors v î and v ĵ to determine the lattice vectors and does not use other vectors that are constant multiples of them.
We modified the method slightly to use more linearly dependent vectors to obtain the lattice vectors.We consider a straight line passing through the center of domain.As mentioned in Sect.3.1, a straight line passing through three peak points is judged to have pattern information.If the line passes through more peak points, the pattern is better represented.To find the lattice vectors, we take two straight lines containing the largest number of peak points.If the straight lines have the same number of peak points, we take the line with the smaller distance between the points.The proposed lattice vector extraction algorithm is described in Algorithm 4. The lattice points generated from the lattice vectors extracted by our proposed algorithm are more accurate because the error is reduced by using more linearly dependent vectors.The amount of computation of our proposed extraction algorithm is also less than that of the GHT method.
Figure 15a shows the lattice points for two methods in input image.Blue dots and red dots represent the lattice points generated by the GHT method and our method, respectively.A flattened image obtained with the blue lattice points is shown in Fig. 15b.Near the boundary of the image, traces of structures remain.Figure 15c shows the flattened image obtained by our method.Compared to the GHT method, our method produces more accurate lattice points.
Fig. 1
Fig. 1 Typical four types of semiconductor images and defect regions.(First and third images have permission from IOP Science and IEEE, respectively)
Fig. 2
Fig.2The flow chart of our algorithm
Fig. 3
Fig. 3 Cosine similarities and repeated regions for four types of images.First column shows the input images with 2×2 partitions.Columns 2-5 show each cosine similarity C S i , and blue contours show the repeated region P i .The centroids of the subimages used as kernels are indicated
Fig. 5
Fig. 5 Inspection process of a linear image.a Input image, b overall repeated region P.The red dot indicates the center of P, and the green region shows the connected region R containing the center of the domain.The red lines in c show the dominant lines.The defect-free image is shown in d. e shows the flattened image with linear structure removed.The defects detected by the FAST-MCD method are shown in f
Fig. 6 Fig. 7
Fig. 6 Graphical description of lattice point generation using two lattice vectors w 1 and w 2 which are denoted as red and blue arrows, respectively.The white dots are the lattice points in [1, 2h]×[1, 2w] generated from these two lattice vectors
Fig. 8 Fig. 9
Fig. 8 Inspection process of a flat image.a Input image.b MCDsolution S. The defects detected by the FAST-MCD method are shown in c.The histogram of d 2 M for the MCD-solution S and the probability density function of chi-squared distribution are shown in d
Fig. 10
Fig. 10 Inspection process of a complex image.a Input image.b MCDsolution S. The defects detected by the segmentation network are shown in c.The histogram of d 2 M for the MCD-solution S and the probability density function of chi-squared distribution are shown in d
Fig. 11
Fig. 11 Examples of defect-free images generated by graphical tools.a and c are defective images, and b and d are defect-free images
Fig. 12
Fig.12 Comparison of several methods for defective images: Our proposed method classifies images into four types: flat, linear, patterned, and complex.The red and blue contours show the ground truth and segmentation region, respectively
Fig. 13 Fig. 14
Fig. 13 Experiment results for various t i .The first column shows the input images.Columns 2 through 6 show P for r = 0.1 to r = 0.3.The red dot indicates the center of domain, and green region shows the connected region R containing the center of domain.The blue dots indicate the peak points
d 2 M 2 MAlgorithm 4
for the MCD-solution S and the chi-squared distribution.Let h S (x) be the histogram of d 2 M (x i , μ, ) for the MCDsolution S. For a defect-free image, the maximum value of d in the MCD-solution S is close to χ 2 1,1−α where the quantity Our method of extracting the lattice vectors.
Fig. 15
Fig.15 Lattice points and results of GHT method and our method | 2023-10-05T15:13:38.752Z | 2023-10-03T00:00:00.000 | {
"year": 2023,
"sha1": "e3b647912565defd3a6d16c6b71433bdfc5a72c0",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-2643690/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Springer",
"pdf_hash": "8c8c88a46f2ceac8f9ee847c4f4c9e74bc4bd72f",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
52053772 | pes2o/s2orc | v3-fos-license | Food Bioactive HDAC Inhibitors in the Epigenetic Regulation of Heart Failure
Approximately 5.7 million U.S. adults have been diagnosed with heart failure (HF). More concerning is that one in nine U.S. deaths included HF as a contributing cause. Current HF drugs (e.g., β-blockers, ACEi) target intracellular signaling cascades downstream of cell surface receptors to prevent cardiac pump dysfunction. However, these drugs fail to target other redundant intracellular signaling pathways and, therefore, limit drug efficacy. As such, it has been postulated that compounds designed to target shared downstream mediators of these signaling pathways would be more efficacious for the treatment of HF. Histone deacetylation has been linked as a key pathogenetic element for the development of HF. Lysine residues undergo diverse and reversible post-translational modifications that include acetylation and have historically been studied as epigenetic modifiers of histone tails within chromatin that provide an important mechanism for regulating gene expression. Of recent, bioactive compounds within our diet have been linked to the regulation of gene expression, in part, through regulation of the epi-genome. It has been reported that food bioactives regulate histone acetylation via direct regulation of writer (histone acetyl transferases, HATs) and eraser (histone deacetylases, HDACs) proteins. Therefore, bioactive food compounds offer unique therapeutic strategies as epigenetic modifiers of heart failure. This review will highlight food bio-actives as modifiers of histone deacetylase activity in the heart.
Introduction
Cardiovascular disease (CVD) remains the leading cause of death worldwide [1]. Moreover, CVD and its related co-morbidities financially strain the healthcare system in which total U.S. medical cost is estimated at $656 billion. Costs are expected to rise to $1.1 trillion by 2035 [1]. As a consequence, the American Heart Association (AHA) has initiated strategies aimed to reduce healthcare burdens that entail behavior modifications such as changes in dietary choices [1].
Heart failure (HF) is a cardiovascular condition in which the heart fails to deliver an adequate supply of oxygen-rich and nutrient-rich blood to the body [2]. Currently, 5.7 million U.S. adults are diagnosed with HF with a projected increase to 8 million of U.S. adults by 2030 [3]. Standards of care for the treatment of HF include angiotensin converting enzyme inhibitors (ACEi) and β-blockers [4]. Despite overall improvements in total HF mortality rates over the last several decades due to these therapies, five-year mortality rates post-HF diagnosis remain high at approximately 50% [3]. This further warrants behavioral dietary interventions or novel pharmaceuticals and/or nutraceuticals that effectively prevent and/or treat HF. further warrants behavioral dietary interventions or novel pharmaceuticals and/or nutraceuticals that effectively prevent and/or treat HF.
Multiple stressors including hypertension and inflammation stimulate the heart to undergo remodeling. Cardiac remodeling is characterized by heart enlargement (hypertrophy) and fibrosis (scarring) as well as contractile dysfunction and apoptosis [5]. All of these conditions can contribute to the progression of HF. Standard treatments such as ACEi and β-blockers target intracellular signaling cascades and disrupt cell surface receptors in order to inhibit cardiac remodeling and improve contractile function. For example, β-blockers act as competitive and reversible antagonists of β-adrenergic receptors (β-ARs). HF is associated with adrenergic nervous system hyper-activity that results in stimulation of β-ARs and leads to increased oxygen demand and myocardial work. β-AR hyper-activation ultimately contributes to increased intracellular signaling cascades that drives apoptotic signaling, cardiac enlargement, and cardiac contractile dysfunction. Thus, treatment with β-blockers attenuates these actions and improves systolic cardiac function [2,[6][7][8]. However, inhibition of cell surface receptors and/or intracellular signaling cascades does not account for signaling cross-talk and redundancy, which limits the current therapeutics from completely inhibiting or reversing cardiac dysfunction. In other words, current therapies fail to inhibit all downstream regulators of cardiac disease. This has given rise to drugs that target the epi-genome.
It has been reported that histone deacetylase (HDAC) activity is elevated in models of cardiac remodeling [9][10][11][12]. However, its activity in human heart failure, to our knowledge, has not been reported. Nonetheless, class I and II HDAC inhibitors represent a group of small molecule epigenetic modifiers that have demonstrated efficacy in animal models of HF over the last decade [11,[13][14][15][16][17][18][19]. HDACs remove and histone acetyl transferases (HATs) add acetyl-marks to the ε-amino terminal tails of histones in nucleosomal DNA [20]. Deacetylation of histones via HDACs generally results in heterochromatin formation and gene repression while acetylation via HATs promotes gene expression [20]. Currently, 18 mammalian HDACs have been grouped into one of four classes ( Figure 1): class I (HDAC1, 2, 3, and 8), class II (HDAC4, 5, 6, 7, 9, and 10), class III (SIRT1-7) and class IV (HDAC11). HDAC classes I, II, and IV require zinc as a cofactor to catalyze deacetylase activity while class III HDACs, which is also known as the sirtuins, require the cofactor nicotinamide adenine dinucleotide (NAD + ). Class II HDACs are further subdivided into IIa (HDAC4, 5, 7, and 9) and IIb (HDAC6 and 10) [21]. Unlike class I and II HDACs, activation of class III HDACs (sirtuins) appears cardio-protective [22,23]. As such, a majority of this review will focus on the regulation of lysine acetylation via zinc-dependent HDACs.
HDAC Inhibitors
HDAC inhibitors were originally studied in cancer since different cancer cells expressed patterns of histone hypo-acetylation. Cancer cell hypo-acetylation has been associated with cancer
HDAC Inhibitors
HDAC inhibitors were originally studied in cancer since different cancer cells expressed patterns of histone hypo-acetylation. Cancer cell hypo-acetylation has been associated with cancer progression. Treatment with HDAC inhibitors ameliorated cancer hypo-acetylation along with several hallmarks of cancer including proliferation and cancer cell survival [24,25]. Since these early studies, four HDAC inhibitors -Vorinostat, Romidepsin, Panobinostat, and Belinostat -have been approved by the US Food and Drug Administration (FDA) to treat T-cell lymphoma. At least 12 more HDAC inhibitors are in clinical trials for various cancers [26][27][28][29]. In addition, valproic acid, which is a short-chain fatty acid HDAC inhibitor, has been approved to manage epilepsy [30]. However, there are no HDAC inhibitors currently on the market or in clinical trials for the treatment of CVD/HF.
The classic zinc-dependent HDAC inhibitor structure is characterized by a cap, which is a zinc-binding domain within the active site and a hydrocarbon linker that connects the cap and binding domain [31][32][33]. Moreover, HDAC inhibitors have been categorized into five chemical classes known as hydroxamic acids, short-chain fatty acids, benzamides, ortho-aminoanilides, and cyclic peptides [33,34]. Differences amongst HDAC inhibitors include toxicity and potency [33,35]. For example, hydroxamic acids such as Vorinostat exhibit strong chelating properties that allow for pan-HDAC inhibition at nanomolar concentrations. Conversely, short-chain fatty acids such as valproic acid, exhibit weaker potencies with inhibition observed at milli-molar concentrations. In addition, while short-chain fatty acids elicit physiochemical properties that allow for easy uptake and transportation, they lack specificity and, therefore, have multiple off-target actions [30,33]. Benzamines and ortho-aminoanilides are structurally similar and are often selective of class I HDACs. Lastly, cyclic peptides such as Romidepsin are characterized by many alkyl-binding and chelating-binding properties that permit their high potency [36]. This review will primarily discuss class I and II HDACs and HDAC inhibitors.
HDAC Inhibitors and Heart Failure
The role for HDACs in the heart have been researched for over a decade. Mechanisms and functions of HDACs in the heart are complex and actions differ between HDAC classes and experimental techniques as well as genetic versus pharmacological inhibition. For example, results from in vitro and in vivo experiments have suggested that class IIa and III HDACs are cardio-protective where pharmacological or genetic inhibition contributes to cardiac dysfunction [22,37,38]. Classical genetic loss-of-function studies demonstrated that class IIa HDACs bind the transcription factor myocyte enhancer factor-2 (MEF-2) that resulted in transcriptional repression of hypertrophic genes. Knockout of class IIa HDACs, HDAC4 and 5, resulted in MEF-2 transcriptional activation and dilated cardiomyopathy [10,38,39]. These studies ultimately demonstrated that in response to stress, calcium-mediated activation of calmodulin-dependent protein kinase (CaMK) stimulated the dissociation of class IIa HDACs from MEF2, which resulted in MEF2 activation and pathological cardiac hypertrophy [40].
Like class IIa HDACs, early loss-of-function studies suggested a critical developmental role for class I HDACs where whole animal knockout of HDACs 1, 2 or 3 was shown to be embryonic or perinatal lethal [11,[41][42][43]. Cardiac-specific knockout studies of HDACs 1, 2 and 3 was also lethal in a TAC-induced model of heart failure with lethality observed in rodents at postnatal day 14 [11]. In contrast to class IIa HDACs, however, small-interfering RNA-mediated knockdown of class I HDACs attenuated cardiac hypertrophy in cell culture [19,44]. Since these early studies, class I HDAC activity has been further observed to increase with cardiac remodeling and dysfunction [12,45,46]. These observations suggest multiple actions for class I HDACs in addition to their deacetylase function.
Not surprising then, pan-and class I-selective HDAC inhibitors are efficacious in pre-clinical models of HF. Trichostatin A (TSA), for example, is a pan-HDAC inhibitor that has been shown to inhibit pathological cardiac hypertrophy and fibrosis [47]. While TSA has been shown to regulate histone hyper-acetylation and gene expression [48,49], its actions on pathological heart enlargement appear to be regulated, in part, through inhibition of mitogen-activated protein kinase (MAPK) signaling [50]. These data would suggest epigenetic and non-epigenetic (e.g., signaling mediated) mechanisms of action. Similar results were shown when treated with class I-selective HDAC inhibitors in which cardiac hypertrophy and fibrosis were attenuated [19,50,51]. It should be noted Nutrients 2018, 10, 1120 4 of 35 that differences between the class I HDACs, HDACs 1 and 2 can be difficult to distinguish with pharmacological tools. This is due to the high sequence homology between the two HDACs and their redundant actions toward histone targets. The use of genetic and pharmacological tools suggest that inhibition of HDACs 1/2, HDAC3 or HDAC8 in combination or individually attenuated cardiac remodeling and improved cardiac function [19,46,50,52,53]. Therefore, class I-selective HDAC inhibition as opposed to pan-HDAC inhibition may offer better therapeutic strategies with limited off-target consequences.
Like the class I HDACs, class IIb HDAC activity is increased in the heart in models of hypertension [12]. Moreover, genetic or pharmacological inhibition of the class IIb HDAC, HDAC6, improved systolic contractile function independent of cardiac enlargement and fibrosis in a rodent model of hypertension [54]. Similarly, genetic or pharmacological inhibition of HDAC6 was reported to ameliorate cardiac proteotoxicity by preventing protein aggregation through improved autophagy-mediated protein degradation [55]. Unlike class I HDACs, HDAC6-mediated regulation in these studies was directed at sarcomere protein deacetylation [54] or tubulin hyperacetylation [55], which suggests that the class IIb HDAC, HDAC6 regulates cardiac function through non-epigenetic mechanisms.
Lastly, the most recent studies have shown that the FDA-approved HDAC inhibitor Vorinostat as well as Givinostat (ITF2357), which is currently in phase III clinical trials for patients with Duchenne muscular dystrophy, attenuated and even reversed cardiac dysfunction in rabbits exposed to ischemia reperfusion (I/R) injury [16] and in aged mice with diastolic failure [56]. These reports highlight the efficacy of HDAC inhibitors for treating and potential reversing cardiac disease. In addition, these reports relied on HDAC inhibitors that are currently FDA approved or undergoing human clinical trials.
Unfortunately, many identified HDAC inhibitors are expensive to synthesize and are not likely to see human HF trials due to their off-target effects [57,58]. Conversely, nutraceutical phytochemicals provide a cheaper and safer alternative to pharmaceuticals. It was recently delineated that HDAC inhibitors have a common phenyl ring that governs their biological activity [59]. These findings are interesting since multiple phytochemicals in our foods have phenyl rings that drive their bioactivity. This suggests that the chemicals in our foods may improve health via acetyl-lysine modification in addition to their well-established roles in oxidative stress and inflammation.
Phytochemicals
Diet and nutrition play a key role in health and disease in which dietary intervention can ameliorate type II diabetes, cancer progression, and CVD [60]. Poor dietary habits attribute 13.2% to overall CVD mortality in the U.S. [1]. Similarly, hyper-caloric intake is linked to the development of hypertension and type II diabetes, which are two major risk factors for CVD and HF [61]. The American Heart Association, the World Health Organization, and the Academy of Nutrition and Dietetics have stressed that the consumption of fruits, vegetables, and other plant-based foods should compose the majority of one's diet to reduce the risk of developing CVD and other morbidities [62][63][64][65][66]. These foods are high in vitamins, minerals, and phytochemicals that actively participate in biological processes that govern health. This is evident by the lower mortality rates for HF patients on the dietary approaches to stop hypertension (DASH) diet or the Mediterranean diet; these diets emphasize plant-based foods [67]. Unfortunately, plant-based foods that contain beneficial nutrients and phytochemicals are, for the most part, under-consumed in the U.S. [1,62].
Phytochemicals are secondary plant metabolites that are synthesized to help a plant thrive or deter competitors, predators and pathogens [68,69]. Phytochemicals can further interact in human biological processes after ingestion to promote health. Fruits, vegetables, nuts, seeds, legumes, whole grains, herbs and natural spices are common dietary items that contain phytochemicals in varying concentrations. Moreover, phytochemicals and their parent plants have been used in traditional medicines for centuries. Thousands of phytochemicals have been identified to date with more that are likely to be discovered and characterized [70]. Currently, phytochemicals are characterized into one of six different classes: polyphenols/phenolics, alkaloids, N-containing compounds, organosulfur compounds, phytoesterols and carotenoids [71]. Following is a brief description of the different phytochemical groups as well as compounds within these groups that regulate lysine acetylation and their implications in HF. A list of these compounds and their respective roles in the regulation of HDAC activity and histone acetylation can be found in Table 1.
Polyphenols
The structure of polyphenols have been intensively reviewed [105][106][107]. Polyphenols are highly abundant in the plant kingdom and comprise a family of molecules with more than 8000 structural variants. These secondary metabolites contain many aromatic rings with one or more hydroxyl moieties [108]. Hydroxyl groups are classically recognized in oxidation-reduction reactions. Thus, many studies have focused on the anti-oxidant role for polyphenols in CVD [68]. Since polyphenols are among the most abundant bioactive molecules in the plant kingdom, it is not surprising that polyphenols are among the most abundant phytochemicals consumed in the human diet. For this reason, polyphenols are important compounds to study in human health and disease. While oxidative stress and inflammation are the classical targets for polyphenol health protection, recent research indicates an important role for polyphenols in diet-gene regulation [109,110].
Polyphenols are divided by chemical structure into two primary groups: phenolic acids and flavonoids. Moreover, polyphenols are distinguished by their hydroxyl moiety and their aromatic phenyl rings. Phenolic acids contain the subgroups hydroxycinnamic acids and hydroxybenzoic acids while flavonoids contain the subgroups flavanols, flavonols, flavones, flavanones, anthocyanidins, isoflavonoids and proanthocyanidins. Other polyphenol groups include lignans, stilbenes, and quinones. Below, we highlight the role for these polyphenol subgroups and their compounds as epigenetic regulators in the heart.
Phenolic Acids
Studies suggest that phenolic acids are inversely correlated with coronary heart disease mortality and heart attack incidence [111]. Phenolic acids contain two subgroups including hydroxycinnamic acids and hydroxybenzoic acids, which differ in carbon backbone length. Hydroxycinnamic acids contain an additional carbon bond. Both hydroxycinnamic acids and hydroxybenzoic acids contain a functional carboxyl group with potent metal chelation properties [112]. This would imply that hydroxycinnamic acids and hydroxybenzoic acids can chelate zinc in order to inhibit zinc-dependent HDAC activity. Docking studies using HDAC8 confer that the carboxylic group of phenolic acids strongly interacts with the zinc ion, which results in high HDAC inhibition potency [112]. Below, we discuss recent findings regarding phenolic acid HDAC inhibitors in the heart.
Hydroxycinnamic Acids
Caffeic acid is one of the most abundantly consumed hydroxycinnamic acids [72]. Caffeic acid is found in most fruits including the skin of ripened fruit [113]. However, the largest source for caffeic acid consumption is coffee [114]. Coffee has been linked to improvements in CVD where coffee consumption was inversely correlated with death after acute myocardial infarction [73]. These epidemiological findings suggest that coffee and its phytochemicals have cardio-protective effects. Unfortunately, the emergence of energy drinks has given rise to misinformation regarding coffee consumption and arrhythmias. Very high doses of caffeine have been reported to have sympathomimetic effects likely caused by phosphodiesterase inhibition and increased intracellular calcium. In this regard, energy drinks have been linked in numerous case reports with atrial and ventricular tachyarrhythmias. However, coffee consumption of three cups per day did not increase the risk of atrial fibrillation or ventricular arrhythmias [115]. Further in vitro and in vivo reports demonstrated efficacy for caffeic acid in CVD models [116]. Caffeic acid ethanolamide, which is a caffeic acid derivative, ameliorated cardiac oxidative stress in isoproterenol-induced HL-1 cells as well as in isoproterenol-induced cardiac diseased mice [116]. Additionally, caffeic acid attenuated cardiac dysfunction and fibrosis through HDAC regulation [116]. Similar to the pan-HDACi Vorinostat, caffeic acid phenethyl ester attenuated cardiac hypertrophy and ameliorated cardiac dysfunction in I/R-injured rabbits [117]. These therapeutic actions occurred, in part, by inhibiting MAPK activation [118]. Since HDACs have been shown to regulate MAPK activity [50], these data suggest that caffeic acid-mediated inhibition of HDACs protect the heart via MAPK inactivation. More recently, caffeic acid was shown to inhibit class I, IIa, and IIb HDAC activity in cardiac lysate [119]. Unfortunately, no other studies have further examined the role for caffeic acid as a zinc-dependent HDAC inhibitor in heart failure. Further delineation of the cardio-protective actions of caffeic acid and its derivatives would be of great interest due to their high intake through coffee consumption. Additionally, other dietary hydroxycinnamic acids such as coumaric acid and ferulic acid should be examined as regulators of HDAC activity in the heart. Both coumaric acid and ferulic acid have been reported to attenuate pathological cardiac remodeling. In addition, studies suggest that ferulic acid inhibits HDAC activity [74,[120][121][122][123]. Combined, these studies would suggest that hydroxycinnamic acids protect the heart, in part, through direct changes in gene expression. Hydroxycinnamic acids inhibit HDAC activity, which leads to hyper-acetylation of nucleosomal histones.
Hydroxybenzoic Acids
Compared to other phenolic acids, hydroxybenzoic acids are consumed less and have lower phytochemical concentrations within food [124]. However, berries such as blackberries and strawberries are commonly consumed and contain substantial amounts of the hydroxybenzoic acids known as gallic acid and ellagic acid. Black tea is also a good source of gallic acid and is of particular interest due to its large consumption and its correlation with reduced risk for coronary heart disease as well as stroke [125,126]. In addition, these compounds have been examined as nutraceuticals that can protect the heart [127][128][129][130]. For example, gallic acid has been shown to repress cardiac remodeling through the inhibition of genes involved in advanced glycation end products (AGE) in rats [127]. Moreover, Umadevi et al. [127] reported that gallic acid attenuated cardiac fibrosis by inhibiting matrix melloproteinase (MMP) gene expression of MMP-2 and MMP-9. Inhibition of MMP gene expression was linked to decreased inflammation and intracellular signaling cascades nuclear factor kappa B (NF-κB) and extracellular signal-regulated kinase (ERK). HDACs have been reported to regulate both NF-κB and ERK signaling where HDAC inhibition attenuated NF-κB and ERK activity [50,51,131]. These data suggest that cardio-protective actions of gallic acid are partially mediated through HDAC inhibition. Gallic acid was shown to dose-dependently inhibit class IIa and IIb HDAC activity, which resulted in cardiac protection [128]. While this study supports that hydroxybenzoic acid HDAC inhibitors protect the heart through changes in gene expression, the Nutrients 2018, 10, 1120 8 of 35 evidence is far from conclusive. Thus, further studies are warranted to examine the role for gallic acid and other hydroxybenzoic acids on global changes in histone acetylation and gene expression.
Flavonoids
The largest polyphenolic group known as flavonoids are aglycone structures that contain two active phenyl rings, which vary in hydroxylation between its subgroups: flavanols, flavonols, flavones, flavanones, anthocyanidins, isoflavonoids and proanthocyanidins. Currently, there are approximately 6000 flavonoids that are found in fruits, vegetables, herbs and medicinal plants.
Research has shown that diets high in flavonoids reduced a person's risk for developing CVD as well as reduced CVD mortality rates [132,133]. Moreover, a meta-analysis of 15 cohort studies with 386,610 individuals and 16,693 deaths showed flavonoid intake was inversely correlated with CVD mortality in a dose-dependent manner [134]. Such findings confirm the importance of and validate policies directed towards consuming more fruits and vegetables. Notably, reports have shown that flavonoids have metal-binding chelating properties [135,136] and, therefore, suggest potential roles for flavonoids as HDAC inhibitors for cardio-protection.
Flavonols
Flavonols are 3-hydroxy derivatives of flavones and contain a number of commonly studied phytochemicals that include quercetin. Quercetin is the most consumed flavonol and is abundant in tea, apples, onions and berries [137,138]. Quercetin intake is inversely correlated with ischemic heart disease mortality in a dose-dependent manner [139]. In addition, quercetin has been shown to protect against ischemia/reperfusion injury, isoproterenol-induced cardiac injury, aortic constriction-induced cardiac remodeling and diabetic cardiomyopathy [75][76][77]140,141]. Two independent double-blind, placebo-controlled trials demonstrated that quercetin ameliorated hypertension in patients at risk for CVD and reduced plasma oxidized low-density lipoproteins (oxLDLs), which are responsible for atherosclerotic disease [142,143]. Few reports, however, have shown quercetin's mechanistic action of cardio-protection through acetyl-lysine regulation. Hung et al. showed that quercetin attenuated oxLDL-induced atherosclerotic injury by increasing the class III HDAC Sirt-1 [144]. Our lab demonstrated that quercetin inhibited class I and II HDACs in bovine cardiac tissue [119]. Other studies have reported that quercetin can inhibit class I HDACs in cancer cell models and that these actions are, in part, responsible for the anti-carcinogenic actions associated with quercetin [145,146]. As an HDAC inhibitor, quercetin would alter the electrostatic interactions between DNA and histone proteins, which is directly impacting gene expression and, therefore, effecting cellular fate. While the role for quercetin in cardio-protection is undeniable, studies examining the epigenetic impact for quercetin remain underexplored. Thus, further investigation for quercetin as an HDAC inhibitor in cardiac biology is warranted.
Kaempferol is a flavonol found in a variety of foods like teas, tomatoes, hops, grapes, grapefruit, strawberries, broccoli, honey, apples and beans [147]. Kaempferol is the second-most consumed flavonol in the U.S. behind quercetin and is mostly consumed in the form of green and black tea [137]. Similar to quercetin, kaempferol intake is inversely correlated with ischemic heart disease mortality [139] and kaempferol treatment is efficacious in in vitro and in vivo CVD models [78,79,88,148,149]. I/R-induced cardiac injury was ameliorated with kaempferol treatment. This was linked to the inhibition of the MAPK pathway [79,148]. Since HDAC inhibitors have previously been shown to attenuate MAPK signaling in the heart, these data would suggest a potential role for kaempferol as an HDAC inhibitor [50,51]. Kaempferol has also been shown to attenuate cardiac injury and oxidative stress in I/R-injured rats by inhibiting glycogen synthase kinase-3β activation (GSK-3β) [149]. The class I HDAC, HDAC2 was recently shown to regulate GSK-3β signaling [150]. These data support the postulate that kaempferol protects the heart in an HDAC-dependent manner. Consistent with this postulate, kaempferol was recently shown to inhibit HDAC activity, which led to increased histone acetylation [151]. Berger et al. [151] further showed that kaempferol docked to class I HDACs 2 and 8 as well as class IIa HDACs 4 and 7, which suggests that this binding may inhibit HDAC activity. Lastly, we reported that kaempferol inhibited HDAC activity and increased histone acetylation in cardiac lysate [119]. As the next step, experiments are underway to determine if the cardio-protective effects of kaempferol are mediated through HDAC-dependent inhibition. These studies would also examine the impact for green and black tea extracts in regulating HDAC inhibition and cardiac disease even though additional tea compounds would likely impart additive or synergistic actions towards HDAC activity (e.g., EGCG). As others have shown that anti-carcinogenic actions for kaempferol are regulated, in part, through changes in lysine acetylation [152], we anticipate promising findings that would demonstrate that kaempferol-dependent HDAC regulation links diet-gene interactions in an epigenetic-dependent manner in the heart.
Myricitrin and its aglycone, myricetin, are two naturally occurring flavonols that were first isolated in the early 1900s from the bark of the bayberry tree (Myrica nagi) [153]. Bayberry has been a cultural staple in Asian countries for over 2000 years [154] and the tree's therapeutic properties in traditional medicines have led to current studies of these two flavonols. Myricitrin is primarily synthesized in the bayberry tree's fruit, bark and leaves [155] while myricetin is also found in a variety of other foods including tea, wine, berries and vegetables. The majority of myricetin consumption is from tea. However, its intake is quite low in comparison to other flavonoids like kaempferol and quercetin [137]. The bioactivity of myricetin and myricitrin are very similar to each other due to the sharing of functional groups. Both phytochemicals exhibit anti-inflammatory and anti-oxidant properties [154,155], which have been suggested as a major mechanism for their cardioprotective actions [156][157][158]. However, additional studies have reported that cardio-protection for myricitrin and myricetin involve regulation of intracellular signaling cascades and gene expression. For instance, myricetin was shown to attenuate I/R-induced cardiac injury by inhibiting signal transducer and activator of transcription 1 (STAT1) activation [159]. Inhibition of JAK/STAT signaling would be expected to alter gene expression in the heart. Two other reports showed that myricitrin attenuated diabetic cardiomyopathy as well as hyperglycemia-induced cardiomyocyte apoptosis through changes in PI3K/Akt and MAPK signaling [160,161]. Cardiac myocytes exposed to hyperglycemic conditions and treated with myricitrin had reduced apoptosis via Akt-nuclear factor erythroid 2-related factor 2 (Nrf2) inhibition [160]. Similarly, myricitrin attenuated diabetic cardiomyopathy by inhibiting ERK phosphorylation, Nrf2 expression and NF-κB [161]. Since Nrf2 and NF-κB are transcription factors, these data would suggest that myricitrin regulates cardiac gene expression through the regulation of intracellular signaling cascades. HDAC inhibitors have previously been shown to regulate Akt [162], MAPK phosphorylation [50,51] and NF-κB [131]. Only one report to date, however, has shown myricetin and myricitrin regulated lysine acetylation through HDAC inhibition [119]. Thus, investigation into the role for these two compounds as bioactive HDAC inhibitors in the heart is warranted.
Flavones
Flavones are synthesized from flavanones via flavone synthases. These polyphenols distinctly contain a double bond between carbons two and three on the heterocyclic pyran ring (also known as the C ring), which is further attached to an aromatic phenyl ring [163]. Multiple hydroxyl groups that are attached to this phenyl ring provide flavones with their function especially with regard to redox reactions [163]. Flavone consumption is less than flavonols, but these are well-represented in research studies. Apigenin and luteolin, as well as their glycosides, are two of the major flavones currently being investigated in the heart. Apigenin is found in citrus fruits, onions, parsley and chamomile [80]. Several reports have shown that apigenin is cardio-protective [81,[164][165][166]. Similar to other flavonoids, apigenin was shown to attenuate I/R-induced cardiac injury by inhibiting MAPK signaling [81,165] and Nrf2 transcriptional activation [164]. These reports are interesting since they suggest that apigenin protects the heart through intracellular signaling and gene expression. Again, inhibition of HDACs has been linked to MAPK inactivation and control of the transcription factor activation [50,51,131].
In addition, we and others have shown that apigenin inhibits class I HDAC activity [119]. Inhibition of HDAC activity by apigenin has been linked to hyper-acetylation of histone proteins in cancer models that contributes to cancer cell death [167,168]. Collectively, these data would suggest that cardio-protective actions for apigenin is controlled, in part, via HDAC-dependent mechanisms that necessitate epi-genome wide changes in gene expression.
Luteolin is commonly found in celery, parsley, broccoli, onions, carrots, peppers, cabbages and apples [169]. These foods and other plants such as the chrysanthemum flower have been used in traditional Chinese medicine for the treatment of hypertension as well as for treating microbial infections [170]. Unlike other flavonoids, epidemiological studies examining the cardio-protective role for luteolin remains unclear [171,172]. This may partly be explained by the low intake of this flavone in the diet [139]. In the cell culture and rodent models, however, luteolin has shown clear cardio-protection. Mechanistic actions for luteolin generally involve the regulation of sarcoplasmic reticulum Ca 2+ -ATPase 2a (SERCA2a) [82,173,174]. SERCA2a is decreased in the failing heart, which leads to impaired calcium reuptake and cardiac contractile dysfunction [175]. Post-translational modification of SERCA2a has been suggested as critical for SERCA2a function. Modifications from small ubiquitin-related modifier 1 (Sumo1) and phosphorylation via MAPK activation appear vital for SERCA2a-dependent calcium re-uptake into the sarcoplasmic reticulum [175,176]. Recent findings showed that class I HDAC inhibition promoted SERCA2a SUMOylation [177]. This would be expected to improve cardiac contractility. Notably, luteolin was reported to inhibit class I HDAC activity as well as increase lysine acetylation on histone H3 in cardiac myoblasts [119]. Furthermore, docking studies demonstrated that luteolin binds within the catalytic domain of class I HDACs to inhibit HDAC activity [178]. Lastly, luteolin was reported to attenuate cardiac dysfunction by regulating Akt and MAPK signaling [174,179]. Similar to other flavonoids, these data would suggest that luteolin attenuates MAPK phosphorylation by inhibiting HDAC activity and the data suggest that this attenuates cardiac remodeling and dysfunction. This postulate is currently being tested.
Scutellaria baicalensis was used as an herbal remedy in traditional medicine to treat bacterial and viral infections especially hepatitis, but it has since shown efficacy for the treatment of hypertension, inflammation, oxidative stress and cancer [180]. While over 50 flavonoids have been isolated from this mint plant for traditional Chinese and Japanese medicine, baicalin and baicalein constitute its major phytochemicals [181]. These two phytochemicals only differ in that baicalein has a distinguishable aglycone [182]. With regard to the heart, baicalein [183][184][185] and baicalin [83][84][85][186][187][188][189] have shown efficacy in ischemia-induced and isoproterenol-induced cardiac dysfunction. Similar to other flavonoids, baicalein and baicalin elicit cardio-protection by inhibiting oxidative stress and inflammation as well as attenuating MAPK signaling [178][179][180]183,[185][186][187][188]. Baicalein was also reported to inhibit cardiac hypertrophy and fibrosis in mice exposed to aortic constriction [190]. This was partly explained by the inhibition of ERK phosphorylation [190]. Similar results were shown for baicalin in which baicalin-mediated ERK inactivation improved isoproterenol-induced cardiac dysfunction [188], bleomycin-induced pulmonary hypertension [84] and myocardial infarction [189]. These studies did not examine the epigenetic actions for baicalein or baicalin in regulating heart function. However, it has been reported that baicalein can inhibit HDAC4 and HDAC5 [191] while baicalin was shown to inhibit HDAC2 [192] and HDAC1 [193] in various models of disease. These findings demonstrate that baicalein and baicalin act as HDAC inhibitors. Coupled with our more recent findings that baicalein and baicalin inhibited HDAC activity in cardiac tissue [119], these data would suggest that future studies for these two phytochemicals as epigenetic regulators of cardiac function is warranted.
Flavanols
Flavanols or catechins are structurally similar to flavonols but differ in the heterocyclic C ring. Flavanols do not contain a double carbon bond that allows four diastereoisomers to form from two chiral centers [194]. These phytochemicals are commonly found in chocolate, in the skins of apples and berries as well as in teas. Notably, epigallocatechin gallate (EGCG) is a flavanol that is abundant in the leaves of the green Camellia sinensis plant [195]. The compounds in these leaves are mostly consumed as the beverage green tea and have been used in traditional medicines for thousands of years around the world. Epidemiological research suggests that tea consumption is cardio-protective particularly in overweight and obese individuals [196]. Heart function improvements have been linked with the anti-oxidant and anti-inflammatory actions of EGCG, which are attributed to the eight hydroxyl groups on EGCG [86,87,[197][198][199][200][201][202]. In these reports, EGCG was shown to inhibit diabetic cardiac dysfunction [198,199] and chemotherapy-induced cardiotoxicity [86,200]. In addition to its actions as an antioxidant and as anti-inflammatory, EGCG acts as a chelator [203,204]. This suggests that EGCG can interact with and chelate zinc within the catalytic domain of HDACs. In support of this, EGCG has been reported to inhibit HDAC activity even though docking studies have yet to be performed [119]. In addition, EGCG was shown to attenuate age-related cardiac dysfunction, in part, through increased acetylation of histone H3 at the cardiac troponin I promoter. This increased troponin's expression and improved muscle function [205]. Increased histone acetylation was likely due to the inhibition of class I HDAC activity [205]. Additional reports have shown that EGCG inhibited HDAC3 activity in the heart, which also led to FoxO1 hyper-acetylation and attenuation of hyperglycemia-induced apoptosis [206]. FoxO1 plays an important role in apoptosis [9]. Based on these findings and considering that tea is heavily consumed worldwide, it would be interesting to elucidate HDAC activity in the peripheral blood mononuclear cells (PBMCs) of patients before and after green tea consumption. PBMCs have been used as indirect read-outs for disease states in patients with type II diabetes and CVD [207,208].
Flavanonols
Flavanonols are 3-hydroxy derivatives of flavanones and are also known as dihydroflavonols [194]. Phytochemicals identified as flavanonols are sparse within the literature. However, dihydromyricetin is a flavanonol that has been implicated in health and disease [209]. With regard to the heart, reports have shown that dihydromyricetin is protective in I/R-induced cardiac injury [210], angiotensin II-induced cardiac fibrosis [211,212], diabetic cardiomyopathy [213] and lipopolysaccharide (LPS)-induced cardiac injury [214]. Dihydromyricetin elicited its cardio-protective effects, in part, by acting as an anti-oxidant, anti-inflammatory, and an inhibitor of the NF-κB pathway [211][212][213][214]. While no study has examined the role for dihydromyricetin in the epigenetic regulation of gene expression, recent findings from our lab showed that dihydromyricetin inhibited HDAC activity [119]. These data, while preliminary, highlight the potential for dihydromyricetin as an epigenetic modifier of gene expression for the prevention and or treatment of cardiac disease.
Proanthocyanidins
Proanthocyanidins are abundant in the diet since they are found in fruits such as grapes, peaches, apples, pears and berries as well as wine, tea and beer [215]. These compounds are the subsequent products of catechins and form dimer, oligomer, and polymer complexes that promote their bioactivity [216]. Studies show that proanthocyanidins protect the heart in models of atherosclerosis. Many of these studies reported anti-oxidant and anti-inflammatory properties for proanthocyanidins [216]. For example, grape seed procyanidin (GSP) was shown to improve cardiac function by inhibiting inflammation and oxidative damage [217,218]. A systematic review/meta-analysis examined GSP intake in regulating blood pressure, heart rate, low density lipoprotein (LDL), high density lipoprotein (HDL) cholesterol, total cholesterol, triglycerides and C-reactive proteins [219]. This report demonstrated that GSP extract lowered systolic blood pressure and heart rate but did not significantly affect other cardiac markers. Other reports have shown that proanthocyanidins are efficacious for treating human hypertension [220]. Consistent with these reports, experimental rodent models of cardiac disease demonstrated that GSP extract protected the heart in response to a high fat diet [217,218,221], doxorubicin-induced cardiotoxicity [222][223][224][225][226], heavy metal-induced cardiac stress [227][228][229], isoproterenol-induced HF [230][231][232] and I/R injury [233][234][235][236][237]. Additional studies reported that GSP extract lowered liver and blood cholesterol as well as triglycerides [238][239][240][241]. This would suggest CVD protection. Moreover, GSP extract was shown to inhibit HDAC activity, specifically HDACs 2 and 3, and increase histone acetylation in the liver [241]. This was suggested to impact nuclear hormone receptor expression and lower serum triglycerides [241]. These results are interesting and suggests that cardio-protective actions of GSP result from HDAC inhibition. This postulate is currently under investigation.
Quinones
Plants contain enzymes including polyphenol oxidase that catalyzes a multitude of reactions such as the oxidation-reduction. Quinones are one product of these reactions and are synthesized from organic, aromatic compounds [242]. Quinones are not aromatic but conjugated and contain at least one benzene-like ring with redox functionality [243]. Anthraquinones are a subgroup of quinones that participate in redox reactions such as the regulation of hydrogen peroxide [243]. Emodin is an anthraquinone that can be found in rhubarb, aloe vera and fo-ti root, which is also known in China as he-shou-wu [244]. Traditional Chinese medicine used these plants to treat viral and bacterial infections as well as bowel abnormalities. Due to its strong redox function and recently discovered anti-inflammatory properties, emodin has been investigated in the heart. Reports showed that emodin inhibited I/R-induced cardiac damage through improvements in the mitochondrial redox regulation [245,246]. Emodin was also reported to attenuate cardiac dysfunction in left coronary artery ligated mice, in part, by inhibiting NF-κB signaling and subsequent inflammation [247]. However, emodin is a strong metal chelator [248], which suggests that emodin can bind to and inhibit zinc-dependent HDACs. Consistent with this hypothesis, our lab published that emodin inhibited HDACs and increased histone acetylation in cardiac myoblasts [119]. Further unpublished data from our lab suggest that emodin inhibits cardiac myocyte hypertrophy, in part, through HDAC-dependent mechanisms. These observations would suggest an epigenetic function for emodin through HDAC inhibition. Our lab is currently investigating the in vitro and in vivo epigenetic implications for emodin and emodin-rich foods like rhubarb to delineate their roles in diet-gene interactions.
Stilbenes
Stilbenes are a small group of phytochemicals that are derived from the phenyl-propanoid pathway via stilbene synthase [249]. While stilbene concentrations are low in the diet, resveratrol is an exception. Resveratrol is found in wine as well as grapes and berries [250] and has been credited for the "French Paradox". CVD rates in France are lower than the rest of the world despite their high intake of saturated fats [251]. Studies suggest that resveratrol is cardio-protective [89,250,252,253]. Resveratrol was reported to attenuate cardiac damage in response to myocardial infarction [254][255][256][257][258], pressure overload [259][260][261][262][263] and hypertension [90,[264][265][266][267][268]. These reports demonstrated resveratrol inhibited oxidative stress and upregulated AMP-activated protein kinase (AMPK) expression and activity [257]. Other reports have confirmed resveratrol improves AMPK levels in the heart [263]. AMPK senses energy needs and stress in the heart. In response to cardiac remodeling, compensatory mechanisms activate AMPK [269]. AMPK activation has been shown to improve cardiac dysfunction [269]. Thus, resveratrol-mediated activation of AMPK is considered cardio-protective. In addition, resveratrol has been shown to stimulate class III sirtuin HDAC activity. This topic has been thoroughly reviewed [91]. Notably, the class III HDAC, Sirt1 regulates AMPK, which leads to a mechanism by which resveratrol-mediated activation of Sirt1 stimulates AMPK expression and activity [257]. Sirt1 is a deacetylase that has been shown to deacetylate lysine residues on histone tails [92]. Thus, most studies have shown that, unlike the phytochemicals discussed above, resveratrol attenuated diabetic cardiac remodeling concomitant with histone H3K9 deacetylation and changes in gene expression. This would suggest that class III HDAC inhibition has negative consequences in the heart. It should be noted that recent proteomic studies have shown that mitochondrial proteins are hyper-acetylated in failing hearts. Moreover, hyper-acetylation of mitochondrial proteins likely result from down-regulation of class III HDACs, which predominantly localize to the mitochondria [270,271]. While these data support a role for resveratrol in the "French Paradox", doses of resveratrol used in these studies significantly exceed concentrations found in the diet [250]. Nutraceutical companies, however, have developed supplements for human consumption. These nutraceutical companies may impart benefits since a recent double-blind, randomized control trial demonstrated that patients that received 500 mg resveratrol had reduced histone H3K56 acetylation, increased anti-oxidant activation in peripheral blood mononuclear cells (PBMCs) and reduced body fat [272]. While resveratrol activates class III HDACs, its role with zinc-dependent HDACs remains less well-studied. Resveratrol was shown to inhibit class I, II and IV HDACs in hepatoma cells [273]. This would suggest that resveratrol can stimulate the activity of class III NAD + -dependent HDACs and can also inhibit zinc-dependent HDACs. Thus, bioactive food compounds may serve multiple epigenetic roles in the control of human health and disease.
Other Polyphenols
Turmeric is a yellow-pigmented spice that has been used in several cultures including Indian and Southeast Asian cultures for centuries. Turmeric was traditionally used to treat inflammation and flu-like illnesses [274]. Turmeric is isolated from rhizomes of the plant Curcuma longa and contains several phytochemicals known as curcuminoids including the well-studied curcumin [275]. Curcumin is a polyphenol that has several hydroxyl groups and two aromatic phenyl rings with each containing a functional methoxy group [275]. Curcumin has been studied for the treatment of many diseases including cancer, Alzheimer's disease, rheumatoid arthritis and cardiac disease [276]. In the heart, curcumin has been shown to attenuate free fatty acid-induced injuries [277], I/R-induced injuries [278], chemo-induced cardiotoxicity [279,280], hypertension-induced cardiac remodeling [281], diabetes-induced cardiac injuries [282,283] and trauma-induced cardiac dysfunction [284]. Moreover, reports suggest that curcumin's cardio-protective effects can be converted to humans [93][94][95][285][286][287]. Of these, curcumin was shown to reduce circulating triglycerides [94,95,287] and improve cholesterol status [94], which are two known risk factors in the development of heart disease. Recently, curcumin was shown to inhibit p300/cAMP response element binding protein (p300/CBP)-mediated GATA4 acetylation through the inhibition of HAT activity [96,288]. GATA4 acetylation by p300/CBP stimulates GATA4 transcriptional activation and promotes pathological cardiac gene expression leading to cardiac hypertrophy [289]. Moreover, adrenergic-agonist-induced cardiac myocyte hypertrophy was attenuated with curcumin treatment concomitantly with GATA4 de-acetylation as well as inhibition of GATA4-DNA binding in hypertensive rats [290]. In addition to its inhibitory actions on HATs, curcumin was shown to act as a pan-HDAC inhibitor targeting zinc-dependent HDACs in cancer [291]. Similar to resveratrol, these data suggest multiple levels of epigenetic regulation for curcumin in regulating diet-gene interactions. These data also highlight curcumin as a promising nutraceutical for CVD and HF. However, continued work on curcumin bioavailability is warranted [292,293].
Alkaloids
Dietary alkaloids are widely consumed. Alkaloids are precursor compounds that can be derived from ornithine, lysine, tyrosine, tryptophan, nicotinic acid and purine [294]. For example, berberine is an isoquinoline alkaloid derived from tyrosine that naturally occurs in edible and herbal plants including Hydrastis canadensis, Coptis chinensis, Berberis aquifolium, Berberis vulgaris and Berberis aristata. Moreover, traditional Indian and Chinese medicines have used berberine-enriched plants for the treatment of viral and bacterial infections [295]. More recently, berberine was shown to attenuate diabetes and improve metabolic function [296,297]. In these studies, berberine improved insulin sensitivity through AMPK activation [296] as well as reduced LDL, total cholesterol, circulating triglycerides and increased HDL in the blood [297]. This is of interest since diabetes and metabolic dysfunction are major risk factors for the development of cardiac disease. In this regard, a bioactive capsule that contained several compounds including berberine hydrochloride was shown to attenuate myocardial fibrosis in diabetic rats. These actions were mediated through the inhibition of TGF-β1/Smad [298]. It should be noted that this capsule contained several phytochemicals and, therefore, the impact for berberine hydrochloride on myocardial fibrosis remains unclear. However, it has been reported that berberine improved cardiac function in hypertensive rats by inhibiting STAT3 binding and promoting STAT5a binding to the promoter region of the relaxin gene. This increased relaxin gene expression and subsequently attenuated cardiac fibrosis [299]. Switching of STAT3 for STAT5a at the relaxin gene promoter is controlled by histone H3 acetylation [300]. This is critical since we found that berberine hydrochloride inhibited class I and II HDAC activity [119]. Combined, these data would suggest that berberine-mediated HDAC inhibition would increase histone H3 acetylation at the relaxin gene promoter to inhibit cardiac fibrosis. Further examination of this hypothesis in the heart would be interesting and would provide epigenetic mechanisms by which berberine regulates gene expression.
Danggui Longhui Wan is an active alkaoloid that has been used for more than 4000 years. Danggui Longhui Wan was the customary treatment for chronic myelocytic leukemia and has had moderate success in leukemic disorders without major side effects [301]. The primary bioactive phytochemical in the medicinal recipe, indirubin, has since been isolated and characterized with several aromatic rings. The role for indirubin in cancer has been extensively reviewed [302]. With regard to the heart, indirubin and its derivatives protect against hyperglycemia-induced cardiac injury, aortic constriction-induced hypertrophy, I/R injury, hyperlipidemia-induced cardiac injury and diabetes-induced cardiomyopathy [97][98][99][100]303,304]. Cardiac protection was shown to be mediated in part through the attenuation of c-Jun-N-terminal kinase (JNK) signaling, caspase-3-directed apoptosis, and NF-kB expression [303]. Others have reported that indirubin regulated GSK-3β signaling in order to protect cardiac function [97][98][99][100]304]. These results are interesting since class I HDACs have been shown to regulate GSK-3β signaling [150], JNK phosphorylation [50] and NF-κB activation [131]. Since we reported that indirubin inhibited HDAC activity in cardiac tissue [119], these data would suggest that cardio-protection is mediated, in part, through HDAC-dependent actions. Further investigation is needed to elucidate the epigenetic role for indirubin in diet-gene regulation within the heart.
Isothiocyanates
Many foods contain phytochemicals with one or more sulfur groups and are commonly known as organosulfur compounds. Of these, isothiocyanates have been linked with the attenuation of cancer, diabetes, and CVD. Sulforaphane is an isothiocyanate that is found in cruciferous vegetables like broccoli and cauliflower. Early studies showed that sulforaphane inhibited zinc-dependent HDAC activity and, thus, blocked cancer proliferation and induced cancer cell death [101,102,[305][306][307][308]. Furthermore, these studies showed that sulforaphane blocked HDAC activity in the cell culture while rodents and humans fed broccoli sprouts [101,102,[305][306][307][308]. In the heart, sulforaphane attenuated chemotherapy-induced cardiotoxicity [309,310], I/R injury [311,312], angiotensin II-induced hypertrophy [313], myoblast apoptosis [314], diabetes-induced cardiomyopathy [315,316] and aortic constriction-induced HF [317]. These studies consistently showed that cardio-protective effects of sulforaphane occurs due to the inhibition of oxidative stress. This likely resulted from Nrf2 upregulation [315,316], which is a transcription factor that regulates genes involved in the oxidative stress response. As previously mentioned, class III HDACs regulate Nrf2 [131,318,319]. In addition to its actions directed at Nrf2 induction, sulforaphane was shown to block oxidative stress-induced AMPK inhibition [315]. AMPK is downstream of Nrf2 and upstream of the class III HDAC, Sirt1 [257]. In addition to its role in the regulation of zinc-dependent HDACs and sirtuins, sulforaphane was also shown to attenuate cardiac hypertrophic gene expression by inhibiting GATA4/6 transcriptional activation. This was likely mediated through the inactivation of the MAPKs [320]. HDAC inhibition has previously been shown to inhibit MAPK activity [50]. HAT inhibition controls GATA4 acetylation and subsequent activation [96,288]. However, no report examined the role for sulforaphane in the HDAC-dependent regulation of CVD or HF. This is interesting considering its historical role as a pan-HDAC inhibitor in cancer. Moreover, sulforaphane has been translated to the bedside, which demonstrates the efficacy for this compound as an HDAC inhibitor [307,308]. Combined, these studies suggest further investigation of sulforaphane as an epigenetic regulator of gene expression and cardiac function. Similar to curcumin and resveratrol, sulforaphane likely regulates many epigenetic pathways in the control of human health and disease and these studies should be performed. Lastly, other isothiocyanates including phenethyl isothiocyanate (PEITC) should be investigated in the heart as preliminary evidence, which suggests a cardio-protective role for PEITC [321] as well as a potential role for PEITC as an HDAC inhibitor [322,323].
Other Food Bioactives
Butyrate is a short-chain fatty acid that is metabolized from bacteria within the large intestine and is a well-known short-chain fatty acid HDAC inhibitor [103]. Recent data suggests that gut bacteria play an important role in biological function that governs human health and disease [104]. For example, these bacteria or gut microbiota synthesize butyrate from consumed fibrous, plant-based foods and, once synthesized, butyrate has been shown to inhibit cancer [103], diabetes [324], and CVD [325]. While no epidemiological studies were found linking butyrate to heart health, there is no doubt that consuming fruits, vegetables, and other fibrous, plant-based foods is cardio-protective. Moreover, experimental studies have shown that butyrate is cardio-protective. These studies demonstrated that butyrate protects the heart in an HDAC-dependent manner [326,327]. Butyrate was shown to improve cardiac function through HDAC inhibition in diabetic mice [326]. Moreover, GLUT1 and GLUT4 were upregulated via GLUT1 acetylation and p38 phosphorylation, which leads to improvements in glucose uptake [326]. Similarly, butyrate improved serum cholesterol and left ventricle function via HDAC inhibition in diabetic mice [327]. Like butyrate, valproic acid has been shown to improve cardiac function by acting as an HDAC inhibitor [328]. Since valproic acid is currently approved for the treatment of epilepsy, these data would suggest that short-chain fatty acid HDAC inhibitors are safe and tolerated in humans. Therefore, investigation of HDAC activity in the PBMCs of patients treated with short-chain fatty acids would be of interest. However, it should be cautioned that milli-molar doses of short-chain fatty acids are required for HDAC inhibition and, thus, these compounds likely elicit off-target actions that may contraindicate their therapeutic use for treating CVD/HF.
Whole Foods
Much of this review has focused on individual bioactive food compounds in regulating heart disease. However, phytochemicals are packaged in combination within fruits and vegetables. As a result, it is imperative that we understand how phytochemicals within whole foods solicit epigenetic changes to regulate human health and prevent cardiac disease. It has been reported that grape powder extract improved blood lipid profiles in mice. Improvements in blood lipids occurred, in part, by inhibiting HDACs 2 and 3. This led to peroxisome proliferator-activated receptor alpha (PPARα) gene expression. PPARα regulates hepatic lipid metabolism [241]. Thus, consumption of procyanidin-rich grapes, grape juice, or wine has the potential to elicit epigenetic changes in a manner consistent with heart health [329]. Similarly, foods such as cereals enriched with flavanoids and phenolic acids has been inversely correlated to mortality from coronary heart disease and heart attacks [111]. It remains unclear if the protective actions for fortified cereals on heart disease were mediated through the HDAC inhibition. Considerable work is still needed to understand the epigenetic impact for whole foods on cardiac health.
Conclusions
In this review, we discussed the role for HDAC inhibitors as potential therapeutics for the treatment of HF (Figure 2). In addition, we highlighted food bioactive HDAC inhibitors and discussed their potential implications for the prevention and/or treatment of CVD and HF (Figure 2). The role for diet-gene interactions in human health and disease has been studied extensively over the last couple of decades. Yet recent technologies have improved our understanding for food bio-actives as epigenetic regulators of gene expression [330][331][332]. This diet-epigenetic-gene interaction (nutri-epigenetics) has yielded new and significant insight in the field of nutrition. mediated through the HDAC inhibition. Considerable work is still needed to understand the epigenetic impact for whole foods on cardiac health.
Conclusions
In this review, we discussed the role for HDAC inhibitors as potential therapeutics for the treatment of HF (Figure 2). In addition, we highlighted food bioactive HDAC inhibitors and discussed their potential implications for the prevention and/or treatment of CVD and HF (Figure 2). The role for diet-gene interactions in human health and disease has been studied extensively over the last couple of decades. Yet recent technologies have improved our understanding for food bio-actives as epigenetic regulators of gene expression [330][331][332]. This diet-epigenetic-gene interaction (nutriepigenetics) has yielded new and significant insight in the field of nutrition.
Figure 2.
Model demonstrating that food bioactives (phytochemicals) inhibit histone deacetylase (HDAC) activity as a cardio-protective mechanism. HDACs catalyze the removal of acetyl groups from lysine residues on histone tails. Deacetylation of histones leads to changes in electrostatic interactions between DNA and histone proteins that lead to chromatin condensation and gene repression. Conversely, histone acetyl transferases (HATs) add acetyl marks contributing to relaxed Figure 2. Model demonstrating that food bioactives (phytochemicals) inhibit histone deacetylase (HDAC) activity as a cardio-protective mechanism. HDACs catalyze the removal of acetyl groups from lysine residues on histone tails. Deacetylation of histones leads to changes in electrostatic interactions between DNA and histone proteins that lead to chromatin condensation and gene repression. Conversely, histone acetyl transferases (HATs) add acetyl marks contributing to relaxed chromatin and gene expression. Increased HDAC activity is linked to cardiac dysfunction while inhibition of HDACs is cardio-protective. Thus, food bioactive HDAC inhibitors promote heart health via epigenetic regulation of gene expression.
The majority of the reports described in this review studied individual dietary compounds in the control of cardiac disease. However, our diet is composed of a plethora of macro-nutrients and micro-nutrients that potentially act in a competitive, additive, or synergistic manner to control cellular function. It has been surmised that the increased intake of fruits, vegetables, and whole grains is beneficial for human health because of the multitude of interactions between macro-molecules and micro-molecules in the regulation of cell function. This can be seen in studies that examine combined food bioactive interactions in various disease models. For example, combination treatment with luteolin and fisetin ameliorated NF-κB signaling and subsequent inflammation in the treatment of hyperglycemia [333]. In addition, food freshness and food preparation, e.g., steaming vs. raw can impact nutrient content and composition as well as phytochemical properties. Thus, it is imperative that future studies investigate the role for food freshness and preparation on macro-molecule and micro-molecule concentration and whether this impacts cellular function. Lastly, future studies examining macro-molecule and micro-molecule interactions on a cellular function will be important for future studies in order to expand our overall understanding within the nutrition field.
While a major extent of this review focuses on the protective effects of phytochemicals in the heart, it should be noted that over-consumption can contribute to adverse effects. For example, a recent randomized, placebo-controlled crossover trial that supplemented healthy participants on a high-fat diet with curcumin and resveratrol found that serum triglycerides were elevated six hours postprandial [334]. This is consistent with other reports that demonstrated a significant increase in serum triglycerides and total cholesterol in diabetic patients that received resveratrol [335]. Additional reports in humans have associated gastrointestinal/abdominal distress with resveratrol doses at or above 500 mg [336]. However, this may be on an individual basis since this dose and higher have been well-tolerated [335]. Curcumin also has reported adverse effects at high-doses (500-12,000 mg) that include diarrhea, headache, rash, and yellow stool [337]. Phytochemical dose-response and dose-dependency experiments (e.g., IC50 and LD50) in different pathological models are currently underway to alleviate such concerns. However, despite the established safety of many phytochemicals, negative side effects may exist. As such, studies examining safety and efficacy are equally as important as studies elucidating phytochemical benefits.
Current FDA approved HDAC inhibitors have been developed for the treatment of T-cell lymphoma [26][27][28][29]. Additional HDAC inhibitors are undergoing the long and strenuous process of phase 1-3 trials needed for FDA approval, but none are currently meant for the treatment of HF. The Dietary Supplement Health and Education Act of 1994 (DSHEA) allows for lenient IRB and FDA approval of food-derived substances and phytochemicals [338]. Phytochemicals/nutraceuticals, therefore, can more-readily see human trials compared to current HDAC inhibitors. Several phytochemical nutraceuticals, e.g., curcumin, resveratrol, and sulforaphane have been shown to modulate histone acetylation in human PBMCs, which is described above. These results do not suggest, however, that systemic acetyl-histone modification provide direct mechanisms for human cardio-protection or health. What these results do suggest is that these phytochemical nutraceuticals or their metabolites are capable of inhibiting HDAC activity in the blood. This is important because many phytochemicals that are efficacious in vitro and in vivo are not absorbed or bioavailable. Curcumin, resveratrol, sulforaphane, and other identified phytochemical HDAC inhibitors such as butyrate require further investigation in human subjects but show promise. It would be particularly interesting to supplement foods and nutraceuticals containing these compounds in CVD-susceptible human subjects and examine classic circulating CVD markers such as the natriuretic peptides atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP). In addition, non-invasive examination of blood pressure as well as cardiac wall-thickness and function via echocardiography would provide useful insight for phytochemical therapeutics in CVD/HF patients.
In conclusion, food bioactive HDAC inhibitors act as epigenetic regulators of chromatin structure and gene expression. This leads to diet-genome interactions that appear to promote human health and deter cardiac disease. Research investigating food bioactive HDAC inhibitors in the heart is ongoing and will likely yield novel insights within the field of nutritional epigenomics. While this review focused on the role for food bioactive HDAC inhibitors in the heart, it would be naïve to believe that these molecules only target proteins involved in acetylation/deacetylation. This is evident with sulforaphane, which is a molecule that inhibited zinc-dependent HDACs [101,102,[305][306][307][308] and activated sirtuins [131,315,316,318,319]. Sulforaphane has also been shown to regulate DNA methylation in order to control gene expression [339][340][341]. Thus, our understanding of food bioactive epigenetic modifiers in health and disease is in its infancy. Lastly, diet-microbiome interactions are likely to yield metabolites that also impact the epigenome. This diet-microbiome-epigenome axis likely plays a critical role in human health. Future studies are likely to explore this relationship, which is currently happening in the gut and brain [341][342][343]. | 2018-08-22T21:31:15.734Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "9dce91653f02a1fc6d800b38076e7814f15f2deb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/10/8/1120/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9dce91653f02a1fc6d800b38076e7814f15f2deb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248153756 | pes2o/s2orc | v3-fos-license | Study Protocol: A Randomized Controlled Prospective Single-Center Feasibility Study of Rheopheresis for Raynaud’s Syndrome and Digital Ulcers in Systemic Sclerosis (RHEACT Study)
Introduction Raynaud’s phenomenon (RP) and digital ulcers (DU) are frequent manifestations of Systemic Sclerosis (SSc). Despite being very common in SSc patients, both conditions have proven to be notoriously difficult to study. There are very few available approved drugs with varying efficacy. It has been shown that the presence of DU is associated with increased whole blood viscosity (WBV). Rheopheresis (RheoP) is an extracorporeal apheresis technique used to treat microcirculatory disorders by improving blood viscosity. Improved blood flow and wound healing after RheoP treatments have been reported in single case reports. Methods and Analysis We report the clinical trial protocol of “A randomized controlled prospective single-center feasibility study of Rheopheresis for Raynaud’s syndrome and Digital Ulcers in Systemic Sclerosis (RHEACT).” RHEACT aims to investigate the efficacy of RheoP on the Raynaud Condition Score (RCS) as the primary efficacy outcome measure after 16 weeks from baseline. Thirty patients will be randomized in a 1:1:1 ratio to one of two RheoP treatment groups or assigned to the standard of care (SoC) control group (intravenous iloprost). Secondary endpoints include changes in DU, changes in nailfold video capillaroscopy and patient-reported-outcomes (Scleroderma Health Assessment Questionnaire, FACIT-Fatigue, and the Disability of Arm, Shoulder, and Hand, quick version). Discussion Apheresis techniques have been investigated in SSc but mainly in observational, retrospective studies, or single case reports. RheoP is a pathophysiologically driven potential new therapy for heavily burdened patients with SSc-associated secondary RP with or without DU. Ethics and Dissemination The study was registered at clinicaltrials.gov (Identifier: NCT05204784). Furthermore, the study is made publicly available on the website of the German network of Systemic Sclerosis “Deutsches Netzwerk Systemische Sklerodermie (DNSS).”
INTRODUCTION
Systemic Sclerosis (SSc) is an autoimmune disease of unknown etiology characterized by organ fibrosis and vasculopathy (1). The latter manifests clinically as Raynaud's phenomenon (RP), present in 90-100% of patients with SSc (2). While primary RP, in which the cause is unknown, is usually benign, secondary RP occurs in the context of distinct disorders, and one should suspect a predisposing disease (3). In its classic form, Raynaud's phenomenon leads to pallor, cyanosis, and reactive hyperemia of affected fingers and toes. A critical complication of RP is the development of digital ulcers (DU), and skin necrosis, digital (auto-)amputation, and functional impairment may occur subsequently (4). Current standard of care (SoC) treatment options aim at vasodilation. They include conservative procedures such as cold exposure prevention and hand-warming but also medical therapy with antihypertensive drugs, such as calcium-channel blockers (CCB) or the intravenous application of the vasodilating agent iloprost (1,2,4). However, these antihypertensive or vasodilating drugs are often not well tolerated by patients with reported side effects, including hypotension, migraine-type headache, or chest pain in up to 92% of patients (5). Further, intravenous iloprost infusions are typically performed in hospitalized patients for five to seven consecutive days and may require repeat administrations.
Other treatments, such as phosphodiesterase 5 (PDE5) inhibitors or endothelin receptor antagonists (ERA), have been studied (6) but have not been approved either for RP or DU in many countries. Bosentan, an ERA family member, has been approved for the prevention of new DU (7) but not for RP. Therefore, there is a clear need for additional treatment options.
Recently, whole blood viscosity (WBV) has been shown to be increased in a pilot study of SSc patients with DU compared to patients with a history of DU or without DU (8). Therefore, treatments that can positively affect blood viscosity might be a potential therapeutic option for patients with SSc-associated RP or DU. Plasma exchange (PEX) or variants thereof have been used in SSc with mixed results (9). In this regard, rheopheresis (RheoP) is an extracorporeal therapeutic intervention and a variant of PEX without needing a replacement fluid (fresh frozen plasma or albumin) using an additional rheofilter. It is, therefore, also referred to as double-filtration plasmapheresis (DFPP). In addition, RheoP has been investigated in other conditions affecting the microvascular circulation, such as agerelated macular degeneration, sensorineural hearing loss, critical limb ischemia, or diabetic foot syndrome after failure of standard treatments (10)(11)(12)(13).
This feasibility study aims to explore therapeutic RheoP as a novel treatment option for SSc-associated RP and DU and compare it to SoC treatment (iloprost). However, RheoP has thus far only been used in single case reports or case series (14,15). Therefore, the optimal treatment modality, duration, or frequency of RheoP in SSc has not been established yet.
Study Design
RHEACT (ClinicalTrials.gov Identifier: NCT05204784) is a randomized controlled, prospective single-center study conducted at the Department of Nephrology and Rheumatology of the University Medical Center (UMG) in Göttingen, Germany. RHEACT will compare two different RheoP treatment regimens with iloprost over 24 weeks. The primary endpoint will be assessed at 16 weeks. The decision to evaluate the primary endpoint at 16 weeks was chosen to maximize patient retention in the trial until the primary endpoint assessment. A total of 30 patients will be allocated at random to one of three treatments.
Randomization
Patients will be randomized using block randomization with random block length, stratified for the month of inclusion (October to March or April to August) to minimize bias due to ambient temperatures on the primary outcome measure. Randomization will be performed electronically after the assessment of eligibility.
Patients
All patients ≥ 18 years of age fulfilling the ACR/EULAR Classification Criteria for SSc (Supplementary File 1) (16) are eligible. The presence of RP with or without DU is required. Furthermore, the failure of at least one SoC therapy has to be reported. The Raynaud Condition Score (RCS), a patientreported outcome measure used in many studies assessing RP, has to have a value ≥ 4 (17)(18)(19). To perform the RheoP procedure, appropriate venous access, either through a peripherally or centrally inserted catheter, must be established.
Exclusion criteria include significant anemia (hemoglobin < 8 g/dL), clinically relevant hemorrhagic diathesis or coagulopathy, diabetes mellitus, and severe acute or chronic kidney (eGFR < 30 ml/min/1.73 m 2 ) or liver failure. In addition, patients with hypotension (systolic blood pressure < 100 mmHg) are not considered eligible. Chronic viral infections like HIV and Hepatitis B and C also preclude participation in this study. Patients with relevant neurological diseases like epilepsy, psychosis, dementia, or other relevant neurologic conditions are also excluded from participation. Other general exclusion criteria include a life expectancy of fewer than 12 months, abuse of alcohol, drugs, and a reported long-term serious tobacco abuse with documented consequential damage like severe vascular disease (Fontaine stage III or higher). Furthermore, patients with severe hyperlipoproteinemia, defined as a significant elevation of LP(a) or LDL cholesterol despite standard doses of medical therapy, are also not eligible for participation in this study. The main inclusion and exclusion criteria are summarized in Table 1.
Rheopheresis Procedure
The RheoP procedure will be performed and supervised by experienced technical and nursing staff using a Plasauto Sigma blood purification machine (DIAMED Medizintechnik GmbH, Cologne, Germany, and Asahi Kasei Medical Co., Ltd., Tokyo, Japan). The RheoP circuit is depicted in Figure 1. When patients are cannulated peripherally, a blood flow of ∼70-80 ml/min will be used. A maximum blood flow of 100 ml/min will be used in centrally cannulated patients. The plasma flow is aimed at around 25% of the blood flow (∼25 ml/min). The target treatment volume is calculated using the formula: 42 ml × kg body weight (Example: 42 mL × 70 kg BW = 2,940 mL [∼3,000 mL]). A treatment is considered technically appropriate if a target volume of 0.8-1.0 is reached. Heparin is used as an anticoagulant to prevent blood clotting during the procedure. Typical doses are 2500 IU given as a bolus at treatment initiation and 2000 IU per hour as continuous infusion given through the apheresis machine. All products used in this study are CE certified as per regulatory requirements.
Treatment
Patients will receive one of three treatments in a 1:1:1 ratio. The treatment groups (RheoP) will be randomized to two treatment schedules (Figure 2).
Treatment group 1: This group initially receives two rheopheresis treatments per week for 2 weeks, followed by 8 weeks without treatment. After 8 weeks, the patients will receive another 2 weeks of two treatments per week. Patients in this group will receive a total of eight RheoP treatments.
Treatment group 2: Patients in this group will receive two rheopheresis treatments in week one, followed by treatment intervals of one treatment every 2 weeks. In total, this group also receives eight treatments.
Control group: The control group is supplied with standard medical therapy for RP, consisting of intravenous iloprost therapy given as continuous infusion via an infusion pump over a minimum of 6 h (dose range 10-40 µ g per day).
All patients will be advised to comply with general recommendations to avoid RP attacks, such as smoking cessation, avoidance of cold temperatures, stress reduction, and optimized skin care.
Visit Schedule and Assessments
Study visits are performed on eight occasions: An initial screening visit to assess eligibility and a baseline visit for randomization. Then, study visits are conducted every 4 weeks up to week 24 (Figure 2). Each study visit consists of a physical examination with vital signs and evaluation of the RCS and modified Rodnan skin score (mRSS). The laboratory analyses include a complete blood count, fibrinogen, antithrombin, d-dimers, uric acid, blood urea nitrogen, creatinine, creatine kinase, nt-pro brain natriuretic peptide, troponin-I, erythrocyte sedimentation rate, low-density lipoprotein (LDL) cholesterol, immunoglobulins, protein electrophoresis, C-reactive protein (CRP), complement factor C3 and C4, and SSc-associated antibodies. All these values are either part of the routine assessment or required to evaluate the technical adequacy of the RheoP procedure (fibrinogen, albumin, IgG, IgM, LDL cholesterol). The baseline visit and end of treatment visits include a nailfold video capillaroscopy (NVC). Other non-invasive assessments include a transthoracic echocardiography, pulmonary function testing (PFT), and a pulse wave analysis (PWA). An overview of the scheduled assessments is given in Table 2.
Primary Outcome
The study's primary outcome measure is the change in RCS after 16 weeks (Supplementary File 2). The RCS is assessed at baseline and every 4 weeks before and after each RheoP ( Table 2); it will also be evaluated in the control group receiving SoC therapy. The RCS incorporates the frequency, duration, severity, and impact of RP attacks on a 0-10 numerical rating scale and can be documented using paper or electronic diaries.
Secondary Outcomes
Secondary endpoints are the frequency of new DU, worsening of DU, time to healing of existing DU, changes of laboratory parameters, the proportion of patients with an improvement FIGURE 1 | Schematic of the rheopheresis procedure. After obtaining venous access, anticoagulated blood is pumped through a plasma filter. The plasma is then run through the rheofilter, and large plasma proteins are removed. Finally, cells are reinfused, and blood is returned to the patient. The figure was created with biorender.com. in non-invasive assessments, and changes in patient report outcome measures.
Patient-Reported Outcomes Measures
The assessed patient-reported outcome (PROs) measures include the patient global assessment-visual analog scale (PaGA-VAS), the German versions of the Functional Assessment of Chronic Illness Therapy (FACIT) -Fatigue Scale (Supplementary File 3), the Scleroderma Health Assessment Questionnaire (SHAQ, Supplementary File 4), and the Quick Disabilities of the Arm, Shoulder, and Hand (Quick DASH, Supplementary File 5).
Safety
Adverse events will be explicitly assessed at every study visit and throughout the entire after the inclusion of every subject. In addition, adverse events will be reported according to the Common Terminology Criteria for Adverse Events (CTCAE, v5.0, November 2017).
Data Collection and Management
Clinical data for all patients, including frequency and duration of Raynaud attacks and the RCS, is collected during the routine clinic visits at least every 6 months. Study-specific data will be collected at screening, baseline, and the defined study visits (Figure 2). Data will be collected through electronic case report forms (eCRF) and stored in a provided GCP-compliant database (REDCap R ). Data is collected in compliance with Good Clinical Practice (GCP) and following standard operating procedures (SOP) of the Clinical Trials Unit UMG to ensure high data quality.
Methods Against Bias
Selection bias is minimized by random allocation in a 1:1:1 ratio stratified by the season of admission. Block randomization with random block length will be performed. Performance and detection bias is reduced as the patient's treatment group assignment will be concealed to a blinded team of study site investigators. Assessments will be performed before and after the treatments. In order to minimize bias related to outside temperatures, we will record the ambient temperatures during the study periods.
Proposed Sample Size/Power Calculations
The objective of this study is to gather initial data on the efficacy of different treatment protocols. When the sample size is 10, a two-sided 95% confidence interval for the difference in paired means of RCS will extend 1.178 from the observed mean, assuming that the standard deviation is known to be 1.9 and the confidence interval is based on the large sample z statistic. A standard deviation of 1.9 of the difference of the mean RCS was observed in prior studies on iloprost, e.g., Wigley et al. (5). Sample size calculation was performed using nQuery Version 8.3.1.0.
Data Analysis
Although the study has a confirmatory design that intends to test for group differences, the primary aim is to gather initial data on the efficacy of RheoP as a novel treatment option. Therefore, both treatment groups' Pre-Post treatment effects (baseline vs. 16 weeks) will be reported with 95%-confidence intervals. Further, RCS at the end of treatment (at 16 weeks) will be compared between groups by ANCOVA with treatment group as factor and baseline RCS and season as covariates. The secondary endpoint new DUs will be compared using Poisson regression or, in the case of apparent overdispersion, negative binomial regression. Patient proportions will be summarized in tables and compared between groups using Chi Square-Test. Line plots are evaluated, where possible, to descriptively assess the influence of the intervention on observations. Estimators are calculated following the treatment policy with the intention-totreat principle. Secondary endpoints are analyzed analogously to the primary endpoint. Finally, a sensitivity analysis with the per-protocol population will be performed. Additional vasoactive therapies, if present, will be considered as potentially confounding variables during the analysis.
DISCUSSION
RHEACT is the first controlled study to evaluate the efficacy of therapeutic RheoP in RP with or without DU in SSc. With this study, we seek to offer a potential new treatment option in patients with refractory RP or non-healing DU despite standard therapy. Raynaud's phenomenon is almost universal in SSc. In our experience, most SSc patients can be managed with symptomatic or medical treatment alone. However, a significant proportion of patients require additional treatment, including iloprost, ERA, or PDE-5 inhibitors. This is supported by the latest EULAR recommendations (20), but none of these therapies is licensed for RP in SSc, and results from clinical trials have been mixed (21,22). We acknowledge that the RCS is not a perfect primary outcome measure because it heavily relies on subjective impressions by the patients. Nevertheless, it is currently the most widely accepted outcome measure in studies for RP. A recent survey among SSc experts showed that the RCS is mainly used in clinical trial settings and has several limitations (19): it may be subject to seasonal variation and recall bias. Also, an individual patient's RP characteristics may change over time. We try to overcome the first limitation by block randomization according to the season of inclusion (see Methods section). Due to the relatively short observational period (24 weeks), changes over time secondary to vessel obliteration will likely not influence the results significantly. RCS also has the advantage of being a PRO.
In RHEACT, we try to gain insights regarding other secondary outcomes, such as the healing of existing DU or the development of new DU and additional PRO, including fatigue and daily function. Further, more objective outcome measures to study RP in SSc and other conditions are clearly required. For example, we recently investigated microvascular imaging (MVI) as a novel ultrasound-based method to quantify microvascular blood flow (23). However, our preliminary findings must be confirmed before applying them in clinical practice or a clinical trial setting.
Our first experiences with RheoP in refractory RP showed that it is a feasible and well-tolerated therapy (14), which may offer a novel, pathophysiologically based treatment in heavily burdened patients.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the University Medical Center Göttingen, Göttingen, Germany (protocol number 36/7/21). The patients/participants will provide their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
J-GR wrote the first draft and edited the manuscript. BT conceived the study and edited the manuscript. AB edited and reviewed the manuscript and is the study coordinator. RB assisted with the writing of the manuscript. AF edited and reviewed the manuscript and helped with the planning of the study. TA planned the statistical analysis and edited the manuscript. PK conceived the study, wrote the manuscript, created the figures, and acquired funding for the study. All authors contributed to the article and approved the submitted version. | 2022-04-14T22:56:20.843Z | 2022-04-14T00:00:00.000 | {
"year": 2022,
"sha1": "453996c80e32461c0f0f9fcda02d1b4067ede423",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "453996c80e32461c0f0f9fcda02d1b4067ede423",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220843907 | pes2o/s2orc | v3-fos-license | Radical resection and reconstruction of the sternum for metastasis of hepatocellular carcinoma
Metastatic hepatocellular carcinoma of the sternum is rare and a few cases of surgical resection have been reported. Anterior chest wall reconstruction after radical resection of the sternum and ribs aims to protect the heart and lung from external damage and herniation and restore physiologic stability of the chest wall during respiration. A variety of reconstruction techniques using various materials have been reported, but so far there are no definitive guidelines for the reconstruction of chest wall defects. Recently, we encountered a rare case of metastatic cancer of the sternum from hepatocellular carcinoma in which radical resection of the sternum and ribs, and anterior chest wall reconstruction with acellular dermal matrix and titanium plates were performed.
of the sternum from HCC, for which radical resection of the sternum and ribs, and anterior chest wall reconstruction with acellular dermal matrix and titanium plates were performed.
An 80-year-old female presented with anterior chest pain for a month. She had undergone a laparoscopic posterior sectionectomy of the liver due to a 4-cm-sized hepatitis B-related HCC 1.5 years ago. Adjuvant systemic chemotherapy was not performed. Physical examination showed tenderness on the lower half of the sternum with normal external appearance. Tumor marker values were within normal ranges (serum alpha-fetoprotein, 2.92 ng/ mL; prothrombin induced by vitamin K absence-II, 23.9 mAU/mL). Chest computed tomography scans revealed a 7.0 × 4.5 × 2.7 cm sized enhancing soft tissue mass with destruction of the sternum in the anterior chest wall (Fig. 1a). Whole-body bone scintigraphy demonstrated a photon defect accompanied by rim activity in the lower sternum (Fig. 1b). We decided to conduct a radical sternal resection with curative intent because the patient had symptoms with only a single metastasis., In the absence of definitive surgical resection, it has been shown that medical treatment can result in survival of less than 1 year [4]. We performed surgical resection in a supine position. The sternal tumor was clearly distinguished from the deep muscle fascia. Radical resection was performed. Margins of the frozen tissue section were confirmed to be negative. Radical resection was performed with lower two-thirds of the sternum, including the tumor, bilateral ribs (part of the third costal cartilage -7th costal cartilage), and pericardial fat around the tumor. The defect of the anterior chest wall was covered with acellular dermal matrix (12 × 12 cm; MegaDerm®, L&C BIO, Seoul, Korea) over the pericardium (Fig. 1c). Additional skeletal reinforcement was performed with titanium plates (RibFix Blu™, MIMMER BIOMET, Jacksonville, FL, USA) to stabilize the chest wall and restore physiologic respiratory movement (Fig. 1d). Drains were placed over the pericardium and over the MegaDerm. They were removed on the 4th postoperative day and 6th postoperative day, respectively. There were no intraoperative or postoperative complications. The postoperative course was uneventful. The patient was discharged in good condition with relief from preoperative pain on the 8th postoperative day. The final pathology of the resected tumor confirmed metastatic HCC.
Unlike primary sternal tumors, there is no consensus on the treatment for metastatic sternal tumors because of their low incidence and limited data [2]. Some authors have suggested that surgical resection can provide survival benefits if patients with one or two isolated extrahepatic metastases simultaneously show good functional preservation of the liver and favorable performance conditions. Surgical resection can also successfully treat intrahepatic HCC [5].
Large defects after radical resection of the anterior chest wall need reconstruction to protect intrathoracic organs and restore physiologic chest wall movement. A variety of reconstruction techniques using various materials have been reported. However, definitive guidelines for the reconstruction of chest wall defects have not been reported [6]. Materials used in reconstruction include synthetic materials (polytetrafluoroethylene and polypropylene), biologic materials (acellular dermal matrix), metallic materials (titanium), and allograft/homograft. Each has its advantages and disadvantages. Polypropylene mesh is widely used due to its solidity, manageability, long-term solubility, low-frequency foreign body reaction, and low infection rates. However, it is used as a sandwich technique with methylmethacrylate because covering the defect with only polypropylene mesh is a somewhat weak cover for a significant defect and molded methylmethacrylate alone is hard to be fixed to adjacent bones. The first layer of polypropylene mesh is fixed directly to the base of the chest wall defect. Then molded methylmethacrylate is added to the defect as the second layer of prosthesis and third layer of polypropylene mesh is covered to fix the molded methylmethacrylate [1,6]. Acellular dermal matrix consists of organic collagen-based matrix which stimulates regeneration by allowing for native tissue re-growth and revascularization. Unlike synthetic material, it can be placed directly over the lung and viscera without complications. However, the achieved stability does not result in a rigid reconstruction of the chest wall [6]. Surgical wound complications are crucial factors when selecting reconstruction materials. Some authors have reported no difference in the occurrence of surgical wound infections between the use of acellular dermal matrix and polypropylene for the skeletal chest wall reconstruction [7]. However, incidence of surgical wound complications including infections, wound dehiscence, skin necrosis, pneumothorax, pleural effusion, seroma, and hematoma are low when acellular dermal matrix is used [7]. Titanium has high corrosion resistance, low specific weight, and remarkable traction resistance. It is biologically inert and highly biocompatible [6]. Another sandwich reconstruction technique using titanium plates and biologic meshes, based on that the characteristics of biologic mesh consent its safe use with a second prosthetic material, has been reported [8]. This technique fixes titanium plates to the resected sternum and ribs between two layers of biologic meshes. The inner mesh is used to protect intrathoracic organs and the middle metallic plate is used to create an anatomic appearance and physiologic movement of the chest wall. The outer mesh is used to reconstruct the muscular plane. Biologic matrix is safely used with the titanium plate to create a precise shape and provide excellent reinforcement for defects due to its uniform tensile strength. It also creates an ideal substrate to avoid lung herniation and damage [8]. In our case, we performed radical resection of the sternum and costal cartilages without excising the soft tissues over the sternum. Reconstruction of significantly large anterior chest wall defects used an inner acellular dermal matrix, middle titanium plates, and the outer musculocutaneous tissue of the patient.
In summary, we encountered a rare case of metastatic HCC of the sternum that occurred one and a half years after hepatectomy. We radically resected the sternal tumor and reconstructed the anterior chest wall defect with acellular dermal matrix and titanium plates. The patient's postoperative course was uneventful. Her pain subsided. Long-term surveillance is needed to determine the survival benefit of this surgery. | 2020-07-29T14:58:57.935Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "19e0825b1d4288eeb9d6e39a57fde758624be232",
"oa_license": "CCBY",
"oa_url": "https://cardiothoracicsurgery.biomedcentral.com/track/pdf/10.1186/s13019-020-01247-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19e0825b1d4288eeb9d6e39a57fde758624be232",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49611706 | pes2o/s2orc | v3-fos-license | A novel sample holder for 4D live cell imaging to study cellular dynamics in complex 3D tissue cultures
Three dimensional (3D) co-cultures to mimic cellular dynamics have brought significant impacts in tissue engineering approaches for biomedical research. Herein, we present a novel sample holder combined with time-lapse fluorescence imaging technique, referred as 4D live cell imaging, allowing direct visualization of various cells up to 24 hours. We further extended our approach to monitor kinetics and dynamics of particle uptake by cells and translocation across tissue membranes.
In vitro 3D co-culture models have offered many great advantages in the field of tissue engineering especially in providing more physiological environments and predictive output towards conventional 2D cultures 1,2 . Indeed it has been shown that cells cultured in a 3D configuration differ morphologically and physiologically 3 , and they possess more similar cellular behaviors to in vivo systems in comparison to their 2D culture counterparts 2 , allowing more realistic and reliable studies in a setting that resembles the in vivo environment 4 . A few examples have demonstrated the potential of 3D cell models in drug discovery 1 or biopharmacokinetics study of nanodevices in specific organs [5][6][7] , where, these models have comprised of a combination of different cell types in a 3D scaffold comprised of biological polymers, and/or the co-culture of various cell types. In particular, 3D models of the human lung epithelial tissue barrier have been established which has allowed for the accurate in vitro simulation of bacterial airway infection 8 and particulate uptake/translocation as well as cellular responses mimicking inhalation pathways 5 . Difficulties lay in the spatial characterization of thick in vitro tissues composed of several cell layers (i.e. >60 μm). Characterization of cell culture is normally performed by fluorescence light confocal microscopy, however, with 3D cell culture, the analytical challenge lies in the characterization of live co-culture/tissue in real time. Solving this will allow for the deeper understanding of the behavior of cells grown in a 3D co-culture configuration, as well as the study of the kinetics and dynamics of fluorescently-labelled drugs or nanocarriers over a longer period of time. In this brief communication, we report the design and fabrication of a sample holder by 3D printing technology optimized for 3D cell culture models cultured on widely used and commercially available permeable membrane inserts, combined with a simple time-lapse fluorescence confocal imaging technique, which we refer to as 4D live cell imaging. The system allows for the direct visualization of a live 3D cell model, without the need for the removal of membrane inserts and cell fixation. By using this method, the uptake and translocation of fluorescently-labelled silica particles across cellular tissue barriers has been followed in a 3D co-culture lung model consisting of three cell types namely, epithelial cells and macrophages on the apical and dendritic cells on the basolateral side of permeable inserts 9-11 over several hours up to one day (Fig. 1a).
The design of the imaging chamber consists of an insert holder fabricated by 3D printing with a "twist-fastener" lock mechanism (see methods for details) and a commercially available and widely used glass bottom dish (Fig. 1b, Supplementary Fig. 1). The system was engineered to hold or hang the permeable inserts and maintain a minimal distance between the bottom of the insert and the glass bottom dish in order to keep the cells on the lower side of the insert membrane alive. The distance of less than 0.5 mm allows for the use of typical long working distance objective lenses on the microscope. In the first instance, in order to investigate the applicability of the designed system, 2D monocultures of monocyte-derived macrophages (MDM) were cultured on either the apical or basolateral side of permeable inserts and labelled with a fluorescence dye for live cell tracking. Imaging acquisitions were performed using a confocal laser scanning microscope with 20X magnification lens (working distance Scientific REPORtS | (2018) 8:9861 | DOI:10.1038/s41598-018-28206-2 0.55 mm and numerical aperture 0.8). The image was acquired in a z-stack and in time-lapse mode (with the slice thickness of 2 μm and a time frame of 15 min). Figure 1c displays the final output represented as mean fluorescence intensity images revealing the dynamics (e.g. cell movement) of live MDM both in apical and basolateral configuration (see Supplementary Video 1). Single cell analysis shows the cells cultured on basal side (i.e. hanging cells) possessed a different morphology (i.e. more elongated shape and higher surface area, SA) than in the apical side ( Fig. 1d) but no significant difference in terms of motility speed (ѵ) for cells cultured in both orientation was observed (ѵ ca. 0.1 µm/min).
This technique was then applied to characterize systems that are more complex: 3D co-cultures. A 3D lung model was constructed according to a previously reported procedure 10-12 , where the system resembles human lung epithelial tissue. This barrier consists of three different types of cells, namely, human epithelial type-II cells (A549), MDM (on the apical side of the insert), and monocytes-derived dendritic cells (MDDC; on the basal side; Fig. 1a) and the cell type specific response within this model has been determined previously via multicolor flow cytometry 11 . Each cell type was fluorescently labeled with different fluorophores to distinguish between the cells and the corresponding emission channels were recorded sequentially to avoid any signal overlap (see Methods section). The fastest scanning rate for the three channels and 35-40 slices (slice thickness 2 μm) was ca. 3-4 min. To avoid any cell stress and possible light-induced cell killing due to extended light exposure, the time frame was increased to 20 min and the imaging was performed for up to one day. The obtained raw data was processed and rendered using a 3D rendering software for the better visualization of cells.
Our study shows, to the best of our knowledge, the behavior of living cells in a 3D co-culture model resembling the human lung epithelial tissue barrier for the first time (Supplementary Video 2). The three different cell types are easily distinguished (Fig. 1e,f) by the different fluorophores. The intensity of the fluorophores for live cell tracking was not reduced during the experiments and it was always possible to clearly identify the cells. Moreover, both MDM (red) and MDDC (green) are primary non-proliferating cells while epithelial cells in differentiated tissue have a slow proliferation rate, hence the intensity of the fluorophores were not reduced due to proliferation. The movement of MDM and MDDC was followed over 24 h (see corresponding tracks in Fig. 1g and Supplementary Video 2). In particular for MDM, the measured motility speed is 0.18 ± 0.14 µm/min, i.e. ca. two times faster than MDM's movement (i.e. 0.1 ± 0.08 µm/min) cultured only on the insert. We hypothesized that this difference is twofold. Firstly, the phagocytotic nature of macrophages to clear debris from dead cells in the surrounding: the MDM are compelled to move and clean the cell debris from the apoptotic epithelial cells. The difference in velocity can be also attributed to substrate stiffness, i.e. insert vs. epithelial cell carpet. It is important to note that only a few cells underwent apoptosis, however, no significant reduction of the cell number was observed during the imaging experiment indicating the effect of long acquisition time can be well tolerated with higher time frame. In addition, we have performed cytotoxicity test based on lactate dehydrogenase (LDH) assay and we found out that cells were still viable even after the imaging experiment ( Supplementary Fig. 2).
MDM and MDDC are the most prolific immune cells in the respiratory tract, where their movements within the lung epithelia are essential to their function. Macrophages are professional phagocytotic cells, whereas dendritic cells are antigen-presenting cells which can take up antigens both within and directly below the surface epithelium by extending protrusions into the respiratory lumen 13 . Upon activation, the dendritic cells migrate to the draining lymph nodes and interact with T-cells 14,15 . Hence, it was our aim to be able to visualize parts of this process (e.g. uptake of antigens and vertical transmigration across the epithelial layer to the dendritic cells) in real time. During the image acquisition, no vertical movement of macrophages from apical to basolateral side nor of dendritic cells from basal to apical side was observed (Supplementary Video 2). However, we were able to capture and reconstruct the establishment of cellular contact between MDDC and MDM within the epithelial layer (Fig. 2a) which so far has been only shown in 3D in vitro using fixed tissue imaging 10 . Our time-lapse data provide a mechanistic explanation of the establishment of contact between the immune cell types which was initiated by the formation of membrane protrusion by MDDC, followed by mechanical interaction between the two cells and retraction of the contact (Fig. 2b). The duration of the contact between two cells occurred ca. 40 min (see Supplementary Video 3).
Our approach was further extended to visualize cellular uptake and translocation of particles within the 3D lung model (Fig. 3a). Previous fixed imaging data of co-culture models have shown the internalization and translocation of particles both for singular or aggregated forms from apical side (macrophages and epithelial cells) to basolateral one (dendritic cells) 9,12,16 , however the biokinetics of these processes have never been visualized in real time. To validate first the ability of our system in monitoring particle uptake, we exposed our lung model to large size rhodamine B-labeled silica particles (1.2 µm in diameter, see material characterization in Fig. 3b,c) to ease visualization of the particles. To avoid any signal overlap with rhodamine B, the MDM and MDDC were labelled using the same fluorophore (i.e. Vybrant ® DiD) and their distinction was only detected by their position (basal vs. apical). Figure 3d shows the kinetic of cellular uptake of silica particles (yellow) by MDM (red) on A549 epithelial carpet (blue). As can be seen, majority of the particles were internalized by the MDM after 22 h, and very few particles by epithelial cells. We also noticed that only small numbers of particles were translocated to MDDC site which can be associated to reduction of particle translocation due to their large size ( Supplementary Fig. 3a). This result is in agreement with an earlier finding where polystyrene particles (1 µm in size) were found more in MDM rather than in epithelial cells or MDDC in the same co-culture model 12 . The 3D lung model was further exposed to smaller size rhodamine B-labeled silica particles (260 nm in diameter, Fig. 4a) at the initial concentration of 20 µg/mL (see material characterization for the particles in Supplementary Fig. 3b, Electronic Supporting Information). The kinetics of cellular uptake of silica particles (yellow) by MDM and epithelial cells (Supplementary Video 4) was recorded. Both cell types in the apical sides (i.e. epithelial and macrophages) internalized the particles as can be seen from z-stack experiments (Fig. 4b,c and Supplementary Fig. 3c). For the first time, the translocation kinetics of silica particles from apical side (i.e. MDM and epithelial cells) was visualized, passing through the membrane insert, to basal side (MDDC; Fig. 4b). Semi quantitative image analysis of the amount of particles in both the apical and basal side (which is expressed as particle surface area) show first a slow (0-5 h), and then significant (5-10 h) increase of particle intensity at the apical side indicating the uptake of particles by MDM and epithelial cells. Meanwhile, the earliest particle presence in basal side was detected only ca. 7-8 h post particle incubation indicating translocation and this signal, as expected, was found increasing over time (Fig. 4d). This provides a reasonable explanation of a route of particle translocation as the presence of free particles was often found in basal side before they were uptaken/phagocytosed by MDDC (Supplementary Fig. 4). We hypothesized that these free particles might be originally from externalized products by epithelial cells, i.e. exocytosed particles. This finding presents another particle uptake pathway by MDDC, building upon the possible extension of cytoplasmic processes or through particle transfer from MDM to MDDC as reported previously for polystyrene particles 10 .
In summary, a novel sample holder has been designed and built by 3D printing, and combined with 3D cell imaging allowing for the direct visualization of a live co-culture model that mimics human lung epithelial tissues. The developed system is robust, does not need any cellular fixation or insert membrane removal, and hence can provide a simple and useful platform to study many cellular processes of 3D cell models including dynamics of cell adhesion, transmigration, wound healing, immune response, etc. i.e. in the presence of nanoparticles or bacteria mimicking infection.
Methods
Design of insert holder. The design of the live imaging chamber consists of two main components. First is a donut-shaped cap with a "twist-fastener" lock mechanism (outer diameter 40 mm, inner diameter 16.6 mm, length vs width of opener 4.3 mm × 3 mm) which is designed specifically for polyethylene terephthalate (PET) transparent BD Falcon permeable inserts (growth area of 0.9 cm 2 , PET membranes for 12-well plates, pore size 3.0 µm in diameter; BD Biosciences). To design the cap we used the open-source program FreeCAD, which was subsequently 3D printed on a UltiMaker 2+ (UltiMaker, The Netherlands). The material we used was polylactic acid polymer. Before the imaging experiment the holder was autoclaved for sterilization purpose. Second component is a commercially available and widely used glass bottom dish (Mattek Inc, US).
Preparation of co-culture model. The 3D co-culture model consisted out of three different types of cells namely human alveolar epithelial type II cell line (A549) which was obtained from the American Type Culture Collection (ATCC, USA), human blood monocyte-derived macrophages (MDM) and dendritic cells (MDDC) which were isolated from buffy coats provided by the blood donation service SRK Bern and purified using CD14 Microbeads (Milteny Biotech) following the procedure reported previously 17 . The cells were grown in cell culture media containing RPMI 1640 (Gibco, Life Technologies Europe B.V., Zug, Switzerland) supplemented with 10% (v/v) fetal bovine serum (FBS; PAA Laboratories, Chemie Brunschwig AG, Basel, Switzerland), 1% (v/v) L-Glutamine (Life Technologies Europe) and 1% (v/v) penicillin/streptomycin (Gibco) and kept in a humidified incubator (37 °C, 5% CO 2 ) until reaching 90% cell confluency of T-75 culture flask (Thermo Fisher Scientific, Germany). The co-culture models were prepared as previously described 10 . Shortly, A549 cells (5.10 5 cells/mL, 0.5 mL, apical side) were seeded on a PET transparent BD Falcon permeable inserts (growth area of 0.9 cm 2 , pore size 3.0 µm in diameter, PET membranes for 12-well plates; BD Biosciences) placed in a 12 well plates BD Falcon tissue culture plates (BD Biosciences) containing 1.5 mL medium (lower chamber). Cells were cultured for 4 days and the medium was changed after the 2nd day. On day 5, medium was removed from the apical and basolateral chambers and the monolayer was stained with nuclei stained Hoechst 33342 (Invitrogen) or Vybrant ® DiO (Thermo Fisher Scientific, Germany) following the protocols provided by manufacturers for 30 min. The layer was washed three times with PBS. The inserts were gently turned up-side down, placed in a petri dish and possible cells grown on the basolateral side of the membrane were gently removed with a cell scraper. MDDC (8.10 5 cells/mL, 65 µL) were priorly stained with Vybrant ® DiI (Thermo Fisher Scientific, Germany) for 15 min and washed in PBS, were then pipetted onto the basolateral side of the inserts and incubated for 70 min. The inserts containing A549 and MDDC were held in the 3D printed insert holder and the bottom part was placed in the glass bottom dish containing 1.5 mL medium. MDM (4.10 4 cells/mL, 0.5 mL) which were pre-labeled with Vybrant ® DiD (Thermo Fisher Scientific, Germany) for 15 min and washed in PBS, were added on the apical side (i.e. the top of A549) and the MDM were let to sediment for 30 min before the imaging experiment.
Synthesis and characterization of rhodamine B-labeled silica particles.
Two different sizes of silica particles were synthesized following the Stöber method previously described in literature 18 . For small size particles, shortly, 9 mL of the silica precursor (i.e. tetraethyl orthosilicate, TEOS; Sigma Aldrich, Germany) was added to a preheated (60 °C) mixture of 100 mL of ethanol, 18 mL of deionized water and 14 mL of ammonium hydroxide (Sigma Aldrich, Germany). After 1 min of core formation, 300 µL of (3-aminopropyl) triethoxysilane (APTES; Sigma Aldrich, Germany) -rhodamine B conjugate, prepared the previous day by mixing 7.5 µL of APTES with 528 µL of rhodamine B isothiocyanate in ethanol (10 mg/mL) and stirred overnight, was added to the mixture to form fluorescently-label layers around these initially formed cores. The reaction was further stirred overnight and purified by centrifugation at 5,000 g and washed with ethanol three times followed by redispersion in autoclaved milliQ water 3 times. For larger particles, 2 mL of TEOS was added dropwise (2 mL/h) at room temperature to a mixture of 75 mL of isopropanol, 25 mL of methanol and 21 mL of ammonium hydroxide. After 1 hour of core formation, premixed solution of TEOS (6 mL) and APTES rhodamine B isothiocyanate (300 µL) was added dropwise (2 mL/h) to the reaction mix. The reaction was further stirred overnight and purified by centrifugation at 100 g and washed with ethanol two times followed by redispersion in autoclaved milliQ water 3 times. The synthesized particles were then visualized using a transmission electron microscope (FEI Tecnai Spirit, US) and their corresponding size was determined using FIJI software (NIH, US). The hydrodynamic diameter and zeta potential were measured by dynamic light scattering and zeta potential analyzer (Brookhaven, US), Scientific REPORtS | (2018) 8:9861 | DOI:10.1038/s41598-018-28206-2 respectively. The particle concentration was determined by measuring the weight of 2 mL of particle suspension after evaporating the water at 50 °C. Cellular uptake experiment. The experiment was performed by incubating the 3D lung model with 260 nm or 1.2 µm rhodamine B-labeled silica particles at the initial concentration 20 µg/mL or 50 µg/mL in 500 µL of cell culture media. After particle addition, the imaging experiment was immediately conducted.
Fluorescence imaging. All of the fluorescence images were acquired using Zeiss LSM 710 confocal laser scanning inverted microscope set up with 20X magnification, numerical aperture, NA, 0.8 of Zeiss LCI Plan-NEOFLUAR objective lens (Zeiss GmbH, Germany). Different fluorophores (Hoechst 33342, Vybrant ® DiI, rhodamine B, and Vybrant ® DiD) were excited sequentially at 405, 541 and 633 nm and their emissions were collected correspondingly by the detector with the frame size 512 pixel × 512 pixel. The image was acquired in a z-stack and in time-lapse mode with the slice thickness 2 μm and time between each frames 15-20 min. Image processing (i.e. mean intensity projection) was carried out directly using Zen 2010 software (Zeiss GmbH). 3D rendering was performed using Imaris (Bitplane, Switzerland). False color images were adjusted to better distinguish different types of cells and nanoparticles.
Image analysis. Single cell (surface) analysis was performed using self-written macro and 2D cell tracking was analysed by TrackMate plugin in Fiji software (NIH, US). 3D cell tracking was performed in Imaris. Semi quantitative image analysis of particle uptake (particle surface area measurement) in apical and basal side was performed using Fiji and Matlab (MathWorks, US). Shortly, the fluorescence channel of particles was classified depending on their position (basal vs. apical). Using sum slice projection in Fiji the corresponding time-lapse z-stack images of particles was reconstructed. Particle surface area (in µm 2 ) was calculated through measurement of pixel area in the entire single frame after intensity thresholding and binarization and it was plotted against the incubation time (different time frame). Data availability. Experimental data are available from the corresponding author upon reasonable request. | 2018-07-11T00:44:18.060Z | 2018-06-29T00:00:00.000 | {
"year": 2018,
"sha1": "6e78ba032f729ffe984feda95d423f50e9dde309",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-28206-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6e78ba032f729ffe984feda95d423f50e9dde309",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
49569904 | pes2o/s2orc | v3-fos-license | Prophylactic Hypogastric Artery Ligation during Placenta Percreta Surgery: A Retrospective Cohort Study
Objective To evaluate if prophylactic hypogastric artery ligation (HAL) decreases surgical blood loss and blood products transfused. Study Design This is a retrospective cohort study comparing patients with placenta percreta undergoing prophylactic HAL at the time of cesarean hysterectomy versus those who did not. Data were presented as means ± standard deviations, proportions, or medians with interquartile ranges. Demographic and clinical data were compared in the groups using Student's t -test for normally distributed data or the Mann–Whitney U test for nonnormally distributed data. Fisher's exact test was used for proportions and categorical variables. Data are reported as significant where p was <0.05. Results There were 26 patients included in the control group with no HAL and 11 patients included in the study group. Estimated blood loss for the study group was 1,000 mL versus 800 mL in the control. Units of PRCBs transfused were 4.5 units in the study group versus 2 units for the control group. None of these measures were found to be statistically significant. Conclusion Our data suggest there was no benefit in the use of prophylactic HAL in decreasing surgical blood loss or amount of blood products transfused in patients who had a cesarean hysterectomy performed for placenta percreta. Précis Prophylactic HAL does not decrease blood loss during surgery for placenta percreta.
Hypogastric artery ligation (HAL) has been a surgical technique utilized to reduce hemorrhage during pelvic and obstetrical surgeries. HAL has the potential of being a life-saving measure that has been used when other more common modalities fail. 1 The technique has been used to reduce pelvic blood flow when intraoperative hemorrhage is anticipated. 2 The theoretical physiological change that occurs after HAL is a decrease in pulse pressure transforming an arterial system into a venous system, which decreases blood flow and therefore blood loss. 3 Fewer obstetricians and gynecologists are performing prophylactic HAL for intraoperative hemorrhage control due to lack of experience and training. 4 The use of HAL may have complications such as incomplete ligation, ureteral injury, hypogastric vein injury, or continued bleeding secondary to collateral circulation. 5 There have been studies done on the use of prophylactic Results There were 26 patients included in the control group with no HAL and 11 patients included in the study group. Estimated blood loss for the study group was 1,000 mL versus 800 mL in the control. Units of PRCBs transfused were 4.5 units in the study group versus 2 units for the control group. None of these measures were found to be statistically significant. Conclusion Our data suggest there was no benefit in the use of prophylactic HAL in decreasing surgical blood loss or amount of blood products transfused in patients who had a cesarean hysterectomy performed for placenta percreta. Précis Prophylactic HAL does not decrease blood loss during surgery for placenta percreta.
HAL, including a prospective trial that evaluated HAL at the time of radical hysterectomy and lymphadenectomy in gynecologic oncology patients. The trial found no significant decrease in surgical blood loss. 6 Abnormally invasive placentation (AIP), such as placenta accreta, increta, and percreta, occurs secondary to uncontrolled angiogenesis of trophoblastic tissue of the placenta invading through the decidua, into the uterine myometrium, and possibly to adjacent structures. Invasive placentation can induce vascular remodeling of myometrial vessels, leading to significant hemorrhage if removal is attempted. 7 The risk of maternal morbidity and mortality is high, especially in patients with placenta percreta. Morbidities include massive hemorrhage, maternal morbidity of a cesarean hysterectomy, blood transfusion, abdominal organ injury, mechanical ventilation, and intensive care unit admission. 8 The most common treatment modality for patients with AIP is cesarean hysterectomy with or without HAL. 9,10 The pelvis has extensive collateral blood flow, which can prevent adequate control of hemorrhage even after HAL. In a previous study by Clark et al, 42% of documented cases in which HAL was performed achieved adequate cessation of bleeding; however, only 1 out of 19 of these patients had AIP. 11 To our knowledge, there are no data available that analyze HAL for obstetrical patients with AIP requiring cesarean hysterectomy.
The purpose of our study is to evaluate the effect of prophylactic HAL in decreasing total blood loss and amount of blood products transfused at the time of a cesarean hysterectomy for placenta percreta.
Materials and Methods
Our study is a retrospective cohort study in which all patients included were evaluated and treated at the Center for Abnormal Placentation at Hackensack University Medical Center from 2003 to 2015. All procedures were performed by the same team of surgeons who routinely performed these cases, utilizing the same technique and protocol for each procedure. This is an institutional review boardapproved study, Pro00001951. Informed consent and ethics approval were obtained.
All patients with a preoperative diagnosis of placenta percreta suspected by ultrasound and magnetic resonance imaging (MRI) who underwent a cesarean hysterectomy and had a histopathological diagnosis of placenta percreta were included in this study.
All patients for whom the final histopathology was not a placenta percreta were excluded from this study. This was done to create a homogenous sample, making these results more generalizable to surgeons planning their percreta surgeries. We also excluded surgically staged procedures, in which HAL was purposefully not performed in preparation for embolization to the site of AIP and hysterectomy in a separate surgical procedure. The study group was composed of all patients who met the inclusion criteria and had an HAL during the time of cesarean hysterectomy. All HALs were performed bilaterally after the cesarean delivery and before hysterectomy for prophylaxis in anticipation of further blood loss. The control group was composed of the patients who met the inclusion criteria and did not undergo prophylactic HAL at the time of cesarean hysterectomy. A description of our multidisciplinary team and our surgical protocol has previously been described. 10 For all the patients who met the inclusion criteria for this study, hospital admission data including operative reports were assessed. The data collected included maternal demographics, abnormal placentation known risk factors, and intraoperative data, which included estimated blood loss (EBL) and number of packed red blood cells (PRBCs) received.
Data were presented as means AE standard deviation (SD), proportions, or medians with interquartile ranges. Data were analyzed using GraphPad Prism (La Jolla, CA). Demographic and clinical data were compared in the no HAL versus HAL groups using Student's t-test for normally distributed data or the Mann-Whitney U test for nonnormally distributed data. Fisher's exact test was used for proportions and categorical variables. Data are reported as significant where p was <0.05.
Results
A total of 45 patients were identified as having a preoperative diagnosis of placenta percreta by ultrasound and MRI. The positive predictive value for the histopathological diagnosis of placenta percreta was 100% in this patient cohort. The diagnosis was made by the pathologist if the villi penetrated the uterine serosa. Eight patients were excluded due to having a second staged surgical procedure for completion of the hysterectomy. The control group included 26 patients with no HAL, and the study group included 11 patients who underwent prophylactic HAL.
The groups were compared with respect to age, gravity, parity, body mass index (BMI), and other known risk factors for abnormal placentation, which we demonstrated in ►Table 1. The BMI in the study group was found to be significantly lower in comparison to the control group (►Table 1). This was the only variable found to be statistically significant.
The intraoperative data are also shown in ►Table 1. The average EBL for the study group was 1,000 mL and for the control group the average EBL was 800 mL. The average PRBC units transfused were 4.5 units for the study group and 2 units for the control group (►Table 1). Neither of these measures was found to be statistically significant.
Discussion
Our data suggest there is no benefit in the use of prophylactic HAL to decrease surgical blood loss or the amount of blood products transfused in patients having a cesarean hysterectomy performed for placenta percreta. This finding is similar to previous studies that looked at prophylactic HAL to decrease surgical blood loss during gynecologic oncology procedures. 1,4,6 The use of HAL has been done prophylactically in other pelvic surgeries due to the potential decrease in pulse pressure limiting the pelvic blood flow; however, its use has not been analyzed for patients with a preoperative diagnosis of placenta percreta. 3 AIP is a unique surgical case compared with other gynecologic procedures. The gravid uterus, especially one with abnormal placentation, causes both a physiologic and pathological increase in large-diameter collateral blood vessels with the potential to hemorrhage. This can occur despite ligation of the hypogastric artery. For this reason, placenta percreta cases are known to have a risk for massive postpartum hemorrhage and maternal morbidity and mortality. 2 HAL technique may have surgical complications, which include ureteral damage, perforation of the internal iliac vein, damage to the hypogastric nerve plexus, and buttock claudication. 5 For this reason, it is imperative to assess if the use of prophylactic HAL is beneficial in decreasing maternal morbidity and mortality associated with the massive hemorrhage during placenta percreta surgery. Fortunately, we encountered none of these adverse events.
In our study, we did not identify a statistically significant difference in the total EBL between the control and study group. A limitation of the study may be that the blood loss was estimated and not quantified. However, the findings are supported by a similar change in preoperative and postoperative hemoglobin levels between both groups (see ►Table 1).
Another important variable analyzed associated with surgical blood loss is number of PRBC units transfused. The number of PRBC units transfused in the study group, albeit higher, is not statistically significant. This finding may be explained by the small number of patients in the study group. Two patients out of the 11 total patients in the study group had a massive postpartum hemorrhage exceeding 3,000 mL. In this small cohort of study patients, it is expected to see that these events may influence the results. Another limitation of our study is its retrospective design. The decision to perform an HAL was made intraoperatively by the surgeon. The decision to use prophylactic HAL may have been biased by the patient's anatomy and ability of the procedure to be completed bilaterally, abnormal placentation complexity, extent of invasion, and BMI of the patient. Patients who have severe abnormal placentation in which the surgeon believed that the completion of the surgery would be safer through a second staged procedure did not have an HAL so that embolization could be performed. Hence, the severity of the abnormal placentation may not be a main factor influencing the decision toward HAL, and the most complex cases did not receive HAL. We also noted that the BMI was higher in the control group. Having a higher BMI may present a surgical challenge when performing an HAL and this finding could also be a potential selection bias. Finally, the sample size for both groups was small, limiting the ability to draw large conclusions.
We have previously identified and published the main factors that helped to decrease surgical blood loss in these cases, which included an intraoperative multidisciplinary approach and the learning curve of the surgeon. 10 The incidence of AIP is low and the ideal study to answer the question if prophylactic HAL is of value at the time of placenta percreta surgery should be a larger prospective randomized trial.
Some research has been done to look at perioperative hypogastric artery balloon occlusion during gynecologic oncology procedures, as well as surgery for abnormal placentation. Although promising, further research is needed to see if this is a beneficial alternative. 2,5,10,[12][13][14] This modality overall has minimal procedure-related risks, but there have been reports of buttock claudication and lower extremity weakness. 2 This may potentially be a better option than prophylactic HAL since the hypogastric artery is not surgically occluded and embolization of the pelvic vessels may be performed if a decision of a staged surgery is made. | 2018-07-07T01:45:23.485Z | 2018-04-01T00:00:00.000 | {
"year": 2018,
"sha1": "c8ff089e1fb3f82357f020f842dd1274ad63a8d4",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0038-1666793.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8ff089e1fb3f82357f020f842dd1274ad63a8d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23939538 | pes2o/s2orc | v3-fos-license | Observation of stable HO$_4$$^{+}$ and DO$_4$$^{+}$ ions from ion-molecule reactions in helium nanodroplets
Ion-molecule reactions between clusters of H$_2$/D$_2$ and O$_2$ in liquid helium nanodroplets were initiated by electron-induced ionization (at 70 eV). Reaction products were detected by mass spectrometry and can be explained by a primary reaction channel involving proton transfer from H$_3$$^{+}$ or H$_3$$^{+}$(H$_2$)$_n$ clusters and their deuterated equivalents. Very little HO$_2$$^{+}$ is seen from the reaction of H$_3$$^{+}$ with O$_2$, which is attributed to an efficient secondary reaction between HO$_2$$^{+}$ and H$_2$. On the other hand HO$_4$$^{+}$ is the most abundant product from the reaction of H$_3$$^{+}$ with oxygen dimer, (O$_2$)$_2$. The experimental data suggest that HO$_4$$^{+}$ is a particularly stable ion and this is consistent with recent theoretical studies of this ion.
3
The lack of a permanent electric dipole moment makes the detection of O2 in astrophysical settings a significant challenge. It is only very recently that direct observations of O2 have been made in the interstellar medium (ISM) through the detection of weak rotational lines driven by magnetic dipole transitions in the millimeter region of the spectrum. 1-5 Nevertheless, there are significant differences between predicted and observed abundances of O2 in the ISM 6 and therefore an alternative means of quantifying O2 would be valuable.
Many years ago it was suggested that detection of protonated molecules might provide an indirect method for quantifying homonuclear diatomics such as N2, O2 and C2. 7 When protonated these molecules are expected to possess a substantial electric dipole moment and should therefore be easy to detect by rotational or vibrational spectroscopy if they are reasonably abundant. The most likely source of protons in the ISM is H3 + and laboratory studies have shown that this ion protonates N2 in a fast exothermic reaction. 8 The resulting N2H + ion is a tracer molecule for N2 and has indeed been used to determine the abundance of N2 in dense interstellar clouds. 7 The possibility therefore exists for using O2H + in a similar manner. Unfortunately, O2 possesses a lower proton affinity than N2 and a detailed analysis has shown that the reaction H3 + + O2 → HO2 + + H2 (1) is slightly endothermic. 9 Although endothermic by only 50 9 cm -1 at 0 K, this is enough to provide a strong obstacle to this reaction at the low temperatures found in many astronomical environments and means that HO2 + is of little or no value as a tracer for oxygen in the ISM.
Very recently, it has been suggested that HO4 + might be used as an alternative to HO2 + as a tracer molecule for O2. 10,11 The basic assumption here is of reaction of the (O2)2 dimer, instead of O2, with H3 + , although as discussed later it is questionable whether the dimer is 4 significant in the ISM. The proton affinity of the oxygen dimer, (O2)2, has not been measured but one would expect a higher value than for monomeric O2 because the proton can be shared between two molecules. This presumption is confirmed by ab initio calculations, which predict a proton affinity for the dimer which is 0.84 eV higher than that of O2. 11 This will now make the O4 equivalent of reaction (1) substantially exothermic and, given that exothermic ion-molecule reactions usually have no activation energy, proton transfer from H3 + is likely to approach the diffusion limited rate. Moreover, HO4 + is an ion which has received little prior study by theory or experiment. The first and only previous experimental observation of H(O2)n + ions was derived from mass spectrometric work in the gas phase using a high pressure ion source. 12 On the basis of observed abundances of ions as a function of temperature an O2-O2H + binding energy of 86.1 kJ mol -1 was deduced. This value is more than an order of magnitude greater than the binding energy of the neutral (O2)2 dimer 13 and shows that the proton induces quite strong binding between the two O2 molecules. Two recent and related ab initio studies have suggested that HO4 + is a rather interesting molecule with a Zundel-like structure reminiscent of the protonated water dimer, H5O2 + . 10,11 According to these calculations the most stable structure is a trans isomer possessing C2h point group symmetry.
In this study we have explored the ion-molecule chemistry between hydrogen and oxygen in helium nanodroplets and we report specifically on the observation of ions of the type HmOx + , where x is even and includes HO4 + . Helium droplets provide a very low temperature (0.37 K) 14 and gas-like environment in which to initiate ion-molecule reactions.
We have performed experiments with both H2 and D2, where the latter makes it easier to rule out contributions from ions such as H2O + . Despite this potential complication for H2 we nevertheless see similar results for H2 and D2. However, for the sake of simplicity we present data only from the D2 experiments here. Oxygen and deuterium molecules were added 5 sequentially to helium nanodroplets having a mean size in the region of 3 10 5 helium atoms.
At the partial pressures employed we estimate an average pick-up of 13 O2 and 14 D2 molecules per droplet, although a broad distribution of mixed cluster sizes is expected on account of the stochastic nature of the pick-up process. The droplets were then subjected to bombardment by electrons at energies of 70 eV and any resulting ions were detected by a high resolution reflectron time-of-flight mass spectrometer. Full details of the apparatus can be found elsewhere. 15 The ionization of pure hydrogen clusters in helium nanodroplets has been studied previously by Jaksch et al. 16 As well as seeing abundant Hn + clusters with odd n, clusters with even n were also detected. The preferential formation of odd n ions is a consequence of the facile reaction of H2 + with H2 to give H3 + + H. The resulting H3 + can then combine with one or more H2 molecules to give Hn + ions with odd n and these are the dominant species observed. We take these ions as the starting point for the discussion here and consider what happens when oxygen is also added to the helium droplets.
To demonstrate the quality of the mass spectrometric data, Figure 1 shows part of the mass spectrum recorded for a D2/O2 mixture in helium nanodroplets. Figure 2 shows the yields of DmOx + ions as a function of m for x = 2, 4, 6 and 12. In all four cases we see an oddeven intensity alternation in m, with the odd m ions having a greater abundance than those with even m. This is consistent with the known findings for pure H2 and D2 in helium droplets and suggests that D3 + and its clusters are generated by the route indicated in the previous paragraph. Although molecular oxygen has a lower ionization energy than hydrogen, we expect the hydrogen to be ionized initially because it is added second to the helium droplets and therefore will be the first to come into contact with He + or He*. Further evidence in favour of initial ionization of hydrogen comes from the known ionization behavior of molecular oxygen clusters, (O2)n. Electron ionization of these clusters preferentially produces 6 ions with even n. 17 The mass spectrum in Figure 1 illustrates the predominance of even oxygen cluster ions in our observations but these even oxygen ions are expected to be unreactive with H2 and D2. 18 Consequently, our observations strongly suggest that the ionmolecule chemistry is initiated by reactions of cationic hydrogen and deuterium clusters.
We first consider detected ions containing O4. Here the most abundant ion is DO4 + . At a slightly lower abundance is D2O4 + , but thereafter the ion yields drop significantly and the plot relaxes into a simple odd-even oscillation pattern. The presence of significant excesses of only DO4 + and D2O4 + ions allows us to rule out simple clustering between Dm + and (O2)n as the source of their high abundance, or 'magic' character. Instead we attribute their high abundance to ion-molecule reactions which deliver specific ionic products with significant stabilities (see below). Clearly one option is deuteron transfer by reaction of D3 + with the oxygen dimer, (O2)2, although larger oxygen cluster may also contribute to the DO4 + signal.
For ions containing O2 the only ion with magic character is D2O2 + , which shows a very prominent excess abundance. For O6 the most strongly magic ion is clearly D2O6 + , although DO6 + also shows significant abundance. For O12 we see the greatest abundance for D2O12 + and D3O12 + and this is typical for ions derived from even larger (O2)n clusters (not shown here).
The marked difference between the ion yields for O2 and (O2)2 is potentially revealing about the ion-molecule chemistry taking place in helium droplets. We assume that the formation of DmOx + ions can be initiated by reactions of D3 + or their cluster equivalents, D3 + (D2)p, although for simplicity we will restrict discussion to the former. Presumably the production of D3 + is initiated by collision of the droplet with a 70 eV electron, which can generate either metastable electronically excited helium (He*) or He + in the droplet. These reagents then ionize D2 either by Penning ionization (He*) or charge transfer (He + ).
Formation of the lowest metastable state of atomic helium requires 20.6 eV of energy and the ionization threshold lies at 24.6 eV. 19 Since the adiabatic ionization energy of D2 is at 15.4 7 eV, 20 ionization of D2 via either route will deliver several eV of excess energy into the helium droplet. In principle this excess energy will appear as heat and is far in excess of that necessary to initiate the deuterated equivalent of reaction (1). In view of this the absence of any strongly abundant DO2 + product can be explained in two ways: (a) the ionized cluster aggregate is quickly cooled by the surrounding helium after D3 + is made and therefore reaction with O2 is prevented by the small but non-zero endothermicity, or (b) a secondary reaction takes place that efficiently removes the DO2 + . Explanation (a) is unlikely, since it has been shown in many previous studies of ion-molecule reactions in helium droplets that the products are often consistent with hot reaction conditions, despite the very low temperature and high thermal conductivity of superfluid helium. It seems that relatively slow reactions resulting from significant structural rearrangement can be quenched, 21 whereas simple bond fissions are often too fast for even superfluid helium to provide any quenching. [22][23][24][25] For explanation (b) we presume that the reaction DO2 + + D2 → D2O2 + + D (2) takes place. Using the enthalpies of formation at 0 K of gaseous HO2 + (1109 kJ mol -1 ) 26 , H (216 kJ mol -1 ) 27 and H2O2 (-130 kJ mol -1 ) 26 together with the adiabatic ionization energy of H2O2 (1021 kJ mol -1 ), 28 and assuming that deuteration has a negligible effect on these thermodynamic quantities, we can calculate the enthalpy change for reaction (2). We find that the reaction is essentially thermoneutral, with an exothermicity of only 2 kJ mol -1 and a margin of error of comparable size. Gas phase kinetic studies have shown that this reaction is close to the collision-limited rate 29 and so it is certainly plausible that reaction (2) could efficiently consume any DO2 + formed. 8 The reaction between (O2)2 and D3 + has a different end-product distribution, with the dominant ion being DO4 + . In this case there is no doubt that proton transfer from D3 + to (O2)2 should readily occur because the reaction is exothermic and this is consistent with the observed ion abundance. However, the experimental data also indicate that DO4 + is much less willing than DO2 + to undergo a secondary reaction with D2. In Figure 3 we further illustrate the predominance of DO4 + production by showing a plot of the ratio of DOx + to the D2Ox + and D3Ox + ion signals for ions with even x in the range 2 x 12. We suspect that the D3Ox + ions, which become the most abundant ions for large (O2)n clusters, are derived from the secondary association reaction between DOx + and D2. Figure 3 shows that DO4 + is by far the most resistant of the DOx + ions to secondary reactions, suggesting an enhanced stability for DO4 + .
Although we have provided experimental evidence which supports a theoretical prediction that HO4 + can form from reactions at low temperature, 10 and that this ion is stable, the possibility of using HO4 + as a tracer molecule for O2 in the ISM is questionable. The principal obstacle here is the formation of the (O2)2 dimer,. 13 It is not clear where the threebody collisions necessary to form this dimer could come from in the highly dilute conditions of the ISM. However, we note that there are other sources of oxygen in astronomical environments from which oxygen clusters might be formed. For example, Bieler et al. have recently reported a surprisingly (several per cent) high content of molecular oxygen in the nucleus of a comet. 30 If oxygen is trapped in any significant quantities on cold grains and within water ice then release of dimers might be possible. In order to facilitate a possible search for HO4 + , the infrared spectrum of this ion was recently predicted from ab initio calculations. 11 There would certainly be value in carrying out further laboratory studies to characterize HO4 + , and in particular to confirm its spectroscopic signature. | 2017-10-24T16:47:27.027Z | 2016-05-11T00:00:00.000 | {
"year": 2016,
"sha1": "f56b8c3d1f1dcf8b60edc061b5db7dabbceaaa78",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1805.00883",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f56b8c3d1f1dcf8b60edc061b5db7dabbceaaa78",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics",
"Medicine"
]
} |
118903623 | pes2o/s2orc | v3-fos-license | Dynamics and Formation of the Near-Resonant K2-24 System: Insights from Transit-Timing Variations and Radial Velocities
While planets between the size of Uranus and Saturn are absent within the Solar System, the star K2-24 hosts two such planets, K2-24b and c, with radii equal to $5.4~R_E$ and $7.5~R_E$, respectively. The two planets have orbital periods of 20.9 days and 42.4 days, residing only 1% outside the nominal 2:1 mean-motion resonance. In this work, we present results from a coordinated observing campaign to measure planet masses and eccentricities that combines radial velocity (RV) measurements from Keck/HIRES and transit-timing measurements from K2 and Spitzer. K2-24b and c have low, but non-zero, eccentricities of $e_1 \sim e_2 \sim 0.08$. The low observed eccentricities provide clues regarding the formation and dynamical evolution of K2-24b and K2-24c, suggesting that they could be the result of stochastic gravitational interactions with a turbulent protoplanetary disk, among other mechanisms. K2-24b and c are $19\pm2~M_E$ and $15\pm2~M_E$, respectively; K2-24c is 20% less massive than K2-24b, despite being 40% larger. Their large sizes and low masses imply large envelope fractions, which we estimate at $26^{+3}_{-3}\%$ and $52^{+5}_{-3}\%$. In particular, K2-24c's large envelope presents an intriguing challenge to the standard model of core nucleated accretion that predicts the onset of runaway accretion when $f_{env} \approx 50\%$.
INTRODUCTION
The vast majority of our current understanding about the masses and orbits of extrasolar planets is based on two techniques: radial velocities (RVs) and transittiming variations (TTVs). Typically, RVs constrain M p sin i, the planet mass modulo an unknown inclination angle. For high signal-to-noise datasets, deviations from sinusoidal RV curves can reveal orbital eccentricities, and for a few exceptional systems, non-Keplerian orbital dynamics have been observed (see, e.g., GJ876; Rivera et al. 2010;Nelson et al. 2016;Millholland et al. 2018). For transiting systems, the sin i ambiguity is negligible and RVs constrain planet mass and bulk composition directly. Such measurements have been made for planets as small as Earth (see, e.g., Kepler-78b; Howard et al. 2013;Pepe et al. 2013). Accordingly, RV mass measurements of transiting planets have helped reveal important trends in planetary bulk compositions, such as the onset of low density envelopes above R p ≈ 1.5 R ⊕ Weiss & Marcy 2014;Rogers 2015).
While the early theoretical work on TTVs was developed a decade ago (Agol et al. 2005;Holman & Murray 2005), TTVs were not observed until NASA's Kepler mission provided high precision, long baseline photometry (Holman et al. 2010). The TTV technique has achieved some remarkable results such as precision mass measurements of small planets in the Kepler-36 system (Carter et al. 2012), the discovery of a Laplace-like res-onance in the Kepler-223 system (Mills et al. 2016), and mass measurements of non-transiting planets in the Kepler-88 system (Nesvorný et al. 2013).
While the RV and TTV techniques have been applied to many individual systems, only a handful of systems have benefited from joint analyses. Systems with TTVs have almost exclusively been discovered during the prime Kepler mission (Borucki et al. 2010;2009-2013, which surveyed only 1/400 of the sky. While ≈40% of Kepler planets are in multi-planet systems (Rowe et al. 2014), planets typically need to be near mean-motion resonance to produce detectable TTVs. Holczer et al. (2016) reported TTVs for ≈260 Kepler planets, but most are too faint for precision RV measurements with current-generation instruments, which typically require host stars with V 13 mag. As a result, fewer than 10 systems have mass constraints from both the TTV and the RV techniques (Mills & Mazeh 2017).
K2-24 has two known transiting planets, which were observed by Kepler during K2 operations (Howell et al. 2014). Petigura et al. (2016), P16 hereafter, reported mass measurements based Keck/HIRES RVs spanning one observing season. While P16 predicted TTV amplitudes of several hours based on their proximity to the 2:1 mean-motion resonance, the 80 day K2 baseline was too short to observe deviations from linear ephemerides.
Here, we present an extended RV time series and additional transit-timing measurements from Spitzer (Section 2). Our extended RV dataset enables tighter constraints on the planet masses and reveals a third candidate planet in the system (Section 3). In Section 4, we perform a joint TTV/RV analysis, which provides improved constraints on planet masses, eccentricities, and core/envelope fractions (Section 5). In Section 6, we interpret the observed eccentricities in the context of system dynamics and formation scenarios, and we conclude in Section 7. K2-24 was observed during campaign 2 of the K2 mission from 2014-08-23 to 2014-10-13. To extract transit times, we used the photometry published in P16 and fit individual transits. We multiplied our transit model by a third-order polynomial to account for the long timescale variability seen in the photometry. For each transit, we first adopted the best-fit transit parameters from P16, which assumed linear ephemerides. We then fit the transit allowing the time of conjunction T c and the polynomial coefficients to vary. Figure 1 shows the K2 photometry along with the best-fit transit models.
Care is required when assigning reasonable uncertainties to the measured transit times. K2 photometry contains correlated, non-Gaussian systematics that are mostly, but not entirely, removed during detrending. 1 The derived transit times depend most sensitively on photometry collected during ingress or egress, which span one or two 30-minute long cadence measurements. Therefore, outliers have a significant effect on the derived transit times if they occur during ingress or egress. As an example, Benneke et al. (2017) found that a single outlier that occurred during one of the transits of K2-18b resulted in a ≈ 7σ error in the ephemeris reported in Montet et al. (2015).
We estimated the K2 transit-timing errors errors via bootstrap resampling. For each transit, we created 1000 realizations by randomly shuffling the residuals to the best-fit light curve and adding the shuffled residuals to the best-fit model. We then fit these bootstrap realizations using the methods described above and derived T c for each sample. We adopted the standard deviation of the resampled T c as the uncertainty on T c . The bootstrapped uncertainties were roughly twice as large as the formal uncertainties, which assumed white and Gaussian distributed noise. Our measured transit times are listed in Table 1. P16 used analytic approximations developed by Lithwick et al. (2012) to predict the expected TTVs of K2-24b and c. These approximations predicted anticorrelated sinusoidal TTVs having a "super-period" of roughly 4 years. Given the proximity of K2-24b and c to the 2:1 mean-motion resonance, P16 predicted large TTV amplitudes of several hours. However, the limited 80-day K2 baseline sampled only 5% of the TTV superperiod, too small a fraction for TTVs to accumulate to detectable levels.
To cover a significant fraction of the expected TTV super-period, we used Spitzer to observe two additional transits of K2-24b on 2015-10-27 and 2016-06-13 and two additional transits of K2-24c on 2015-11-12 and 2016-06-10. 2 The combined K2 /Spitzer dataset includes transit observations at three well-separated epochs, which is sufficient to constrain the mean transit period as well as the amplitude and phase of the approximately sinusoidal TTV signal.
When planning our 2015 Spitzer observations, we centered our observing sequence using the best-fit transit times of K2-24b and c based on the K2 data alone. To account for the substantial uncertainty due to TTVs, we observed K2-24b and c for 14 hours each. As shown in Figure 2, we observed a complete transit of K2-24b and a partial transit of K2-24c. We centered our 2016 Spitzer observations the best-fit linear ephemeris that incorporated the K2 and 2015 Spitzer observations, and we observed K2-24b and c for 12 and 16 hours, respectively. Again, we observed a complete transit of K2-24b and a partial transit of K2-24c. In hindsight, after collecting the 2015 Spitzer transits we should have performed a preliminary TTV model using plausible masses and eccentricities in order to better center our 2016 Spitzer observations.
Following common practice, we included a 30-minute pre-observation sequence to mitigate the initial instrument drift in the science observations resulting from telescope temperature changes after slewing from the preceding target (Grillmair et al. 2012). To enhance the accuracy in positioning K2-24 on the IRAC detector, observations were taken in peak-up mode using the Pointing Calibration and Reference Sensor (PCRS) as a positional reference. We chose Spitzer/IRAC Channel 2 (4.5 µm) over Channel 1 (3.6 µm) because the instrumental systematics due to intra-pixel sensitivity variations are smaller (Ingalls et al. 2012). Our exposure times were set to 2 seconds to optimize the integration efficiency while remaining in the linear regime of the IRAC detector.
Following Benneke et al. (2017), we extracted multiple photometric light curves for each Spitzer dataset using a wide range of fixed and variable aperture sizes. The purpose of extracting and comparing multiple photometric light curves is to choose the aperture that provides the lowest residual scatter and red noise. We normalized the light curve by the median value and binned the data to a 60-second cadence. We found that this moderate binning did not affect the information content of the photometry, but provided more signal per data point allowing an improved correction of the systematics.
Raw aperture photometry from Spitzer contains large systematics due to the motion of the target star across the IRAC detector with percent-level intra-pixel sensitivity variations. To extract reliable transit times, we adopted the standard practice of modeling the Spitzer systematics and transit profile simultaneously. We used the pixel-level decorrelation (PLD) algorithm, first proposed by Deming et al. (2015), with modifications described in Benneke et al. (2017). In our model, the following transit parameters were allowed to vary: transit midpoint T c , planet-to-star radius ratio R p /R , and impact parameter b. In addition, we parameterized the systematics in the Spitzer model using nine PLD coefficients, a white noise component, and two coefficients describing a polynomial trend of flux with time. Ideally, we would have allowed the transit duration T 14 to vary in our fits. However, because our Spitzer transit observations of K2-24c missed ingress, they could not meaningfully constrain T 14 . For both K2-24b and c, we fixed T 14 to the value measured by P16 from K2 photometry. We explored the likelihood surface using Markov Chain Monte Carlo (MCMC). The maximum likelihood fits to the Spitzer photometry are shown in Figure 2, and the associated transit times are listed in Table 1.
Keck/HIRES Spectroscopy
We obtained 63 spectra of K2-24 using the High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994) on the 10m Keck-I telescope between 2015-06-24 and 2017-10-03. We collected spectra through an iodine cell mounted directly in front of the spectrometer slit. The iodine cell imprints a dense forest of absorption lines which serve as a wavelength reference. We used an exposure meter to achieve a consistent signal-to-noise level of 110 per reduced pixel on blaze near 550 nm. We also obtained a "template" spectrum without iodine. The first 32 of these spectroscopic observations are described in P16.
RVs were determined using standard procedures of the California Planet Search (Howard et al. 2010) including forward modeling of the stellar and iodine spectra convolved with the instrumental response (Marcy & Butler 1992;Valenti et al. 1995). The measurement uncertainty of each RV point is derived from the uncertainty on the mean RV of the ∼700 spectral chunks used in the RV pipeline and ranges from 1.5 to 2.1 m s −1 . Table 2 lists the RVs and uncertainties. We also provide the Mount Wilson S HK activity index (Vaughan et al. 1978), which is measured to 1% precision. Table 2 is published in its entirety in machine-readable format. A portion is shown here for guidance regarding its form and content.
RV ANALYSIS
Here we present our Keplerian analysis of the K2-24 RVs. The RVs exhibited ≈10 m s −1 peak-to-trough variability that was not associated with the known ephemerides of K2-24b or c, which motivated searches for additional non-transiting planets. Figure 3 shows a Keplerian search using a modified version of the Two-Dimensional Keplerian Lomb-Scargle (2DKLS) periodogram (O'Toole et al. 2009;Howard & Fulton 2016). When we measured the change in χ 2 (periodogram power) between a three-planet fit and a two-planet fit, we found a peak at P = 420 days, with an empirical false alarm probability (eFAP) of 0.8%. While the eFAP was formally below the standard criterion of eFAP < 1% for Doppler confirmation, a complete confirmation of this candidate would have required additional vetting such as an assessment of RV/activity correlations, which is beyond the scope of this work. We included this candidate our subsequent orbit fitting because it improved the quality of the RV fits to K2-24b and c.
We analyzed the RV timeseries using the publicly available RV modeling package RadVel . RadVel facilitates maximum a posteriori (MAP) model fitting and parameter estimation via MCMC. A Keplerian RV signal may be described by the orbital period P , time of inferior conjunction T c , eccentricity e, longitude of periastron ω and Doppler semi-amplitude K, i.e. {P, T c , e, ω, K}. In our fitting and MCMC analysis, we adopted the following parameterization: of e and ω enforces a uniform prior on eccentricity and prevents a Lucy-Sweeney bias toward non-zero eccentricities (Eastman et al. 2013;). Our preferred model consists of three Keplerians with eccentricities fixed to zero. We fixed the P and T c of K2-24b and c to the P16 values. To aid convergence, we imposed a loose Gaussian prior on ln P d of N (ln(440), 1). Figure 4 shows the MAP model. Models with more free parameters will naturally lead to higher likelihoods at the expense of additional model complexity. To compare the quality of models of different complexity we used the Bayesian Information Criterion (BIC; Schwarz 1978). Models with smaller BIC are preferred. For the circular, three planet model, BIC = 366.0. Models where candidate d is allowed to have a non-zero eccentricity were not favored by the BIC = 381.2. Models with only two planets on circular orbits were also disfavored by the BIC = 378.6.
To derive uncertainties on the model parameters, we used RadVel to sample the posterior probability via MCMC. RadVel automatically checks for convergence using the Gelman-Rubin statistic (Gelman & Rubin 1992). For K2-24b and c, our RV only analysis yields masses of 16.8 +3.2 −3.1 M ⊕ and 19.0 +3.9 −3.8 M ⊕ , respectively. We compare these masses to those determined by the joint TTV/RV analysis in Section 5. If candidate d is a bonafide planet, it has a mass of 54 +14 −14 M ⊕ and orbits at a distance of 1.15 +0.06 −0.05 AU. However, we do not treat candidate d in our subsequent analysis or discussion, because we have not performed a thorough confirmation and because it is decoupled dynamically from the inner two planets.
Even though the model with all three eccentricities set to zero was preferred in a BIC sense, we performed an analogous MCMC exploration with eccentric orbits to asses the extent to which the RVs alone constrain eccentricities. The RV dataset only ruled out high eccentricity orbits, with upper limits of e 1 < 0.39 and e 2 < 0.34 at 90% confidence.
JOINT TTV/RV ANALYSIS
As expected, the Spitzer observations revealed TTVs of several hours. In this section, we present an analysis of the transit times from K2 and Spitzer , folding in the constraints from RVs described in the previous section. Lithwick et al. (2012), L12 hereafter, developed an analytical model for the TTVs that occur when two planets are near first order mean-motion resonance (i.e., P 2 :P 1 ≈ j:j − 1, where j = 2, 3, . . .). For a complete exposition of this formalism, see L12. Here, we provide a brief summary, in order to illustrate the type of constraints that the TTVs provide.
For planets near, but not in, first order mean-motion resonance L12 showed that their transit times, T c,i , are described by a sinusoidal perturbation about a mean period, P : T c,i = T c,0 + P i + Re(V ) sin λ j + Im(V ) cos λ j . (1) Here, i is an integer index that labels the transit epoch, T c,0 is the time of the first transit (i = 0), and V is the complex TTV amplitude. The longitude of conjunctions λ j , is an angle that advances linearly with time and is given by The time it takes λ j to advance by 2π is known as the super-period P j , which is given by For the K2-24bc pair, ∆ = 0.013 and P j = 1595 days. The complex TTV amplitudes are given by respectively, where µ is the planet-star mass ratio and f and g are order unity scalar coefficients which depend j and ∆ and are given in L12. For the K2-24bc, f = −1.16 and g = 0.38. Z * free is the complex conjugate of the following linear combination of the planets complex eccentricities: where z = e cos + ie sin .
We incorporated Gaussian priors of µ 1 = 48 ± 9 ppm and µ 2 = 53 ± 11 ppm based on our RV analysis in Section 3. We confirmed that Gaussian priors were appropriate by checking that the RV-only constraints on µ 1 and µ 2 are well-described by normal distributions, with negligible covariance (Pearson r = 0.09).
We explored the range of plausible planet masses and orbits given the measured transit times using the Affine-Invariant MCMC sampler of Goodman & Weare (2010). We found that employing parallel tempering dramatically reduced the number of iterations needed for convergence (Earl & Deem 2005). We let 16 walkers evolve for 50,000 iterations at five different temperatures, discarding the first 10,000 iterations as burn in. We verified that the chains were well-mixed by computing the autocorrelation length scale τ for each chain at each temperature and confirming that τ is much smaller than the number of iterations.
In Figure 6, we display the measured and modeled transit times with respect to an adopted reference linear ephemeris. The models sampled from the posterior are a good fit to the observed transit times and gradually diverge from one another after the last Spitzer measurement. To facilitate future observations of K2-24b and c, we include the predicted transit times and uncertainties through 2025 in the Appendix. Figure 5 shows the two-parameter joint posterior distributions. Note the strong covariance between µ 1 and µ 2 . As expected, the TTVs enabled a tight constraint on the planet mass ratio of M p,2 /M p,1 = 0.81 +0.03 −0.02 . As a point of comparison, the RV-only fits constrained the mass ratio to M p,2 /M p,1 = 1.10 +0.34 −0.26 , which is consistent at the 1σ level.
Note also the strong covariance between µ-Z free . The priors on µ 1 and µ 2 help to break the µ-Z free degeneracy, and we detect significant non-zero real imaginary components of Z free . While Z free only constrains linear combinations of the eccentricities, we could infer that (1) at least one of the planets has a non-zero eccentricity and (2) the eccentricities are likely |Z free | ∼ 0.08.
Recall that the RV analysis in Section 3 only provided upper limits of e 1 < 0.39 and e 2 < 0.34. Because the TTVs constrain only linear combinations of the e 1 and e 2 , we cannot rule out high eccentricity solutions. However, as we discuss in Section 5, these solutions are unlikely given the low eccentricities typically observed in compact Kepler multi-planet systems.
TTV/RV SYNERGIES
In the previous section, we presented a joint TTV/RV analysis of the K2-24 system. Here, we provide an updated assessment of planet properties based on our combined TTV/RV analysis in Section 4 and compare them to those presented in P16, which only included RVs. Orbital eccentricities are substantially improved over P16, and we also improve planet mass precision and constraints on core/envelope structures.
Planet Mass
P16 measured masses of K2-24b and c based on one season of RV measurements and found M p,1 = 21.0 ± 5.4 M ⊕ and M p,2 = 27.0 ± 6.9 M ⊕ , respectively. Our analysis here yields masses of M p,1 = 19.0 +2.2 −2.1 M ⊕ and M p,2 = 15.4 +1.9 −1.8 M ⊕ , respectively. The mass measurements from the two papers are consistent to within 2σ, but our new masses have higher precision. The improved mass constraints are due to two factors: (1) more RV measurements with better phase coverage and (2) the strong constraint on M p,2 /M p,1 from the TTVs. Our TTV/RV analysis demonstrates that K2-24c is 20% less massive than K2-24b, despite being 40% larger. −6 % and f env,c = 57 +9 −10 %. We repeated this analysis using the updated planet masses and radii and found f env,b = 26 +3
Core/Envelope Structure
−3 % and f env,c = 52 +5 −3 %. Our new values are consistent with Petigura et al. (2017), but with smaller formal uncertainties. This stems mainly from the improved stellar radius (see Table 3) and from the fact that, in the sub-Saturn size range, radius alone is a good proxy for envelope fraction (Lopez & Fortney 2014).
One challenge in explaining the formation of K2-24c is to determine how the planet acquired such a large envelope, while avoiding runaway accretion. As a point of reference, in the canonical core accretion models of Pollack et al. (1996), Saturn forms first as a ≈12 M ⊕ core that accretes H/He from the protoplanetary disk. At the crossover mass (i.e. when M env ≈ M core or when f env ≈ 50%), runaway accretion begins and Saturn quickly grows to its final mass.
One way to resolve the f env ≈ 50% problem is to imagine that the disk dissipated right as K2-24c approached the runaway phase. While impossible to rule out, this scenario requires special timing of planet formation and is thus a priori unlikely. More likely, the inferred structure of K2-24c points to an incomplete understanding of core-nucleated accretion and motivates further theoretical explanations of planet conglomeration in the sub-Saturn mass regime.
Eccentricity
By combining TTVs and RVs, we achieved significantly tighter constraints on eccentricity than those from either technique alone. The full RV dataset only provided weak upper limits on the planet eccentricities of e 1 < 0.39 and e 2 < 0.34. The TTVs, in contrast, constrained µ 1 Z free and µ 2 Z free (Equations 7-8). Because RVs constrain planet mass directly, they break some of the µ-Z free degeneracy inherent to a TTV-only analysis.
Our TTV/RV model provided the following constraints on Re(Z free ) and Im(Z free ): Re(Z free ) = f e 1 cos 1 + ge 2 cos 2 = 0.038 +0.004 −0.003 Im(Z free ) = f e 1 sin 1 + ge 2 sin 2 = 0.070 +0.008 −0.007 . These constraints amount to lines in the e 1 cos 1 -e 2 cos 2 and e 1 sin 1 -e 2 sin 2 planes with slopes determined by f and g. Because TTVs only constrain linear combinations of e 1 and e 2 there are still significant e 1 -e 2 degeneracies, even after folding in the RV constraints. Figure 7 shows the large range of e 1 and e 2 consistent with our TTV/RV analysis. Note, however, that e 1 and e 2 cannot both be zero. Our analysis does not formally exclude high eccentricity solutions. These solutions, however, are disfavored for stability reasons and because TTV-active systems are observed to have eccentricities of a few percent. Various groups have characterized the distribution of eccentricities among large numbers of Kepler multiplanet systems, modeling eccentricities as a Rayleigh distribution parameterized by a mean eccentricity e . Studies of TTV-active multi-planet systems have found e = 0.01-0.03 (Wu & Lithwick 2013;Hadden & Lith-wick 2014). Analyses of transit durations in multi-planet systems where the host stars have well-measured densities have found e = 0.05-0.07 (Van Eylen & Albrecht 2015; Xie et al. 2016). That TTV-active systems exhibit lower e than the more general class of multi-planet systems suggests a distinct formation pathway.
Under the assumption that K2-24 is drawn from the population of TTV-active Kepler multi-planet systems, we applied a Rayleigh prior on eccentricity e = 0.03. Figure 7 shows the joint distribution of e 1 and e 2 including this prior. The eccentricity of K2-24c assumes the prior distribution. Solutions with non-zero e 1 are favored because e 1 ∼ 0 requires e 2 ∼ 0.2, which is strongly disfavored by our prior. For the remainder of the paper, we adopt e 1 = 0.06 +0.01 −0.01 and e 2 < 0.07 (90% conf.). We discuss the dynamical origins of these eccentricities in Because the TTVs only constrain linear combinations of the eccentricities, a large range of e1 and e2 is consistent with the data. Note, however, e1 and e2 may cannot both be zero. The red contours incorporate a Rayleigh prior on eccentricities with e = 0.03, which is shown as gray dotted lines in the 1D distributions. This prior is motivated in Section 5. Under this prior, solutions where e1 ∼ 0.0 are disfavored because they imply that e2 ∼ 0.2. The 'x' marks (e1, e2) = (0.02, 0.03), which is expected if the system had experienced divergent migration through resonance (Section 6.2).
DYNAMICS
Here, we explore the dynamical origins of the K2-24 system architecture. In Section 6.1, we discuss how the system evolves on secular timescales. In Section 6.2, we consider several formation scenarios and assess whether they are consistent with the observed eccentricities.
Secular Evolution
While K2-24b and c are near the 2:1 mean-motion resonance, they cannot be locked in resonance. Resonant locking generally requires that e ∆ 2 /µ, and for both planets ∆ 2 /µ ∼ 3. Therefore, the long-term dynamical evolution of K2-24b and c is dominated by secular interactions. The coplanar secular evolution of the planets' eccentricities may be visualized as trajectories in the e-∆ plane, where ∆ is the angle between the apses. 3 3 Strictly speaking, the orbital angle relevant to the secular evolution is the longitude of perihelion rather than the argument We simulated plausible long term evolutions of K2-24b and c by taking 1000 draws from the posterior samples from Section 5 and integrating them for 10,000 years with the Mercury N -body integrator (Chambers 1999). These integrations revealed several qualitative apsidal outcomes: circulation, libration about ∆ = 0 • (aligned apses), and libration about ∆ = 180 • (antialigned apses). Indeed, the observational data is not yet precise enough to conclusively determine which of these regimes the systems actually occupies. We show representative examples of circulation and libration in Figure 8. Inspection of these solutions shows that while at present time e 1 is likely larger than e 2 , at other phases of the secular cycle e 2 may be larger than e 1 .
Origin of Eccentricities
Here, we consider several plausible mechanisms for exciting eccentricities, and assess whether they are consistent with the observed eccentricities of K2-24b and c.
Self-Excitation
We first considered the possibility that the eccentricities are self-excited, since gravitational interactions between two planets on initially circular orbits will pump eccentricities up to a certain value. To simulate this we performed an integration with Mercury using representative planet masses and setting the initial eccentricity to zero. As expected, the planets gained some eccentricity, but never exceeded e = 0.005. Eccentricities smaller than 0.005 are excluded by the data (see Figure 7), implying that some other process is required to explain the observed eccentricities.
Divergent Migration Through Resonance
A well-known mechanism to excite eccentricity is divergent migration through mean-motion resonance. In this scenario planets begin interior to resonance with zero eccentricity. As shown in Batygin & Morbidelli (2013), migration through resonance corresponds with a separatrix crossing, after which the planets emerge with non-zero eccentricities and anti-aligned apses (∆ = 180 • ). As shown in Batygin (2015), the exited relic eccentricities are set by the planet-star mass ratios µ and initial eccentricities, which are usually assumed to be small.
In models of early Solar System evolution by Tsiganis et al. (2005), such a resonance crossing is used to trigger the onset of a transient dynamical instability. We note that divergent migration could be driven by gravitational scattering with a planetesimal disk (Minton & Levison 2014).
ω. However, because we take the planetary orbits to be coplanar ∆ω = ∆ . Figure 9 shows the time evolution of a simulation where K2-24b and c are adiabatically driven through resonance using fictitious forces. During the resonant crossing, eccentricities are quickly excited to e 1 = 0.03 and e 2 = 0.02. In this scenario, ∆ is driven to 180 deg, and the libration amplitude is very small. Given that this mechanism produces planets that are stationary in the e-∆ plane, we can directly compare the present day e to the predicted values from divergent migration.
In Figure 7, we compare the predicted eccentricities to our present day constraints. Eccentricities of (e 1 , e 2 ) = (0.03, 0.02) are disfavored by the data, both with and without the Rayleigh prior on eccentricity. Moreover, the mechanism that drives divergent migration (e.g. planetesimal scattering) is also likely to damp eccentricities. Therefore, (e 1 , e 2 ) = (0.03, 0.02) corresponds to upper bounds on the eccentricities the planets could acquire through this mechanism. This tension disfavors divergent resonant crossing as the sole explanation for the planet eccentricities, but future measurements of e and for both planets would shed additional light on this interpretation.
Disk-Driven Stochastic Excitation
Another mechanism that excites eccentricities is stochastic interactions between young planets and a turbulent disk (Adams et al. 2008). Density fluctuations within a turbulent protoplanetary disk cause eccentricities to grow approximately like a random walk, with RMS(e) ∝ √ t. One mechanism to drive density fluctuations is the magnetorotational instability (MRI). In the limit of ideal MRI-driven turbulence, Okuzumi & Ormel (2013) showed that the growth of e can be constructed from analytical arguments: where α is Shakura-Sunyaev viscosity parameter, σ is the surface density, and n is the mean-motion. This equation suggests that if planets are embedded in a gas disk for a significant fraction of a 10 Myr disk lifetime, as they must have been to capture their H/He envelopes, they can acquire the several percent eccentricities we observe today.
RMS
In order to illustrate this process, we performed a Mercury integration where we subjected the planets to appropriately scaled stochastic velocity kicks over a period of 2 × 10 5 yr. The simulation setup was identical to that of Batygin & Adams (2017). The resulting evolution is shown in Figure 9. Note that unlike the case of divergent migration through resonance, the apsidal offset ∆ takes on a broad range of values, resulting in an observ-able distinction between the two dynamical excitation mechanisms.
Summary
We considered three mechanisms for exciting planet eccentricities: self-excitation, divergent migration, and stochastic pumping. We found that self-excitation cannot explain the present day eccentricities. Divergent migration produces eccentricities that are qualitatively similar to the values observed today, although the predicted eccentricities are formally inconsistent with our measured values. Stochastic pumping can account for the present day eccentricities.
We stress that this is not an exhaustive analysis of excitation mechanisms. Among the mechanisms considered, however, stochastic pumping remains the most plausible explanation, given the data. Divergent migration predicts specific values for e 1 , e 2 , and ∆ which can be corroborated with future observations. For example, measurements of secondary eclipse times place tight constraints on e cos ω. When combined with the constraints from this paper, such measurements would constrain e and separately.
CONCLUSIONS
We have presented a joint TTV/RV analysis of the K2-24 system based on RVs from Keck/HIRES and transit observations with K2 and Spitzer . Our analysis provides new constraints on planet masses and core/envelope structure. Importantly, we leveraged the synergies between TTV and RV measurements to provide tight constraints on planet eccentricities of e 1 ∼ e 2 ∼ 0.08. Assuming the planets are drawn from the ensemble of Kepler multi-planet systems, we found a small, but significantly non-zero eccentricity of 0.06 +0.01 −0.01 for K2-24b and we ruled out eccentricities larger than 0.07 for K2-24c. These eccentricities are relics of the planets' past formation histories, and we found that stochastic interactions with a gas disk is a viable explanations for the observed dynamical state.
Future advances in the exoplanet census and RV instruments will expand the number of systems amenable to similar studies. Next-generation RV facilities at large telescopes such as VLT/ESPRESSO (González Hernández et al. 2017), Keck/KPF (Gibson et al. 2016), and GMT/GCLEF (Szentgyorgyi et al. 2016) will enable RV measurements of a large sample of faint Kepler planet hosts, including many TTV-active systems. Also, ESA's PLATO mission (Rauer 2013) will conduct a transit survey over ≈2000 deg 2 for 2-3 years and add to the sample of planets with long baseline photometry.
Proceeding along an orthogonal direction, NASA's TESS mission (Ricker et al. 2014) will soon survey the entire sky, casting a wide net for planets around bright stars. These bright stars will be more amenable to RV follow-up than our current sample from Kepler and K2 . One challenge is the limited baseline of TESS observations. During a nominal two-year mission, most of the sky would receive 27 days of TESS observations. While this will be sufficient to detect near-resonant systems, the baseline is too short to adequately sample TTV super-periods, which are typically measured in years. Extensions to TESS that would allow for subsequent transit measurements of known planets would therefore be exceedingly valuable. | 2018-06-23T13:05:11.000Z | 2018-06-23T00:00:00.000 | {
"year": 2018,
"sha1": "ea229561c0a72edafab3ad243c35e7fd427b329b",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/88688/1/Petigura_2018_AJ_156_89.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ea229561c0a72edafab3ad243c35e7fd427b329b",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
235958969 | pes2o/s2orc | v3-fos-license | A multi-stage SEIR(D) model of the COVID-19 epidemic in Korea
Abstract Background This paper uses a SEIR(D) model to analyse the time-varying transmission dynamics of the COVID-19 epidemic in Korea throughout its multiple stages of development. This multi-stage estimation of the model parameters offers a better model fit compared to the whole period analysis and shows how the COVID-19’s infection patterns change over time, primarily depending on the effectiveness of the public health authority’s non-pharmaceutical interventions (NPIs). Methods This paper uses the SEIR(D) compartment model to simulate and estimate the parameters for three distinctive stages of the COVID-19 epidemic in Korea, using a manually compiled COVID-19 epidemic dataset for the period between 18 February 2020 and 08 February 2021. The paper identifies three major stages of the COVID-19 epidemic, conducts multi-stage estimations of the SEIR(D) model parameters, and carefully infers context-dependent meaning of the estimation results to help better understand the unique patterns of the transmission of the novel coronavirus (SARS-CoV-2) in each stage. Results The original SIR compartment model may produce a poor and even misleading estimation result if it is used to cover the entire period of the epidemic. However, if we use the model carefully in distinctive stages of the COVID-19 epidemic, we can find useful insights into the nature of the transmission of the novel coronavirus and the relative effectiveness of the government’s non-pharmaceutical interventions over time. Key messages Identifies three distinctive waves of the COVID-19 epidemic in Korea. Conducts multi-stage estimations of the COVID-19 transmission dynamics using SEIR(D) epidemic models. The transmission dynamics of the COVID-19 vary over time, primarily depending on the relative effectiveness of the government’s non-pharmaceutical interventions (NPIs). The SEIR(D) epidemic model is useful and informative, but only when it is used carefully to account for the presence of multiple waves and context-dependent infection patterns in each wave.
Introduction
The SIR model of epidemic, Kermack and McKendrick's seminal compartment model [1] has been widely used to analyse various epidemics, and the ongoing COVID-19 pandemic is not an exception (See, for example, [2][3][4][5]). This paper explores how well one of its variants, the SEIR(D) model, fares with the current COVID-19 epidemic data for South Korea ('Korea' hereafter).
The original SIR model assumes that (1) the susceptible population is relatively homogeneous, and that (2) parameters used in the model remain invariant throughout the entire epidemic period. In the real-world, however, the susceptible population is not homogeneous. Nor is it that the one-time transmission dynamics captured by the estimated model parameters stay constant throughout the whole period of observation. More importantly, the public health authority's non-pharmaceutical interventions (NPIs), even in the absence of vaccines and medical treatments, can significantly alter the value of parameters, drastically changing the transmission dynamics of the COVID-19 epidemic. [6,7] Given this real-world complexity, to what extent one can safely rely on existing SIR-based epidemic models to analyse the COVID-19 pandemic becomes a controversial issue. This paper addresses this problem by demonstrating that carefully calibrated epidemic models used in a particular epidemiological context can better capture potentially time-varying and context dependent parameters. These context dependent and time-varying parameters can then be used to evaluate the relative effectiveness of non-pharmaceutical policy interventions to the COVID-19 in each stage. The analysis in this paper can offer a useful insight into both developing a better theoretical model and evaluating existing public health policies.
To demonstrate this point, the paper begins with a brief overview of the COVID-19 epidemic in Korea. This sub-section serves as a useful reference for interpreting later empirical estimations of the SIR-based model parameters. The next sub-section introduces the SEIR and SEIRD epidemic model, data, and methods that we employ in this paper. The third and fourth section reports the result of the statistical analysis, together with a careful interpretation of the estimation results, in order. The last section concludes the discussion by drawing some implications.
2. The context, models, methods, and data
An overview of the COVID-19 epidemic in Korea
Since the first imported case was detected in late January of 2020 [8], there have been three distinctive phases of the COVID-19 epidemic in Korea. The first fullblown spread of the novel coronavirus began in late February. According to KCDC, this first wave is triggered by a massive religious assembly of a particular Christian cult, known as Shincheonji Church of Jesus. [9] The novel coronavirus quickly spread among those who attended this religious gathering, which was held in a tightly packed mega church and other religious buildings. This first wave lasted until early-May (May 10), when the new daily confirmed case fell below the weekly average of 50.
The second wave of the COVID-19 epidemic began in early August, as the number of confirmed cases rose sharply from weekly average of less than 50 to a peak of 441 on 28 August 2020. The immediate trigger of this second spike was also related with another super spreader event that was more political in nature: A conservative opposition party and some Christian fundamentalist factions joined their forces to hold a massive political demonstration at the centre of the capital city, Seoul, blaming the government's various epidemic mitigation strategies. Unlike the first wave, however, the public health authority was unable to implement proper public healthcare measures such as enlisting and conducting pre-emptive diagnostic testing for suspected patients who participated in the political rally. Leading figures of Christian fundamentalist movements fiercely opposed the public heath authority's healthcare measures and even instructed their members not to fully cooperate with the authority. Consequently, it took much longer time for the Korean health authority to bring down the number of daily confirmed cases below 100 (only by 20 September 2020), and it is not even clear whether the second wave was suppressed at all.
The third and concurrent wave of the COVID-19 epidemic began around mid-October with daily confirmed cases rising from the low 60 s to the peak of 1241 on 25 December 2020. Compared to the previous two waves, the latest phase of the infection dynamics does not seem to be associated with any single super spreader event. Instead, it stems from persistent small scale and multi-sited infection cases found in childcare and elderly care facilities as well as private education, various entertainment facilities and religious venues throughout the country. The median age of newly confirmed cases is also lower than during the second wave, as increasing numbers of younger asymptomatic patients are suspected to spread virus variants that are likely to be more infectious and deadlier to some demographic groups.
During the whole tumultuous period of this epidemic, the Korean public health authority led by the Central Disease Control and Management Headquarter has maintained and implemented consistent nonpharmaceutical interventions and proactive public health-care measures. The health authority has adopted policies of (1) conducting pre-emptive and targeted PCR-based diagnostic testing on a massive scale, (2) tracing epidemiological links of confirmed patients, fully utilising the information-communication technology infrastructure, and (3) expanding public and private medical facilities and equipment to accommodate the need of quarantining and treating different groups of patients in accordance with the severity of clinical symptoms (See also [9]).
The following figure shows these three distinct phases of the COVID-19 epidemic from 18 February 2020 (Day 1) to 08 February 2021 (Day 360). The 'Active' case in the third chart in the figure represents the number of confirmed cases minus the sum of both recovered and deceased cases (See Figure 1):
The proposed SEIR(D) models
The goal of this paper is to analyse the unique pattern of the novel coronavirus infection in each phase of the COVID-19 epidemic using both SEIR and SEIRD model. These two models are a slight variation of Kermack and McKendrick's original SIR(D) model. The SIR compartment model classifies the homogeneous population into its sub-groups, namely, the susceptible, the infected, and the recovered population and traces how each population group interacts with one another over time. Ignoring so-called 'vital' dynamic variables such as the natural birth and death rate, we can write down the SIR model as a system of three differential equations of the form: where s(t), i(t), and r(t), represents the number of susceptible, infected, and recovered population at time t, respectively. The parameters, a and b then represents the transmission rate (or infection rate) and the recovery rate. These two parameters jointly determine the rate of change in the number of infected and recovered population among the susceptible population. If we include the number of deaths associated with the virus infection into this model, the outcome is the SIRD model, where D represents another compartment of the population, the deceased group with the corresponding parameter c > 0 that represents the death rate (fatality rate) associated with the virus infection. The SIRD model has the following four differential equations with three unknown parameters: Building upon both SIR and SIRD model, an epidemiologist can further develop a slightly more complex model such as the SEIR(D) model in order to account for the prior exposure to the virus. Many viral infectious diseases including the current COVID-19 have an incident of exposure to the virus and a certain incubation period before the suspected patient begins to show some signs of infection (if any). The SEIR and SEIRD model are designed to account for this exposure rate by explicitly introducing a new variable 'e(t)' and corresponding parameter (b) in between the susceptible and infected population group to the SIR(D) model. Therefore, the SEIR model is of the form: while the SEIRD model can be written as: where d(t) is the number of those who die because of the virus infection and d > 0 is the death rate associated with the COVID-19 [10].
We can simulate these four models by assigning an arbitrary value to each parameter. For example, let us use the following arbitrary values assigned to each parameter and examine how solution curves of the respective system behave: Figures 2 and 3).
As these two simulations show, the susceptible population decreases as more and more people are exposed to the virus and become infected. Some portions of the infected population recover, while the other sub-group die. Before this infection occurs, there is a certain exposure rate and/or incubation period that precedes the infection is confirmed, as shown in the second panel of both the SEIR and SEIRD model.
The second simulation ( Figure 3) shows that the susceptible population decreases faster than the first case ( Figure 2) as the infection rate a is set to be higher (0.3) than the first case (0.2). This higher infection rate is also reflected in the higher exposure rate curve. In the second panel of Figure 3 that shows both the SEIR and SEIRD model simulation result, the recovery rate curve is flattened at the beginning and only steadily rises towards the end of the simulation because of both higher exposure and infection rates.
Methods and data
The paper primarily relies on both the SEIR and SEIRD model to conduct a multi-stage parameter estimation, while occasionally compares the result from the SIR(D) model. As will become clear, the estimation result from both the SEIR and SEIRD model is far superior to what we can get from the SIR(D) model in particular epidemic contexts. One novel feature of this paper is to conduct a multi-stage parameter estimation using these models to identify potentially time-varying and context dependent parameters in each stage of the COVID-19 epidemic in Korea.
For this statistical analysis, the paper uses a manually compiled dataset taken from the official website of Korea Disease Control and Prevention Agency (KDCA). The KDCA has released various data related to the COVID-19 epidemic since the first infection case is confirmed. Individual researchers can view the daily press release and manually compile time series for the confirmed cases, recovered cases, deceased cases, all classified by sex, selected age group, and detailed geographical location of infection [11].
The whole period
This section reports the multi-stage estimation results and offers an interpretation of some computed statistics such as the average days for recovery and the reproduction ratio in each stage. Let us begin with the estimated SIR(D) parameters for the whole period from 18 February 2020 to 08 February 2021. The following table shows the SIR(D) model-based estimation result (See Table 1): This parameter table shows that the estimated parameters are very sensitive to the number of variables and both effective reproduction ratio and average days for recovery derived from the parameters also vary depending on the number of variables. The estimated parameters look generally okay from a pure statistical point of view as relatively low and reasonable P-Values indicate. The computed average reproduction ratio is about 1.5 and the duration for the recovery is about 15 days, which are consistent with many international comparative studies, including the KDCA's own computation.
However, this estimation result is not robust in the sense that it is biased towards the latest development in the COVID-19 epidemic. Because of higher numbers of both confirmed and recovered cases concentrated in the latest third wave, the estimated infection and recovery rate parameters substantially underestimate the actual cases occurred during the first two waves. This estimation error in both SIR and SIRD model is also reflected in the relatively low R-Squared statistics (0.8515) (See Figure 4).
In a sense, this biased estimation result is inevitable because the parameters from fitted models are taking the average of the cases without considering multiple waves present in the observed data. To put it another way, the very existence of multiple waves undermines the predictive power and usefulness of the SIR(D) model that is fundamentally based upon the assumption of 'invariant' and 'uniform' parameters. For this reason, we should carefully account for the presence of multiple waves when estimating parameters. The sub-section below shows how this careful usage of models and multi-stage analysis can be done.
The first wave
As we introduced earlier, the first wave began 30 days after the first imported infection case was detected and was ultimately contained 150 days later (from 18 February to 15 June 2020). On Feb. 29, new daily confirmed cases reached its peak of 909 and gradually fall thereafter.
The statistical analysis of the data for this period shows that both the infection rate and the basic reproduction ratio are much higher than the average of the whole period with more accurate model fit (See Table 2 and Figure 5). However, there are two statistical idiosyncrasies, one of which being the exceptionally high reproduction ratio, reaching 18.2 (SIR) and 17.8 (SIRD).
According to one meta-analysis of early studies conducted using initial period of Chinese COVID-19 data indicates that the mean value of the reproduction ratio is 3.38 ± 1.40 with a highest ratio being 6.49 [12]. Compared to this mean value of the reproduction ratio, both 18.2 and 17.8 are far higher. In addition, the average recovery day during this period is also very long (31.2 days in both SIR and SIRD model).
Though puzzling at first glance, we can easily resolve these problems by considering how the case definition was made during this period. The KDCA took an extremely cautious approach when it began to reclassify infected patients into the recovered group during the early stage of the epidemic. Since Korean health officials did not know much about the epidemiological and clinical nature of the COVID-19 at the beginning, they needed more time than usual to reclassify and discharge infected patients.
This delayed case definition naturally affects the number of active infection cases, lengthening the average duration for the recovery. The same delayed case classification brings down the recovery rate, while overstating the infection rate. Exceptionally high reproduction ratios specifically captured by both the SIR and SIRD model, therefore, are a direct result of this policy-induced low recovery rate, which is, in turn, determined by the KDCA's extremely cautious reclassification criteria used during the early stage of the epidemic (For KDCA's COVID-19 case definition, see [13]).
The second wave
The second wave was also triggered by a super spreader event on 15 August 2020, when Christian fundamentalists and conservative opposition party held a massive political demonstration in downtown Seoul. During this wave, some rally participant-cumsuspected patients fiercely resisted cooperating with public health officials, making it extremely difficult for the authority to properly conduct its contact tracing and other mitigation measures. Citing their noncooperative behaviours, one may even argue that the second wave has never been fully suppressed, ultimately paving the way for the third wave that immediately followed [14].
Having these considerations in mind, let us examine estimated parameters in Table 3. The parameters for the SIR(D) show the similar pattern observed in the first wave: The reproduction ratio is consistently high (the SIR 6.4 and the SIRD 6.3), compared to the same ratio taken from both the SEIR (1.3) and the SEIRD model (4.7). This relatively high reproduction ratio captured by the SIR(D) may show the intensity of the infection during this period.
Compared to the first wave, however, we have about '14 days of average recovery period,' regardless of the type of the model used in this period. This shorter average recovery day in the second wave (and the third wave as well below) is more to do with the revised case definition that the Korean health authority begins to use towards the end of the first wave, rather than any changes in pathogenic properties of the novel coronavirus. We can also confirm that the accuracy of the model parameters captured by both R-Squared and AIC across the model has been substantially improved from the whole period analysis (See also Figure 6).
The third wave
Starting from around mid-October, daily confirmed cases began to rise again, marking the beginning of the third and concurrent wave in the COVID-19 epidemic. Table 4 tabulates estimated parameters showing the severity of the concurrent wave of the COVID-19 epidemic (See Table 4).
One interesting characteristics of the latest wave is that the absolute number of confirmed, recovered, and deceased cases are far higher than the prior two waves. Nonetheless, the estimated parameters and computed statistics (especially, the reproduction ratio) are not comparably higher. The simple reason is that the confirmed, recovered, and deceased cases in the third wave started off with higher initial values than the first two waves. Therefore, even when the peak number of daily confirmed cases once reached more than 1241 (on 25 December 2020), the single highest daily confirmed case number in the entire period of the COVID-19 epidemic, the average reproduction ratios are not comparably higher than those in the previous two waves.
It is also notable to see that there is no single definitive criterion that we should use to select a particular model other than the purely statistical model selection criterion. Both the SIR(D) models are equally sound as their counterparts, the SEIR(D) models, in terms of their estimated parameters and derived statistics. The computed P-Values and reproduction ratios are equally reasonable to use regardless of the model, and the AIC also shows that they are close to each other irrespective of the models used (See also Figure 7).
Discussion and limitation
The COVID-19 epidemic in Korea has exhibited multiple stages of its development, whose immediate causes and transmission patterns differ from one another. This paper has conducted multi-period statistical analysis based upon both the SIR(D) and SEIR(D) models to capture this time-varying and contextdependent transmission dynamics more accurately. It is demonstrated that the SIR-based epidemic models are still useful and informative, but only when they are used to carefully account for the presence of multiple waves of the COVID-19 epidemic. This multi-stage estimation of the model parameters has shown that both transmission rate and the basic reproduction ratio can rise substantially in the absence of the government's effective non-pharmaceutical interventions. At the same time, even if the public health authority is willing to implement timely and proper public health measures, the success of these policies is largely dependent on how the public respond to the proposed measures. The value of estimated parameters and computed statistics that we have examined above are the reflection of the aggregate outcome of these interactions, and consistently higher infection rate parameters and reproduction ratios across the model appeared during the second wave seem to show the limited effectiveness of nonpharmaceutical interventions in the face of fierce opposition.
With respect to statistical analysis, it is challenging to identify the single best epidemic model sorely relying on any single model selection criterion. The SIRbased epidemic models are a good starting point. But the SIR(D) model fails to generate robust parameters especially when they are used to cover the entire period of the epidemic. For this reason, this paper has attempted to identify multiple waves of the epidemic and estimate model parameters for each wave to find time-varying and context-dependent transmission dynamics in each stage.
Even on this ground, however, the epidemic model and the statistical analysis cannot fully address the data problem associated with the official case definition and the measurement error we faced during the first wave of the epidemic. The delayed recovery case definition brought down the average recovery rate parameter, thereby inducing higher estimated infection rate and lengthening average days for recovery during the first wave.
Nonetheless, a careful application of epidemic models and multi-stage statistical analysis based upon the proposed models is far superior to a blind usage of the same epidemic models for the whole period analysis because the former shows time-varying and context-dependent transmission dynamics of the COVID-19 epidemic more accurately.
Conclusion
The paper discusses multi-period estimation of the COVID-19 epidemic data for Korea based upon the selected SIR epidemic models, while emphasising the importance of finding time-varying transmission dynamics of the novel coronavirus. For this purpose, the paper attempts to identify major stages of the COVID-19 epidemic in Korea and uses a selected SIRbased epidemic models to estimate the parameters that may capture evolutionary aspects of the COVID-19 epidemic in this country.
From a theoretical point of view, the analysis in this paper points to the limited usefulness of the SIR-based epidemic models. Particularly, the 'invariant parameter' assumption shared by these epidemic models is questioned. The SIR models and their parameters can be grossly misleading if they are not accompanied by proper considerations of the given context, in which the model is being used. As an alternative, the paper attempts to show that we can better utilise the same SIR epidemic models by carefully accounting for each distinctive stage of the epidemic.
This multi-stage statistical analysis reveals that the transmission dynamics of the novel coronavirus changes, primarily depending on how effectively the government's non-pharmaceutical interventions work. The multi-stage estimation of model parameters and derived statistics can capture the time-varying relative effectiveness of and challenges to the government's mitigation strategies in each stage.
[14] Therefore, the choice of both starting and ending date for the second wave is somewhat arbitrary, and we use both date of August 06 and October 04, 2020 only for the purpose of analytical convenience in this paper. The estimation result is not significantly different in two alternative cases that uses July 30 and August 05 as the starting date. The same is true for the ending date of the second wave. | 2021-07-17T06:17:03.625Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e495fa06b9abc11197c62ea3513cdefc7f52d547",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/07853890.2021.1949490?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "04c0b5a5a00d166d027c04987f769b99157df8e9",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235613511 | pes2o/s2orc | v3-fos-license | Viewer’s Role and Viewer Interaction in Cinematic Virtual Reality
Cinematic Virtual Reality (CVR) is a form of immersive storytelling widely used to create engaging and enjoyable experiences. However, issues related to the Narrative Paradox and Fear of Missing Out (FOMO) can negatively affect the user experience. In this paper, we review the literature about designing CVR content with the consideration of the viewer’s role in the story, the target scenario, and the level of viewer interaction, all aimed to resolve these issues. Based on our explorations, we propose a “Continuum of Interactivity” to explore appropriate spaces for creating CVR experiences to archive high levels of engagement and immersion. We also discuss two properties to consider when enabling interaction in CVR, the depth of impact and the visibility. We then propose the concept framework Adaptive Playback Control (APC), a machine-mediated narrative system with implicit user interaction and backstage authorial control. We focus on “swivel-chair” 360-degree video CVR with the aim of providing a framework of mediated CVR storytelling with interactivity. We target content creators who develop engaging CVR experiences for education, entertainment, and other applications without requiring professional knowledge in VR and immersive systems design.
Introduction
The term Cinematic Virtual Reality (CVR) can be defined as a type of experience where the viewer watches omnidirectional movies using head-mounted displays (HMD) or other Virtual Reality (VR) devices [1]. Thus, the viewer can develop a feeling of being there within the scenes and can freely choose the viewing direction [2]. From a content point of view, we use the prefix "cinematic" or "narrative" to define those VR experiences that are narrative-based, instead of purely for novelty, entertainment, exploration, etc. A narrative virtual reality project can be a story-based drama, a documentary, or a hybrid production that features a beginning, middle, and an end. Its appearance may vary from simple 360-degree videos, where the only interaction for viewers is to choose where to look, to complex computer-generated experiences where the viewer can choose from multiple branches or even interact with objects and characters within the scene [3].
As CVR becomes more widely popular, content creators are trying to produce engaging narratives using this immersive medium. One of the major issues creators encounter is called the "Narrative Paradox", which is the tension between the user having freedom of choice and customization and the director controlling how the narrative plays out [4]. It poses a challenge for creators to balance engaging interactivity with dramatic progression. The second major issue linked to viewer control is that it can cause the viewer to miss important story elements, inducing a condition called "Fear of Missing Out" (FOMO) [5], and yielding weak narrative comprehension and low emotional engagement.
Many researchers have worked on solutions to these issues. We borrowed terms from a film director's skill set, namely the Mise-en-scene (a French term of where the things are in the scene), Cinematography, and Editing [6], to group them. We leave out the use of sound as it is outside of the scope of this paper. Under Mise-en-scene, people have been trying to maintain narrative control and deliver dramatic experiences by changing the user's viewpoint in the scene (e.g., camera placement), the placement of action, and the story elements [7][8][9]. Other work comes from a cinematographic perspective and discusses how the spatial-temporal density of a story, the framing grammar, and editing techniques apply to CVR [3,[10][11][12]. Finally, some researchers have explored methods to direct the viewer's attention within the immersive environment to important story elements [2,4,13,14]. These sit at the border between Mise-en-scene and Cinematography, as some use diegetic story elements in the scene, while others use visual cues and alternations.
We note, however, that these solutions have mainly focused on the director's role, as opposed to viewer agency. For example, CVR directors employ methods to direct the viewer's attention to important elements, compose a story script focused on balancing the spatial and temporal story density, and need to rethink framing and editing in both production and post-production stages. Little attention has been given to viewers, especially to the viewer's role and agency through interaction. We assume that this is because creators are most familiar with mature filmmaking techniques used in the industry, similar to how we have chosen our starting point stemmed from well-established terms. In traditional filmmaking, viewer agency is seldom brought into consideration, because in a cinema, the viewers are passively sitting and looking straight at the screen, with zero interaction with the content and no influence on the story. Thus, CVR creators take this as the given viewer scenario [6] and ignore viewer agency when migrating techniques to CVR. However, the viewing experience of swivel-chair CVR is quite different from traditional cinema. Firstly, although both are normally experienced while seated, a CVR viewer is able to turn her head or chair to look around [15,16]. This freedom of head rotation means the viewer now controls where to look in the scene, changing the viewer's perception of their role, extending their interactivity and agency [17]. Secondly, a CVR viewer is not a fully active participant, because CVR is a "lean-back medium" with limited interaction possibilities [18]. The viewer is still on the passive side and mainly wants to watch the story unfold and follow the narrative instead of acting in it [10]. We also found very little discussion in the literature that focused on the properties of interaction design from the perspective of user experience (such as the viewer's awareness of the affordances for interaction and expectations of agency). Discussion has typically been around how to enable full user interactivity in immersive environments [19,20]. However, full interactivity would require complex hardware and an overly-demanding amount of effort on the part of the viewer, and thus is not applicable to CVR. This amount of interactivity borders on something other than cinema, and reaches more towards an immersive video game. We consider CVR and video-game "users" to mainly have different motivations, so they likely comprise different demographics.
The remainder of this paper is organized as follows: We first review the literature covering CVR user experience and system design, immersive storytelling, traditional filmmaking, and game design. We then explore the continuum of viewer interactivity in CVR, summarizing the methods and studies from the literature. We then propose a Framework of Mediated CVR to create a CVR experience with implicit interactivity, aiming to balance the viewer interaction and agency, against the director's authorial control that is also necessary for storytelling, resolving the issue of the Narrative Paradox. The conclusion and future work are introduced in the last section.
Methodology
For this review, we mainly look at previous work related to CVR, from the perspective of narrative design and user experience design, rather than technical optimization or system building. We therefore did two rounds of search in the Scopus bibliographic database (https://www.scopus.com/search/form.uri accessed 18 May 2021), using the following combination of keywords. In round one, we used the search terms "360-degree" AND "storytelling". In round two, we used "cinematic virtual reality" AND "interaction". We searched in title, abstract, and keyword fields among papers published from the year 2000 up to the present.
In the first round, we mainly looked for papers covering 360-degree video content creation, and obtained 37 initial results. We then used the sorting tool from the database to filter and keep only those that have been cited five times or more, resulting in 14 such papers left in this category.
In the second round, we extended the coverage of media types from only 360-degree video to any other immersive content with a narrative purpose. We also paid special attention to those which also involved "interaction" as their focus. The initial search returned 58 hits. Considering this is a relatively new research area, especially with the term "CVR" being brought forward around 2015, we did not filter them with citation numbers. Instead, we did a further review of those papers and removed several special cases that fell outside the scope of this research. These included (1) those which initially used an immersive environment for capturing, but rendered only a part of it as a 2D viewport for viewing by end users, (2) those which focused on technical optimization rather than user experience, and (3) those which worked with sound instead of visual presentation. This refinement resulted in 31 papers.
Finally, we checked for duplications from these two rounds, as some of them covered storytelling with both 360-degree videos and viewer interaction. This resulted in a final set of 41 papers that we reviewed in detail. This final set is also listed in Table 1, preliminarily grouped by the type of media, design focus and viewer interaction techniques. In the following chapters, we first review those around user experience and narrative design of storytelling with 360-degree videos. Then we move on to reviewing a series of works of narrative VR experiences, including but not limited to 360-degree videos, embedded with different levels and various types of viewer interaction techniques. [19,20] or synthetic annotations 3D environment
Moving from the Flat Screen to an Immersive Medium
In this section, we mainly look at previous work focusing on the CVR user experience and system design. Researchers have been migrating filmmaking grammars, principles about elements that contribute to the setup of the viewer's experience, and the perception of affordance to interact into CVR, all aimed at relieving the Narrative Paradox and guiding the viewer along the storyline.
Choosing the Viewer's Role
When moving from a 2D flat video to an immersive medium, not surprisingly, researchers have noticed that the first obvious change is the Point of View (POV). In an immersive medium, the viewer sits in the center of the scene, instead of looking at a rectangular flat screen. Syrett et al. [17] stated that with this new POV, the viewer becomes the narrator, since she can choose what to look at and what to understand. This represents a change in the viewer's role. One of the commonly used methods to cope with this change is to define the viewer as either a spectator (not part of the story) or a character (either invisible or acknowledged) in the story. Bender et al. [21] compared the effects of two camera positions in CVR, the character view and the immersive passive view. They looked at their effects on attention to important spots. The character's first-person view did help the viewer to establish a fixation on Regions of Interest (ROI) faster than a third-person view.
Dooly et al. [9] in their research also proposed a division that is less coarse. They categorized the roles of a character into a silent witness, a participant, and a protagonist. The difference between a participant and a protagonist is whether other characters in the story give the viewer social acknowledgment, and how often and with what priority this is given. Furthermore, acknowledging the viewer as a person or a character is a CVR approach borrowed from theater practitioners. In staging and performing 360-degree video using principles from theatre, Pope et al. [7] found that actors had lines directly spoken to the viewer/camera, treated it as another person in the scene, and also changed them into an invisible one or a spirit, since in 360-degree video, the viewer cannot see her own body when looking down. Brewster [42] also stated that the viewers can alternatively take a "Morphing Identity," where they find themselves being something else in the story or the scene, rather than a human being. One example is the 360-degree film Miyubi (https://www.oculus.com/experiences/gear-vr/1307176355972455/ accessed on 24 October 2018) from Oculus studio. In the film, the viewer finds that she has become a toy robot newly bought by a kid, and witnesses how other family members treat it. Figure 1 shows a screenshot from the film, with the toy's robot arms visible in the lower part. Other than defining the viewer's role from a screenwriting or theater staging perspective, the pose a viewer takes to watch the narrative content will also influence how her role is perceived, thus affecting her behavior when watching. Godde et al. [10] observed viewer behavior under the same content, but in a seated pose vs. a standing pose. They point out that when seated (the most common way to view CVR), viewers spent a larger amount of time looking towards the front and showed less exploratory behavior. Tong et al. [15] also explored the preferred user scenario for 360-degree videos, and coined the term "Swivel-chair VR" to specify the preferred way to consume 360-degree video content (seated, instead of standing). Swivel-chair VR describes a scenario where the viewer watches a 360-degree video while sitting in a swivel chair, wearing a VR headset. The swivel chair allows the viewer to rotate around 360 degrees by turning the body together with the chair. It is easier and more comfortable than only turning with one's neck, as the swivel chair serves both as a cue to imply the affordance of rotation, and an anchor point to assist rotation. It also provides greater comfort and safety, compared to standing while viewing.
Choosing the Placement of the Camera, the Actor, and Other Story Elements
Mise-en-scene has been a powerful tool for filmmakers to design story elements to help the viewer find out where she is located and what to focus on in the scene. This was also explored in the context of CVR. Dividing zones around the viewer was proposed because viewers were given the freedom to look around and decide where the current view and focus would be in CVR. However, researchers discovered that not all directions around a viewer had equal weight, given a choice. Godde et al. [10] divided the full 360 area around the viewer into the front area, rear area, and a blind spot directly behind, as shown in Figure 2. They point out that viewers mainly tend to focus only on elements within the front zone (180-degree front-facing) and are less likely to look for elements inside the rear zone, where significant head turning is needed. The blind spot is where elements will most likely be ignored or missed by the viewer, even though they might be related to the narrative. With proper use of staging and directing cues, viewers can be encouraged to explore the rear zone, but the placement of important elements in the front zone is still recommended, as supported by their experiment. They also found that if the viewer takes a seated pose, the preference for the front will be intensified. Distance is another key factor to be taken into consideration. Dooley [9] pointed out that in traditional filmmaking, within the border of a frame, directors can use "shots" to control what story element to focus on, such as a close-up or a wide view, to direct the viewer's gaze and attention. Likewise, in CVR, while there is no frame to define a shot, defining "experience" with "distance between the viewers and actors" can be important and viable. They proposed the theory that "in CVR distances can influence the viewer's emotional engagement with the characters" and analyzed three scenes in a sample narrative 360-degree video Dinner Party (https://www.with.in/watch/dinner-party accessed on 1 March 2021) to see if distance variations changed how viewers perceive their own relationships with the characters around them. In the experiment carried out by Pope et al. [7], they asked actors to stage and perform a short drama for either a viewer sitting on a swivel chair or a 360-degree camera taking the place of the viewer. They discovered that when the actual viewer was replaced by the camera, actors still performed with the principles regularly used in a theatre. The actors tend to group on one side of the camera so the viewer could easily see all the action without turning her head frequently. In another test, Bailenson et al. [22] also discovered that in an immersive environment, users tend to maintain personal space and keep their distance from other human characters just like in real life. They also discovered that if the other human characters are making eye contact with the user, this tendency of distance-keeping will be more obvious. However, unlike CVR, their work was mainly conducted in a highly interactive setting where the user could freely move around.
Other than controlling from the Mise-en-scene perspective, directors can also manipulate the parameters of the camera directly, to change how the viewer perceives her role. The height of the camera was the first factor to be addressed. Researchers have discovered that the height differences between the camera and eyes are more accepted by the viewers if the camera position is lower than the viewer's eye height [8]. Additionally, a seated pose is preferred and can be adapted to more easily than a standing pose [23].
The Continuum of Interactivity of Immersive Experiences
As mentioned in the previous section, once moved from a flat screen to an immersive medium such as the 360-degree video, the viewer's role changes. The shift means the viewer is now part of the story world (as a character or not) and will expect more agency [1,30]. We can infer that the viewer will want to interact with elements in the story world and influence the narrative. Thus, viewer interaction in CVR is another dimension we need to look at. Viewer interaction, in the setting of immersive storytelling, stands for a certain control the viewer has over the narrative [35]. This control differs from one system to another (either because of its hardware or the purpose of storytelling). In the following sections, we would like to first propose the idea of a "Continuum of Interactivity" to divide and group the experiences and projects we reviewed by the level of interactivity viewers have in each of them, from Very Limited to Limited, Medium, and High, by placing them on the continuum, as shown in Figure 3. We will then discuss each group by looking into their interaction design decisions and properties. On the horizontal axis from left to right, we put a variety of immersive experiences with different levels of interactivity, not limited to only CVR, but also VR games, interactive theatre, etc. A spot on the horizontal axis represents the level of interactivity an experience has, varying from very limited to highly interactive.
Zero Interactivity
The most common and easily accessible form of CVR is 360-degree video. Like traditional filmmaking, the viewer takes a seated or standing pose, watching a 360-degree video with either a flat-screen device or wearing an HMD [15]. In both cases, the only user input is to choose in which direction to look at any given time. In a traditional 360-degree video, this input will not affect any narrative progress, and the director does not predefine any reactions to it [13,24]. One can conclude that the viewer in a 360-degree video playback has zero interactivity with the narrative. Those experiences are placed on the left-most part on the continuum as shown in Figure 3.
Illusion of Interactivity
Other than addressing the directors and actors, we also found that some researchers try to embrace the viewer's first-person view using an "illusion of interactivity" approach, which helps to increase the level of spatial presence and realness, even when no "real" interaction is taking place (cf. [43]). Brewster from Baobab studios [42] presented the method they applied in the computer-generated narrative VR short film Asteroids (https://www.baobabstudios.com/asteroids accessed on 20 April 2018), where a robotic dog mirrors the viewer's simple head tilting during a dwelling stage before the main story starts. The mirror action from the dog gives the viewer a further impression that she is part of the scene and feels like her movement can affect the scene itself, whereas general interactivity is actually not enabled and the experience is still a linear film. Dooley [3] also point out that a strategic VR director can create the illusion of choice for the viewer, when in fact they are creating a series of audio and visual cues that result in a preconceived narrative experience. This can be done by deliberately laying down a series of content chunks with auxiliary transitions and assisting content to fill the blanks between important story nodes. Thus, the viewer will feel that she is in control and that the story unfolds because she noticed something special or made the choice of looking at a certain object first. However, system wise, the viewer is not actually interacting with the environment, nor do they have any impact over how the narrative unfolds. Thus, they are also grouped to the left-most part on the continuum.
Medium-Level Interactivity in CVR
Acknowledging that viewers intend to interact, researchers have been exploring and evaluating various interactive techniques for CVR. Depending on the content, the genre, or the director's specific intention, each approach registers its own combination of choices on several properties. We focus on two of them here, the visibility of the interactive element and the depth of impact on the narrative.
Before jumping into design choices for interaction in CVR, however, we first look at the tasks CVR viewers carry out and the input they employ to perform them. In a technical survey, Roth et al. [18] point out that the main interactions in CVR, if enabled, are: selecting visible images, selecting areas for nonlinear stories, triggering the next scene or displaying more information, and navigating in the movie (menu). These were commonly implemented in, for example, "interactive 360-degree videos" or similar immersive experiences with content either pre-recorded or computer generated. They also listed two types of input based on the tasks above, continuous input, mainly for tracking, moving (objects or as navigation tasks) or pointing, and discrete input, which is for trigger actions and activation. To carry out these inputs, a CVR viewer may use head rotation, eye gaze, hand gestures, or a pointing device, depending on the system design.
Generally, when creating a CVR experience, the director will need to go through three design decisions before finally implementing the specific interaction technique: the tasks she wants the viewers to carry out, the consequence/impact each task will have, and the input method the viewers will use to perform them. In the following sections, we look at several examples with various interaction designs. We also pay extra attention to their visibility to the viewer and their impact on the narrative itself.
Shallow Intrusion-Temporal Control
A typical user input in CVR is through temporal control, which means the viewer can only interact with the playback of a narrative (such as speed change, pause, and browsing), but is unable to break the linear progress. In a conventional 360-degree video player, a viewer is presented with a "menu bar" to interact with, as shown in the two examples from Pakkaen et al. and Keijzer [25,26]. Researchers developed and tested various input techniques, but the control panel stayed similar. They were always derived from the familiar desktop UIs. One requirement of these UIs is that eye and hand coordination is needed to complete the "pointing-to-activate" two-step action, temporally taking away the viewer's capability of making spatial choices and looking at the scene itself. Petry et al. [27] invented a new system that decoupled orientation control from temporal control. They kept free head rotation to control where to look and at the same time enabled an extra pointing gesture for fast-forward/rewind. The viewers were therefore given parallel capabilities to look around and browse through events chronologically at the same time. These two inputs also did not interfere with each other, unlike the examples given previously. However, all of them only brought the viewer to the level of controlling the "temporal progress" along a linear track; the storytelling itself was pre-recorded and fixed. Thus, the interaction was "shallow" and "limited."
Deeper Intrusion-Narrative Control
Creators are also aware that, in CVR, after acknowledging the viewer as a character in the scene, adding interaction enhances the viewer's feeling of presence, because having an active role contributes to their enjoyment and engagement [28]. At the current stage, as most CVR content is pre-recorded, the possibilities of interaction with the narrative itself are mainly limited to two: (1) choices over a bifurcated plot where every scene is a video clip, and (2) the overlapping of extra elements over each video clip, injected into the scene [30]. Interactive narrative content, also known as Interactive Fiction [31], is a form of narrative based on a bifurcated story and has been commonly available. Content can be found in both traditional flat-screen media, such as the sci-fi drama Bandersnatch from the series Black Mirror (https://www.netflix.com/sg/title/80988062 accessed on 6 May 2019), and in immersive media, such as the virtual relic city tour Bagan (https://artsexperiments.withgoogle.com/bagan accessed on 14 January 2019) based on recorded 360-degree videos and rendered 3D scenes. In interactive experiences, the viewers make choices at each "intersection" (refers to story nodes in the design) and rearranges the linkage of fragments into their own configuration [32]. One example is the previouslymentioned drama Bandersnatch, in which the viewer will occasionally be presented with two choices throughout the story, as shown in Figure 4. Each interaction inside the experience is reactive, from a technological point of view. However, this is challenging from a narratological/authorial point of view, in terms of keeping the story flow and maintaining user engagement with the narrative, because the creator of the story cannot predict each viewer's actual chain of choices when the content is being presented. Thus, the authorial control over the narrative is disrupted.
Explicit and Implicit Interaction
In typical designs of viewer's interaction with the user interfaces of the narrative system, visible elements, such as a circle target, a countdown marker, or a translucent dot [33] assist the viewer in becoming aware of the location of an ROI, the effectiveness of a recent input, or a commencing event. When novel designs started to move away from conventional UI, people also explored the possibility of enabling user input without relying on any visible interface or reaction element. Roth et al. [29] list parameters around the activation targets. Targets can turn visible when triggered by the user, or remain invisible, depending on the requirement of the narrative itself. In most narrative VR, like with movies, even a small non-diegetic object can be disturbing and break the feeling of presence. Thus, the trigger or target will need to be only visible when activated or invisible throughout the entire experience. Figure 4. Screenshot from the interactive film Bandersnatch (source: Netflix). At several given moments in the film, the viewer is presented with two choices. It will impact the current character's decision of imminent action (or may not). A countdown timer is also presented (the thin line above the options). If the viewer does not make a choice before the time runs out, a default option will be chosen automatically.
In an experiment conducted by Ibanez et al. [34], they constructed a virtual tour system that generates stories based on the location designated by the user and the location where a virtual tour guide was standing. Therefore in the entire tour, the user feels like she is guided by a real tour guide, with all the knowledge and responses to her choices of POIs instead of a linear pre-recorded video footage. In fact, however, all the granular narratives are pre-fabricated. Therefore, compared to a conventional system where the viewer (user) only gets to navigate along a one-dimensional timeline with explicit input, in this virtual tour system, the viewer naturally browses the scene, and the narrative structure changes accordingly on the fly. During the entire process, the viewer is providing implicit input to the system (naturally choosing where to look and focus, as one would do in a real-world tour), and is unaware of the fact that she is making choices. Another example of this implicit interaction is a six-degree-of-freedom (6DOF) "Digital Marae" experience that we have implemented (cf. [44]). A marae in this context refers to physical Māori location containing a complex of buildings around a courtyard where formal greetings and discussions take place. A voxelized avatar of a real person from the marae in the physical world acts as the host in a computer-generated environment of the marae. The host delivers narratives while the 6DOF viewer is freely roaming in the scene. Three-dimensional clips of a real storyteller (voxelvideos) are rendered (visually and acoustically) in the virtual marae environment. The virtual storyteller introduces artifacts and decorations in that marae, as shown in Figure 5. This can be stopped and started with explicit user interaction, similar to the UI controls described in the previous section. As an option, the voxelvideo host avatar is also capable of actively establishing eye contact with the viewer and initiating the introduction when she is within a certain proximity, maintaining eye contact when speaking to the viewer. When the viewer steps away from the gaze-maintaining proximity, the host avatar goes back to the initial pose (of not attending the viewer) and pauses the speech. This proximity setup gives the viewer the impression that she is interacting with a real person-like host when she roams the marae, and that the host is giving a tour especially for her, instead of the impression of "a holographic recording that is played when I press a button". [44]). In setup A on the left, the viewer uses head orientation to "point" to one of the objects (ROIs) she is interested in, in this case, a tatā or tīrehu (bailer). The virtual host (voxelized avatar) stands beside the viewer to deliver the narrative. In setup B on the right, when the viewer notices an object she is interested in, she moves (or teleports) to the proximity of that object. The virtual host stands next to the object, acknowledges the viewer when she is nearby, and delivers the narrative.
In these two examples, although the viewers are not aware of any interactive elements, nor any trigger activation that is revealed to them, they are indeed interacting with the system and affecting the process of storytelling; thus, both are placed near "medium-level interaction" on the continuum (Figure 3).
High Level Interaction
Simply browsing pre-recorded clips by pointing and clicking is not the only type of immersive storytelling. On the opposite end of the continuum, researchers have also tried advancing narrative experiences by employing interaction techniques of high-level fidelity. Examples include emergent storytelling, interactive drama, and sandbox video games. Sharaha and Dweik [35] reviewed several interactive storytelling approaches with different input systems. They have one aspect in common, namely that the users are highly interactive since they act as one of the virtual characters in the story. In the system designed by Cavvaza et al. [36], they focused on auto dialogue generation to drive the narrative based on the text input provided by the user. A similar approach can also be found in the well-known video game Façade [37,38], in which the story unfolds as the player interacts with two virtual characters by typing texts into the console (sample screenshots of the game are shown in Figure 6). Edirlei [39] proposed another storytelling experience where the viewer watched the narrative via an Augmented Reality (AR) projection system and participated in it by drawing. The virtual characters were performing, and the viewer could physically draw objects on a tracked paper and then "transfer" it into the story scene as a virtual object to interact with the characters and the storyline. In another large-scale CAVElike experience [40], the story unfolded as the user was having natural conversations with another virtual character. The user was able to freely speak and move around in the scene. The system reacted to her behavior and progressed with the narrative. In those experiences, the viewers (or users) were given interactivity of high fidelity (speech, gestures, locomotion, drawing). Because of the abundance of input and interaction techniques a player (no more simply a viewer) could use, these experiences are placed to the right end of the continuum, as shown in Figure 3.
Discussions
In the previous section, we stated that the preferred user scenario for CVR is the "Swivel-chair VR", where a passive viewer sits on a swivel-chair (or, less likely, standing). The viewer expects some agency from the storytelling but still wants to enjoy the narrative at a "lean in" mindset instead of "lean forward" [45]. If we look at the continuum ( Figure 3) and consider where "CVR with interactivity" should sit, we can see that "very limited interactivity" is not the desired place to implement narrative VR. In many CVR experiences, the level of interactivity provided by the system does not match the viewer's expectation of participation when moving from a flat screen into this immersive medium. "High-level interactivity" is also not applicable because in CVR, especially the preferred "Swivel-chair VR", the viewer's capability of interaction, objectively, is still insufficient to perform fullbody action or delicate operation on objects. Furthermore, subjectively, the viewer is still passive, unwilling to act with full interaction, but rather to enjoy the story unfolding in front of her. We can therefore preliminarily infer that a proper system design for "CVR with interactivity" will be placed at either the spot near "limited" or "medium" on the continuum.
A storytelling framework presented by Reyes [30] fits well into this category. She presented the possible diegetic interaction options on a pre-scripted story with different navigation alternatives. The aim is to produce an interactive narrative that is independent of the user's journey within the story; the plot is always created with a dramatic climax, or say, "ensure[s] the linear progression of the dramatic arc independently of the journey shaped by the user's choices". In her proposed system, she put forward this idea of "limited interactivity", as the user has some level of free choice, but the general narrative structure is based on the "hero's journal" and is controlled and made by the director (like the doublediamond shape approach of in video games [46]). The primary nodes are always defined (and will be a key element to drive the story forward), with secondary nodes of different paths in between for free choice. In their paths, two types of links were designed and implemented, where "external links" are those jumps between pieces of the stories, moving alongside the general story arc, and "internal links" are extensions within a node, pointing to extra pieces of stories but not critical to the arc (providing more information to enrich the experience). A similar approach has been evaluated by Winters et al. [4]. They used structural features of the buildings and terrains in a 3D game world to implicitly guide the players along a path that was preferred by the storyteller. They also placed landmarks with outstanding salience in the backdrop of the game world so that, while the player can freely roam the world, she was still attracted by the landmark and would eventually go to the key place where the main plot is taking place to drive the narrative forward.
When reviewing the studies in the literature and grouping them by looking at the "level" of interaction a viewer or player has during the experience, we also noticed that a single-dimension continuum might not be the absolute metric to use when exploring viewer interaction in CVR. As we stated, on one hand, we saw that the forms of viewer interaction varied among very limited to medium-level to high and abundant, determined by system design; on the other hand, similar forms of interaction techniques also have a different impact on the story itself, such as simply scrubbing around the timeline [25] or affecting how the story unfolds and ultimately changing the outcome, such as in Bandersnatch. The "level of interactivity" or say the "complexity of the interaction technique" seems not strictly equivalent to the "depth of impact", as we described in a previous section. It is possible that when an interactive storytelling system is implemented, its input methods and interaction techniques define its position on the "continuum of interactivity". However one can not fully predict how the viewer will choose to participate and impact the narrative, because the actual use case varies from one viewer to another. We have seen discussion from researchers who looked at both technical and narrative aspects of immersive experiences separately [47]. Koenitz [41] also put forward a theory indicating that the viewer's participation in storytelling is also part of the narrative itself. We assume further exploration will be needed on the relationship between the viewer interaction design choices and the actual viewer experience.
Furthermore, from the aforementioned interaction design case, we had this observation that there is no definite "better" interaction design for satisfying user experience. In one of our ongoing research projects, one iteration of the Digital Marae experience (cf. [44]), we administered two setups: (1) swivel-chair CVR where the viewer used head orientation for "pointing", (2) walk-around CVR where the viewer was given a controller for "point to teleport". The viewer's capabilities were also different in those two setups. In the first setup, the viewer's input directly indicated the content she was interested in at a given moment. The viewer had definite control over the sequence in which the stories of ROIs were presented, but there was no priority between one against another. Their spatial relationship was also not preserved. The viewer felt she was browsing a "flat image" of the entire marae. The guidance of the host was diluted, thus leading to lower narrative engagement. In the second setup, the viewer was able to spatially get "closer" with one of the ROIs and "further" from others. The system interpreted the viewer's choice by measuring which ROI she was closest to and presented the story related to that one. Compared to the first setup, the viewer had a higher level of narrative immersion because she felt the host knew where she was and what her focus was as she roamed the virtual environment. However, this significantly increased the complexity of the system as the viewer needed input for teleportation and the director needed to cope with extra factors such as the viewer's distance and possible interruptions when the story about one specific ROI was being delivered. We think that there is no "correct" choice between those two setups, but it all depends on the director's choice of the purpose of the installation and what experience the viewers were expected to have. Therefore, at this stage, we want to preliminarily conclude the following. (1) A CVR viewer is a "lean-in" viewer. It is recommended to consider the viewer as a character in the scene who has a certain level of impact on the narrative, which generates user agency, presence, and engagement. At the same time, the viewer's participation needs to be considered because her role is different from the one in cinemas or video games; (2) to enable interaction, the creator will need to consider the tasks a viewer wants to carry out by interacting, the impact over narrative each of those tasks will have, and the input method the viewer will use to perform those tasks. Especially on the input method, we have seen both explicit and implicit ones. Based on conclusion 1, since the viewer is only "lean-in" instead of fully willing to participate, implicit techniques are recommended; (3) authorial control is still necessary for the delivery of a complete story. The director still needs it to construct the story arc to evoke emotional engagement in the viewers. Internal links between the story elements need to remain intact despite the viewer's interactivity; (4) whether a higher level of "abundance and complexity of viewer interaction" will necessarily lead to a higher level of narrative engagement and enjoyment remains unclear. Directors need to consider this under the general purpose of the storytelling activity itself.
Adaptive Playback Control as a Concept Framework
In this section, we propose the concept of Adaptive Playback Control (APC) as a machine-mediated narrative framework. We put it forward as a conceptualized solution of enabling viewer's interaction in CVR and as an example case to those concepts we stated, especially a representation of the exploration of the fourth item in the preliminary conclusions from the previous section. This narrative framework is put forward with these considerations: (1) the viewer is using a (seated) swivel-chair VR experience with an HMD; (2) the viewer is passively watching, but the screenwriting from the director will define the viewer as (at least) a participatory level character in the story; (3) the viewer's experience will not be interrupted by non-diegetic elements, and the time flow will not be explicitly broken to solely "compensate" interaction.
We looked at both ends of the continuum we proposed, considered the viewer's expectation of agency and the director's expectation of authorial control, leading to three system components: A projector component presents the content to the viewer, and also receives commands from the moderator to update the presentation when necessary.
The director provides the actual content in the form of nodes, plus "map-like" references for the moderator. The "map" defines remarks on the nodes and paths with "weights of necessity" such as which of the nodes are mandatory and what parts of the path are pre-defined and cannot be waived. The projector captures implicit inputs from the viewer (such as head rotation and gaze change), hands it over to the moderator to be read as signals, and alters the path of how the story progresses from one key node to another in real time. Figure 7 illustrates the components in this system and how they work together to deliver the final result to the viewer. We came up with the term "Adaptive Playback Control (APC)" to describe the entire structure.
One of the plans is to integrate this narrative system into the Digital Marae experience we introduced in the previous chapter. On top of the eye-contact setup, we plan to add another layer to the viewer's capability of freely roaming the marae. Pre-recorded voxelized clips of the host introducing a list of ROIs (such as ritual objects, decorated walls, sculptures) and the host performing "functional events" such as asking or waiting for a certain action, as well as idle state, are all prepared and stored in the story builder. When the viewer is walking and looking around in the Digital Marae, the projector monitors the real-time location of the viewer and triggers an "inquiry event" when the viewer approaches one of the feature points and ponders for a certain period of time. The avatar of the host will approach the viewer and ask if she wants to know more about that feature point, just like one will experience in a museum when a staff member is offering help (on the background, the system is playing one of the "functional event" clips and waiting for the viewer's response). If the viewer confirms with certain input (head nod, or simple gesture, depending on the system configuration), the moderator will make decisions and branches out to play the clip in which the host will start to tell the story related to that object in front of the viewer. To make the experience more like real-life events, the moderator will also pause the narrative when the projector detects the viewer is looking away and "appears to be no longer interested in the current content". Generally, the experience is played out around the viewer's behavior and actions in the scene. The three components of the APC system work together to shuffle and reconnect the clips provided by the creator into one continuous tour-like experience. Figure 7. The structure diagram of the APC system. It shows the three components in the system, a viewer who is using the system, and how the components work together to deliver the final result to the viewer. From top to bottom: Before the actual playback, the director feeds the contents (as nodes), and a "map" reference to the story builder, as raw materials. The "map" is a reference for the Moderator, with remarks of possible paths a viewer can later take connecting the nodes in an actual playback, the mandatory paths, as well as their priorities. When an actual playback starts, the viewer watches the content, wearing an HMD. At the same time, the moderator monitors the viewer's behavior and analyzes which node the viewer is interested in, cross-references with the "map", and determines which one of the nodes will be presented next. Then, the moderator sends the directives to the projector. The projector responds to it and presents the chosen node (content) to the viewer. Since everything is running in backstage, the viewer is unaware of the fact that she does make choices, albeit implicitly.
In an immersive experience like this, the viewer can find herself taking a role in the story, with a certain level of interaction with the system, and finds agency with it, because the path of how the story progresses forward does change (within certain boundaries) according to the viewer's behavior. On the other hand, the main purpose of the storytelling and the emotional response the director wants to invoke in the viewers can still be ensured because the essential narrative arc and the main plot were still laid by the director and not affected by the viewer's actual behavior when watching. We can also observe whether the viewer's feel of narrative immersion and feel of engagement and enjoyment will be influenced by the fact that the interaction she has within the story world is at a "limited level", compared to the 6DOF version we presented previously.
With this framework, we want to "interactivize" the CVR experience for the intrinsic nature of immersive media, such as 360-degree video and computer-generated immersive film, but not for the full interactive experience. This is because high-level interaction requires complex elements such as full body avatar, emergent storytelling mechanisms, AI components and other advanced technologies, which are not supported by most CVR experiences. This "one step back" is also due to the prospective application we want to explore. We aim to provide a framework for immersive and interactive content creators who develop engaging and enjoyable experiences for entertainment, learning, or invoking empathy. These include teachers who want to create a visual demo for their classes or museum curators who want to create virtual tours for online and remote visitors. We expect that such a framework will give creators a familiarity akin to scripting for conventional videos. Thus, on one hand, it is still a pre-scripted narrative at the backbone but an interactive and immersive experience at the front face. It can ensure the narrative arc remains in control of the director, but the freedom of interaction is in the hands of the viewers. We expect that this is a solution to archive the balance and relieve the Narrative Paradox.
Summary and Future Directions
In this paper, we reviewed literature about designing narrative VR content with the consideration of the viewer's role and viewer's interactivity, aimed to resolve the issue of the Narrative Paradox. We first looked at previous work by grouping them under the Mise-en-scene and cinematography of traditional filmmaking. Researchers have explored the viewer's role in CVR and its relationship to the characteristics of other story elements in the scene when interactions are limited. From their insights, we also learned that theater practices can provide a good reference to the configuration of placement and distance of essential story elements to clarify the viewer's role and the focus of the story.
Then, we moved on to review the literature around enabling viewer interaction in CVR. We proposed the "continuum of interactivity" of immersive experiences to place various types of approaches we visited, to categorize them by the "level of interactions" they have and see which is appropriate for creating CVR experiences with interactivity, to achieve high level engagement and presence. According to the continuum, we also discuss the "depth" of narrative impact those interaction designs have and their visibility to the viewer in the story world.
We made four preliminary conclusions after the literature review, covering the discussion around viewer's role in CVR, factors affecting interaction design, director's authorial control, and the relationship between interaction's level of complexity and its impact on narrative immersion. Then, the "Adaptive Playback Control (APC)" framework is proposed as a conceptualized example of enabling viewer interaction in immersive storytelling with backstage authorial control, plus consideration of viewer's role and the context of the story. This framework is yet under further exploration and tailoring. In the future, we plan to implement it in both 360-degree videos and computer generated 3D scenes. We will also conduct user studies to evaluate its effectiveness and gather user feedback. Overall, we aim to propose a framework of "mediated CVR storytelling" with "interactivity". We believe such a framework can contribute to mitigating the issue of the narrative paradox in CVR and can be used by content creators to develop engaging CVR experiences for education, entertainment, and other applications, without the need for professional knowledge in VR and immersive system design.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Acknowledgments:
The authors would like to thank Noel Park and Stu Duncan from the University of Otago, Yuanjie Wu and Rory Clifford from HIT Lab NZ, University of Canterbury during the implementation of voxel-based telepresence system, and the support of other research members in the New Zealand Science for Technological Innovation National Science Challenge (NSC) projectĀtea.
Conflicts of Interest:
The authors declare no conflicts of interest. | 2021-06-24T13:14:32.723Z | 2021-05-18T00:00:00.000 | {
"year": 2021,
"sha1": "5bac2d65c9d09e4a38c5f06d8d85a3677b99a13e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-431X/10/5/66/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5a75d29751665fe6b3fa30576900a13b979ba717",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53022587 | pes2o/s2orc | v3-fos-license | Risk factors for self-harm in people with epilepsy
Objective To estimate the risk of self-harm in people with epilepsy and identify factors which influence this risk. Methods We identified people with incident epilepsy in the Clinical Practice Research Datalink, linked to hospitalization and mortality data, in England (01/01/1998–03/31/2014). In Phase 1, we estimated risk of self-harm among people with epilepsy, versus those without, in a matched cohort study using a stratified Cox proportional hazards model. In Phase 2, we delineated a nested case–control study from the incident epilepsy cohort. People who had self-harmed (cases) were matched with up to 20 controls. From conditional logistic regression models, we estimated relative risk of self-harm associated with mental and physical illness comorbidity, contact with healthcare services and antiepileptic drug (AED) use. Results Phase 1 included 11,690 people with epilepsy and 215,569 individuals without. We observed an adjusted hazard ratio of 5.31 (95% CI 4.08–6.89) for self-harm in the first year following epilepsy diagnosis and 3.31 (95% CI 2.85–3.84) in subsequent years. In Phase 2, there were 273 cases and 3790 controls. Elevated self-harm risk was associated with mental illness (OR 4.08, 95% CI 3.06–5.42), multiple general practitioner consultations, treatment with two AEDs versus monotherapy (OR 1.84, 95% CI 1.33–2.55) and AED treatment augmentation (OR 2.12, 95% CI 1.38–3.26). Conclusion People with epilepsy have elevated self-harm risk, especially in the first year following diagnosis. Clinicians should adequately monitor these individuals and be especially vigilant to self-harm risk in people with epilepsy and comorbid mental illness, frequent healthcare service contact, those taking multiple AEDs and during treatment augmentation. Electronic supplementary material The online version of this article (10.1007/s00415-018-9094-2) contains supplementary material, which is available to authorized users.
Introduction
People with epilepsy are twice as likely to die by suicide compared to those without epilepsy [1]. Nonfatal self-harm, defined as any type of intentional self-injury or self-poisoning [2], may lie on the causal pathway between epilepsy and suicide. There are multiple motivations for engaging in selfharm, ranging from suicide attempt to emotional regulation without suicidal ideation [2]. Regardless of intent, self-harm is the strongest predictor of suicide [3].
Risk of hospitalization for self-harm in people with epilepsy has been estimated in two studies [4,5]. Singhal et al. reported a relative risk of 3.9 (95% CI 3.8-4.1) for selfharm in the year following hospitalization for epilepsy and 2.6 (95% CI 2.5-2.7) in subsequent years [4]. Meyer et al. estimated the hospital self-harm presentation rate in people with epilepsy to be 2.04 (95% CI 1.85-2.25) times that of the comparison group [5]. Meyer et al. identified epilepsy diagnosis from the self-harm reporting form, as part of a multi-centre study, and confirmed with review of medical notes [5]. Singhal et al. identified people with epilepsy from recorded hospital admission or day case contact due to epilepsy [4]. It is possible, therefore, that this may have included only individuals with the most severe or poorly managed epilepsy, which resulted in hospital presentation. Both studies required individuals to be hospitalized for the self-harm event, thus do not include those who presented in the community for self-harm. A previous study conducted in a UK primary care dataset estimated an odds ratio for selfharm of (2.35, 95% CI 1.67-3.29) for people with epilepsy compared to those without [6]. Self-harm cases were defined from those reported in primary care only, as this study was conducted before it was possible to link this dataset with hospital records. It is not known whether this magnitude of increased risk for self-harm is observed in a primary care patient cohort when linked to hospital reports of self-harm and national mortality records.
The World Health Organization (WHO) recommends that people with epilepsy should be asked about self-harming thoughts and behaviours in certain, specific circumstances [7]. However, there may be additional factors that could alert clinicians to instigate this discussion. To our knowledge, the factors that influence someone with epilepsy to self-harm have not been identified.
We, therefore, aimed to: (1) estimate self-harm risk in persons with epilepsy versus those without; and (2) identify risk factors for self-harm among individuals with epilepsy.
Setting
We extracted an incident epilepsy cohort from the Clinical Practice Research Datalink (CPRD), linked to Hospital Episode Statistics (HES) and Office for National Statistics (ONS) mortality data. The CPRD is a primary care dataset that contains routinely collected electronic health records capturing information on patient demographics, diagnoses and treatments in general practice. It has been shown to be representative of the UK population [8]. All of the linked general practices were located in England, representing 75% of all English practices included in the CPRD at the time of data extraction. We used the linked subset of the July 2015 version containing 7,378,852 individuals from 378 general practices with data deemed to be of sufficient quality for conducting research. HES contains hospital inpatient discharge dates and diagnoses, and ONS mortality data include date and cause of death.
The study was approved by the Independent Scientific Advisory Committee (protocol 17_063R) of the CPRD. Informed consent is not required for studies that use anonymized data from the CPRD.
Study population: incident epilepsy cohort
From the CPRD, we extracted the incident epilepsy cohort that formed the basis for both Phase 1 and Phase 2. The study observation period was 01/01/1998-31/03/2014 to correspond with linkage availability. We used our previously published definition to identify people with epilepsy [1] which requires a diagnostic code for epilepsy and an associated prescription for an antiepileptic drug (AED) [9,10]. We defined the epilepsy index date as the latest of the epilepsy diagnosis date and AED prescription in the 6 months prior, or 1 month after diagnosis. We restricted to the incident epilepsy cohort by mandating at least 12 months registration prior to epilepsy index date, and no prior epilepsy diagnosis in this look-back period. This minimized the risk of 'prevalent-user bias', whereby the timing of epilepsy onset could confound the relationship with self-harm risk [11]. We required individuals to be without history of selfharm in the look-back. We restricted the cohort to persons aged ten or older, as this is the minimum age at which the WHO recommend that clinicians should discuss self-harm [7]. This threshold also aligns with previously published studies, because self-harm intent is particularly difficult to discern below age ten [12].
Phase 1: matched cohort study
We matched each person with incident epilepsy to up to 20 individuals without epilepsy on gender, year of birth (± 2 years) and general practice. Individuals sampled for the comparison cohort had not received a diagnostic code for epilepsy or self-harm in the look-back period, and had been registered for at least 12 months at the practice. Individuals were followed up until the earliest date of: first self-harm event, death, patient transferred out of practice, latest date of data collection from the practice, or end of the study's observation period.
Phase 2: nested case-control study
From the cohort of people with incident epilepsy, we identified first recorded cases of self-harm during the study window-the self-harm case date. We matched these cases to up to 20 control individuals from within the incident epilepsy cohort, without history of self-harm on the self-harm case date, using incidence-density sampling [13]. We matched cases to controls on gender, year of birth (± 2 years) and timing of incident epilepsy diagnosis (± 1 year), because these variables may confound the relationship between the exposures investigated and self-harm risk [14].
Outcomes
We included both fatal (suicide) and nonfatal self-harm in our definition. We identified self-harm from primary care records using clinician-verified Read codes [15] and from HES using the following ICD-10 codes: X60-84, Y87.0 and Y87.2. We used these same codes to identify suicide from ONS mortality data, with the addition of Y10-34 (excluding Y33.9) [16]. These codes represent undetermined intent, which is included in the ONS definition of suicide [17]. As this conclusion is assigned by a coroner in the UK, it is not appropriate to apply the same codes to nonfatal self-harm.
In Phase 2, we investigated multiple exposures. We identified level of deprivation by quintiles of Index of Multiple Deprivation (IMD-2010), and compared to the least deprived quintile (1st quintile). We identified mental illness diagnoses (alcohol misuse, anxiety disorder, bipolar disorder, depression, eating disorder, personality disorder and schizophrenia) from primary care data and HES using previously published codes [15,18] that were recorded prior to self-harm case date. We developed a code list for substance misuse that was independently verified by two general practitioners (GPs) and is available at http://www.clini calco des.org [19]. We identified referrals to psychiatric services in the year prior to selfharm case date from Family Health Services Authority and National Health Service speciality fields in the CPRD [20]. Contact with healthcare services was measured by number of face-to-face consultations with the GP and number of hospitalizations for any reason, in the year prior to selfharm case date. Physical illness comorbidity was measured by assignment of a Charlson index score using Read codes from the CPRD [21] and ICD-10 codes from HES. The Charlson index is a measure of comorbidity, based on 1-year mortality risk derived from 17 comorbidities [21]. We measured AED utilization in two ways. First, we counted the number of AED types that the person was exposed to in the 90 days prior to self-harm case date and compared this to AED monotherapy. Second, we determined if there had been augmentation of AED treatment in the 6 months prior to self-harm index data. Due to the recommendation of slow withdrawal of AEDs when changing therapy [22], we defined augmentation as persistence of two AEDs 90 days after the introduction of the additional AED.
Statistical analysis
In Phase 1, we estimated the relative risk of self-harm in the incident epilepsy versus comparison cohorts using a stratified Cox proportional hazards model. We adjusted for level of deprivation because both epilepsy [23] and selfharm [15] are independently associated with higher levels of deprivation, which may confound any observed associations. We assessed the proportionality assumption using a formal test that compared Schoenfeld residuals, with a p value < 0.05 indicating non-proportionality [24], and by graphical inspection. We reported baseline characteristics as numerical and percentage frequencies and medians, and estimated prevalence ratios for pre-existing mental illness diagnoses and types of prescribed psychotropic medication. In Phase 2, we used conditional logistic regression to estimated exposure odds ratios to indicate relative risk of self-harm associated with the following exposures: (1) level of deprivation; (2) mental illness; (3) referral to psychiatric services; (4) contact with healthcare services; (5) physical illness comorbidity; and (6) AED utilization. Data analysis for both phases was undertaken using Stata, version 13 (StataCorp, College Station, TX, USA).
Phase 1: matched cohort study
We matched 11,690 people with incident epilepsy (median age 53, IQR 30-72; 52% male) to 215,569 persons without epilepsy. Compared to the matched cohort, the epilepsy cohort was more deprived and more likely to have been diagnosed with any mental illness, treated with psychotropic medication or opioids ( Table 1). The median follow-up times were 3.6 years (IQR 1.3-7.2) and 4.7 years (IQR 2.0-8.3) for the epilepsy and comparison cohorts, respectively.
There were 273 first self-harm events in the epilepsy cohort and 1547 in the comparison cohort. The overall incidence rates for first self-harm event ( Table 2) were greater in the epilepsy cohort (5.0 per 1000 person-years, 95% CI 4.4-5.6) than in the comparison cohort (1.3 per 1000 personyears, 95% CI 1.3-1.4). The proportionality assumption for the stratified Cox proportional hazards model did not hold (p = 0.007); therefore, we divided follow-up time to first year after diagnosis and subsequent years. There was an excess risk of self-harm during the first year of follow-up (deprivation-adjusted HR 5.31, 95% CI 4.08-6.89) compared to subsequent years (deprivation-adjusted HR 3.31, 95% CI 2.85-3.84), although elevated risk persisted throughout the follow-up period.
Phase 2: nested case-control study
Within the epilepsy cohort we identified 273 individuals with a first self-harm event (cases) and matched them to 3790 control patients with epilepsy and without history of self-harm on the self-harm case date ( Table 3). The median age was 34 years (IQR 20-46) and 43% were male. The median time since epilepsy diagnosis was 2.6 years (IQR 0.9-4.6) for persons who had self-harmed and 2.2 years (IQR 1.0-3.9) for control patients. Individuals living in the most deprived areas had an elevated self-harm risk compared to those living in the least deprived localities (5th quintile: OR 2.22, 95% CI 1.44-3.42, 4th quintile: OR 1.75, 95%CI 1.11-2.75), but there was no evidence of increased risk associated with other quintiles of deprivation. There was no difference in self-harm risk associated with a Charlson comorbidity index score of 1 or 2-3, but an increased risk was evident when the score was 4 or more (OR 2.91, 95% CI 1.75-4.82). 65.9% of cases and 35.6% of controls had a history of mental illness. Having one or more mental illness diagnoses increased self-harm risk compared to having no such diagnoses (OR 4.08, 95% CI 3.06-5.42) and this risk increased markedly among individuals who had received three or more mental illness diagnoses (OR 15.36, 95% CI 10.03-23.51).
All mental illnesses examined were associated with an increased self-harm risk, but the magnitude varied across the diagnostic categories. Depression was the most common diagnosis and was associated with approximately a fourfold elevation in self-harm risk (OR 3.92, 95% CI 2.94-5.22). In a post hoc sensitivity analysis, we included depression symptom codes as well as diagnoses in the definition of depression. This did not alter the estimated risk (OR 4.03, 95%CI 3.04-5.33).
In the 12 months prior to self-harm case date, 12.8% of cases and 3.8% of controls were referred to specialist psychiatric services (OR 3.65, 95% CI 2.45-5.44). In the same timeframe, 45.8% of self-harm cases and 29.1% of controls were hospitalized at least once for any reason (OR 2.12, 95% CI 1.64-2.76). The median number of face-to-face consultations with a GP in the 12 months preceding the self-harm case date was nine (IQR 5-15) for cases and six (IQR 3-11) for controls. Compared to individuals who had 0-4 consultations in the previous year, individuals who had five or more consultations were at a two-to fivefold increased self-harm risk.
In the 90 days prior to self-harm case date, compared to individuals who were prescribed a single AED, those prescribed no AED (OR 1.47, 95% CI 1.01-2.12), two (OR 1.84, 95% CI 1.33-2.55) or three or more AEDs (OR 2.44, 95% CI 1.51-3.94) were at an increased risk of self-harm ( Table 4). Augmentation of AED treatment in the prior 6 months was associated with a twofold increased risk of self-harm compared to no augmentation (OR 2.12, 95% CI 1.38-3.26).
Discussion
In a large population-based cohort study, we found that people with epilepsy have an elevated self-harm risk compared to those without the condition. There was a fivefold elevation in risk in the first year following diagnosis and a threefold increased risk persisting beyond this first year. Among people with epilepsy, those most likely to self-harm included people with comorbid mental illness diagnoses, previous psychiatric referral, previous hospitalization for any reason, or five or more consultations with their GP in the previous year. Individuals treated with none or multiple AEDs, including those who had recently augmented treatment, were at increased risk of self-harm, compared to those prescribed AED monotherapy. We report the first published estimates for elevated selfharm risk in people with incident epilepsy in which selfharm cases were ascertained using both primary and secondary care records. Our estimates are slightly higher than those reported in earlier studies that included only individuals who presented to hospital with self-harm [4,5] and estimates from the study using the predecessor to the CPRD, prior to linkage availability (OR 2.35, 95% CI 1.67-3.29), thus including only self-harm episodes that were recorded in primary care [6]. Previous studies did not restrict to an incident epilepsy cohort; therefore, inclusion of individuals with prevalent epilepsy may have resulted in prevalent-user bias [11]. This may have diluted the period of highest risk, close to the time of incident epilepsy diagnosis.
Among people with epilepsy, we found that elevated selfharm risk was associated with prior diagnosis of any mental illness or referral to psychiatric services. This corroborates with evidence reported from general population studies in which mental illness is associated with a 6-to 14-fold increased risk of self-harm, dependent on the specific diagnosis [4]. Within the epilepsy cohort, a fivefold increased risk of self-harm was associated with history of alcohol and substance misuse. It is possible that these individuals experience a high frequency of seizures, caused by the alcohol or substance misuse, or due to non-compliance with treatment as a result of a disordered lifestyle. This could contribute to the increased self-harm risk experienced by these individuals. A bidirectional relationship between attempted suicide, which includes self-harm, and epilepsy has been suggested previously [25].
Having five or more face-to-face general practice consultations in the previous year was associated with elevated self-harm risk, compared to people who attend up to four times per year. Clinicians should be alert to the risk of selfharm in individuals who present regularly, which may be in relation to epilepsy severity or comorbid conditions. Importantly, clinicians can use these frequent interactions to discuss self-harm risk with patients in this group.
The use of multiple AEDs is a result of treatment augmentation, due to inadequate seizure control; or during a period of switching to an alternative monotherapy due to lack of tolerance or for other reasons such as pregnancy [22]. The elevated self-harm risk observed during use of multiple AEDs is likely to be an indication of more severe epilepsy with an associated higher seizure frequency, which is not controlled by AED monotherapy. Furthermore, individuals who have many seizures may experience consequent psychosocial difficulties, including inability to drive or absence from work or social activities, which may exacerbate the stigma associated with epilepsy [26]. Additionally, some individuals may become despondent if AED treatment requires augmentation, despite compliance with monotherapy. This may result in difficulty coping and the condition may be perceived as a burden to the individual, both of which are known motivators of suicidal behaviour [27]. Indeed, we observed an elevated risk of self-harm associated with recent augmentation of AED treatment. We have previously identified the need to examine the risk associated with individual AEDs using carefully designed, new-user studies [28]. This was not the aim of this study; therefore, the study design does not allow us to comment on individual AEDs.
Self-harm risk was also elevated for people who were not prescribed an AED in the 90 days prior to the index selfharm case date (OR 1.47, 95% CI 1.01-2.12). On entry to the incident epilepsy cohort, all individuals were prescribed an AED. Therefore, those individuals without AED prescription on the self-harm case date may have gradually stopped taking AEDs because they became seizure free. In the UK, the National Institute for Health and Care Excellence (NICE) recommends that AED withdrawal should only be considered following a 2-year absence of seizures [21]. Given that the median time since epilepsy diagnosis on self-harm case date was approximately 2 years, it is unlikely that all of those who had no recent AED prescription withdrew their AED on the advice of a clinician. It is possible that some of those individuals were non-compliant with their medication regimen. This may be motivated by undesirable adverse events, beliefs about medication and illness, comorbid mental illness or lifestyle choices, all of which may potentially contribute to elevated self-harm risk.
Healthcare professionals involved in the care of people with epilepsy could instigate conversations about self-harm risk, especially if the described risk factors are present. These include mental health problems, and extend to both the clinicians responsible for the mental health services and those working in general primary care settings. Furthermore, GPs should consider discussing self-harm risk management in people who consult frequently. Further research could investigate whether any technological prompts could aid this during consultations.
Strengths and limitations
This is the first published study to estimate self-harm risk among people with epilepsy in a large, linked primary care patient cohort, including 11,690 people with incident epilepsy. Linkage to HES maximized self-harm case ascertainment. The Read codes used to identify self-harm cases were verified by clinicians and have been used in other studies [15,29]. To mitigate confounding by previous self-harm, we restricted the incident epilepsy cohort to include only those persons with no prior recorded history of self-harm in either their primary or secondary healthcare records. It is still possible, however, that individuals had a self-harm event prior to this look-back period and before their CPRD records began. Furthermore, we recognize that not all people who have a self-harm episode will present to healthcare services and those who do represent the "tip of the iceberg" of self-harm events [30]. However, our inclusion of selfharm reported to both primary and secondary care builds upon those studies which used only one of those sources to ascertain self-harm [4][5][6]. As people with epilepsy attend the GP more often than those who do not, there may have been more opportunity to report self-harm and they may be asked about self-harm as per the WHO recommendations [7]. This would overestimate the magnitude of elevated selfharm risk in people with epilepsy compared to those without the condition.
It is not possible to accurately determine the type of epilepsy from UK general practice data; therefore, this is something we could not examine in this study. Epilepsy type may influence risk of self-harm [31]. It would, therefore, be beneficial to compare self-harm risk among people with different epilepsy subtypes, particularly whether having symptomatic epilepsy (and therefore underlying brain pathology) has an influence on self-harm risk.
Conclusion
In conclusion, clinicians should be aware that people with epilepsy are at increased risk of self-harm, compared to those without the condition, especially during the first year post-diagnosis. These patients should, therefore, be routinely monitored. Additionally, we recommend that clinicians are particularly vigilant for self-harm thoughts and behaviours in people with epilepsy and comorbid mental illness, those who consult regularly, those prescribed AED polytherapy and during periods of AED treatment augmentation. and not necessarily those of the NHS, the NIHR, or the Department of Health. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Data from the Clinical Practice Research Datalink (CPRD) were obtained under license from the UK Medicines and Healthcare Products Regulatory. The study was approved by the Independent Scientific Advisory Committee (ISAC) for CPRD research (reference 17_063R). Hayley Gorton has full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to data analysis. We gratefully acknowledge the following colleagues from the University of Manchester: Dr James Lilleker MBChB for providing expert advice on the epilepsy Read code list, and Dr Benjamin Brown MSc MBChB and Dr Thomas Blakeman PhD MBChB for cross-checking Read code lists for substance misuse, migraine and neuropathic pain. | 2018-11-10T06:20:13.918Z | 2018-10-24T00:00:00.000 | {
"year": 2018,
"sha1": "3bc86af46ce997224741d5d6176bbe882be1944a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-018-9094-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bc86af46ce997224741d5d6176bbe882be1944a",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225843538 | pes2o/s2orc | v3-fos-license | Retinex Based Image Enhancement via General Dictionary Convolutional Sparse Coding
Retinex theory represents the human visual system by showing the relative reflectance of an object under various illumination conditions. A feature of this human visual system is color constancy, and the Retinex theory is designed in consideration of this feature. The Retinex algorithms have been popularly used to effectively decompose the illumination and reflectance of an object. The main aim of this paper is to study image enhancement using convolution sparse coding and sparse representations of the reflectance component in the Retinex model over a learned dictionary. To realize this, we use the convolutional sparse coding model to represent the reflectance component in detail. In addition, we propose that the reflectance component can be reconstructed using a trained general dictionary by using convolutional sparse coding from a large dataset. We use singular value decomposition in limited memory to construct a best reflectance dictionary. This allows the reflectance component to provide improved visual quality over conventional methods, as shown in the experimental results. Consequently, we can reduce the difference in perception between humans and machines through the proposed Retinex-based image enhancement.
Introduction
The color of an object determined by a machine visual system (MVS) such as a digital camera is based on the amount of light reflected from it. On the other hand, the human visual system (HVS) determines the color of an object by considering the details of the surrounding environment and changes in overall illumination [1]. The complex HVS automatically recognizes the changes in illumination and easily recognizes the original color of the object. This feature of the HVS is called color constancy and has been studied for a long time [2,3]. Owing to the inconsistency between the HVS and the MVS under various illumination conditions, a machine cannot obtain the same image as a human. These inconsistencies also cause algorithmic errors in functions such as color separation, pattern recognition and object tracking. Therefore, to improve the performance of MVS, it is important to understand the color constancy of HVS.
The Retinex concept by Land and McCann [4][5][6][7] combines the functions of retina and cortex, and explains how the HVS perceives color. If S is defined as the value of an image in the spatial domain Ω, the image value in the Retinex model mainly depends on two factors: the amount of illumination projected onto the object in the image and the amount of illumination reflected by the object in the image. Based on these two components, S can be constructed with the reflectance function R and illumination function L [8] as follows: where, 0 < R < 1 (reflectivity) and 0 < L < ∞ (illumination effect), ∀x ∈ Ω The HVS can recognize both the illumination component and the reflectance component, and is able to remove the illumination component. This is called color constancy. Therefore, the HVS can recognize the constant color of an object, while ignoring the changing illumination. Removing the illumination component from the image can cause dark areas in the low-light image enhancement application to represent the original color of the object, which can increase the success rate of other algorithms on the machine. Also, it is possible to improve the edge, texture, etc. so that the reflectance component can exhibit the same effect in reconstruction as a super resolution.
Motivated by the color perception characteristics of the HVS, the Retinex theory enhances high-contrast images, allowing more detail and color to be observed in low-light areas [8][9][10][11][12][13]. In addition, Retinex theory can be used effectively for shadow removal [14]. A representative method among Retinex theories is the Retinex model using regularization parameters by calculating the total variation in the reflectance function [8,15]. Recently, a Retinex model [16] has been proposed for effective image enhancement using the sparsity of reflectance in the gradient region and the sparsity of illumination in the frequency domain. The Retinex model in [17] proposed image enhancement using sparse coding. Sparse coding is a method of constructing the basis of an image through a dictionary.
However, such conventional Retinex methods have a problem in that when the illumination changes rapidly, details in a complex area within the image are not properly reflected and blurred, or illumination and reflectance cannot be accurately decomposed. In most Retinex models, illumination has smooth conditions and reflectance shows a detailed structure. Therefore, if the reflectance function can be more accurately generated, the detailed description of the complex area can be well preserved when the illumination changes rapidly, and thus, the illumination and reflectance of the image can be accurately resolved. In sparse source separation Retinex model [16] that consider sparsity, it is assumed that the reflectance component is sparse in the frequency domain. An image with a complex region (high rank) has, a number of detailed components. In such a case, the decomposition of reflectance is not properly performed, and an error in the reflectance function affects the illumination function, which results in the improper decomposition of reflectance and illumination. In particular, in the case of a Retinex model [17] using sparse coding, an advantage is that the reflectance component can be expressed by the dictionary greater detail. The sparse coding consists of a dictionary and a sparse coefficient in a linear combination, constituting a local patch dictionary. A drawback of the dictionary-based sparse coding approaches is that important spatial structures of the signal of interest may be lost because of its subdivision into mutually-independent patches. Further, patches (atoms) of the dictionaries learned using this approach are often redundant and contain shifted versions of the same features. Therefore, it is necessary to construct a reflectance dictionary for each image, which cannot be used as a robust dictionary. Therefore, a new method that accurately generates the components of the reflectance function is needed to overcome these limitations.
In this paper, we propose an image enhancement method based on Retinex theory using convolutional sparse coding (CSC). Our approach builds on recent advances in CSC and reconstruction techniques. We show that the CSC reconstruction technique provides a higher quality of high contrast and complex image than the existing patch-based sparse reconstruction techniques. In addition, we observe that CSC, is a particularly well-suited general dictionary for the different types of high-contrast and complex signals present in the reflectance function. The advantage of a general dictionary is that when an arbitrary image is input, the reconstruction can be performed immediately with the learned dictionary without learning a new one. We use singular value decomposition (SVD) in CSC to construct a more compact dictionary in limited memory. In addition, since it is a form of reflectance basis of a general image, it only has additional information that fits the basis. Based on this fact, we can improve the low-light image through our proposed method. Additionally, a halo artifact can occur in a common Reinex-based reconstruction problem, which can be reduced by our proposed method. These are detailed in Section 4. Therefore, we pose the Retinex image enhancement problem as a CSC problem and derive necessary formulations to solve it. We make the following contributions:
•
We show that the reflectance function of the Retinex model can be learned through a CSC dictionary, leading to efficient image enhancement through the expression of various nonlinear image shapes.
•
We propose that the reflectance function can be reconstructed using a trained general dictionary using CSC from a large dataset.
•
We use SVD in CSC to construct a more compact dictionary in limited memory.
The remainder of this paper is organized as follows. Section 2 briefly introduces traditional Retinex model, sparse coding for Retinex model, and basic CSC. Section 3 discusses our CSC Retinex algorithm, focusing on the proposed objective function and the general reflectance function dictionary learned with CSC. Obtained experimental results are presented and analyzed in Section 4. Finally, Section 5 summarizes this paper.
Related Work
In this section, the traditional Retinex model, Retinex model using sparse coding, and CSC are described. Section 2.1 introduces the representation of Retinex model separately through reflectance and illumination functions. Section 2.2 introduces the used Retinex model by learning a dictionary through sparse coding. Finally, Section 2.3 describes CSC.
Retinex Model
The physics-based Retinex model [16] solves the problem by transforming the Retinex theory into a more physical form through a series of equations or optimization problems. These algorithms have been widely studied in recent years because of their ability to remove the entire illumination from an image. As mentioned above, the algorithm in this category expresses the image value S as a product of the illumination function L and the reflectance function R as seen in Equation (1). Here, we further assume that 0 < R < 1 (reflectivity) and 0 < L < ∞ (illumination effect). Based on these assumptions, Equation (1) implies that L > S > 0. In order to handle the product form, we first convert it into the logarithmic domain, that is, s = log(S), l = log(L), and r = log(R). Then, we obtain s = l + r . Note that 0 < R < 1. We set r = −r > 0; then the above model takes the following form: Thus, the illumination component l and the reflectance component r can be easily decomposed by converting them to a logarithmic addition.
Sparse Coding Retinex (SCR) Model
In the SCR model [17], sparse coding is used to search for a suitable basis in the dictionary for the reflectance function and capture more detailed structures or features of the reflectance function. Sparse coding can be used to learn the dictionary, and each image patch can be represented sparsely using a linear combination of the atoms from a specially chosen dictionary. Specifically, the signal Y is sparse in the sense that Y ≈ DX with the sparse coefficient matrix X, which is represented by the dictionary D. The SCR model expresses the reflectance component as a dictionary of sparse coding to remove illumination.
here, ∇ is the gradient operator. D is a learned dictionary of size n 2 -by-k attached to the restored image, with k atoms in the dictionary; R ij is the sampling matrix of size n 2 -by-N 2 to construct a patch for a part of r; γ ij is a vector of size k-by-1, containing the encoding coefficients for the patch of r represented in the dictionary; P = {1, 2, . . . , N − n + 1} 2 denotes the index set for different patches of r; || · || 2 denotes the Euclidean norm of a vector; and || · || 0 denotes the number of nonzero elements. A drawback of dictionary-based sparse coding approaches is that important spatial structures of the signal of interest can be lost because of its subdivision into mutually-independent patches. Further, patches (atoms) of the dictionaries learned with this approach are often redundant and contain shifted versions of the same features. This can be seen in Figure 1b, which shows the sample atoms of a dictionary learned from reflectance images. Moreover, as we show in the red box of Figure 2, owing to the nature of the mathematical formulation (a linear combination of learned patches), these patch-based approaches can fail to adequately represent high-frequency, high-contrast image features, which are particularly important in reflectance images. Figure 1. Single image dictionary: (a) Training image; (b) Sparse coding dictionary in single image [17]; (c) CSC dictionary in single image. Figure 2. The result of applying the Retinex algorithm from the single image dictionary in Figure 1: (a) SCR [17] illumination; (b) SCR reflectance; (c) Image enhancement through SCR; (d) CSC Retinex [18] illumination; (e) CSC Retinex reflectance; (f) Image enhancement through CSC Retinex.
Convolutional Sparse Coding (CSC)
An alternative to patch-based approaches is CSC, which is based on an image decomposition into spatially-invariant convolutional features, as explained in the following paragraphs. Compared to the atoms of a dictionary, the learned filters of our CSC scheme ( Figure 1c) show a much richer variance (e.g., they span a larger range of orientations), which leads to better reconstructions.
CSC models the signal of interest α as a sum of sparsely-distributed convolutional features [19][20][21], that is α is modeled as: The CSC problems are expressed in the form arg min where each example image x w is represented as the sum of sparse coefficient feature maps z w k convolved with filters d k of fixed spatial support. The superscripts indicate the example index w = 1, . . . , W, and the subscripts indicate the coefficient/filter map index k = 1, . . . , K. The variables x w ∈ R D and z w k ∈ R D are vectorized images and feature maps, respectively, d k ∈ R S represents the vectorized s-dimensional filters, and * is the s-dimensional convolution operator defined on the vectorized inputs. The constraint on d k ensures the dictionary does not absorb all of the system's energy. As shown in [21], Equation (5) is reformulated as an unconstrained optimization problem. The constraint is then absorbed in an additional penalty indicator ind c (·) for each filter, defined on the convex set of constraints C = {x | ||Mx|| 2 2 ≤ 1}, where M is the R S×D Fourier sub-matrix that computes the inverse Fourier transform and projects the result onto the spatial support of each filter. arg min is a concatenation of Toeplitz matrices, each one expressing the convolution with the respective sparse coefficient map z w k (Z w ∈ R D×DK ). Accordingly, eliminating the sum over the examples (index W) by stacking the vectorized images in Equation (7) derives a consensus optimization method for CSC, allowing the splitting of large-scale and high-dimensional CSC into smaller sub-problems [21]. The individual sub-problems can be solved efficiently using the distributed Fourier-domain formulation, with parallel workers. Consensus optimization makes CSC tractable to large problem sizes.
Proposed Method
In this section, we describe the proposed CSC Retinex algorithm, focusing on the proposed objective function and the general reflectance function dictionary learned with CSC. First, a new reflectance function using CSC is described. Then, we develop an objective function which combines illumination and reflectance functions. Next, we introduce a method to minimize the objective function. Finally, the architecture of the Retinex model is depicted.
Proposed Reflectance Function
The reflectance function of the proposed Retinex model aims to find the best basis that guarantees more detailed structures or features. Therefore, the proposed reflectance function should generate the most appropriate basis dictionary from the example image, and there should be a negligible redundancy between each dictionary patch. As mentioned in Section 2.3, CSC is based on an image decomposition into spatially-invariant convolutional features. Compared to the atoms of a dictionary, the learned filters of our CSC scheme (Figure 1c) show a much richer variance (e.g., they span a larger range of orientations), which leads to better reconstructions.
The components of the reflectance function of the proposed Retinex model are as follows.
In this paper, CSC was used as given in Equation (8) to construct the reflectance function. Therefore, we learn dictionaries and coefficient maps through CSC. The reflectance is generated through the convolution of the dictionary and coefficient map. When a component of the reflectance function is generated through convolution, various nonlinear shapes of an image can be expressed, including both local and global features. For this reason, the reflectance can be expressed even in a more complex area than the conventional sparse coding method, following which the illumination can be accurately decomposed in the objective function. Learning in CSC takes longer than the previous method, but if we learn the dictionary, we can quickly proceed with the algorithm through the stored dictionary when creating the reflectance function component in the test stage.
We can reconstruct the reflectance component by using CSC to learn a general dictionary with a large dataset. In sparse coding, each image patch can be sparsely expressed using a linear combination of atoms from a specially selected dictionary. However, in such a linear combination, there is a limit to the number that can be expressed, and since the dictionary patch of sparse coding has high redundancy, this number is further reduced. In CSC, each image patch can be sparse expressed using a convolution of atoms from a specially selected dictionary. The convolution can express more cases than a linear combination; moreover, the redundancy of the CSC dictionary is also lower than the sparse coding, thereby leading to the expression of more cases ( Figure 1). Therefore, if we generate a general dictionary sufficiently learned from a large dataset through CSC, an appropriate reflectance component can be reconstructed even in an untrained test image. This is advantageous as there is no need to generate a dictionary every time, when compared to the SCR model that has to generate a dictionary for each image. To do this, we construct a basis dictionary using SVD.
Proposed Objective Function
In the proposed method, CSC is applied to the Retinex model to effectively decompose the image's illumination and reflectance. The main idea of the CSC Retinex model is to search the appropriate basis in advance for the reflectance function, and then decompose the illumination by identifying more detailed structures or features in the reflectance function. Therefore, the key step is to construct a dictionary for expressing the reflectance component of the input image.
Our model is based on the following assumptions: • In general, since the illumination function is spatially smooth, it can be expressed as α 2 ||∇l|| 2 2 as the regularization term.
•
For the reflectance function, it is generated by CSC as mentioned in Section Section 3.1 by Based on the reflectivity, the constraints l ≥ s and r ≥ 0 are added.
We consider the following energy function for Retinex to simulate and explain how the HVS perceives color: here, α, β and η are positive numbers for regularization parameters. In our proposed model, the reflectance can be better represented by a trained dictionary than the SCR model, and the first and second terms in Equation (9) can be interpreted as the regularization terms for reflectance r. In addition, by applying an iterative algorithm in our proposed model, we construct a dictionary that can derive the optimal reflectance each time the algorithm is repeated. Therefore, the following alternating minimization method [8] is used to solve Equation (9).
Reflectance Function Sub-Problem
From line 3 of Algorithm 1, the reflectance sub-problem can be written as min r,d,z here, the formula for finding the dictionary d k and coefficient map z k through CSC Equation (7) is as follows. arg min where, r = [r 1 T . . . r W T ] T . Equation (11) can be solved using the consensus alternate direction method of multipliers (ADMM) [21,22]. Z and r can be expressed in a small N blocks matrix Z = [Z 1 . . . Z N ], r = [r 1 . . . r N ]. Here, r i represents the i th data block along with its respective filters Z i . The filter d sub-problem through ADMM is as follows [21]. Algorithm 2's calculation follows CSC's ADMM method [21]; however, in particular, we propose a new method to solve the least-squares problem in line 3. The least-squares solution in line 3 is as follows.
where † denotes the conjugate transpose, and I denotes the identity matrix.
Then, we can solve the least-squares problem in line 3 through SVD. The diagonal entries σ i p of ∑ i are called the singular values of Z i . The columns of U i are called left singular vectors. Moreover, the columns of V i T are called right singular vectors. And h is the rank of Z i . Then, Z † i Z i can be calculated as V i ∑ i 2 V i T . We can use SVD to select only the important parts of large datasets and update the dictionary through it. Since we have limited memory (dictionary filter size and number), it is particularly important to construct a general dictionary for selecting and compressing only important information from large datasets.
SVD is known to be the most robust and reliable method for solving the least-squares problem, but it has the disadvantage of complicated computation. Nevertheless, we use SVD to obtain the most common basis dictionary. It means that the dictionary constituting the reflectance can be constructed on a basis like SVD. Therefore, the best reflectance component r can be obtained from the dictionary and coefficient map in CSC.
Illumination Function Sub-Problem
From line 5 of Algorithm 1, the illumination sub-problem can be written as Since Equation (13) is a l 2 -norm problem, it can be easily solved through a partial derivative in the whole image area Ω by differentiating with respect to l and setting the result of Equation (13) to zero. Then, it can be solved efficiently through a fast Fourier transform (FFT) [8,17].
Experimental Result
In this section, we present the numerical results to illustrate the effectiveness of the proposed model and algorithm. In addition, we verify the algorithm through a comparative analysis of our proposed method and the SCR method. The proposed method and the SCR method implement the algorithm by applying the HSV (Hue, Saturation and Value) Retinex model to the color image. HSV Retinex is intended to reduce color changes by applying the Retinex algorithm only to the value channel of the HSV color space, and then convert it back to the RGB domain.
We note that the reflectance image obtained from Retinex is usually an overenhanced image. Therefore, we add a Gamma correction operation after the decomposition. Suppose L = exp(l) is the illumination function obtained from Algorithm 1, and S = exp(s) is the initial image; then the reflection function will be given by R = S/L. The Gamma correction of L with an adjusting parameter γ is defined as L = W( L W ) 1 γ . In this experiment, we set the commonly used parameter γ = 2.2. W is the white value (it is equal to 255 in an 8-bit image, and also equal to 255 in the value channel of an HSV image), and the final result is given as S = L · R. We use the following parameter values to compare the two methods in the test: α = 1, η = 0.1, β = 1. For the stopping criteria, we set l = r = 0.001 in Algorithm 1. We verified the algorithm using MATLAB 2019(a) version. When the size of the dictionary to be learned in the algorithm is small, the calculation of the proposed method can be completed faster. However, if the algorithm has a larger dictionary size, the reflectance function can be more complex, and the computational complexity will increase; therefore, we need to choose an appropriate dictionary size. In this paper, we experimentally set the size of the dictionary to 11 × 11 and use 100 filters (11 × 11 × 100), as seen in Figure 3. To generate the general dictionary for large-scale image data, we use 2000 images from ImageNet [23]. As seen here, Figure 3a without SVD and Figure 3b with SVD are similar in appearance, but the dictionary with SVD includes more general features that yield better reconstruction results (Sections 4.1 and 4.2). It also looks simpler and closer to the basis by using SVD. We collected training and test images for our experiment from [24,25], which was publicly released.
(a) (b) Figure 3. Proposed Retinex algorithm dictionary; (a) Our CSC dictionary in large datasets, (b) Our SVD-CSC dictionary in large datasets.
Real Image
As a first experiment with a real image, we test an image with a complex structure using a dictionary learned from an image with a simple structure. By comparing this with the dictionary of sparse coding and the dictionary of CSC, we can see that the reflectance dictionary of CSC is superior to sparse coding. Through the learned dictionary using Figure 4a, the reflectance and enhancement images can be obtained through each CSC and sparse coding. Since the image used in training and the image used in the test in a single image are different, the separation of illumination and reflectance through CSC and sparse coding is not good. Although separation through CSC is not complete, it still yields better results than sparse coding. The reflectance of Figure 4b and the enhancement image of Figure 4c obtained through the CSC show that the details are better restored in the cat's hair and texture.
Second, in Figure 5, the results of sparse coding in a single image, CSC in a single image, CSC in generating a general dictionary through large datasets, SVD-CSC in a single image and SVD-CSC in generating a general dictionary through large datasets are compared. These comparisons help contrast the performance of the general dictionary generated from large datasets to a single image dictionary. Figure 1a is an original image used as a training and test image for single image sparse coding, single image CSC, and single image SVD-CSC. In addition, it is used as a test image in the general dictionary. As shown in Figures 5c, the result of using SVD-CSC in a single image had a better quality with regard to image contrast than the result using a general dictionary. However, this difference is minimal, and Figures 5e using general dictionary in SVD-CSC also show good results. Figure 5e using the SVD-CSC general dictionary looks almost the same as Figure 5b using the CSC single dictionary. This is because SVD is not used for training in large datasets, and thus, important basis data is not extracted and used in the iteration. Therefore, only the cost function value of the reflectance function itself is minimized, and local reflectance differences are not preserved. Figure 4. Results of using different images for training and test in a single image: (a) Training image; (b) Reflectance using CSC dictionary; (c) Image enhancement using CSC dictionary; (d) Original Test image; (e) Reflectance using sparse coding dictionary; (f) Image enhancement using sparse coding dictionary. Figure 5. Comparison of single image dictionary and large datasets general dictionary in enhancement image: (a) Image enhancement using SCR same as Figure 2c; (b) Image enhancement using CSC Retinex, same as Figure 2f; (c) Image enhancement using CSC general dictionary; (d) Image enhancement using SVD-CSC single dictionary; (e) Image enhancement using SVD-CSC general dictionary. Third, we compare the SVD-CSC method with the CSC method without SVD. We show through this comparison that a dictionary using SVD can be more general than the one without SVD. As mentioned before, both dictionaries were trained with ImageNet [23] datasets. In Figures 6-8, the results of the general dictionary through SVD-CSC have better image quality than those without SVD. In particular, the SVD-CSC shows better local reflectance than the CSC, when looking at the wheels of the bicycle in Figure 6, the hourglass and candlestick in Figure 7, and the frames in Figure 8. As mentioned earlier, this difference is even more pronounced in the absence of SVD, especially in terms of local reflection, because dictionaries do not contain important information, but are only directed at minimizing the energy in the least-squares problem. On the other hand, SVD-CSC generates a dictionary using SVD to decompose on the best basis in the least-squares problem. Figure 6. Comparison of SVD-CSC and CSC methods without SVD: (a) Original test image; (b) Reflectance using CSC general dictionary; (c) Image enhancement using CSC general dictionary; (d) Reflectance using SVD-CSC general dictionary; (e) Image enhancement using SVD-CSC general dictionary. Additionally, we examine the effect of SVD-CSC by reconstructing the reflectance from a dictionary of limited memory. For this purpose, we compare the results when the allowable memory is altered by changing the size of the filter. If the size of the filter is reduced, the elements constituting the dictionary are reduced. Consequently, each filter compresses and expresses information more clearly than when the filter size is large, and when reconstructing reflectance from such information, each filter must have more important information such as the basis of SVD than the preceding one to obtain better reconstruction results. In Figure 9a, it is seen that when using 5 × 5 × 100 filters, the large data CSC does not express the local reflectance well.Meanwhile, the large data SVD-CSC in Figure 9b does not express perfect local reflectance, but it shows better results than the large data CSC. When using 15 × 15 × 100 filters in Figure 9c,d, both large data CSC and large data SVD-CSC have sufficient information, and hence, they show similar results. Therefore, the proposed large data SVD-CSC method can achieve better reconstruction results by constructing a compact dictionary with limited memory. In Figures 5-8, we show the image enhancement effect in low-light image. Finally, we show that high quality reflectance images can be obtained by removing a halo artifact from image reconstruction. A common problem with Retinex-based algorithms is the appearance of a halo artifact around the high-contrast areas owing to the rapid change in illumination. The halo artifact are shown in the red box in Figure 10. As shown in Figure 10c, the halo artifact usually appear due to the smoothness conditions of illumination of the Retinex model [16]. However, in Figure 10e our proposed method can reduce this artifact because the reflectance can be reconstructed in more detail. Therefore, our proposed method can reduce a halo artifact while using limited memory in these reconstruction applications, and acquire high quality images with enhanced edges and texture such as super resolution. Figure 10. Comparison of high contrast image reconstruction: (a) Original test image; (b) Reflectance using [16] Retinex model; (c) Image enhancement image using [16] Retinex model; (d) Reflectance using SVD-CSC general dictionary; (e) Image enhancement image using SVD-CSC general dictionary.
Objective Evaluation
A blind image quality assessment called natural image quality evaluator (NIQE) [26] was used to evaluate the enhanced results. A lower NIQE value represents a higher image quality. Since NIQE evaluates only the naturalness of image assessment, we used another color image assessment called auto regressive-based image sharpness metric (ARISM) [27]. In Table 1, the large data SVD-CSC method had a lower score on NIQE/ARISM than the sparse coding and large data CSC methods; thus, our method is shown to have a good balance and performance. In addition, the large data SVD-CSC method has a superior or similar performance to the single image CSC, and a lower performance than the single image SVD-CSC, but without a significant difference. Therefore, through objective evaluation, we showed that the general dictionary of large data SVD-CSC method has a robust performance in most cases. We reconstruct the reflectance from the limited memory dictionary illustrated in Figures 5-8 to confirm the effect of SVD-CSC. The smaller the filter size of the dictionary, the more the compression rate, which adversely affects the performance when it reconstructs reflectance and subsequently, leads to bad results. Nevertheless, our proposed SVD-CSC method represents an effective objective performance indicator. In Table 2, when the filter size is 15 × 15, CSC and SVD-CSC express reflectance with sufficient information, and both show similar results that are good. However, when the filter size is 5 × 5, the performance of the CSC is greatly reduced, whereas the SVD-CSC shows similar performance to the 11 × 11 filter CSC. Therefore, it can be seen that the proposed SVD-CSC can be effectively applied when the compression ratio is high.
Synthesis Image
We compare the performance of the proposed SVD-CSC and other methods with a synthesized image. Figure 11 shows the results of image tests using an image with uniformly dark colors and the same degrees of illumination. From Figure 11b, image enhancement results shown in Figure 11c-g were obtained using sparse coding, single image CSC, single image SVD-CSC, large data CSC, and large data SVD-CSC methods, respectively. In Figure 12, we compare the methods using the S-CIELAB color metric [28], which includes a spatial processing step, and is useful and efficient for measuring color reproduction errors in digital images. We show the S-CIELAB errors between Figure 11a and the sparse coding results in Figure 11c, between Figure 11a and the single image CSC results in Figure 11d, between Figure 11a and the single image SVD-CSC results in Figure 11e, between Figure 11a and the large data CSC results in Figure 11f, and between Figure 11a and the large data SVD-CSC results in Figure 11g. Figure 12a,c,e,g,i show a green color when the S-CIELAB error exceeds 30 units. Figure 12b,d,f,h,j show the histogram distribution of the S-CIELAB error. The S-CIELAB error histogram distributions give the numbers of pixels per error unit. In this test, the S-CIELAB error values for the proposed method (SVD-CSC) indicate that 8.1% of the image exceeded 30 units, whereas 22.9%, 7.9%, 6.4%, and 13.8% of the image exceeded 30 units with sparse coding, single image CSC, single image SVD-CSC, and large data CSC respectively. Quantitatively, the error of single image SVD-CSC is the least, but the dictionary created in this method cannot be used in general. On the other hand, in the case of the large data SVD-CSC, the learned dictionary can be used robustly in general, with performance close to that of the single image SVD-CSC.
Conclusions
In this paper, we proposed Retinex based image enhancement method via CSC. To realize this, the reflectance function of the Retinex algorithm was designed through CSC. In addition, we applied a SVD to the proposed reflectance function, and a dictionary closer to the basis was constructed. Through this, the Retinex algorithm was designed to preserve more details than the existing methods, and the object function also contributed to the improved quality of the separated image. We also showed that by learning a dictionary from large datasets, we can use a general dictionary without learning a new dictionary from any input image. As shown in the experimental results, the general dictionary is robust and performs adequately for both real and synthetic images. Although we showed that the reflectance dictionary can be effectively constructed using a limited memory and has general applications, it does not contain the optimal number of dictionary filters because the number of dictionary filters has been determined experimentally. Therefore, further research is required to determine the optimal number of dictionary filters for clear and accurate results. | 2020-07-02T10:29:03.474Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "9d023cbfe8111e80c5b62b865b928a6c656cbe93",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/12/4395/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "16a1d936be669a3a04d1672669c1692655e53cbe",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3343948 | pes2o/s2orc | v3-fos-license | Effects of Social Experience on the Habituation Rate of Zebrafish Startle Escape Response: Empirical and Computational Analyses
While the effects of social experience on nervous system function have been extensively investigated in both vertebrate and invertebrate systems, our understanding of how social status differentially affects learning remains limited. In the context of habituation, a well-characterized form of non-associative learning, we investigated how the learning processes differ between socially dominant and subordinate in zebrafish (Danio rerio). We found that social status and frequency of stimulus inputs influence the habituation rate of short latency C-start escape response that is initiated by the Mauthner neuron (M-cell). Socially dominant animals exhibited higher habituation rates compared to socially subordinate animals at a moderate stimulus frequency, but low stimulus frequency eliminated this difference of habituation rates between the two social phenotypes. Moreover, habituation rates of both dominants and subordinates were higher at a moderate stimulus frequency compared to those at a low stimulus frequency. We investigated a potential mechanism underlying these status-dependent differences by constructing a simplified neurocomputational model of the M-cell escape circuit. The computational study showed that the change in total net excitability of the model M-cell was able to replicate the experimental results. At moderate stimulus frequency, the model M-cell with lower total net excitability, that mimicked a dominant-like phenotype, exhibited higher habituation rates. On the other hand, the model with higher total net excitability, that mimicked the subordinate-like phenotype, exhibited lower habituation rates. The relationship between habituation rates and characteristics (frequency and amplitude) of the repeated stimulus were also investigated. We found that habituation rates are decreasing functions of amplitude and increasing functions of frequency while these rates depend on social status (higher for dominants and lower for subordinates). Our results show that social status affects habituative learning in zebrafish, which could be mediated by a summative neuromodulatory input to the M-cell escape circuit, which enables animals to readily learn to adapt to changes in their social environment.
While the effects of social experience on nervous system function have been extensively investigated in both vertebrate and invertebrate systems, our understanding of how social status differentially affects learning remains limited. In the context of habituation, a well-characterized form of non-associative learning, we investigated how the learning processes differ between socially dominant and subordinate in zebrafish (Danio rerio). We found that social status and frequency of stimulus inputs influence the habituation rate of short latency C-start escape response that is initiated by the Mauthner neuron (M-cell). Socially dominant animals exhibited higher habituation rates compared to socially subordinate animals at a moderate stimulus frequency, but low stimulus frequency eliminated this difference of habituation rates between the two social phenotypes. Moreover, habituation rates of both dominants and subordinates were higher at a moderate stimulus frequency compared to those at a low stimulus frequency. We investigated a potential mechanism underlying these status-dependent differences by constructing a simplified neurocomputational model of the M-cell escape circuit. The computational study showed that the change in total net excitability of the model M-cell was able to replicate the experimental results. At moderate stimulus frequency, the model M-cell with lower total net excitability, that mimicked a dominant-like phenotype, exhibited higher habituation rates. On the other hand, the model with higher total net excitability, that mimicked the subordinate-like phenotype, exhibited lower habituation rates. The relationship between habituation rates and characteristics (frequency and amplitude) of the repeated stimulus were also investigated. We found that habituation rates are decreasing functions of amplitude and increasing functions of frequency while these rates depend on social status (higher for dominants and lower for subordinates). Our results show that social status affects habituative learning in zebrafish, which could be mediated by a summative neuromodulatory input to the M-cell escape circuit, which enables animals to readily learn to adapt to changes in their social environment.
INTRODUCTION
Most animals make context-dependent behavioral decisions as they navigate their environment (Calabrese, 2003;Kristan, 2008;Nienborg et al., 2012). For social animals these decisions are influenced in part by intraspecific social interactions that develop into long-term and stable dominance relationships (Issa et al., 2012;Miller et al., 2017). Different brain regions regulate different behaviors in social conflict (Chou et al., 2016). Although the effects of social experience on nervous system function has been investigated in both vertebrate and invertebrate systems, our understanding of how dominance relationships differentially affect learning remains limited (Yeh et al., 1996;Issa et al., 2012;Araki et al., 2013). One form of learning is habituative learning, a well-characterized form of non-associative learning during which an animal decreases its responsiveness to repeated stimuli (Thompson and Spencer, 1966;Rankin et al., 2009). While habituation has been described in many organisms including Cnidarians (Rushforth et al., 1964), aplysia (Kandel, 2009), crayfish (Krasne and Woodsmall, 1969;Araki et al., 2013), zebrafish (Eaton et al., 1977a;Marsden and Granato, 2015;Pantoja et al., 2016;Roberts et al., 2016) and humans (Davis, 1934), it remains unclear how social dominance leads to neural differences underlying habituation processes (Glanzman, 2009;Thompson, 2009).
Zebrafish has emerged as a good model system to investigate the neural mechanisms in behavioral neuroscience. When paired, zebrafish interact aggressively with conspecifics with aggressive displays that increase in intensity until a stable social hierarchy is established that persists for weeks (Larson et al., 2006;Oliveira et al., 2011;Pavlidis et al., 2011;Miller et al., 2017). In addition, zebrafish exhibit distinct simple behaviors that can be readily quantified behaviorally and whose neural correlates have been probed thoroughly (Eaton et al., 2001). These characteristics present an opportunity to study how short-term habituation is modulated by the long-term social experience on the activation dynamics of neural circuits.
One of these neural circuits is the Mauthner neuron (M-cell) escape circuit that mediates the startle avoidance response (Eaton et al., 2001). An abrupt auditory pulse to the ear elicits a quick (<10 ms) and highly stereotypical startle escape response. The functional and anatomical organizations of the underlying neural circuit that mediates the startle escape response have been studied in larval and adult zebrafish, and goldfish (O'Malley et al., 1996;Eaton et al., 2001;Severi et al., 2014;Thiele et al., 2014;Wang and McLean, 2014). A distinct reticulospinal neural network centered around the M-cells is known to initiate and control the startle escape response ( Figure 1A). The M-cells constitute a pair of neurons that receive ipsilateral synaptic sensory input (Sato et al., 2007;Kohashi and Oda, 2008;Mu et al., 2012) and project axons to innervate contralateral spinal cord motor neurons to elicit a rapid fast flexion (Eaton et al., 1977b;Zottoli et al., 1987;Canfield, 2003). Moreover, the startle escape response can be modulated by social status (Neumeister et al., 2010;Whitaker et al., 2011). The mechanism of modulation includes the excitability of the M-cell as well as excitatory and inhibitory drives to the M-cell escape circuit (reviewed in Medan and Zebrafish startle response is activated by auditory stimuli. C-start behavior is mediated by the Mauthner neural circuit. M-cell innervates contralateral spinal cord motor neurons that activate the musculature. Activation of the M-cell is necessary for C-start escape. (B) A pair of bath electrodes is placed on each end of each testing chamber. Bath electrodes detect neuromuscular field potentials generated as the M-cell escape response is activated. M-cell escape is activated by an auditory pulse. Field potentials and stimuli are time-locked and digitally recorded. (C) An illustrative example of a phasic field potential recording during activation of the C-start escape response mediated by the M-cell followed by C-start bend and counter-bends. (D) Representation of the repeated auditory stimuli at 1 and 0.2 Hz. (E) Latency between the stimulus and the phasic potential for communals (COMM), dominants (DOM), and subordinates (SUB) at 1 and 0.2 Hz. Mean±SEM was plotted. of the swim circuit while subordinate animals swim less but their escape response is more sensitive.
Not only is the escape circuit prone to social plasticity, it is also known to be susceptible to habituation. Repeated auditory stimulation at a rate of 0.2 Hz rapidly decreases response probability (Marsden and Granato, 2015). Moreover, rate of habituation depends on the intensity as well as the frequency of stimulation (reviewed in Rankin et al., 2009). However, it is poorly understood how social status influences the habituation process of the M-cell startle escape response to repeated auditory stimulation. It is known though that the escape circuit is regulated by neuromodulatory inputs (Pereda et al., 1992;McLean and Fetcho, 2004). Neuromodulatory inputs that regulate the excitability of the escape circuit seem to influence the M-cell habituation rate to repeated stimuli. Marsden and Granato (2015) suggested a model for habituation as the combination of increased inhibitory input from feed-forward glycinergic inhibitory neurons and decreased excitatory input from auditory afferents, which together decrease the net excitability of the Mcell. Thus, the escape circuit may also be influenced by sustained social cues during social interactions that enable the animals to learn their social standing as they interact with conspecifics.
In the present study, we investigated a potential mechanism of how neuromodulatory inputs that are known to regulate the excitability of the M-cell escape circuit can be altered to mediate differences in habituation rates depending on the social status and characteristics of stimulation inputs. We constructed a simplified neurocomputational model of the M-cell escape circuit to test this idea and investigated how the interplay among social status, cellular excitability, and characteristics (frequency and amplitude) of repeated stimulations affects the habituation of the M-cell escape response.
Animal Maintenance
The experiments were approved by the East Carolina University Committee of Institutional Animal Care and Use. Wild type AB zebrafish strain (7-12 months old) (Danio rerio) were housed communally with mixed sex (∼20 fish per 10 gallon tanks) at 28 • C under a 14 h/10 h light/dark cycle and fed three times daily to satiation with a commercial food (Otohime B2, Reed Mariculture, CA, USA). Food was supplemented with newly hatched artemia (Brine Shrimp Direct, UT, USA).
Social Isolation, Pairing and Behavioral Observations
Males were randomly selected from communal tanks and physically and visually isolated from conspecifics for 1 week in isolation tanks (23 × 13 × 6 cm). Following isolation, animals were randomly paired with a conspecific continuously for 2 weeks in a novel tank of equal dimension to the isolation tanks. To determine social status formation and stability of social relationships, we recorded the daily aggressive behavior between animals during 5 min of observations. Observations occurred in the morning between 10 a.m.−12 p.m. when animal activity was relatively elevated. We counted the total number of attacks/bites and retreats displayed by each animal. These behaviors are reliable measures of assessing social dominance as described previously (Oliveira et al., 2011). Social dominance was calculated by taking the ratio of the total number of aggressive to submissive behavior. The animal with a higher ratio is considered the dominant animal. In rare instances when the dominance relationship was unstable or reversed the pairs were excluded from the study (n = 2 out of 35). These paired animals quickly form stable dominance relationships by starting on the third day of interactions and remained stable for the remainder of the 2 weeks (Miller et al., 2017). As an experimental control we also tested male communal animals of similar age and size as the experimental animals. The communal fish were housed in mixed sex tanks of ∼20 fish.
Experimental Setup
Animals were transferred to two separate but identical testing chambers (dimensions: 11 × 4 × 3 cm) and allowed to acclimate for a period of 30 min. With this experimental arrangement the animals were physically, visually and chemically separated. The chambers were equidistant from the speaker (4 cm) ( Figure 1B). The testing chambers contained double distilled water with a resistance of ∼15 M -cm and temperature of 25 • C. As previously described, high resistive water was found to improve the signal to noise ratio of the electrical field generated during escape behavior, and long-term exposure to water with low ionic concentration does not have obvious effects on the behavior or stress level of the animals (Issa et al., 2011;Monesson-Olson et al., 2014).
To record the electric field potentials of the escape response we used a pair of bipolar conductive electrodes (1 mm bare thickness, 3-5 mm metal exposure). Each pole was placed at the end of the testing chamber ( Figure 1B). Electric signals were amplified 1,000-fold using an AC differential amplifier with a low cut-off at 300 Hz and high cut-off at 1 KH (AM-Systems model 1700, Carlsborg, WA, USA). Signals were digitized using a Digidata-1322A digitizer and stored using the Axoscope software (Molecular Devices, Inc., Sunnyvale, CA, USA). Identification of the M-cell mediated escape response can be reliably identified due to the short-latency and large amplitude of the field potential generated during escape. Onset of M-cell mediated escape occurs between 5 and 15 ms with a characteristically large and phasic electric field potential of the M-cell followed by the neuromuscular field potentials of motor neurons during escape ( Figure 1C). Figure 1E shows the latencies for all three animal groups (dominants, subordinates, and communals) to consecutive 40 stimuli: communals had 5.73±0.25 ms (mean± SEM) at 1 Hz and 5.36±0.25 ms at 0.2 Hz; dominants had 4.65±0.37 ms at 1 Hz and 4.02±0.44 ms at 0.2 Hz; subordinates had 4.84±0.43 ms at 1 Hz and 5.36±0.72 ms at 0.2 Hz. Unlike the M-cell field potentials, the electric signal generated during swimming behavior is qualitatively and quantitatively distinct in that they are significantly smaller in amplitude and longer in duration compared to M-cell generated field potential (Prugh et al., 1982;Issa et al., 2011;Monesson-Olson et al., 2014). These features enable the unambiguous characterization of the field potentials generated by these two behaviors (Issa et al., 2011).
Habituation Trials
Auditory pulses were digitally generated via a computer using the Audacity open source audio editor and recorder software (www.audacityteam.org). The amplitude of auditory pulses was calibrated prior to the experiments using a decibel meter (Sinometer, MS6700). Supra-threshold auditory stimuli consisted of a phasic 1 ms at 95 dB re 20 µPa amplitude (sine wave). For the habituation experiments, we delivered 40 repeated stimuli at 1 or 0.2 Hz ( Figure 1D).
Data Analysis
All statistical analyses were performed in MATLAB (Mathworks, Nautick, MA) and Prism (GraphPad software Inc., San Diego, USA). Unless specified otherwise, all comparisons were first subjected to analysis of variance testing (ANOVA) or mixed design ANOVA (a mixture of one between-group and repeated measures variables) followed by Tukey's HSD post-hoc test for all multiple comparisons.
Neuronal Model
In a previous study (Miller et al., 2017), a neurocomputational model of the escape and swim circuits in the zebrafish was constructed to study how social status may regulate the activation of the escape and swimming behaviors. In that model, M-cells were the main command neurons in the escape circuit and the stimulus was directly delivered unilaterally to one of the M-cells. In the present study, we used this neurocomputational model as a simplified M-cell escape circuit and tested one possible mechanism by which the rate of habituation of the escape circuit with respect to repeated external stimuli can be regulated by social status and characteristics of repeated stimulations.
The M-cell model used a conductance-based modified Morris-Lecar neuronal model (Morris and Lecar, 1981;Izhikevich, 2007;Ermentrout and Terman, 2010) with additional calciumdependent potassium current. The membrane potential of each cell obeys the following current balance equation: where represent the potassium, calcium, calcium-dependent potassium, and leak currents, respectively. m ∞ is an instantaneous voltage-dependent gating variable for the calcium current where: The concentration of intracellular Ca 2+ is governed by the calcium balance equation: n is a gating variable for the potassium current obeying: Synaptic variable, s, is modeled by an equation for the fraction of activated channels: where s ∞ (v) = 1/ 1 + exp − v+θ s σ s . The term I syn in Equation (1) represents the synaptic input from the other Mcell and given by I syn = g syn v − v syn s, where s is the synaptic variable from another M-cell.
The applied current I app (t) in the M-cell for i = 1, 2 is modeled as: where I 0 is a fixed constant, I i (τ ) is the stimulus at time τ , w M is a fixed constant for the weight. E net (t) represents an activity-dependent adaptation with respect to repeated sensory inputs. The M-cell receives both excitatory inputs from the VIII th sensory nerve and inhibitory inputs from inhibitory commissural neurons (reviewed in Zottoli and Faber, 2000; also in Korn and Faber, 2005). Habituation is affected by a combination of increased inhibitory input from feed-forward inhibitory neurons and decreased excitatory input from auditory afferents, which decrease the net excitability of the M-cell (Marsden and Granato, 2015). We extend this idea so that in the model, E net (t) (Equation 9) represents the maximal net pre-synaptic input to the M-cell. Here we assume that E net (t) is excitatory. Now, calcium is known to modulate inhibitory pre-synaptic neurotransmitter release via retrograde signaling (Diana and Bregestovski, 2005). Pre-synaptic release of dopamine and post-synaptic activation of the M-cell are also known to be calcium dependent (Cachope et al., 2007). The M-cell's calcium response is due to the calcium influx to the M-cell (Takahashi et al., 2002) and reflects the M-cell's activation (Marsden and Granato, 2015). Moreover, the startle escape response probability of the M-cell is determined by the amplitude of calcium in the dendrite of the M-cell (Marsden and Granato, 2015). Hence, in the model M-cell we assumed that intracellular calcium level reciprocally modulates the maximal net excitation of the M-cell. Now, activity-dependent adaptation E net (t) obeys the following equation: where ag max is the maximal net excitation, ρ is the time constant of E net (t), and [Ca] i is the intracellular calcium concentration of the i-th M-cell.
The set of parameter values are given by the following unless specified otherwise. g Ca = 4, g KCa = 0.25, g K = 8, g L = 2, Parameter values are adjusted in such a way that the M-cells do not fire action potentials unless they receive enough excitatory inputs from external stimuli.
In the computational study, we used the maximal net excitation ag max (Equation 9) as the main parameter to explore how the change in neuromodulatory inputs to the M-cell circuit leads to differences in habituation rate to repeated stimuli. Large ag max values presumably represent subordinate-like social phenotype, intermediate ag max values communal-like phenotype, and small ag max values dominant-like phenotype. To simulate the effect of an external stimulus onto the Mcell, a depolarizing current pulse was applied to the M-cell model neuron. Simulations were performed on a personal computer using the software XPP (Ermentrout, 2002). The numerical method used was an adaptive-step fourth order Runge-Kutta method with a step size 0.01 ms.
Experimental Results
To determine the effects of social status on habituation of the M-cell mediated escape response we delivered repetitive suprathreshold auditory stimuli (Figures 1B,D). Figure 2 shows the response patterns of dominants, subordinates, and communals to consecutive 40 stimuli at 1 Hz (left panel) and 0.2 Hz (right panel), respectively. Here, male communal animals of similar age and size were randomly selected from mixed sex tanks of ∼20 fish. We found that 1 Hz auditory stimulation was effective in habituating the escape response in all three groups. The rates of habituation were modestly higher in dominants and lower in subordinates compared to communals at 1 Hz. However, the rate of habituation in dominants was much more pronounced compared to subordinates. With repeated stimuli at 1 Hz, subordinates continued to faithfully respond at a steady rate; however, most dominant and some communal animals stopped responding after the first few repeated stimuli (Figure 2 left panels). When the same experiment was repeated with a 0.2 Hz stimulation protocol, we found that the habituation rate was reduced and differences of habituation rates among the three animal groups became less pronounced (Figure 2 right panels).
To determine whether these observations were statistically significant among the three social phenotypes and stimulus frequencies, we conducted a mixed design ANOVA (withinsubject factor as stimulation (40 stimuli), between-subject factors as group (dominant, subordinate, and communal animals) and frequency (1 and 0.2 Hz). We found that there were significant main effects of stimulation [F (39, 3,627) = 1.04e+1, p < 1.0e-16], frequency [F (1, 93) = 1.07e+2, p < 1.0e-16], and group [F (2, 93) = 3.36, p = 3.88e-2]. There was also a two-way interaction between frequency and stimulation [F (39, 3,627) = 2.34, p = 5.53e-6]. But there were neither two-way nor three-way interactions between frequency and group [F (2, 93) = 1.76, p > 0.05], between group and stimulation [F (78, 3,627) = 8.87e-1, p > 0.05], and among factors (group, frequency, and stimulation) [F (78, 3,627) We further performed post-hoc tests to determine the differences in the average response rates for animal groups and for the test conditions (1 and 0.2 Hz). The post-hoc test showed that the average response rates of the startle escape for subordinates were significantly higher compared to dominants (Tukey's HSD, p = 4.53e-2). However, there were no differences in the average response rates of the startle escape between dominants and communals (Tukey's HSD, p > 0.05) and between subordinates and communals (Tukey's HSD, p > 0.05). Moreover, the post-hoc test showed that the average response rates of the startle escape at 0.2 Hz were significantly higher compared to 1 Hz (Tukey's HSD, p = 1.06e-10).
Additionally, we analyzed differences of the response rates among animal groups at each frequency (1 and 0.2 Hz). To investigate the response rates of the startle escape in three animal groups (communal, dominant, and subordinate) at 1 Hz, a mixed-design ANOVA (within-subject factor as stimulation, between-subject factor as group) was performed. We found that there were significant main effects of stimulation [F (39, 2,223) = 1.17e+1, p < 1.0e-16] and group [F (2, 57) = 3.85, p = 2.71e-2]. But there was no effect of interaction between stimulation and group [F (78, 2,223) = 8.77e-1, p > 0.05]. We further performed post-hoc test to determine the differences of the response rates between animal groups at 1 Hz. The post-hoc test indicated that the response rates of the startle escape of subordinates were significantly higher compared to dominants (Tukey's HSD, p = 2.54e-2), but no other differences were observed (Tukey's HSD, p > 0.05). Moreover, as illustrated in Figure 2C left panel we observed the response rates of the startle escape of the first few stimuli for all animal groups were at least twice higher compared to the rest of the stimuli. To explore this observation further, we pooled the time bin into every 5 stimuli (1-5, 6-10, 11-15, etc.) and then performed a mixed-design ANOVA (within-subject factor as stimulation, between-subject factor as group). There were significant main effects of stimulation [F (7, 399) =2.75e+1, p < 1.0e-16] and group [F (2, 57) = 3.85, p = 2.71e-2]. No effect of interaction between stimulation and group was observed [F (14, 399) = 5.29e-1, p > 0.05]. The post-hoc test indicated that the response rates of the startle escape of the first pooled time bin (1-5 stimuli) was significantly higher compared to all other time bins (6-10, 11-15, . . . , 35-40) (Tukey's HSD, p ≤ 6.80e-8). There were no other differences among all other time bins (Tukey's HSD, p > 0.05) for all animal groups. This indicates that the response rates of the startle escape at 1 Hz stimuli were quickly decreased within the first five stimuli. After the first five stimuli, the response rates were settled down to certain levels depending on animal groups.
We also investigated the response rates of the startle escape for all three animal groups at 0.2 Hz by using a mixed-design ANOVA (within-subject factor as stimulation, between-subject factor as group). There was a significant main effect of stimulation [F (39, 1,404) = 3.12, p = 7.02e-10] but no effect of group [F (2, 36) = 1.70, p > 0.05]. There was also no effect of interaction between stimulation and group [F (79, 1,404) = 8.40e-1, p > 0.05]. As we did at 1 Hz, we also pooled the time bin into every 5 stimuli to determine whether the response rates of the startle escape of the first 5 stimuli were different from other time bins. We observed a significant main effect of stimulation [F (7, 252) = 5.97, p = 1.92e-6] at 0.2 Hz although it was lower compared to the effect observed in the 1 Hz protocol. There were no main effect of group [F (2, 36) = 1.70, p > 0.05] and no two-way interaction between stimulation and group [F (14, 252) Post-hoc test indicated that the response rate of the startle escape of the first pooled time bin (1-5 stimuli) was significantly higher compared to all other time bins (Tukey's HSD, p≤ 3.28e-2) except the third pooled time bin (11-15 stimuli) (Tukey's HSD, p > 0.05). There were no other differences among all other time bins (Tukey's HSD, p > 0.05). This indicates that the response rates of the startle escape to the 0.2 Hz repeated stimulation were also decreased within the first 5 stimuli although rates of decrease were smaller compared to 1 Hz. After the first 5 stimuli, the response rates were settled down to certain levels depending on animal groups. In summary, our results show that the habituation to repeated stimulation occurred at both frequencies (1 and 0.2 Hz) for all animal groups. However, at the moderate frequency (1 Hz) the habituation rates were more prominent compared to the lower frequency (0.2 Hz). The habituation rates for dominants were significantly higher compared to subordinates at 1 Hz while the differences of the habituation rates among all three animal groups were disappeared at 0.2 Hz. These results suggest that both social experience and frequency of stimulus influence the habituation of the M-cell's escape circuit.
Activity Patterns and Irregularity
Excitability of the startle escape circuit is subject to descending neuromodulation (Oda et al., 1998;Preuss and Faber, 2003). Presynaptic inputs to the M-cell may modulate the excitability of the startle escape circuit by changing in the excitability of the M-cell (Cachope et al., 2007;Medan and Preuss, 2011;Whitaker et al., 2011;Marsden and Granato, 2015). However, little is known of how a change in the M-cell's excitability results in different startle responses depending on social status and the properties of the stimulation input. Here, we hypothesized that pre-synaptic inputs to the M-cell escape circuit may be changed differentially according to social status and characteristics of the stimulations, which accounts for the observed differences in the response rates between dominants and subordinates and in the frequencies of the stimulation inputs. To test this hypothesis, we used the neurocomputational model of the M-cell (Miller et al., 2017; Equations 1-9) as a simplified M-cell escape circuit. This Mcell model is based on a conductance-based modified Morris-Lecar neuronal model (Morris and Lecar, 1981;Izhikevich, 2007;Ermentrout and Terman, 2010) with additional calciumdependent potassium current. In the simulation of the model, the maximal net excitation (ag max ) in the M-cell, which reflects the total net pre-synaptic inputs to the M-cell, was used as a main control parameter. Here, we assumed that the social status differentially affect the total pre-synaptic inputs to the M-cell escape circuit (see Materials and Methods). For example, the subordinate-like model receives more total net excitatory inputs so that it has higher ag max (Figure 3A) while the dominantlike model receives less total net excitatory inputs so that it has lower ag max (Figure 3C). The communal-like model lies in between the dominant-like model and the subordinate-like model ( Figure 3B). In the computational study, we explored not only the effects of social status, but also the effects of the characteristics of the stimulus input including the magnitude and the frequency. Figures 4A,B illustrated that the model was able to generate different response patterns depending on different maximal net excitation values (ag max ). The simulation began by finding quasi-steady states of membrane voltage v, gating variable n, calcium concentration [Ca], activity-dependent adaptation E net by running the computer simulation for 20 s without any stimulus input. Periodic depolarizing current pulses at 1 Hz (left panels) or 0.2 Hz (right panels) were given with the above initial conditions at time t = 20.3 s. For ag max = 41.5, the model displayed dominant-like response phenotype whereby the model faithfully responded only to the first few stimuli (Figure 4A upper panels). For ag max = 42.2 the model displayed communal-like response phenotype similar to the experimental results illustrated in Figure 2 (Figures 4A,B middle panels). On the other hand, for ag max = 43.5 the model displayed subordinate-like response phenotype similar to the experimental results illustrated in Figure 2 (Figures 4A,B lower panels). As ag max increased, the neuronal model tended to respond more faithfully to repeated stimulation inputs and eventually showed full and faithful responses to the stimulus inputs. In summary, the model was able to reproduce different degrees of habituation under periodic stimulation at 1 and 0.2 Hz that mimicked the experimental results shown in Figure 2 by controlling the parameter ag max . In the following analysis, we further explored how social experience and characteristics of the repeated stimulations affect habituation process.
Due to the important role of excitable cells in information processing within neural networks and many other biological systems, the response of excitable cells under periodic input has been studied extensively. For example, Kaplan et al. (1996) studied the response of periodically stimulated squid giant axons and observed irregular action potentials. Using one-dimensional return maps, they found that deterministic subthreshold dynamics are responsible for the observed irregular response patterns and subthreshold responses modulate the response of neurons to subsequent stimuli. In fact, the Mcell in our model showed similar response patterns to periodic input. As the simulation persisted, the cell tended to skip the response to inputs and elicited less action potentials and eventually showed irregular response patterns ( Figure 4C). To unveil the deterministic subthreshold dynamics responsible for this irregularity in our model, we constructed one-dimensional map using 200 s long data (between 100 and 300 s after the initiation of stimulation to obtain stable responses). Following Kaplan et al. (1996), we used the areas under the voltage trace around each stimulation input because the shape of action potentials and subthreshold responses reflect the dynamics of the neuron under repeated stimulation. More precisely, we first set up a certain threshold and then computed the area (denoted by A i ) under the voltage trace above this threshold at each stimulation i (Figure 4D). We then plotted (A i , A i+1 ). As in Kaplan et al. (1996), a logarithmic scale was used because of the large difference between the area of an action potential and a subthreshold response. Figures 4E,F showed the resulting onedimensional maps at 1 Hz ( Figure 4E) and 0.2 Hz ( Figure 4F). Note that the maps do not depend strongly on how the areas are calculated as stated in Kaplan et al. (1996). The solid line on the diagonal in each figure is the line of identity where A i+1 = A i . The small cluster of points on the line of identity in Figure 4E left and middle panels implied that the subthreshold responses of dominant-like and communal-like under 1 Hz stimulation were regular. On the other hand, the -shape of the map in Figure 4E right panel (subordinate-like) suggested that the observed irregularity was governed by a deterministic onedimensional map (see Figure 4C for the irregularity of the voltage trace). In fact, as the social-status dependent parameter, ag max , increases, the return maps can be categorized into 5 different types. The first one was a map with a stable subthreshold fixed point as shown in Figure 4E left and middle panels, where the cell eventually stops responding to repeated inputs (dominant-like and communal-like at 1 Hz). A map with a stable subthreshold periodic orbit was the second type, where the cell elicits sparse but periodic action potentials (few clusters of points) where at least one cluster was on the identity line and others were away from the identity line (between Figure 4E middle panel and Figure 4E right panel). The third type was a subthreshold irregular (potentially chaotic) pattern as shown in Figure 4E right panel. The fourth was a supra-threshold periodic orbit, where the action potentials were skipped sparsely but periodically (Figure 4F left panel). The fifth was a map with a supra-threshold fixed point, where the cell fired faithfully to the repeated inputs (subordinate-like and communal-like in Figure 4F middle and right panels).
Modulation of Response Patterns by [Ca] and E net
A previous study has shown that the change in M-cell excitability is responsible for the change in startle plasticity (Neumeister et al., 2010). On the other hand, intracellular calcium is known to modulate the pre-synaptic inhibitory synaptic transmission via retrograde signaling, which is called depolarization-induced suppression of inhibition (Diana and Bregestovski, 2005); depolarization results in the increase of intracellular calcium concentration through voltage sensitive calcium channels, which is followed by retrograde transmitter release. Hence, we focused on the dynamics of [Ca] and activity-dependent adaptation E net in response to repeated stimulation inputs to study how the cooperation of two slow variables [Ca] and E net modulates the response of the M-cell that will lead to different response patterns depending on ag max . More precisely, in this section, we explored (1) how the habituation of the M-cell escape response to 1 Hz periodic inputs depends on [Ca] and E net and (2) how the irregular activity patterns in subordinates are obtained.
With the same data sets in Figure 4 for dominant-like, communal-like, and subordinate-like cases, we plotted the temporal profiles of [Ca] and E net in Figures 5A,B. The lower curve in each figure corresponded to dominant-like (square symbols) and the upper curve for subordinate-like cases (circle symbols). The inset in each figure showed the temporal profile for communal-like case (triangle symbols) in a half-size scale. Closed symbols in each figure denoted the moments when action potentials were elicited by stimulation while open symbols denoted the moments with no action potentials. When the input was given, the cell was depolarized and [Ca] increased rapidly. On the other hand, when the stimulation was terminated [Ca] decreased slowly. E net also decreased slowly to periodic inputs although E net demonstrated more subtle dynamics which we will investigate in more detail later in Figure 6. The overall levels of both [Ca] and E net decreased over the repeated stimulations. To explore how the cooperation of two slow variables [Ca] and E net modulate the response of the cell to periodic stimulations, we considered two-dimensional ([Ca], E net )-space in more detail to get better insights of the roles of these two slow variables. Figures 5A,B were also used in Figures 5C,D. The line that runs through from the lower left corner to the upper right corner in each figure was a jump-up curve, which was numerically computed as follows. Figures 5A,B showed that [Ca] lied in [3, 3.2] and E net in [0.9, 1.2] when periodic inputs were given. Therefore, we focused on the rectangular region, R = [3, 3.2] × [0.9, 1.2] in the slow ([Ca], E net )-space and chose grid points with increment 0.01 in horizontal and vertical directions. Since the membrane voltage v and gating variable n were fast variables, we assumed that v and n approached their steady states quickly. When stimulation was given, we assumed that v = −34.32 and n = 0.00427 in dominant-like case and communal-like case, and v = −34.322 and n = 0.00429 in subordinate-like case. Note that the initial values of the membrane voltage v and gating variable n for dominants and subordinates were slightly different because their parameter values ag max were different. After the finite time simulation, their steady state values were also different. For each grid point in the rectangular region R, we used the values of [Ca] and E net along with the fixed values of v and n as initial conditions and determined whether the cell fired when the stimulation was delivered at the time t = 20 s. In Figures 5C,D, the region left to the curve was a jump-up region where the cell fired whenever the simulation input was given.
In all three (dominant-like, communal-like, and subordinatelike) cases, the trajectory began at the jump-up region, which explains why the cell showed faithful responses to the first few periodic stimulation inputs. However, for the dominantlike and communal-like cases at 1 Hz, the trajectory quickly escaped the jump-up region and remained outside the jumpup region. These response patterns of the three models can be explained by the dynamics of [Ca] and activity-dependent adaptation E net . Note that the overall level of [Ca] initially increases but eventually decreases and levels off while the overall level of E net monotonically decreases and eventually levels off (Figures 5A,B). The incorporation of these two overall behaviors results in a reversed C-shaped trajectory in ([Ca], E net )-space. Therefore, the initial increase of overall [Ca] level along with monotonic decrease of overall E net level opens up a possibility of escaping from the jump-up region. In addition, ag max determines the steady-state value of E net under no stimulation; specifically, bigger ag max means bigger steady-state E net value under no external stimulation inputs. Hence, ag max determines how long the cell responds to the periodic inputs. For example, if ag max is sufficiently large then the trajectory tends to stay within the jumpup region; hence, the cell elicits action potentials for each input. For intermediate values of ag max , the cell stays within the jumpup region for a while and either escapes (dominant-like case) or stays near the jump-up curve to generate intermittent responses (subordinate-like case). If ag max is sufficiently small, then the trajectory starts outside the jump-up region; hence, the cell does not elicit any action potential. In summary, the incorporation of [Ca] and E net dynamics is responsible for the escape from the jump-up region while ag max (the maximal net excitation) modulates the duration of the stay within the jump-up region.
We further explored how calcium and net excitability determine the irregular escape response of the M-cell to repeated stimulations shown in the subordinate-like case. Figure 6 illustrates the profiles of membrane voltage v, calcium concentration [Ca], and activity-dependent adaptation E net from 200 to 250 s since the initiation of stimulation. Apparently membrane voltage v shows sparse and irregular action potentials. Intracellular calcium concentration [Ca] increases rapidly whenever the cell receives an external input, hence whenever the cell is depolarized. This increment is very large if the cell elicits an action potential and small if not. Between depolarization events driven by a series of external inputs, either an action potential or subthreshold depolarization, [Ca] decreases slowly. On the other hand, the dynamics of E net is slower than [Ca] and the overall behavior of E net between action potentials is not monotonic, which is an important substrate for the generation of irregular response to the external inputs. Whenever the cell fires an action potential, E net decreases initially and then begins to increase until the next action potential since the dynamics of E net is regulated by [Ca] (Equation 9). Now, when the cell fires an action potential, the trajectory in ([Ca], E net )-space escaped the jump-up region rapidly. During subsequent inputs, the cell is unable to generate action potentials since the decrement of [Ca] is not sufficient to push back the trajectory into the jump-up region. However, the overall [Ca] decreases slowly and E net increases slowly, hence the trajectory is eventually pushed back into the jump-up region. For this re-injection mechanism, the baseline level of [Ca] under repeated inputs should be sufficiently small to ensure that the trajectory is pushed back into the jump-up region and sufficiently close to the jump-up curve to ensure the occurrence of irregular response. Considering that ag max , the maximal net excitation, determines the baseline level of E net under repeated stimulation inputs, irregular responses can happen for some values of ag max as shown in numerical simulation. The dynamics of E net , which decreases initially and then slowly increases, also modulates the occurrence of irregular action potentials. Figure 6D illustrates that the condition of the trajectory within the jump-up region does not fully guarantee the cell would fire an action potential. In some cases, the cell is unable to fire an action potential even though the trajectory is within the jump-up region and vice versa. Recall that when we constructed the jump-up region, we assumed that membrane voltage v and gating variable n would approach their steady-state values quickly. We chose quasi-steady state values for v and n and used them as initial conditions for v and n throughout the numerical simulation to determine the jump-up region. When the trajectory is away from the jump-up curve but within the jump-up region, the constructed jump-up curve provides a good explanation of the response patterns of the cell to the inputs. However, when the trajectory is sufficiently close to the jump-up curve, the values of v and n at the time of the stimulation became critical in the generation of an action potential.
Response Rates of Cell to Repeated Stimuli
In this section, we explored how the response rates of the model M-cell to a series of repeated inputs change when ag max and particular characteristics (frequency and amplitude) of the repeated stimuli were varied. More precisely, we measured the Faithfulness of the response of the cell to the external stimulations by changing ag max and stimulus frequency and amplitude. Here, the Faithfulness of the response of the cell to repeated stimuli is defined as the following: This value varies from 0 (no response) to 1 (full response). We measured this value during two different time intervals: initial time interval (20-30 s) and stable time interval (40-70 s). Note that the stimuli began at time 20 s to ensure that the cell was at the quasi-steady state. By measuring the Faithfulness of the cell to the repeated stimuli for two different time intervals, we explored how ag max and characteristics of the stimulation inputs affect the response rates of the cell to the repeated stimuli. That is, we used our neurocomputational model to explore the response rates of the M-cell on various conditions beyond the empirical study. For example, we changed the amplitudes and frequencies of the repeated stimulations along with various levels of ag max , which mimics the difference of the social status in the empirical study. We also considered two different time scales to compare how the Faithfulness occurs in different conditions. Note that higher Faithfulness corresponds to lower habituation rate while lower Faithfulness corresponds to higher habituation rate. Figure 7 illustrates that the Faithfulness of the cell to repeated stimuli depended on both ag max and the characteristics of the repeated stimuli including the frequency and the amplitude of the stimulation input. Figures 7A,B shows the Faithfulness when the frequency (Hz) and ag max were changing. As ag max increased, Faithfulness also increased. Note that E net is the positive excitatory input to the cell and is an increasing function of ag max . Thus, it is easier for the cell to respond to the stimulation input with higher ag max and E net , which results in higher Faithfulness. This implied that the subordinate-like case (higher ag max ) had higher values of Faithfulness compared to the dominant-like case (lower ag max ).
On the other hand, as the frequency increased, E net could not be fully recovered from the previous activity due to the previous external stimulation input and this led to lower Faithfulness values. Note that when the frequency was small (<1 Hz), the different level curves converged. This implied that when the frequency was small (longer inter stimulus interval), the cell had enough time to recover from the previous stimulation input so that it could faithfully respond to the next stimulation. Since E net decreased as the cell fired, the response rates of the cell also decreased as well. Thus, we observed that Faithfulness during the 40-70 s period (Figure 7B) was lower compared to the first 20-30 s period ( Figure 7A). This indicated that the cell exhibited lower response rates of escape to the repeated stimulation inputs as a function of time. Figures 7C,D illustrates Faithfulness when the stimulus frequency and the magnitude were changed. As the stimulus magnitude increased, it was easier for the cell to respond to the stimulation, which consequently led to higher values of Faithfulness. Miller and colleagues showed that the response probability of the M-cell to repeated stimuli is an increasing function of the amplitude of the input (Miller et al., 2017). The effect of the frequency was the same as the one in the previous result. That is, the smaller the frequency, the higher the Faithfulness value. We also observed higher values of Faithfulness during the first 20-30 s period (Figure 7C), followed by lower values during the 40-70 s period (Figure 7D), which indicated that the cell might exhibit high habituation rates to the repeated stimuli.
DISCUSSION
In the present study, we investigated one possible mechanism of how social status affects habituation processes in zebrafish to repeated auditory stimulation. Three distinct social phenotypes (dominants, subordinates, and communals) were considered. To investigate a potential mechanism underlying the different habituation processes observed in these three groups, we used both empirical and neurocomputational model of the Mcell. Empirically, the M-cell habituated to repeated moderate frequency auditory stimulation and habituation rate of the Mcell was socially regulated: dominant animals habituated more readily compared to subordinates at 1 Hz (Figure 2). In fact, at this stimulus frequency, dominant animals habituated slightly quicker compared to communals while subordinate animals habituated slightly slower compared to communals. A decrease in stimulus frequency eliminated social status dependent differences in habituation rates, albeit habituation occurred at a muchreduced degree. Our computational study demonstrated that the total net excitability of the M-cell escape circuit played a crucial role in the reproduction of the different habituation processes observed empirically and suggested that social status may affect pre-synaptic inputs to the M-cell to result in different habituation processes observed experimentally. Our computational study also predicts that habituation was more pronounced as either the frequency of repeated stimulation increased or the intensity of the stimulus decreased (Rankin et al., 2009;Marsden and Granato, 2015).
Our empirical results showed that the habituation rates at 0.2 Hz were less prominent compared to the higher frequency (1 Hz) unlike Marsden and Granato (2015) that showed prominent habituation rates at 0.2 Hz. This conflicting results come probably from the difference in experimental conditions: Marsden and Granato (2015) applied acoustic-vibrational stimuli to 5 dpf zebrafish head-restrained larvae in a simple learning task with 30 stimuli of 13 dB (the first 5 stimuli at 1/120 Hz and the final 25 stimuli at 0.2 Hz) while we applied supra-threshold auditory stimuli of 95 dB with 40 stimuli to adult male zebrafish.
Descending serotonergic and dopaminergic modulatory inputs regulate the excitability of the escape circuit (Whitaker et al., 2011;Mu et al., 2012;Medan and Preuss, 2014;Pantoja et al., 2016). Serotonin and dopamine induce opposite effects on the excitability of the escape circuit in that their application can enhance or depress the activation of the M-cell. For example, reduction of serotonin increases the habituation while dopamine leads to an opposite effect (Pantoja et al., 2016). This neuromodulatory control is mediated via direct modulation of the pre-synaptic sensory and postsynaptic Mcell and indirectly by regulating the feed-forward and feedbackward inhibitory inputs that finely tunes excitability of the M-cell (Marsden and Granato, 2015). The collective excitatory and inhibitory inputs that influence the escape circuit can be modeled as changes in calcium dynamics represented by the activity-dependent adaptation E net , which is regulated by the maximal net excitation (ag max ). In fact, calcium is known to control both pre-synaptic release of dopamine and postsynaptic activation of the M-cell (Cachope et al., 2007). Thus, we hypothesized that social status may affect the summative neuromodulatory inputs to the M-cell escape circuit, which, combined with intracellular calcium dynamics, results in a social status-dependent habituation of the escape circuit. In our model we assumed that decreased inhibitory drive and/or increased excitatory drive onto the M-cell escape circuit result in higher net excitatory inputs. This would correspond with the higher net excitability in subordinate-like social phenotype (Figure 3A). Similarly, increased inhibitory drive and/or decreased excitatory drive to the M-cell escape circuit results in the lower net excitatory inputs that corresponds to the lower net excitability in dominant-like phenotype ( Figure 3C).
To test this idea, we simulated and analyzed the response of a simplified model of the M-cell with respect to repeated stimuli. Although the model did not include all the dynamics and the contributions of many neurotransmitters (2-AG, dopamine, serotonin, etc.) and pre-synaptic cells (excitatory and inhibitory) that may act in vivo, the model was able to reproduce important characteristics of habituation processes observed in animals of different social standing by controlling the maximal net excitation ag max . More precisely, different habituation processes were obtained by the incorporation of intracellular cellular calcium concentration [Ca] and activity-dependent adaptation E net , which is modulated by the maximal net excitation ag max . In other words, the dynamics of slow variables (quantities that change slowly over time) modulate the overall activity patterns of the whole system. As explained in Figure 5, the slow dynamics of [Ca] and E net govern the response of the M-cell to repeated stimulation that leads to different responses depending on ag max and the stimulation frequency. This could explain why the habituation of the M-cell occurs quickly in dominants compared to subordinates when the frequency of the periodic input is moderate (around 1 Hz). This also could explain why both animals respond only to the first few stimuli when the frequency is sufficiently high. In our model calcium concentration [Ca] regulates the calcium dependent potassium channel (I KCa ) which is known to control neuronal excitability and spike frequency adaption (Vergara et al., 1998). The calcium dependent potassium channel is present in the peripheral nervous system and sensory system of zebrafish including the statoacoustic (VIII) ganglia (Cabo et al., 2013). Moreover, the existence of this channel in the M-cell of zebrafish was also suggested (Brewster, 2012). Further studies are necessary to determine the presence of the channel in M-cell of zebrafish.
The model also showed that the startle escape response rate is an increasing function of the frequency of the repeated input (Figure 7; Rankin et al., 2009). When the frequency was sufficiently low, all three (dominant-like, communal-like, and subordinate-like) models faithfully responded to the stimuli so that there was almost no habituation. This is because the M-cell had enough time to recover from the previous stimulation as the frequency decreases or the inter stimulation interval increases. In other words, E net is able to increase sufficiently high while calcium concentration decreased sufficiently low, hence the Mcell lies either within or near the jump-up region where it was ready to fire an action potential when the next stimulation was delivered. On the other hand, when the frequency was sufficiently high, all three models (dominant-like, communallike, and subordinate-like) stopped responding to the stimuli except the first few. This was because the E net has no time to recover from the previous activity. This suggests that when the frequency is either too low or too high, the difference of the habituation rates among three models would disappear.
Within neuronal networks and many other biological systems, the response of excitable cells with respect to periodic input has been studied extensively due to its prevalence and importance in information processing (Ermentrout, 1996;Izhikevich, 2007;Smeal et al., 2010). Various activity patterns including silent, bursting, tonic spiking, and chaotic behaviors have been observed. The mechanisms underlying these activity patterns and transitions between them have been studied to obtain an insight into spatiotemporal patterns observed in neuronal networks (Prescott et al., 2008;Bogaard et al., 2009;Drion et al., 2015). Bifurcation theory in non-linear dynamics has been used to study neuronal excitability in depth (Ermentrout, 1996;Rinzel and Ermentrout, 1998;Izhikevich, 2000Izhikevich, , 2007. The study of irregular patterns is a long lasting theme in non-linear dynamical systems and even in biological areas. For example, Kaplan et al. (1996) observed that the squid giant axon exhibits irregular action potentials under periodic inputs and found that deterministic subthreshold response dynamics underlies the observed irregular response. The model M-cell in the current study also exhibited irregular action potentials in subordinate-like cases; as the simulation persisted, the cell tended to skip the response to inputs and elicited less action potentials and eventually showed irregular response patterns. The corresponding one-dimensional return map showed that the underlying dynamics was deterministic (Figure 4). Figures 5, 6 illustrate that the dynamics of [Ca] and E net determine the dynamics underlying the irregular response.
The key substrate was the proximity to the jump up region when stimulation was given, which was in turn modulated by the maximal net excitation ag max . This was the reason why we obtained irregular response over a certain range of ag max values.
The limitations of the current study and proposed model are noteworthy. First, this research is based on the experiment of only two different frequencies (1 and 0.2 Hz) and fixed stimulus intensity. To overcome these experimental limitations, we used a neurocomputational model and explored a wide range of the frequency and amplitude domains. Our computational modeling study replicated the empirical results and demonstrated that the rates of habituation of the M-cell startle escape response depend on the social status of the animals and characteristics (frequency and amplitude) of the stimulus inputs. Second, the usage of adult and freely behaving animals to measure habituation rates prevented the direct measurement of physiological changes in intracellular calcium of the M-cell during the habituation process. Although calcium imaging approaches are widely used in zebrafish research such as transgenic fish lines that express calcium indicators specifically in the M-cell, these studies are mostly limited to embryonic stages during which the skin is transparent and the skull is not yet fully formed. Thus, currently it is not possible to measure calcium dynamics in the M-cell of adult fish in vivo. Third, while the habituation rates were different depending on social conditions, the experimental results show that there were wide variations in the responsiveness of the M-cell to repeated stimuli within the same animal group. These individual differences may be due to the different activity levels of dorsal raphe nucleus serotonergic neurons (Pantoja et al., 2016). Pontoja and colleagues also showed that both serotonin and dopamine modulate the habituation in opposite directions. Moreover, 2-AG also modulates swimming and escape response in zebrafish (Song et al., 2015). Thus, it will be interesting to explore the effects of neuromodulators (including 2-AG, dopamine, serotonin, etc.) on the social status and the habituation of the M-cell. Fourth, we used a simplified single compartment computational model of the M-cell. One can expand this model by including additional pre-synaptic cells such as excitatory cells (like sensory VIII th nerve fiber) and inhibitory cells (like commissural neurons) (reviewed in Zottoli and Faber, 2000; also in Korn and Faber, 2005). Moreover, the empirical results from calcium dynamics of the pre-synaptic cells and the M-cell during the escape response along with the changes of the neuromodulators will also help in building more elaborate computational models.
In conclusion, using a dual empirical and computational approach, we provided one possible mechanism of how social status affects habituation processes of the startle escape response in zebrafish. Social status may affect neuromodulatory inputs to the M-cell escape circuit that leads to differences in the Mcell's excitability. This may enable animals to readily learn to adapt to changes in their social environment by selecting the most appropriate behavior (for example, escape for subordinate animals while quick habituation or swim for dominant animals) to environmental stimuli. Our model suggests that the incorporation of calcium concentration [Ca] and activitydependent adaptation E net under the modulation of the maximal net excitation ag max plays a critical role in reproducing different degrees of habituation rates depending on the social status and the periodic stimulation inputs. The change of the excitability of the M-cell may be due to the availability of 2-AG, hormonal regulation, or other neurotransmitters including dopamine and serotonin (Song et al., 2015;Pantoja et al., 2016). These mechanisms are not necessarily mutually exclusive. It is most likely that the synergistic action of few of these mechanisms may act in a cooperative manner. Current technological limitations prevent the direct exploration of the parameter space that regulates escape circuit dynamics. Therefore, it will be exciting when future technologies make it possible to empirically verify the proposed model by direct physiological measurements of the neuromodulatory inputs that regulate M-cell's excitability in socially experienced animals.
AUTHOR NOTE
Most animals exhibit habituation to repeated stimuli. However, how social experience and characteristics of the external stimulations affect habituation process is still poorly understood. Using zebrafish as an experimental model system, we showed that habituation is affected by social experience and is contingent on rate of stimulation. Dominant animals habituate rapidly compared to subordinates to repeated stimulation at a moderate frequency. However, as the stimulus frequency decreases, the difference of habituation rates between the two groups disappears. Moreover, the habituation rates of both social phenotypes at moderate stimulus frequency were higher compared to the low stimulus frequency. To test the idea that a change in neuromodulatory inputs to the M-cell may be responsible for the different habituation processes, we constructed a simplified computational model of the M-cell escape circuit. Our model showed that a change in total net excitability of the model M-cell escape circuit, which represents a summative neuromodulatory input to the M-cell, combined with intracellular calcium dynamics was sufficient to reproduce the experimental results. The model that represented "dominantlike" phenotype displayed rapid habituation compared to a model that represented "subordinate-like" phenotype at a moderate stimulus frequency. This difference disappears at a low stimulus frequency. Habituation was more pronounced as either the stimulus frequency increased or stimulus intensity decreased. Thus, our study demonstrates that habituation is socially regulated, and our model suggests that the socially mediate rate of habituation also depends on the characteristics (frequency and amplitude) of the stimulus inputs.
AUTHOR CONTRIBUTIONS
CP, KC, FI, and SA performed research; CP, FI, and SA analyzed data; CP, FI, and SA designed research and wrote the paper. | 2018-02-05T14:05:52.622Z | 2018-02-05T00:00:00.000 | {
"year": 2018,
"sha1": "008b5e8d435c61349b793a173bbc53341ef09e9f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncir.2018.00007/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "008b5e8d435c61349b793a173bbc53341ef09e9f",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
208088483 | pes2o/s2orc | v3-fos-license | Revisiting the Futamura Projections: A Visual Tutorial
Partial evaluation, while powerful, is not widely studied or used by the pragmatic programmer. To address this, we revisit the Futamura Projections from a visual perspective by introducing a diagramming scheme that helps the reader navigate the complexity and abstract nature of the Futamura Projections while emphasizing their recurring patterns. We anticipate that this approach will improve the accessibility of the Futamura Projections to a general computing audience.
Introduction
The Futamura Projections are a series of program signatures reported by [Fut99] (a reprinting of [Fut71]) designed to create a program that generates compilers. This is accomplished by repeated applications of a partial evaluator that iteratively abstract away aspects of the program execution process. A partial evaluator transforms a program given any subset of its input to produce a version of the program that has been specialized to that input. We use the symbol mix from [Jon96] to denote the partial evaluation operation because partial evaluation involves a mixture of interpretation and code generation. In this tutorial, we will provide an overview of typical program processing and explain the Futamura Projections. Table 1: Legend of symbols and terms used in § 2 and § 3.
Symbol
Description program A miscellaneous program. p n A parameter. a n An argument. pow A power program. compiler S→T T A compiler from language S to language T , implemented in T . program S A miscellaneous program implemented in language S. program T A miscellaneous program implemented in language T . compiler C→x86 x86 A compiler from C to x86, implemented in x86. pow.c A power program implemented in C. pow x86 A power program implemented in x86. interpreter S T An interpreter for language S implemented in language T . interpreter C x86 An interpreter for C implemented in x86. partial input static A subset of input for a program being specialized by mix. program T A specialized program implemented in language T . square x86 A square program implemented in x86. mix T A partial evaluator implemented in language T . mix x86 A partial evaluator implemented in language x86. compiler generator T A compiler generator implemented in language T . compiler generator x86 A compiler generator implemented in x86.
We use the terms static and dynamic throughout this paper in reference to a variety of bindings. For purposes of this paper, a static binding is one that happens before run-time, usually at compile-time, and remains unchangeable during run-time. A dynamic binding happens at run-time and is changeable at run-time. For instance, the value of an integer variable is dynamic in that it bound at run-time and can be modified at run-time. The size of an integer variable, on the other hand, is static in that it is fixed (e.g., four bytes) before run-time (usually at language implementation time) and cannot change at runtime. Table 1 is a legend mapping terms and symbols used in this article to their description.
Programs Processing Other Programs
Program execution can be represented equationally as [[program]][a 1 , a 2 , ..., a n ] = [output]. Alternatively, the diagram in Fig. 1a depicts a program as a machine that takes a collection of input boxes and produces an output box. We use this diagram syntax to aid in the presentation of complex relationships between programs, inputs, and programs treated as inputs (i.e., data). Each input area corresponds to part of a C-function-style signature that names and positions the inputs. The input is presented in gray to distinguish it from the program and its input bar. Fig. 1b shows this pattern applied to a program that takes a base b and an exponent e and raises the base to the power of the exponent. In this case, 3 raised to the power of 2 produces 9, or [
Compilation
Programs written in high-level programming languages such as C must be either compiled to a natively runnable language (e.g., the x86 machine language) or evaluated by an interpreter. A compiler is simply a program that translates a program from its source language to a target language. This process is described equationally as [[compiler S→T T ]][program S ] = [program T ], or diagrammatically in Fig. 2a. For clarity, the implementation language appears as a subscript of any program name. Compilers will also have a superscript with an arrow from the source (input) language to the target (output) language. If language T is natively executable, both the depicted compiler and its output program are natively executable. If pow from Fig. 1b is written in C, it can be compiled to the x86 machine language with a compiler as depicted in Fig. 2b
Interpretation
The gap between a high-level source language and a natively executable target language can also be bridged with the use of an interpreter. "The interpreter for a computer language is just another program" implemented in the target language that evaluates the program given the program's input and producing its output [FW08]. The interpretation pattern, described equationally as , is depicted in Fig. 3a. The interpreter has the previously established implementation language subscript, with a superscript indicating the interpreted language. The input program's input bar extends into the next input slot, which serves to indicate which input is associated with which of its own input slots. However, as the inputs are still being provided directly to the interpreter, an encompassing box is drawn around each individual input to the program being executed. By convention, the background shading is alternated to differentiate inputs, while the borders of inputs remain gray. This pattern is applied to the pow.c program in Fig. 3b; the C program is being executed by an interpreter implemented in x86 to produce the output from pow.c given its input. This interpretation is represented equationally as
Partial Evaluation
With typical program execution, all input must be provided during execution. Partial evaluation allows for a combination of static partial input, provided initially to mix itself, and various different sets of dynamic input, which complete the original program's input, that are later given to the transformed program. For reasons explained below, the Futamura Projections require that mix be implemented in the same language as the programs it takes as input; diagrams including mix will provide a subscript that represents the implementation language as well as the language of input and output programs. Here, a program is being passed to mix with a (partial) static assignment of inputs, or a subset of its input (in this case, consisting only of its second argument). The result is a transformed version of the program specialized to the input; the input has been propagated into the program to produce a new program. Notice how the shape of the output program mirrors the shape of the input program combined with the static input. Notice also in Fig. 4b that the shape of the program combined with the remainder of its input mirrors the shape of the typical program execution shown in Fig. 1a. However, the input has been visually fused to the program, represented by the dotted line. In addition, the labels for the original program, the second input slot, and the static input have been shaded gray; while the resulting program is entirely comprised of these two components, its input interface has been modified to exclude them. In other words, while the components are still present, they are only in the background of the new program. The equational representation of this resulting program shows the simplicity of its behavior: The partial evaluation pattern is applied to the pow x86 program in Fig. 4c. If pow x86 is partially evaluated with static exponent e = 2, the result is a power program that can only raise a base to the power 2. This example is represented equationally as ]. The specialization produces a program that takes a single input (as in Fig. 4d ) and squares it. It behaves as a squaring program despite being comprised of a power program and an input; mix has propagated the input into the original program to produce a specialized program.
First Futamura Projection: Compilation
Partial evaluation is beneficial given a program that will be executed repeatedly with some of its input constant, resulting in a significant speedup. For example, if squaring many values, a specialized squaring program prevents the need for repeated exponent e = 2 arguments. Program interpretation is another case that benefits from partial evaluation; after all, the interpreter is a program and the source program is a subset of its input. Fig. 5a illustrates that when given program S and an interpreter for S implemented in language T , we can partially evaluate the interpreter with the source program as static input (i.e., . This is the First Futamura Projection. As with the previous pattern, the partially evaluated program (i.e., the interpreter) has been specialized to the partial input (i.e., the source program), which is indicated visually by the fusion of the source program to the interpreter. Notice that program S is vertically aligned with the static input slot of the partial evaluator as well as the program input slot of the interpreter. This is because program S serves both roles. In this case, the dynamic input of the interpreter is the entirety of the input for program S . When that input is provided in Fig. 5c, the specialized program completes the interpretation of program S , producing the output for program S . In other words, the specialized program behaves exactly the same as program S , but is implemented in T rather than S. The partial evaluator has effectively compiled the program from S to T . Thus, the equational form is identical to that of a compiled program: [[program T ]][a 1 , a 2 , ..., a n ] = [output]. Fig. 5b ] express the partial evaluation of a C interpreter when given pow as partial input. The resulting program, detailed in Fig. 5d, behaves the same as pow.c, but is implemented in x86. The equational expression for the target program is also identical to the compiled program: First Futamura Projection: A partial evaluator, with an interpreter as input, can compile from the interpreted language to the implementation language of mix.
Second Futamura Projection: Compiler Generation
The First Futamura Projection relies on the nature of interpretation requiring two types of input: a program that may be executed multiple times, and input for that program that may vary between executions. As it turns out, the use of mix as a compiler exhibits a similar signature: the interpreter is specialized multiple times with different source programs. This allows us to partially evaluate the process of compiling with a partial evaluator. This is the Second Futamura Projection, represented equationally as Fig. 6a. In this partial-partial evaluation pattern, an instance of mix is being provided as the program input to another instance of mix, to which an interpreter is provided as static input. Just as in earlier partial evaluation patterns, the program input has been specialized to the given static input; in this case, an instance of mix is being specialized to the interpreter. The vertical alignment of programs helps clarify the roles of each program present: the interpreter is the partial input given to the executing instance of mix as well as the program input given to the specialized instance of mix. Additionally, this specialized output program as executed in Fig. 6b matches the shape and behavior of the First Futamura Projection shown in Fig. 5a. This is because the same program is being executed with the same input; the only difference is that the output of the second projection is a single program that has been specialized to the interpreter rather than a separate mix instance that requires the interpreter to be provided as input. In the Second Futamura Projection, mix has generated the mix-based compiler from the first projection. Because the result is a compiler, its equational expression is that of a compiler: [ ]). When given pow in C, this specialized mix program then specializes the interpreter to pow, producing an equivalent power program in Second Futamura Projection: A partial evaluator, by making use of itself and the interpreter, can generate a compiler from the interpreted language to the implementation language of mix.
Third Futamura Projection: Generation of Compiler Generators
Because mix can accept itself as input, we can use one instance of mix to partially evaluate a second instance of mix, passing a third instance of mix as the static input. This is the Third Futamura Projection, shown in Fig. 7a and written equationally as The transformation itself is straightforward: partially evaluating a program with some input. The output is still the program in the first input slot specialized to the data in the second input slot; however, this time both the program and the data are instances of mix. Again, the positioning of the various instances of mix within the diagram serves to clarify how the instances interact. The outermost instance executes with the other two instances as input. The middle instance is the program input of the outer instance and is specialized to the inner instance. Finally, the inner instance is being integrated into the middle instance by the outer instance.
Notice that Fig. 7b shows that the execution of the resulting program matches the shape and behavior of the Second Futamura Projection shown in Fig. 6a when provided with an interpreter as input. The partial evaluator has generated the mix-based compiler generator from the second projection. This process is represented equationally as [[compiler ]. Interestingly, the only variable part of the Third Futamura Projection is the language associated with mix.
Previous instance diagrams were specific to the pow.c program; for instance, Fig. 6c presents an interpreter for the implementation language of pow.c, namely C. However, the diagram in Fig. 8a ]), but it will accept any interpreter implemented in x86 regardless of the language interpreted. For example, Fig. 8c Third Futamura Projection: A partial evaluator, by making use of two other instances of itself, can generate a compiler generator that produces compilers from any language to the implementation language of mix.
Summary: Futamura Projections
The Third Futamura Projection follows the pattern of the previous two projections: the use of mix to partially evaluate a prior process (i.e., interpretation, compilation). The first projection compiles by partially evaluating the interpretation process without the input of the source program. The second projection generates a compiler by partially evaluating the compilation process from the first projection without the source program. The Third Futamura Projection generates a compiler generator by partially evaluating the compiler generation process without the interpreter. Each projection delays completion of the previous process by abstracting away the more variable of two inputs. Just as the first projection interprets a program with various, dynamically-given inputs and the second projection compiles various programs, the third projection generates compilers for various languages/interpreters. Table 2 juxtaposes the related equations and diagrams from both § 2 and § 3 in each row to make their relationships more explicit. Each row of Table 3 succinctly summarizes each projection by associating each side of its equational representation with the corresponding diagram from § 3.
Conclusion
Partial evaluation, through the Futamura Projections, can be used to compile, generate compilers, and generate compiler generators. Although the scope of the Futamura Projections has been largely limited to the programming languages research community, we are optimistic that this article has demystified their esoteric nature and shed light on their role in building powerful programming abstractions. | 2016-11-29T21:56:34.000Z | 2016-11-29T00:00:00.000 | {
"year": 2016,
"sha1": "2243fcc7e60e488ee741fa0b83f8560427f1b1ef",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1611.09906",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "682e5a3b9bbdcf08b780c61596f2df40b6c6623f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233261336 | pes2o/s2orc | v3-fos-license | Glycol chitosan-based tacrolimus-loaded nanomicelle therapy ameliorates lupus nephritis
Background Recently, we developed hydrophobically modified glycol chitosan (HGC) nanomicelles loaded with tacrolimus (TAC) (HGC-TAC) for the targeted renal delivery of TAC. Herein, we determined whether the administration of the HGC-TAC nanomicelles decreases kidney injury in a model of lupus nephritis. Lupus-prone female MRL/lpr mice were randomly assigned into three groups that received intravenous administration of either vehicle control, an equivalent dose of TAC, or HGC-TAC (0.5 mg/kg TAC) weekly for 8 weeks. Age-matched MRL/MpJ mice without Faslpr mutation were also treated with HGC vehicle and used as healthy controls. Results Weekly intravenous treatment with HGC-TAC significantly reduced genetically attributable lupus activity in lupus nephritis-positive mice. In addition, HGC-TAC treatment mitigated renal dysfunction, proteinuria, and histological injury, including glomerular proliferative lesions and tubulointerstitial infiltration. Furthermore, HGC-TAC treatment reduced renal inflammation and inflammatory gene expression and ameliorated increased apoptosis and glomerular fibrosis. Moreover, HGC-TAC administration regulated renal injury via the TGF-β1/MAPK/NF-κB signaling pathway. These renoprotective effects of HGC-TAC treatment were more potent in lupus mice compared to those of TAC treatment alone. Conclusion Our study indicates that weekly treatment with the HGC-TAC nanomicelles reduces kidney injury resulting from lupus nephritis by preventing inflammation, fibrosis, and apoptosis. This advantage of a new therapeutic modality using kidney-targeted HGC-TAC nanocarriers may improve drug adherence and provide treatment efficacy in lupus nephritis mice. Supplementary Information The online version contains supplementary material available at 10.1186/s12951-021-00857-w.
Background
Systemic lupus erythematosus (SLE) is an autoimmune disease characterized by the production of autoantibodies against cell nuclear components that can affect any organ, including the kidneys [1]. Varying degrees of renal involvement (ranging from 30 to 60% and dependent upon both ethnicity and lesion type) are seen in patients with SLE [2][3][4]. Lupus nephritis may progress into endstage kidney disease and is independently associated with higher morbidity and mortality, even among patients undergoing dialysis and those who have undergone transplantation [2]. Therefore, patients with lupus nephritis require appropriate and continuous immunosuppressive treatment to mitigate lupus and improve kidney outcomes. Although induction therapy using standard or low-dose cyclophosphamide is important to attenuate intrarenal inflammation immediately, long-term use of a calcineurin inhibitor or antimetabolite to maintain autoimmunity and suppress inflammation for the prevention of flare is needed in patients with lupus nephritis [1,5].
In a lupus mouse model, tacrolimus (TAC) monotherapy or in combination with mycophenolate mofetil (MMF) and prednisone significantly diminished proteinuria and glomerular injury by preserving synaptopodin via the reciprocal regulation of RhoA and Rac1 [6,7]. Moreover, recent clinical studies have shown that TAC is more effective in inducing complete remission and reducing proteinuria than cyclophosphamide in patients with moderate to severe lupus nephritis [6,8,9]. Following several randomized studies evaluating the efficacy and safety of TAC as a maintenance treatment for lupus nephritis, TAC was approved for lupus nephritis treatment in Korea, Japan, and other Asian countries [10][11][12]. Due to the role of TAC as a potential therapeutic immunosuppressive agent, its use in induction and maintenance therapy for lupus nephritis has attracted considerable attention [13,14].
However, clinical management with TAC therapy remains challenging due to its narrow therapeutic range and off-target effects on other organs, as well as the negative effect of long-term TAC use, including neurotoxicity, new-onset diabetes, and nephrotoxicity [15,16]. In addition, the twice-daily oral administration decreased patient adherence to TAC therapy [17]. Although new extended-release TAC formulations exist, TAC needs to be administered daily, and the trough level for maintaining optimal therapeutic targeting should be checked [18]. Under this unmet need, nanomaterials incorporating therapeutic drugs can be engineered for slow release that allows a single-dose administration to achieve proper therapeutic targets [19]. Chitosan is one of the most functional biopolymers widely used as a pharmaceutical carrier for drug delivery [20]. Glycol chitosan possesses reactive amine groups that are accountable for the kidney-specific accumulation via megalin receptors present on the kidney [20][21][22][23]. Recently, we developed hydrophobically modified glycol chitosan (HGC) nanomicelles loaded with TAC (HGC-TAC) for the enhanced renal delivery of this immunosuppressive agent [23]. HGC-TAC nanomicelles delivered TAC preferentially to the kidney while lowering the plasma concentrations without any off-target effects [23].
There are currently limited experimental studies exploring the use of nanomaterials to treat glomerular diseases, including lupus nephritis [24][25][26][27]. Herein, we conducted a study to determine whether the administration of HGC-TAC nanomicelles decreased kidney injury in an MRL/lpr mouse model of lupus nephritis.
Characterization of HGC-TAC nanomicelle
The hydrophobic drug, TAC, was physically encapsulated into the nanomicelles by probe sonication and dialysis (Fig. 1a). The field-emission transmission electron microscopy (FE-TEM) images of HGC-TAC nanomicelle revealed spherical morphology (Fig. 1b). The TAC loading content and encapsulating efficiency of the nanomicelle were 23 ± 3%, 88 ± 8%, respectively. The average hydrodynamic size of the HGC-TAC nanomicelle was 370 ± 22 nm per dynamic light scattering measurements. The HGC-TAC nanomicelle showed an average zeta potential of 24 ± 4 mV (Additional file 1: Fig. S1a, b). The colloidal stability of HGC-TAC nanomicelles was assessed by the time-dependent changes of the HGC-TAC nanomicelles in distilled water, PBS and, 10% FBS (Additional file 1: Fig. S1c). It was shown that in the presence of FBS, the hydrodynamic size was increased, but the zeta potential decreased over time because of the formation of protein corona over the nanomicelles. However, the polydispersity index of the particle decreased, suggesting that the particles were not destabilized. It can be assumed that the formation of protein corona prevented the aggregation of particles. To determine the time-dependent cellular uptake of HGC nanomicelles in vitro, human tubular epithelial cells were treated with Flamma 675-conjugated HGC (HGC-F675) nanomicelles. As shown in Additional file 1: Fig. S1d, fluorescence intensities increased in the cell membrane in a time-dependent manner.
In vitro and in vivo release profile of TAC from HGC-TAC nanomicelles
The in vitro release profile of TAC in phosphate-buffered saline (PBS) and fetal bovine serum (FBS) showed biphasic and sustained release from HGC-TAC for up to 8 days (Additional file 2: Fig. S2a, b). In intravenously HGC-TAC-injected lupus mice, the plasma concentration of Fig. 1 In vivo biodistribution of hydrophobically modified glycol chitosan (HGC) nanomicelles. a Schematic representation of the preparation of HGC nanomicelles loaded with tacrolimus (HGC-TAC). b The FE-TEM image of HGC-TAC nanomicelles. c Fluorescence images of organs at different time intervals after injection of HGC-F675 in MRL/lpr mice. The near-infrared images of dissected organs were obtained using a near-infrared filter in a fluorescence-labeled organism bio-imaging instrument. The fluorescence intensity of organs was quantified at each time point (n = 3 mice/ group). d Confocal images of HGC-F675 nanomicelles in the kidney at different time points after injection in MRL/lpr mice. Note that podocin is a marker of podocytes, indicating the glomerulus. Original magnification ×400 or ×200, respectively. Bar = 50 μm (See figure on next page.) Kim et al. J Nanobiotechnol (2021) 19:109 TAC showed a high profile at the initial hour and rapidly decreased to zero after 24 h. However, there were significant TAC concentrations in kidney tissues until at least 96 h. Therefore, encapsulated TAC in HGC nanomicelles might prevent direct TAC exposure to the plasma, keeping plasma TAC concentration low while supplying a long-lasting TAC concentration in the kidney (Additional file 2: Fig. S2c).
In vivo biodistribution of nanomicelles
To determine the in vivo biodistribution of intravenously injected HGC nanomicelles, the fluorescence signals from various organs were serially measured for up to 7 days. As shown in Fig. 1c, the fluorescence intensity from the kidneys was the most intense compared to those from other organs after injection of HGC-F675 nanomicelles. The signal intensity from the kidneys declined gradually but was relatively well preserved for up to 7 days in MRL/lpr mice. To further localize the intrarenal distribution of the HGC-TAC nanomicelles, kidney sections were examined by confocal microscopy (Fig. 1d). HGC-F675 nanomicelle signals were localized in the cortex, medullar, and glomerular regions, suggesting a possible kidney-specific uptake of HGC-TAC nanomicelles for up to 7 days after the injection.
HGC-TAC nanomicelle treatment attenuated lupus activity and proteinuria in lupus nephritis mice
We first investigated the effects of HGC-TAC nanomicelles on lupus activity and renal outcomes in lupusprone MRL/lpr mice. Increased survival rates were shown in lupus mice that received HGC-TAC treatment compared to vehicle treatment (Fig. 2b). After 8 weeks, lupus mice exhibited increased anti-double-stranded DNA antibody titers and serum levels of blood urea nitrogen and creatinine and decreased serum C3 levels compared to the MRL/MpJ wild-type mice (Fig. 2c).
Weekly treatment with intravenous HGC-TAC improved all these parameters, albeit serum blood urea nitrogen and C3 levels were not significantly different between vehicle-or TAC-treated lupus mice and HGC-TACtreated lupus mice. Furthermore, HGC-TAC treatment decreased urine protein and albumin-to-creatinine ratios compared to vehicle or TAC treatment alone (Fig. 2d). Thus, HGC-TAC treatment may attenuate proteinuria and lupus nephritis activity. Although the body weights of lupus mice increased compared to wild-type mice at 8 weeks, the body weights of HGC-TAC treated lupus mice were not different from those of wild-type mice at the end of the experiment. It may be hypothesized that body edema was improved (Fig. 2e). However, there were no significant differences in kidney-to-body weight ratios among all groups (Fig. 2f ).
HGC-TAC nanomicelle treatment resulted in improved renal histology and decreased glomerular immune complex deposition in lupus nephritis mice
Histologically, hematoxylin, eosin, and Periodic Acid-Schiff staining of kidney sections from MRL/lpr mice revealed conspicuous inflammatory interstitial infiltration, global proliferative lesions in the glomeruli, and thickened capillary walls, which indicated a diffuse/focal proliferative glomerular nephritis pathology (Fig. 3a). HGC-TAC treatment attenuated tubulointerstitial inflammation and glomerular injury compared to vehicle or TAC treatment alone. In vehicle-or TAC-treated lupus nephritis mice, mild mesangial and subendothelial immune depositions of complement factors (C3 and C1q, as well as fibrinogen, IgG, IgA, and IgM) were detected, whereas the kidneys of HGC-TAC treated MRL/lpr mice displayed decreased immune deposits (Fig. 3b). Additionally, HGC-TAC treatment in MRL/lpr mice attenuated podocyte foot process effacement, mesangial deposition, and widening compared to vehicle or TAC treatment alone (Fig. 3c). This explains the improvement of proteinuria in HGC-TAC-treated lupus mice. Our data suggested that HGC-TAC treatment improves renal histology and glomerular immune deposition in lupus nephritis.
Treatment with HGC-TAC nanomicelles attenuated inflammation in lupus nephritis mice
Next, we explored whether renal inflammatory protein and gene expression in lupus nephritis mice was modified by HGC-TAC treatment. HGC-TAC treatment in MRL/ lpr mice tended to decrease the level of CD68 + cells, a (See figure on next page.) Fig. 2 Treatment with HGC-TAC ameliorated lupus activity and proteinuria in MRL/lpr mice. a A schematic of the timeline of HGC-TAC treatment and experimental analysis. b Survival curve of the mice in each group. WT wild-type. c Serum samples were analyzed for anti-double strand DNA (anti-dsDNA) antibody in serum, serum blood urea nitrogen (BUN), serum creatinine, and complement C3 levels. d Protein and albumin excretion were measured in urine samples collected in metabolic cages for 24 h. Urine protein and albumin were normalized for urine creatinine. UPCR Urine protein-to-creatinine ratio, UACR urine albumin-to-creatinine ratio. e, f The body weights and the kidney-to-body weight ratio of the groups. Data are representative of three independent experiments. All values are presented as mean ± SEM. The experiment was performed with 6 mice per group. *P < 0.05, **P < 0.01, and ***P < 0.001, # P < 0.05 HGC-TAC-treated MRL/lpr mice compared with vehicle-treated MRL/lpr mice. ns not statistically significant Kim et al. J Nanobiotechnol (2021) 19:109 marker for macrophages and monocytes, compared to vehicle or TAC treatment alone (Fig. 4a, b). Immunohistochemical staining showed that vehicle-or TAC-treated lupus nephritis kidneys had increased levels of CD68 + cells and F4/80+ mononuclear macrophages, whereas the levels of these cells were profoundly decreased in the glomerulus of HGC-TAC-treated lupus nephritis mice ( Fig. 4c, d, and f ).
Since interferon-γ (INF-γ) is a major pro-inflammatory T-cell cytokine that plays a pivotal role in the Fig. 3 Treatment with HGC-TAC reduced glomerular proliferative lesions and tubulointerstitial infiltration and decreased glomerular immune complex deposition in MRL/lpr mice. a Hematoxylin, eosin, and Periodic Acid-Schiff staining of kidney sections from wild-type (WT), MRL/lpr mice treated with vehicle, or MRL/lpr mice treated with an equivalent dose of TAC or HGC-TAC. b Immunofluorescence staining of kidney sections for C3, C1q, fibrinogen, IgG, IgA, and IgM. Original magnification ×200, Bar = 50 μm. c Scanning and transmission electron microscopy in each group. Red arrows indicate podocyte foot process effacements. Asterisk indicates mesangial deposition. Original magnification ×2000, ×8000 and ×10,000, respectively; Bar = 5 μm. This experiment was performed with 3 mice per group development of nephritis in MRL/lpr mice [28], we analyzed the mRNA expression of INF-γ using quantitative polymerase chain reaction (qPCR). The relative expression level of INF-γ was lower in the kidneys of HGC-TAC-treated MRL/lpr mice than that in the kidneys of vehicle-treated mice (Fig. 5). Consistent with these results, renal mRNA expression levels of inflammatory interleukin 1β (IL-1β), interleukin 6 (IL-6), monocyte chemoattractant protein-1 (MCP-1), and tumor necrosis factor-α (TNF-α) markers were lower in HGC-TAC-treated MRL/lpr mice than those observed in vehicle-treated lupus nephritis mice. Similarly, cell adhesion marker expression levels, such as intercellular adhesion molecule-1 (ICAM-1) and vascular cell adhesion molecule-1 (VCAM-1), were all low in the kidneys of HGC-TAC-treated lupus nephritis mice but did not reach statistical significance. In addition, HGC-TAC treatment significantly decreased the mRNA expression of MCP-1, IL-6, and IFN-γ compared to TAC treatment alone. Moreover, in line with the mRNA expression data, kidney section staining revealed increased TNF-α expression, especially in the glomerulus of vehicle-and TAC-treated MRL/lpr mice. Conversely, this expression was downregulated after HGC-TAC treatment in HGC-TAC-treated lupus mice (Fig. 4e, f ). Overall, these data suggested that HGC-TAC treatment reduced glomerular inflammation in lupus nephritis mice.
Treatment with HGC-TAC nanomicelles reduced renal glomerular fibrosis and apoptosis in lupus nephritis mice
We next investigated whether HGC-TAC treatment affects renal fibrosis in lupus nephritis mice. α-smooth muscle actin levels were decreased in HGC-TAC-treated MRL/lpr mice than in vehicle-or TAC-treated MRL/lpr mice, but this effect was limited (Fig. 6a, b). However, kidney section staining with Masson's trichrome stain showed differences in the extent of glomerular fibrosis between vehicle-or TAC-treated and HGC-TAC-treated lupus mice (Fig. 6c, d).
To assess the effects of HGC-TAC treatment on renal apoptosis in lupus nephritis mice, we performed a terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay. TUNEL-positive cells were considerably increased in both the glomerulus and tubular epithelium of MRl/lpr mice. Notably, HGC-TAC treatment remarkably ameliorated TUNEL-positive cell counts in lupus nephritis mice (Fig. 7a, b). In contrast, HGC-TAC treatment did not affect the protein expression of the ratio of bax/bcl-2, cleaved caspase 3, and cytochrome compared to untreated lupus nephritis mice (Fig. 7c, d). However, the phosphorylation of p53 increased in vehicle-or TACtreated MRL/lpr mice, whereas treatment with HGC-TAC significantly decreased the phosphorylation of p53. These findings suggest that HGC-TAC treatment may inhibit apoptosis in lupus nephritis mice via an intrinsic apoptotic pathway.
HGC-TAC nanomicelle treatment regulates kidney protection in lupus nephritis mice via the TGF-β1/MAPK/ NF-κB signaling pathway
Since excessive activation of the transforming growth factor (TGF)-β1-mediated MAPK (mitogen-activated protein kinases) signaling pathway is involved in lupus nephritis development in both human and mouse [29,30], we examined whether the HGC-TAC treatmentinduced renoprotection observed in lupus nephritis mice is dependent on the TGF-β1/MAPK signaling pathway. HGC-TAC-treated mice showed a marked decrease in TGF-β1 levels compared to vehicle-or TAC-treated MRL/lpr mice (Fig. 8a). Compared to those in vehicletreated wild-type mice, the phosphorylation levels of c-Jun NH2-terminal kinase (JNK) and p38 were higher in lupus nephritis mice; however, the phosphorylation of extracellular signal-regulated kinase (ERK) was lower. Among MAPKs, HGC-TAC treatment significantly reduced the phosphorylation of JNK and p38, while increased the phosphorylation of ERK (Fig. 8a, c). Although TAC-treated mice also showed similar effects, the increased expression of TGF-1β1 and p38 phosphorylation, and decreased ERK phosphorylation were significantly restored in HGC-TAC-treated mice. The phosphorylation of signal transducer and activator of transcription (STAT) 3 was also found to be upregulated in nephritic kidneys and is a critical component in the pathogenesis of lupus nephritis [31]. Although the expression of STAT3 phosphorylation was elevated in vehicle-treated MRL/lpr mice, it was unaffected by HGC-TAC treatment (Fig. 8a, c).
To further evaluate the downstream signaling pathway of TGF-β1/MAPK, we assessed nuclear factor-κB (NF-κB) signaling pathway alterations. Phosphorylation of p65 was increased in nephritic mice, whereas treatment (See figure on next page.) Fig. 4 HGC-TAC diminished lupus-specific inflammation. a CD68 protein levels from wild-type (WT), MRL/lpr mice treated with vehicle, or MRL/ lpr mice treated with an equivalent dose of TAC or HGC-TAC. b Relative protein intensities. The values for the WT vehicle-treated group are set to 1. (n = 6 mice/group). c-e Immunohistochemical staining of kidney sections for CD68, F4/80, and tumor necrosis factor-α (TNF-α) from each group. Magnification ×400, Bar = 25 μm. f Staining of CD68, F4/80, and TNF-α in the glomerulus was quantified and expressed as a percentage of positive glomerular area. Data are shown as mean ± SEM. *P < 0.05, **P < 0.01, and ***P < 0.001. ns not statistically significant Kim et al. J Nanobiotechnol (2021) 19:109 with HGC-TAC resulted in a decrease in p65 phosphorylation (Fig. 8b, c). TAC treatment also decreased the p65 phosphorylation, but HGC-TAC-treated lupus mice exhibited lower expression than TAC-treated lupus mice. Consistent with these data, TGF-β1 and NF-κB staining were markedly increased in lupus nephritis glomerulus and tubules, whereas these expressions were attenuated in the HGC-TAC-treated lupus mice (Fig. 8d, e, and f ). These data collectively indicate that HGC-TAC negatively regulates the TGF-β1/MAPK/NF-κB signaling pathway, providing kidney protection in lupus nephritis mice, but does not mediate the STAT3 signaling pathway.
Discussion
In this study, we investigated the renoprotective effects and efficacy of HGC-TAC nanomicelle treatment in a MRl/lpr mouse model of lupus nephritis. Lupusassociated activity and proteinuria were attenuated by HGC-TAC treatment. In addition, HGC-TAC treatment mitigated renal dysfunction and histological injury, including glomerular proliferative lesions and tubulointerstitial infiltration. Furthermore, HGC-TAC administration reduced renal inflammation and accompanying inflammatory gene expression in the lupus nephritis mouse model. Additionally, HGC-TAC administration ameliorated increased glomerular fibrosis and renal apoptosis and appeared to regulate renal inflammation via the TGF-β1/MAPK/NF-κB signaling pathway.
Regarding these renoprotective effects, HGC-TAC was more potent compared to an equivalent dose of TAC treatment alone. Several studies have discussed the feasibility of sitespecific drug delivery into the kidneys for the treatment of glomerulonephritis [24][25][26]. In ddY mice, a spontaneous animal model for IgA nephropathy, treatment with prednisolone phosphate-loaded liposomes showed better improvement in glomerular IgA and C3 depositions compared to ordinary prednisolone phosphate treatment characterized by the same dose and duration [24]. Similarly, dexamethasone loaded immunoliposomes were highly effective in improving renal function and decreasing glomerular crescent formation (without affecting blood glucose levels) in an anti-glomerular basement membrane glomerulonephritis model [26]. Moreover, a previous study demonstrated that a single intravenous injection of MMF containing immunoliposomes reduced mesangial cells in anti-Thy1.1 nephritis rats, a model of mesangial proliferative glomerulonephritis [25]. Therefore, targeted delivery of a steroid or MMF using immunoliposome may maintain the efficacy and quality of these drugs for kidney inflammation but minimize systemic side effects in the glomerulonephritis model.
However, kidney-targeted, nanomicelle-based TAC delivery methods for lupus nephritis models are yet to be developed. A recent study showed that monotherapy with TAC significantly diminished proteinuria and Toll-like factor-α, IMF-γ interferon-γ (n = 6 mice/group). The values for WT vehicle-treated group are set to 1. Data are shown as means ± SEM. *P < 0.05, **P < 0.01, and ***P < 0.001. ns not statistically significant receptor-7 expression and induced the suppression of IL-6 production in lupus nephritis mice [6]. Additionally, TAC monotherapy preserved renal function in nephritic mice by inhibiting podocyte apoptosis and stabilizing the actin cytoskeleton [7]. However, in these studies, lupus mice were given 0.1 to 1 mg/kg TAC daily by intragastric administration for 8 weeks [6,7]. In the present study (compared to the results of long-term and frequent TAC administration via daily oral gavage), we showed that weekly intravenous injections of HGC-TAC (0.5 mg/kg TAC) nanomicelles alone inhibited renal inflammation and resulted in improved renal morphology and function in lupus nephritis mice. However, weekly treatment with an equivalent TAC dose without HGC nanomicelles exhibited suboptimal renoprotective effects in lupus nephritis mice compared to HGC-TAC treatment. Therefore, kidney-targeted HGC-TAC delivery can exert renoprotective effects using a smaller than conventional TAC dose with an extended administration interval. Since poor adherence to immunosuppressive therapy is common and is one of the most important factors limiting renal allograft survival following transplantation in lupus patients [32,33], HGC-TAC nanocarriers may improve the drug adherence associated with reduced mortality [34]. Thus, our findings suggest HGC-TAC nanomicelles as a new therapeutic modality that can reduce pill burden or extend the interval of TAC administration in lupus nephritis patients. Immunofluorescence staining of kidney biopsies showed substantial expression of TGF-β1 and increased urinary levels of TGF-β1, as reported in lupus patients [29,35,36]. Previous studies have shown that decreased ERK signaling is associated with the development of autoimmunity in lupus via the decrease in the activity of DNA methyltransferases and the consequent alteration of gene expression; of note, lupus-prone mice are characterized by the increased phosphorylation of JNK [37][38][39]. Cell survival and death may therefore be controlled by the opposing actions of the ERK and JNK pathways [40]. In addition, the activation of p38MAPK is involved in TGF-β1-mediated gene expression and apoptosis in MRL/lpr mice [30]. In line with the results of these studies, we found that the TGF-β1/JNK-p38MAPK signaling pathways were upregulated, while the expression of ERK was downregulated in vehicle-treated lupus mice. These altered signaling mediators were restored by treatment with HGC-TAC. Importantly, our results were also consistent with those of a previous study demonstrating that MAPK1 short interfering RNAs (with nanocarrier therapy) suppressed glomerular MAPK1 gene and protein expression in lupus nephritis mice [27]. NF-κB is associated with the onset of various inflammatory autoimmune diseases, including lupus nephritis [41]. The phosphorylation of NF-κB occurs in the cytoplasm, enhancing NF-κB transcriptional activity [42]. Thus, NF-κB signaling regulates the expression of numerous genes that play key roles in the inflammatory response during kidney injury [43]. Consequently, NF-κB signaling plays a pathological role in lupus nephritis, and interference with this signaling by HGC-TAC nanomicelle treatment contributes to renoprotective effects in lupus nephritis mice.
In conclusion, we have demonstrated that weekly treatment with HGC-TAC nanomicelles prevents kidney injury from advanced lupus nephritis by preventing inflammation, fibrosis, and apoptosis through the modulation of the TGF-β1/MAPK/NF-κB singling pathway. Although a key takeaway of this study is that a new therapeutic modality using a kidney-targeted, TAC-loaded nanocarrier may provide benefits for treating nephritis in lupus mice, the therapeutic efficacy of TAC-loaded nanocarriers remains limited to animal models. Further studies are needed to clarify the renoprotective effects of HGC-TAC nanomicelles in humans.
Synthesis of TAC-loaded HGC (HGC-TAC) nanomicelles
We synthesized glycol chitosan conjugated to 5β-cholanic acid micelles and loaded TAC, as described in our previous report [23]. Briefly, lyophilized glycol chitosan was conjugated to 5β-cholanic acid via EDC NHS chemistry. The retrieved sample (HGC) was lyophilized and stored for further use. TAC-loaded HGC (HGC-TAC) nanomicelles were prepared using a nanoprecipitation method. The prepared HGC was dissolved in distilled water. Under mild sonication, TAC prepared in methanol was added dropwise to the HGC solution. The drugloaded sample (HGC-TAC) was dialyzed (MWCO: 12 to 14 KDa) against distilled water for 2 days, lyophilized, and stored until use. The amount of TAC in the HGC-TAC nanomicelles was measured using high-performance liquid chromatography (HPLC) (Shimadzu, Kyoto, Japan).
Particle size, surface charge, stability, and FE-TEM analysis
The hydrodynamic particle size and surface charge of HGC-TAC were measured by dynamic light scattering (Zetasizer Nano series, Malvern Instruments, Malvern, UK). The colloidal stability of the nanomicelles in (See figure on next page.) Fig. 8 Treatment with HGC-TAC regulates the transforming growth factor (TGF)-β1/p38 mitogen-activated protein kinase (MAPK)/nuclear factor (NF)-κb signaling pathway in MRL/lpr mice. a, b Western blot analysis of TGF-β1, MAPKs, signal transducer, activator of transcription (STAT) 3, NF-κb, and protein levels from wild-type (WT), MRL/lpr mice treated with vehicle, or MRL/lpr mice treated with an equivalent dose of TAC or HGC-TAC. c Relative protein intensities are presented. The values for the WT vehicle-treated group are set to 1. (n = 6 mice/group). d, e Immunohistochemical staining of kidney sections for TGF-β1 and NF-κB p65 from each group. Original magnification ×400, Bar = 25 μm. f Staining for TGF-β1 and NF-κB p65 in the glomerulus was quantified and expressed as a percentage of positive glomerular area. All values are presented as mean ± SEM. *P < 0.05, **P < 0.01, and ***P < 0.001. ns not statistically significant distilled water, PBS, and 10% fetal bovine serum were assessed using dynamic light scattering over 7 days. The size and morphology of nanomicelles were also measured using FE-TEM operated at 200 kV (JEM-2100 F, Tokyo, Japan).
In vitro release of TAC
In vitro release of TAC from nanomicelles was studied in PBS and 10% FBS for 7 days. Samples were placed in a dialysis bag and kept in 20 ml release medium (PBS or 10% FBS) in a shaking incubator at 37 ℃. Samples were aliquoted at different time points and analyzed using HPLC.
Determination of kidney and plasma TAC concentration
The concentration of TAC in the kidney and plasma was determined as previously described [23]. Briefly, 0.5 mg/ kg of HGC-TAC was intravenously injected. Plasma was collected by retro-orbital sinus bleeding at different time points and centrifuged at 845×g for 10 min. An equal amount of 100% methanol was added to the plasma and centrifuged at 13,500×g for 2 min. The collected supernatant was used for HPLC analysis. To determine the concentration of TAC in the kidney, each kidney was homogenized in methanol using a TissueLyser II (Qiagen, Hilden, Germany). The homogenized solution was centrifuged at 13,500×g for 10 min, and the supernatant was then collected for HPLC analysis.
In vivo biodistribution of nanomicelles
Female MRL/MpJ-Fas lpr mice were intravenously injected with Flamma 675-conjugated HGC dissolved in PBS. The mice were euthanized, and their organs were collected at predetermined time points (1, 2, 3, 5, and 7 days) after a single intravenous injection. Fluorescence intensity was measured using a fluorescence-labeled organism bioimaging instrument (FOBI; NEO Science, Gyeonggi, Korea). The isolated kidneys were dehydrated with 20% sucrose in PBS for 4 h at 4 ℃ and embedded in an optimal cutting temperature compound. Frozen kidney sections of 20 μm were prepared. The sections were rehydrated with PBS at the time of immunostaining and were counterstained with 4′,6-diamidino-2-phenylindole. Images were acquired using a confocal microscope (LSM 800; Carl Zeiss, Oberkochen, Germany).
Cellular uptake of nanomicelles
The human proximal tubular cells were seeded in Lab-Tek ® Chamber Slide and incubated overnight. The media was aspirated, and the cells were treated with HGC-F675 nanomicelles at different time points up to 6 h. Subsequently, the samples were removed, followed by washing with PBS and 4% paraformaldehyde fixation. The cells were stained with Hoechst 33342 and mounted with prolong gold antifade reagent. The fluorescence signal was visualized using confocal microscopy. Cells not treated with HGC-F675 were used as a negative control.
Animal experiments
Mice were maintained in a 12-h light/dark cycle and had free access to standard chow (Damul Science, Daejeon, Korea) and tap water. Eight-week-old female MRL/ MpJ and MRL/MpJ-Fas lpr mice were purchased from the Jackson Laboratory (Bar Habor, ME, USA). Female MRL/MpJ-Fas lpr mice were randomly assigned into three groups and given either vehicle, an equivalent dose of TAC, or HGC-TAC (0.5 mg/kg TAC) intravenously once per week for 8 weeks. Age-and sex-matched MRL/MpJ mice without Fas lpr mutation were treated with vehicle and used as healthy controls (n = 6 mice per group) (Fig. 2a). Experiments were repeated at least twice.
Measurement of urine protein and albumin-to-creatinine ratio
Urine samples were collected in metabolic cages to examine the levels of urinary protein and albumin excretion and ratios of urinary protein and albumin to creatinine. Urine creatinine was quantified using commercial kits from BioAssay Systems (Hayward, CA). Urine protein was assessed using Bradford's method (DC Protein Assay, Bio-Rad Laboratories GmbH, Munich, Germany). Urine albumin was determined using a commercial assay from Bethyl Laboratory, Inc. (Montgomery, TX).
Western blot analysis
Proteins extracted from mouse tissues were obtained by homogenization in ice-cold modified RIPA buffer (150 mM sodium chloride, 50 mM Tris-HCl (pH 7.4), 1 mM EDTA, 1% v/v Triton-X 100, 1% w/v sodium deoxycholic acid, 0.1% v/v SDS) and centrifuged at 4000×g for 15 min at 4 ℃. Western blot analysis was performed as described previously [44]. Densitometry was performed using Scion Image software (Scion Corporation, Frederick, MD). The primary and secondary antibodies used in western blotting are listed in Additional file 3: Table S1.
Quantitative reverse transcription-polymerase chain reaction (qRT-PCR)
Total RNA was extracted using Trizol reagent (Invitrogen, Carlsbad, CA). cDNA was reverse transcribed from 5 μg of total RNA using SuperScript II Reverse Transcriptase as per the manufacturer's instructions (Invitrogen). qRT-PCR analysis was performed using the SYBR green method [45]. The relative level of tissue mRNA was determined by qPCR using a Rotor-Gene Q (QIAGEN Sciences, Germantown, MD). The primers used in qRT-PCR are listed in Additional file 3: Table S2.
Immunohistochemical and immunofluorescence staining
The kidneys were fixed in 4% paraformaldehyde, dehydrated using a graded series of ethanol, embedded in paraffin, sectioned (3 µm), and mounted on glass slides. Hematoxylin and eosin and Periodic Acid-Schiff staining were used to assess kidney histology. Periodic Acid-Schiff and Masson's trichrome staining were performed according to the manufacturer's instructions (Abcam, Cambridge, MA) [46]. For immunohistochemical staining, paraffin sections were dewaxed and rehydrated via a xylene/ethanol gradient followed by antigen retrieval (100 ℃ for 15 min in citrate buffer, pH 6.0) using Antigen Unmasking Solution (Vector Laboratories, Burlingame, CA). Sections were blocked with 2.5% bovine serum albumin in PBS and incubated with primary antibodies overnight at 4 °C and then with the appropriate horseradish peroxidase-conjugated secondary antibody. Sections were incubated in a 3,30-diaminobenzidine reaction solution (Abcam) and counterstained with hematoxylin. For immunofluorescence staining, frozen sections (5 μm) were fixed for 10 min in cold acetone and then stained with primary FITC-conjugated antibodies. Primary and secondary antibodies used for immunohistochemistry are listed in Additional file 3: Table S3. CD68, F4/80, TNF-α, TGF-β, and NF-κB p65-positive stains were quantified in 10 to 15 glomeruli in each section, and the positive glomerular areas were expressed as a percentage of the total area.
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay
Apoptosis of tubular epithelial cells was detected via TUNEL staining using an ApopTag Plus Peroxidase In Situ Apoptosis Kit (S7110, Sigma-Aldrich, St. Louis, MO), according to the manufacturer's instructions. TUNEL-positive cells were quantified in each section, and the number of positive cells was expressed as a percentage of the total cells.
Transmission electron microscopy
Small kidney cortices were fixed in 4% glutaraldehyde and 1% paraformaldehyde, dehydrated, and embedded in Spurr resin. Glomeruli were localized in semi-thin sections stained with toluidine blue. Ultrathin sections, with one or two glomeruli per tissue specimen, were stained with lead citrate for transmission electron microscopy. Four to ten photographs covering one or two glomerular cross-sections were captured using a JEM-1400 transmission electron microscope (JEOL, Peabody, MA). The images obtained had a final magnification of approximately ×10,000.
Scanning electron microscopy
Small cubes of kidney cortex fixed in 2.5% glutaraldehyde were immersed in 1% osmium tetroxide in phosphate buffer for 2 h. Following dehydration with a graded series of ethanol, specimens were transferred into hexamethyldisilazane for chemical drying. After mounting on aluminum stubs with carbon paste, the dried specimens were coated with gold using an ion sputter coater (SPT-20, COXEM, Daejeon, Korea) and observed with an EM-30AX scanning electron microscope (COXEM, Daejeon, Korea).
Statistical analyses
The results are expressed as mean ± standard error of the mean. The statistical significance of differences was determined using unpaired Student's t-test or one-way analysis of variance followed by post hoc Tukey's (honestly significant difference, or HSD) test. All statistical analyses were performed using GraphPad Prism 9 (GraphPad Software, San Diego, CA). | 2021-04-17T13:44:43.343Z | 2021-04-17T00:00:00.000 | {
"year": 2021,
"sha1": "26510a50ec929cef6077887a71bc6f6ad4163378",
"oa_license": "CCBY",
"oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/s12951-021-00857-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26510a50ec929cef6077887a71bc6f6ad4163378",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255814110 | pes2o/s2orc | v3-fos-license | Authoritative subspecies diagnosis tool for European honey bees based on ancestry informative SNPs
With numerous endemic subspecies representing four of its five evolutionary lineages, Europe holds a large fraction of Apis mellifera genetic diversity. This diversity and the natural distribution range have been altered by anthropogenic factors. The conservation of this natural heritage relies on the availability of accurate tools for subspecies diagnosis. Based on pool-sequence data from 2145 worker bees representing 22 populations sampled across Europe, we employed two highly discriminative approaches (PCA and FST) to select the most informative SNPs for ancestry inference. Using a supervised machine learning (ML) approach and a set of 3896 genotyped individuals, we could show that the 4094 selected single nucleotide polymorphisms (SNPs) provide an accurate prediction of ancestry inference in European honey bees. The best ML model was Linear Support Vector Classifier (Linear SVC) which correctly assigned most individuals to one of the 14 subspecies or different genetic origins with a mean accuracy of 96.2% ± 0.8 SD. A total of 3.8% of test individuals were misclassified, most probably due to limited differentiation between the subspecies caused by close geographical proximity, or human interference of genetic integrity of reference subspecies, or a combination thereof. The diagnostic tool presented here will contribute to a sustainable conservation and support breeding activities in order to preserve the genetic heritage of European honey bees.
Background
Honey bees (Apis mellifera L.) are the most important managed pollinators and currently under threat due to a multitude of pressures worldwide [1,2]. The species shows considerable variation across its natural range and is comprised of at least 30 described subspecies belonging to different evolutionary lineages [3][4][5][6]. Europe holds a large fraction of this honey bee diversity with numerous endemic subspecies representing four evolutionary lineages, namely the African lineage (A), Central and Eastern European lineage (C), Western and Northern European lineage (M), and Near East and Central Asian lineage (O) [7,8]. However, this diversity and the natural distribution range of European honey bees have been influenced by anthropogenic factors to an extent that several locally adapted populations are at risk due to introgression and crossbreeding [9][10][11]. Large-scale queen breeding, commercial trade and long distance migratory beekeeping may reduce genetic diversity and can lead to genetic homogenization of admixed populations [9,12] and potential subsequent loss of local adaptations. In fact, it has been demonstrated that locally adapted honey bees have higher survivability [13] from which follows that the conservation of the underlying genotypic variation must be a priority for the long-term sustainability of populations [14]. To conserve the honey bees' natural heritage and thereby its adaptive potential to future global change, there is a need to promote the sustainable breeding of certified local subspecies.
Numerous conservation efforts for native honey bees have been initiated across Europe [9,10,15,16]. The success of such conservation efforts including genetic improvement programs [17,18] depends on mating within the population of interest, which is complicated by the honey bees' mating system where virgin queens mate freely with multiple drones from surrounding colonies [19,20]. Beyond the use of isolated mating apiaries or artificial insemination, successful mating control measures can include different management techniques of queens and drones [21] and regular monitoring of genetic origin and parentage. In some countries and regions in Europe, queen importations are restricted to the native honey bee subspecies [22,23] or ecotypes [24,25]. In such instances, when trading queens or colonies across national borders, queen origin needs to be verified. Additionally, authentication of the genetic origin of bee products in terms of a certifiable native bee label, could help beekeepers to better market their hive products [26]. Thus, to implement effective border control, increase economic value of bee products and to support informed conservation and breeding management decisions across Europe, there is a demand for diagnostic genetic test to reliably infer the subspecies of origin.
With the advances of high-throughput sequencing and genotyping technology in the last decade, reference genomes, whole-genome sequence data, and thousands of individual genotypes are now available for many species. Within these oftentimes massive data sets, it is possible to mine for highly informative single nucleotide polymorphisms (SNPs) that can then be exploited to genotype a larger number of individuals [27,28]. Such genotyping panels based on a selected set of informative SNPs have been developed for numerous species, including humans, and can be used to infer introgression, genetic ancestry, population structure, genetic stock identification, and food forensics [29][30][31].
Different approaches have been used to select informative SNPs from larger genotyping panels or sequence data (reviewed in [32,33]). The most common and popular method for selection is population differentiation as estimated by F ST , which is based on allele frequency differences between populations expressing the variation among populations relative to the total population [34,35]. Principal Component Analysis (PCA) has also been employed to identify informative SNPs, since it reduces feature dimensionality while only losing little information and is particularly advantageous with complex population structures [28,36]. Given a set of informative SNP markers, supervised classification and socalled assignment tests are employed whereby an individual is assigned to predefined classes (i.e., subspecies or populations of origin). Classical applications of assignment testing in population genetics first used supervised parametric likelihood-based approaches [37,38]. Recently, new methods, together referred to as supervised machine learning (ML), have emerged in computational population genomics [39]. The general approach for any supervised ML classifiers is to split the data into a reference (training) set to 'learn' a function that can discriminate between the given data classes [40]. This function is then used to predict the probability of an 'unknown sample' (test) of belonging to any given class (e.g. subspecies). The accuracy of the classification, expressed as the proportion of test individuals correctly classified to their population of origin, is influenced by the properties of the training data set (i.e., number of samples, genetic diversity, levels of population differentiation, degree of overlap in data distribution and quality of reference samples) [41]. ML classifiers aim to optimize the predictive accuracy of an algorithm rather than performing parameter estimation of a probabilistic model, and they have the potential to be agnostic to the assessment of the given dataset, i.e. without assumptions of the processes leading to differentiation, including the evolutionary history [39].
For honey bees, different SNP panels have been designed, for instance to identify and estimate C-lineage introgression in M-lineage subspecies A. m. iberiensis and A. m. mellifera [15,[42][43][44][45][46]. The latter subspecies is native to northern and western Europe and once occupied a large fraction of the European territory, but is now threatened and even has been completely replaced in much of its range [10,47,48]. Moreover, SNP panels have also been developed to infer the level of Africanization and ancestry in honey bees of the New World and Australia [46,49,50]. However, for most A. mellifera subspecies, whose populations have been genetically examined to a lesser extent or not at all, molecular knowledge at this level of detail is still lacking. These subspecies and locally adapted populations or ecotypes appear more vulnerable due to the extant multiple threats to honey bees.
The SmartBees project was initiated with the purpose of developing new tools to describe and conserve honey bee diversity in Europe. We have designed a molecular tool consisting of highly informative SNP markers suitable for assigning honey bee individuals to their subspecies of origin, based on a comprehensive sampling of European honey bee diversity. Based on pool-sequence data from 1995 worker bees representing 22 populations, four evolutionary lineages and 14 subspecies, we selected 4400 informative SNPs employing two powerful and commonly used approaches (F ST and PCA). Of these, 4165 SNPs, for which probes could be designed and which passed the BeadChip decoding quality metric, were genotyped in 3903 individual bees using the Illumina Infinium platform. Final quality control filtering left 4094 reliable SNPs to build a statistical model using machine learning (ML) algorithms for assignment of European honey bees to 14 different genetic origins. The best model was the Linear Support Vector Classifier (Linear SVC) which could correctly assign 96.2% of the tested samples to their genetic origin. Thus, the here presented method accurately identifies European subspecies, which is crucial to support management strategies in sustainable honey bee breeding and conservation programs.
Samples and pool-sequencing
A total of 22 populations representing the four European evolutionary lineages and 14 subspecies have been sampled from their native ranges throughout Europe and adjacent regions (Tables 1 and S1). Each selected population included up to 100 worker bees from unrelated colonies, totaling 2145 samples, which represents the most comprehensive sampling effort for the study of European honey bees to date. The samples from each population were homogenized, pooled and their DNA extracted. Sequencing on an Illumina HiSeq 2500, produced 1.6 billion paired-end fragments (3.2 billion individual reads) with an average read length of 125 bp, and a total genome depth of coverage of 2800x. Sequencing and variant statistics can be found in Table S2.
Selected SNPs
While main evolutionary lineages were easily differentiated with only few SNPs ( Figure S1A), it was more challenging to differentiate closely related subspecies with a reduced number of genetic markers. Given the complex, hierarchical population structure of European honey bees, we employed two powerful and commonly used approaches, PCA ( Figure S1) and F ST , to identify the most discriminant markers to differentiate subspecies of European honey bees (see details in Methods and supplementary materials and methods). Based on the variants infered from the pool-sequence data, we selected 4400 informative SNPs, of these, a total of 4165 SNPs passed the decoding quality metric for genotyping using the Illumina Infinium custom-designed BeadChip, indicating that 99% of the originally submitted probes were suitable for genotyping. The SNPs are distributed across all of the 16 honey bee chromosomes as well as in unplaced contigs (Table S3), with an average distance between SNPs of 64 kb. SNP information and genomic position of the 4165 SNPs selected to differentiate European honey bee subspecies are presented in Additional file 1.
Sample genotyping and visualization
Of the 4165 SNPs, 4094 were successfully genotyped in 3896 individual bees using Illumina Infinium BeadChip technology (Table 1). With only 71 SNPs never producing any data, the genotyping success rate (SNP validation) rate was 98%. The average call rate per individual was 0.87, varying among samples of every subspecies from 0.84 in A. m. cypria to 0.89 in A. m. adami (Table S4). More than one-third of the samples have a call rate exceeding 0.9.
The genotype data of the individuals from the pool sequencing is visualized in a t-SNE plot [51] that reduces high-dimensional data to a two-dimensional map where Table 1 Samples individually genotyped for subspecies classification (N TOT = 3896) consisting of individual samples from the pool sequencing (in bold, N = 1998, excluding 62 outliers) and new independent samples (N = 1908). Samples were collected from their native range and labelled based on previous studies, morphometric analysis or local knowledge (see Methods sections and Table S1). 70% of pool sequencing samples (N = 1391) were used as training data for building the model, while the remaining 30% (N = 597) together with the independent samples (N Total = 2505) were considered as out-of-sample data for subsequent validation Figure S2.
Sample classification using machine learning
We employed machine learning (ML) methods to build a model for the classification and assignment of European honey bees to its subspecies of origin. Out of the tested ML algorithms, the best performing model was the Linear SVC (Table S5). The model calculates the prediction probability for a sample to belong to any of the 14 reference populations. Each test sample was classified into the subspecies which showed the highest prediction probability ranging from as low as 0.29 to 1.0 with a median of 0.98 ( Figure S3). A confusion matrix was used to summarize, describe and visualize the performance of the Linear SVC classification model on a set of test data (out-of-sample data, N = 2505) for which the true values (subspecies) were known. For the lineages, the model is capable of predicting all samples with 100% accuracy ( Figure S4). For the subspecies, the confusion matrix revealed that for most of them the model accurately predicted the ancestry of the test samples (N = 2505), with only a few exceptions (Fig. 2a). The accuracy ranged from 65 to 100%, indicating that some subspecies are easier to distinguish than others. In total 96.2% of test samples were correctly predicted, while 95 individuals (3.8%) were misclassified, i.e., predicted by the model with a different subspecies than the labeled one (true values), for instance: four A. m. ligustica bees were predicted as A. m. carnica, two "A. m. carpatica" bees each as either A. m. carnica or A. m. macedonica, and 23 A. m. cecropia bees were predicted as A. m. macedonica.
The model predicts the probability that a given sample belongs to one of the 14 subspecies under study. On this basis, the test samples were assigned to a certain subspecies based on the highest prediction probability, even if the probability was low (see above). Therefore, with the purpose of increasing the certainty of classification we set a probability threshold, so to ensure that only samples very likely belonging to any of the 14 subspecies were assigned, while test samples with low prediction probabilities were considered unassigned. In Fig. 2b, we show an example of setting a probability threshold at 90%. By setting this threshold, we increased the proportion of truly assigned samples from 96.1 to 99.6%, while the misclassification rate fell from 3.9 to 0.4%. However, 407 of the test individuals remained "unassigned", for instance, 22 out of the 23 A. m. cecropia bees predicted as A. m. macedonica were no longer considered misclassified but enter the unassigned category.
Discussion
In this study, we performed a large-scale and comprehensive sampling following a standardized procedure, and aimed to capture as much of the honey bee genetic diversity in Europe as possible by deep-sequencing of pooled populations. Further, we applied two powerful Fig. 2 Confusion matrix for test samples (out-of-sample data, N = 2505) showing the (rounded) percentages of truly assigned individuals (diagonal) and percentages of individuals assigned to a different subspecies (misclassified; upper and lower triangles). a Assignment based on the highest prediction probability classifies each of the test individuals to a subspecies, while b using a probability threshold of 90% some samples are considered "unassigned" and excluded from the confusion matrix SNP selection methods [32,33] to address diversity at different levels of differentiation (lineages, subspecies, populations). Subsequently, these ancestry informative markers were employed to build a model to classify samples of European honey bees into subspecies.
The considerable honey bee diversity poses a challenge when it comes to providing a discriminative tool applicable across Europe. The four European lineages were easily distinguished genetically with only 200 SNPs due to their ancient divergence [52], but difficulties arose at a lower hierarchical level of differentiation. Subspecies from the same evolutionary lineage diverged only recently [53] and are, thus, genetically very close. Moreover, there are some areas in Europe where A. mellifera subspecies variation has not yet been exhaustively described, while in others human-mediated introgression contributes to blurring the natural boundaries between subspecies [42,48,54]. National breeding programs can also disrupt the natural gene flow and may contribute to changing the genetic background of the original subspecies [11,12,55,56]. In fact, in our study applying a stringent filtering option we only identified few unique SNPs that were exclusive to one population. Similarly, other population genomics studies have found a high degree of allele sharing across and within evolutionary lineages [7,53]. In contrast, we found variation in the average call rate per individual between subspecies which may, in part, be explained by the presence of null alleles (alleles producing no signal), suggesting sequence variation or subspecies-specific deletions within the probe site. Probes that did not work for certain subspecies (i.e. missing data), in fact, contain valuable information and even enriched our model.
We employed a machine learning (ML) approach to build a model for subspecies classification. ML takes advantage of high dimensional input and provides an improvement of prediction accuracy in a model-free approach [39,40]. In this way, subtle differences can be revealed which was particularly relevant in our study, due to the high number of closely related subspecies we wanted to discriminate. Our best performing model was Linear SVC, member of the family of Support Vector Machines (SVMs), which are known to generalize well because they are designed to maximize the margin between any two classes (subspecies) [57]. Typical biological applications of SVMs include protein function prediction, transcription initiation site prediction and gene expression data classification (reviewed in 57). In the field of population genetics, a thorough ML approach to select the best model is generally not yet commonly implemented, although specific models have been developed for ancestry inference [58,59]. Here, we employ a comprehensive ML approach based on genotype data for honey bee subspecies diagnosis.
Despite the comprehensive sampling effort, the careful SNP selection and the application of the latest classification methods, some limits remain in the diagnostic system. For instance, within the C-lineage we have experienced problems in differentiating samples according to the alleged subspecies. Such misclassification of individuals can be explained by various factors coming together: (i) this lineage is of comparatively recent origin [53] and (ii) consists of multiple highly interrelated subspecies within close geographical proximity (see Figure S1D); (iii) the taxonomic status of some populations has not yet been fully resolved [60][61][62]; and (iv) the genetic background of some populations is being altered by introgression due to human interference [63]. Furthermore, labelling errors of the out-of-data samples could not be ruled out as an additional source of misclassification, especially if we refer to those samples for which the model predicted a different subspecies with high probability. Supervised ML relies on the qualities of the reference data for classification, thus, in the future, we aim to refine the training data to improve the model prediction accuracy and reduce the misclassification rate.
It is also important to note, that by setting a probability threshold for the assignment of any subspecies, the misclassification rate was reduced, for some subspecies considerably. While such a threshold increases the confidence in subspecies prediction, it also implied, however, that quite a few individuals were left "unassigned". What threshold is used as a cut-off for subspecies classification depends on the specific circumstances and the application. For example, for the conservation of a small endangered population the threshold might be set lower in order to maintain genetic diversity, than for instance in a pure breeding line under selection for specific traits.
Overall, earlier methods based on morphometry, mtDNA variation, microsatellite loci, or even SNPs have been effective in differentiating between evolutionary lineages and, to some extent, between subspecies of the same lineage [22,42,45,[64][65][66][67]. Yet, our diagnostic tool is the most comprehensive tool to date to reliably classify European honey bees into subspecies in a single analysis. Moreover, the advantage of our approach is that it is a dynamic tool that can be updated to include more subspecies by genotyping new samples and adding their data to rebuild a classification model using ML with additional subspecies. Ongoing research indicates that this approach is applicable to A. m. siciliana from Sicily. Furthermore, individual bees from South Africa tested with our system were rejected as being of European origin (i. e., low prediction probability to any of the subspecies). This dynamic tool, therefore, could easily incorporate new populations to be discriminated, and would even have the potential to be optimized to differentiate populations/ecotypes within subspecies, or to evaluate the degree of introgression.
Conclusions
The main finding of the study is that our model can classify bees into each of the European subspecies with high accuracy. Consequently, as the bees included in this project were collected in a vast area ranging from Russia and Armenia in the East to Portugal in the West, and from Malta in the South to Scotland in the North, we conclude that much of the natural diversity of European honey bees can still be considered extant, in spite of human interference since more than 150 years. The in situ conservation of this genetic heritage is our duty [68], and we believe that the honey bee subspecies diagnostic tool presented will make a useful contribution. It is of value in an array of applications: for beekeepers who want to know the subspecies of their bees; for conservation managers in Europe, where subspecies diagnosis is essential to monitor the hybridization rate of colonies within conservatories; for veterinarians to control queen trade; for bee breeders to certify the subspecies origin of their queens; and for beekeepers to authenticate their bee products.
Pool-sequencing samples
For this study, in total 22 populations have been sampled, all within their native range (Tables 1 and S1), and are referred to as different subspecies and genetic origins according to the classification of Ruttner [8] [69], "A. m. carpatica " Foti 1965 [60], and "A. m. rodopica" Petrov 1991 [61]. There exist some uncertainty and unresolved taxonomic status of some populations, and subspecies descriptions in literature have not always been performed according to the standards laid down in the International Code of Zoological Nomenclature (ICZN) [62]. Thus, different views are found in literature to what is to be considered a subspecies or ecotype. In this paper, we do not aim to resolve or justify any classification. Finally, we considered 14 subspecies/ genetic origins (listed above) for our diagnostic tool, which were used as categories in the machine learning classification model.
Each selected population included up to about 100 (ranging from 86 to 100) worker bees from unrelated colonies that were used for subsequent pool-sequencing. Effort was undertaken to cover the entire distribution range of any subspecies, while taking into account within-subspecies variability when appropriate. We focused on collecting representative samples for each subspecies by primarily sampling from beekeepers that were known not to import bees in order to minimize the risk of including hybrids. Moreover, we only chose one worker bee per apiary to avoid related individuals and to include as much diversity per population as possible. Also in order to secure the subspecies-origin of the collected samples, in some cases (where possible), a morphometric analysis was performed and/or we relied on already genotyped bees [55,65,66,[70][71][72]. Detailed information on sample origin and respective references are presented in Table S1.
DNA extraction, library preparation, and pool-sequencing
The heads or thoraxes of up to 100 bees (Table S1) from each pool were homogenized, DNA was extracted from all samples by using a magnetic bead-based purification method (NucleoMag® Blood 100 μL, Macherey-Nagel, Germany). Subsequently, sequencing libraries of each pool-DNA were constructed with the TruSeq DNA PCR-Free library preparation kit and sequenced on an Illumina HiSeq 2500 platform. Bioinformatic processing, including trimming, mapping and variant calling of the generated pool sequence data, was performed using best practices and standard software (details in supplementary material and methods). The pipeline for the analysis of the pool sequence data is available at https://github. com/jlanga/smsk_popoolation.
Selection of ancestry informative markers
Several studies have selected a limited number of SNPs to differentiate between the main evolutionary lineages [15,45,46], however, for closely related subspecies more markers and a more refined selection approach are needed. Thus, we used two different approaches (PCA and F ST ) [28,34] to identify and select informative SNPs, in order to capture the most discriminant markers at different levels: (i) SNPs to differentiate the four main evolutionary lineages, (ii) SNPs to discriminate subspecies within evolutionary lineages, and (iii) SNPs to identify specific populations within subspecies (e.g. ecotypes).
First, we created a matrix with the minor allele frequencies for each SNP and sequenced pool, which was used to perform PCA to select SNPs that differentiate the main evolutionary lineages ( Figure S1A). Second, PCA was performed separately on the subsets of pools from each lineage in order to select informative SNPs to discriminate subspecies within each lineage (Figure S1B-D). We used the FactoMineR R package [73] and custom-made R scripts to select at each hierarchical level the SNPs with the highest contributions to the significant PCs. Using this procedure, 300 PCA-informative SNPs were selected for discriminating the four evolutionary lineages, 200 SNPs for the M-lineage, 600 SNPs for the O-lineage and 1100 SNPs for the most complex C-lineage ( Figure S1D). Preliminary simulations using allele frequencies from the pool-sequencing revealed that this approach was highly effective in discriminating lineages and subspecies ( Figure S1).
To select additional SNPs that can differentiate between pools, pairwise F ST values [74] between all population were calculated for each SNP with two settings (loose and stringent options) using PoPoolation2 [75]. The loose setting option will return more SNPs with less certainty and lower quality, which in turn potentially reduces genotyping success. This drawback is counterbalanced, since the loose option increases the chance of identifying highly informative population-specific (unique) SNPs. For either setting option (loose and stringent), the pairwise F ST values of each pool against all other pools were summed up for each SNP and then ranked according to the highest summed F ST value. A fixed and unique SNP in one pool is expected to have a maximum sum of 21, which means this variant is only present in this specific population. A reasonable tradeoff between unique and reliable SNPs was achieved by selecting the top 20 SNPs with the highest summed F ST from the loose option and the top 80 SNPs from the stringent option for each pool. With 22 pools, a total of 2200 informative population-specific SNPs were selected using F ST .
Overall, 4400 ancestry informative SNPs were selected based on PCA and F ST (Table S3). These highly informative markers are not only important for the assignment of individuals to subspecies as presented in this study, but, because of their varied allele frequencies in different populations, they can be used, for instance, for classification of new subspecies and for further follow-up studies.
Probe design
Probes for the 4400 selected SNPs were evaluated for genotyping on the Illumina Infinium platform using Illumina's DesignStudio® software which requires as input the flanking region of 50 bp up and downstream of each SNP. SNPs were discarded if no probe could be designed in the flanking region or if the probes had more than one hit when aligned to the honey bee reference genome. The final list of 4197 SNPs was submitted to Illumina for probe design and production. The SNPs are distributed across all of the 16 honey bee chromosomes as well as in unplaced contigs (Table S3; Additional file 1), with an average distance between SNPs of 64 kb.
Validation samples and genotyping
A total of 3958 individual bees were genotyped for the selected SNPs, including 2050 same individual worker bees that were used for pool sequencing, as well as 1908 newly collected individuals (Table 1). These new additional samples were received from several different sources and of variable quality, including whole honey bees in ethanol, honey bees squeezed on FTA cards, tissue samples from flight muscle or purified DNA. These originated from SmartBees breeding apiaries [76] and from colonies examined for Varroa-sensitive hygienic behavior within the SmartBees project [77]. The samples were genotyped using the custom-made BeadChip array Infinium iSelect XT 96. The results were analyzed using Illumina's GenomeStudio® software, and the genotypes of each sample were exported for further analysis. For an initial visualization of the genotyping results, we created t-distributed stochastic neighbor embedding (t-SNE) manifold plots. This technique visualizes highdimensional data by giving each data point a location in a two-or three-dimensional map [78]. Outliers and samples that were labeled as one subspecies, but were clearly grouped with another cluster, were removed, in total 62 samples, leaving N = 1988 pool sequence reference samples. This was done with the objective to create a highquality and representative reference data set for subspecies assignment.
Sample classification using machine learning (ML) algorithms
In order to build a model to classify and predict the subspecies assignment of unknown samples of European honey bees, we employed ML methods using the scikitlearn python environment [79]. First, the 1988 genotyped individuals from the pools were shuffled, then 70% of them (N = 1391) were used as training data. The remaining 30% (N = 597), together with the additional newly collected individuals (N = 1908) were considered as out-of-sample data (N Total = 2505) for subsequent validation (Table 1) [40]. Different supervised ML algorithms were tested, including RandomForest, LogisticRegression, SupportVector Machine (SVM), and Linear SupportVectorClassifier (SVC) ( Table S5; detailed information on model selection in supplementary materials and methods). Briefly, the genotype data was converted to a matrix compatible with machine learning (one-hot encoding) [80]. Class information such as lineage and subspecies of each sample was added to the matrix, which was used to train the different machine learning models to predict the sample ancestry. Linear SVC was one of the best performing models according to average accuracy estimated using cross-validation and was finally selected (Table S5, Figure S5).
After training the Linear SVC model, it was used to classify out-of-sample data (N = 2505). Samples were classified according to the highest prediction probability belonging to any of the subspecies. A confusion matrix [81] was created to summarize and visualize the performance on out-of-sample data for which the true values are known. Each row of the matrix represents the true class, while each column represents the predicted class based on the highest probability for each subspecies. The resulting percentages compare a list of expected values with a list of predictions from the model.
In order for the model to be applied in practical conservation and breeding, we defined a threshold of 90% based on the observed distribution of the prediction probabilities ( Figure S3), which are in accordance with values found in bee literature [43,82]. If the prediction probability for any given sample is less than the threshold of 90%, it is considered "unassigned", while if it exceeded the threshold it was assigned to the respective subspecies. (Table S1). All authors substantively reviewed the manuscript contributing with important comments that much improved the manuscript and approved the final version.
Funding
The SmartBees project was funded by the European Commission under its FP7 KBBE programme (2013.1.3-02, SmartBees Grant Agreement number 613960) https://ec.europa.eu/research/fp7. MP was supported by a Basque Government grant (IT1233-19). The funders provided the financial support to the research, but had no role in the design of the study, analysis, interpretations of data and in writing the manuscript.
Availability of data and materials
All sequence data from the pools analyzed during the current study have been submitted to the NCBI Short Read Archive (SRA) under the BioProject accession number PRJNA666033: https://www.ncbi.nlm.nih.gov/sra/?term= PRJNA666033. The pipeline for the analysis of the pool sequence data is available at https://github.com/jlanga/smsk_popoolation.
Ethics approval and consent to participate Not applicable.
Consent for publication
Not applicable.
Competing interests JM, RON and RV were GenoSkan employees at the time the project was designed. GenoSkan, now owned by Eurofins Genomics, was an SME project partner in the SmartBees project. At present, JM is a bioinformatician at Eurofins Genomics, who is the genotyping service provider of the SNP chip presented in this study. | 2023-01-15T14:31:48.682Z | 2021-02-03T00:00:00.000 | {
"year": 2021,
"sha1": "6afe41ecf21230d8561e7b606d6f7b48f2cce3d9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12864-021-07379-7",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "6afe41ecf21230d8561e7b606d6f7b48f2cce3d9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
14703171 | pes2o/s2orc | v3-fos-license | Ultrasound measurements of the liver: an intra and inter‐rater reliability study
Abstract Introduction: Ultrasound is an easy and inexpensive method to rapidly assess the size of the adult liver. The literature addressing reliability of liver measurements using ultrasound is poorly reported and inadequate. In this study, intra and inter‐rater reliability of multiple measurements of the right lobe, left lobe and entire adult liver were assessed. Methods: Two examiners acquired ultrasound images of the liver in multiple positions. Fifteen measurements were taken from each set of images by each examiner. One examiner repeated the images and measurements. Results: Results demonstrated high intra‐rater reliability for all measurements (ICC's 0.67–0.97). Inter‐rater reliability also demonstrated high reliability (ICC's 0.71–0.94) for nine of the fifteen measurements (six representing the right lobe, one representing the left lobe and two representing the entire liver. Further analysis using paired samples t‐tests and Bland Altman plots were performed on these nine measurements. Conclusion: From this study, the most reliable measurements are suggested to be MCL Dome to tip and MCL Max AP for the right lobe and Midline Max AP for the left lobe. The only measurement to truly encompass both lobes (Max Trans) was not shown to be reliable.
Methods
Ethics approval was sought and granted from the ethics committee of the University of South Australia prior to commencement of the study.
A sample size calculation was performed using PASS 11 (NCSS, Utah USA) which determined a sample size of 12 participants was required with two observations per participant to achieve 80% power to detect an alternative hypothesis intra-class correlation coefficient (ICC) of 0.7 against the null hypothesis that the ICC is 0.1 using a t-test with a significance level of 0.05.
A sample of convenience of twelve participants was recruited via email from the staff and students of the University of South Australia. Participants were excluded if they were unable to read and comprehend the information sheet or were under 18 years of age. An information sheet was provided to potential participants. Once they had agreed to take part, written consent was obtained before their ultrasound.
Two qualified sonographers (JC & CP), one of which was the principle investigator performed the ultrasounds. Each participant was asked to Ultrasound measurements of the liver: an intra and inter-rater reliability study Key: Image 1: With the transducer in a longitudinal orientation. An image of the liver was taken at its perceived largest longitudinal diameter. Image 2: With the transducer in a transverse orientation. A panoramic transverse image of the liver was taken to encompass the maximum transverse diameter. Image 3: The right clavicle of each patient was measured with a ruler and the midpoint determined. This was deemed the mid-clavicular line. An image was taken with the transducer orientated longitudinally into the mid-clavicular line. This anatomical position best represented the right lobe of the liver. Image 4: The midline of each patient was determined using the xiphisternum as an anatomic marker. An image was taken with the transducer orientated longitudinally along the midline. This anatomical position best represented the left lobe of the liver. Image 5: A transverse image of the liver was taken at the confluence of the three hepatic veins with the middle hepatic vein horizontal in the centre of the screen.
lay supine on an examination bed with their body rotated 45 degrees to the left away from the examiner. The participant's abdomen was exposed from the upper hips to the sternum. Three image series were performed for each participant. All images were taken with the participant in a state of inspiration. The first image series was performed by the first examiner, the second image series was performed by the second examiner, and following a short break, the first examiner performed a third repeated image series. Each image series consisted of five ultrasound images, from which a total of 15 measurements were made (Table 1). Three measurements were representative of the whole liver, 6 measurements were representative of the right lobe of the liver and six measurements were representative of the left lobe of the liver.
Both examiners accessed the saved images at a later date to perform the measurements. Linear measurements were performed using the machine's inbuilt callipers. Area measurements were performed by tracing the outline of the liver on the screen using the machine's continuous trace function and track ball which resulted in the machine automatically calculating the area of liver on the screen. The distance around the edge of the liver was calculated using the machine's continuous trace function and track ball.
The measurements chosen were based both on measurements seen in the literature and measurements developed by the authors. The three measurements of Max Long, Max AP and Max Trans (measurements 1-3) were described in multiple studies. However, none of these studies gave detailed information on the measurement technique. 1,8,9,10,11,12,13 The measurement of the liver from dome to tip in the right midclavicular line (measurement 4) is an adaptation of measurement described by Gosink & Leymaster (2005), 14 Sapira and Williamson (1979) 15 and Kratzer, Fritz and Mason, et al. (2003). 16 The measurements of maximum longitudinal liver diameter left to right across the most superior portion of the screen and then the anteroposterior diameter in the midline of this measurement in the right mid-clavicular line (measurements 7 and 8 ) and midline (measurements 12 and 13) were taken from a study by Niderau, Sonnenberg & Muller, et al. (1983). 17 The maximum anteroposterior measurements of the liver in the right mid-clavicular line and in the midline (measurements 9 and 14) were described in a study by Niderau and Sonnenberg (1984). 18 The remaining measurements of the area and perimeter of the liver in the right mid-clavicular line (measurements 5 and 6) and midline (measurements 10 and 11) and the anteroposterior dimension of the liver at the level of the three hepatic veins (measurement 15) were developed by the authors.
The first examiner performed measurements on the first saved image series that they had taken for each participant and recorded their results. The second examiner performed measurements on the images they had taken, the second saved image series. One week later, the first examiner performed measurements on their repeated image series, the third saved image series.
Intra and inter-rater reliability was initially assessed for each measurement using ICC's. Values of 0.7 and below were considered as having poor agreement. Values of 0.7-0.8 were considered to have strong agreement, and 0.81 and above very strong agreement. These initial analyses flagged nine measurements as having strong to very strong agreement for both intra and inter-rater reliability. Bland Altman plots were then performed for the nine flagged measurements to assess the limits of agreement, mean bias in the measurements (paired samples t-test) and assess for any patterns in the bias. Analyses were undertaken using Medcalc 12.
Results
There were 11 female participants and one male participant mean age (SD) 36.3 ± 12 years (range 19-56 years). Participants were not asked questions regarding medical history and the liver was not formally assessed for abnormalities during this study.
Measurement
Mean SD An incidental finding of cystic liver lesions was noted in one participant. This participant was retained in the trial but was advised to contact their general practitioner. The mean measurements can be seen in Table 2.
The results of the intra and inter-rater reliability are demonstrated in Table 3.
Nine measurements had intra and inter-rater ICC results of 0.7 and above. Two of these were representative of the whole liver (Max Long, Max AP), six were representative of the right lobe (MCL Dome to Tip, MCL Area, MCL Perimeter, MCL Max Long, MCL Mid AP, MCL Max AP) and one was representative of the left lobe (Midline Max AP). Further investigation was made of these measurements by way of Bland-Altman plots and t-tests. The results are demonstrated in Table 4.
The limits of agreement for intra-rater reliability for linear measurements ranged from 1.61 cm-3 cm whilst the limits of agreement for inter-rater reliability for linear measurements ranged from 2.7 cm-3.7 cm. The limits of agreement for the area measurement was 34.6 cm 2 . Significance t-tests for both intra and inter rater reliability were performed as a formal test of bias. P values of less than 0.05 demonstrate a statistically significant departure from zero bias. This was demonstrated in two intrarater measurements of the right lobe, namely MCL area P = 0.021 and MCL Perimeter 0.043.
Discussion
Due to the anatomical position of the liver in the body, measurements taken in the right mid clavicular line were representative of the right lobe of the liver, whilst measurements in the midline were representative of the left lobe of the liver. Measurements of the entire liver were representative of both right and left lobes; however it must be noted that measurements of the maximum anteroposterior and maximum longitudinal diameter of the liver would almost always be found in the right lobe as was the case in all participants in this study. Circumstances in which this was not the case would arguably be the result of obvious liver abnormality. Representative measurements of each lobe of the liver have been included in this study as it is well known that the lobes of the liver can react differently to disease processes. For example, in cirrhosis there is often marked atrophy of the right lobe of the liver; at the same time there is often caudate hypertrophy. 19 As expected, measurements of the right lobe of the liver were larger than the left lobe due to normal anatomical configuration of the liver. Measurements of the right lobe were shown to be more reliable than measurements of the left lobe, with all six right lobe measurements showing ICC results of 0.7 and above and only one of the measurements of the left lobe of the liver deemed to have sufficient intra and inter-rater reliability after initial ICC analysis. Two of the three measurements of the entire liver were shown to have sufficient intra and inter-rater reliability and these were both of those measured in the right lobe (Max AP and Max Trans). This is thought to be a reflection of the shape of the liver, with the right lobe being a bulbous rounded shape rather than the often sharply tapering shape seen in the left lobe. As a result, even if measurements were taken in a slightly different plane there would be less variation in right lobe measurements. In contrast, the sharply tapering left lobe may result in wider variations of the measurement values with the same degree of variation in transducer plane.
The intra rater reliability for all measurements (Rt lobe, Lt lobe and entire liver) was good, with the poorest ICC being 0.68 (Lt lobe Midline mid AP). It is important that inter-rater as well as intra-rater reliability be of sufficient magnitude if these measurements are to be used in a clinical setting by different measurers, and hence only those measurements that had sufficient intra and inter-rater reliability were used for further analysis with Bland Altman testing.
Bland Altman analysis is very important in determining the usefulness and reliability of measurements such as these which are being developed to be used in clinical practice. The reason for this is the determination of the limits of agreement. If the variation in measurements as depicted by the limits of agreement is too high, the measurements would not be useful in a clinical setting.
Liver size has been shown in the literature to change significantly with disease process. Raeth, Johnson & Williams 1984 showed a median normal liver MCL measurement to be 11.3 cm and a median abnormal measurements of 16 cm 20 indicating a median change of 4.7 cm between normal and pathological livers. Similar changes in size were noted in multiple other studies in the literature .14,17 Comparatively, a study by Lewis, Philips & Slavotinek 2006 21 showed the liver volume to reduce by an average of 14% following six weeks of a low calorie diet in patients with fatty infiltration of the liver. As a result of size changes documented in the literature, limits of agreement up to 3.5 cm for linear measurements were determined by the authors to be acceptable. It should be noted then in using these linear measurements, that a change noted in liver size of 3.5 cm or less could be considered operator difference rather than liver change. The p-values for the intra-rater MCL area measurement and MCL perimeter measurements showed a rejection of the null hypothesis of zero bias. The limits of agreement of the MCL area measurement of 25.5 cm 2 for intra rater and 34.6 cm 2 inter rater were arguably too high to be useful clinically.
The right lobe measurement with least variability was the MCL dome to tip measurement (limits of agreement up to 2.7 cm between raters), meaning that this measurement could be used to detect differences of dimensional changes in the right lobe of greater than 2.7 cm. For the left lobe midline max AP measurement, based on limits of agreement for inter rater reliability, this measurement could be used to detect differences of dimensional changes in the left lobe of greater than 3.1 cm. Measurements of the entire liver showed the Max AP measure to be the one with the lease variable showing limits of agreement of 3.5 cm between raters.
Three measurements showed a pattern of bias in their intra rater reliability (Max AP, MCL dome to tip, MCL perimeter) with all patterns showing the error to increase with increasing liver size. Only one inter-rater measurement (Max Long) showed a pattern of bias which showed the error increasing with decreasing liver size.
Due to the size and location of the liver the MCL dome to tip and MCL maximum long measurements often rely on estimation of the liver borders as the dome and tip are unable to be imaged on the screen. This is amplified as the size of the liver increases. This was the case in multiple images taken during this study. In this instance, the tip of the liver was measured as the point farthest right in the image. This measurement also becomes inaccurate in the presence of a Riedel's lobe. When the tip of the liver cannot be imaged in the same picture as the dome, the measurements arguably become somewhat reflective of the sector width of the transducer and the ultrasound machine. The implications and impact of this is an area for further research. The MCL Max AP measurements also demonstrate good reliability, without the estimation problems encountered with the MCL dome to tip measurement. In addition, the MCL max AP measurement offers an alternate and orthogonal plane to the MCL dome to tip measurement.
This study was looking at absolute agreement and the sample size of 12 was determined by power calculation. Although adequately powered, the sample size of 12 might be considered small. The study was limited by the use of a sample of convenience. The study was strengthened by the fact that each participant was re-scanned for each set of measurements. This meant that the images as well as the measurements were repeated for each set of data. The biggest source of variation in these measurements is likely to be due to the use of a different scanning plane given that the image achieved and the transducer angle used is very operator dependent.
From this study, the most reliable measurements are suggested to be MCL Dome to tip and MCL max AP for the right lobe and Midline Max AP for the left lobe. Prior to application of these measurements in clinical practice, further research is required into the validity of these measurements, their ability to discriminate between normal and abnormal livers and their ability to detect changes in liver size over time. | 2018-04-03T03:45:22.565Z | 2014-08-01T00:00:00.000 | {
"year": 2014,
"sha1": "5c9d8210b92f4ea94603aa877ca1703462d4c99e",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/j.2205-0140.2014.tb00026.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c9d8210b92f4ea94603aa877ca1703462d4c99e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.